How machine learning can combat misinformation – Monash Lens


“A lie can travel halfway around the world before the truth can get its boots on.”

This quote appears in many forms – in some versions the footwear changes, while in others the truth is trying to get its pants on.

The sentiment, regardless of the details, encapsulates the key challenge of misinformation. By the time the fact-checking is completed and corrections are distributed, the misinformation has already spread far and wide, causing all kinds of mischief.

Consequently, misinformation researchers speak wistfully of the “holy grail of fact-checking” – automatically detecting and debunking misinformation in one fell swoop. Machine learning offers the potential of both speed and scale – the ability to identify misinformation the instant it appears online, and the technical capacity to distribute solutions at the scale required to match the size of the problem.

The holy grail quest is not easy. Misinformation keeps growing and taking on new forms. How can you identify a myth before you know what it is and what form it will take?

Climate change and misinformation

When it comes to misinformation about climate change, you often hear the terms “whack-a-mole” or “climate zombies” – typically expressed through clenched teeth. Both refer to the fact that climate myths never seem to die, and have to be debunked over and over. In fact, the same misleading arguments used in climate misinformation in the early 1990s are still being used in 2021.

As annoying as climate zombies are, they present a research opportunity. Because climate misinformation is so stable over time, it is possible to train a machine to detect misinformation claims.

A number of years back, my colleagues Travis Coan and Mirjam Nanko from the University of Exeter, along with Constantine Boussalis of Trinity College Dublin, joined me in the quest for the fact-checking holy grail – specifically focused on misinformation about climate change.

This process began with the building of a taxonomy of contrarian claims. As we developed and refined the many claims we were seeing in climate misinformation, five main categories became clear – it’s not happening; it’s not us; it’s not bad; solutions won’t work; and experts are unreliable.

These five categories are important because they directly mirror the five key climate beliefs developed from survey data by Ed Maibach – it’s happening; it’s us; it’s bad; there’s hope; and experts agree. Consequently, we dubbed our five climate misinformation categories the five climate “disbeliefs”.
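
To make the mirroring concrete, the top level of the taxonomy can be sketched as a simple mapping (the label wording here is my shorthand for illustration, not the exact codes used in the published taxonomy):

```python
# Illustrative shorthand only: the published taxonomy breaks each of these
# five "disbelief" categories into a hierarchy of more specific sub-claims.
DISBELIEF_TO_BELIEF = {
    "it's not happening": "it's happening",
    "it's not us": "it's us",
    "it's not bad": "it's bad",
    "solutions won't work": "there's hope",
    "experts are unreliable": "experts agree",
}
```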

Once we had our taxonomy, we were ready to start training the machine.

The principle of supervised machine learning is straightforward – take a paragraph of text from known sources of climate misinformation, and match it to a contrarian claim in our taxonomy (if there is a match). Repeat this process tens of thousands of times, until the machine is capable of detecting each misinformation claim. (Easy, right?)
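
For readers curious what that looks like in code, here is a minimal sketch of a supervised claim classifier. This is a generic text-classification pipeline for illustration only, not the model described in our paper; the file name and column names are hypothetical.

```python
# Minimal supervised text-classification sketch using scikit-learn.
# Assumes a hypothetical CSV of labelled paragraphs with columns "text" and
# "claim", where "claim" is a taxonomy category (or "no_claim" for no match).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

data = pd.read_csv("labelled_paragraphs.csv")  # hypothetical training data
train_texts, test_texts, train_labels, test_labels = train_test_split(
    data["text"], data["claim"], test_size=0.2, random_state=42
)

# Turn each paragraph into TF-IDF features, then fit a multi-class classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Check performance on held-out paragraphs before applying it to new articles.
print(classification_report(test_labels, model.predict(test_texts)))
```

Whatever model sits in the middle, the workflow is the same: labelled paragraphs go in, and a classifier that can assign new text to a claim category comes out.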

We were able to call on the assistance of the climate-literate Skeptical Science team (a team forged through crowd-sourced content analysis of large climate datasets) to label the training data.

Once we had trained our machine to detect and categorise different misinformation claims, we fed our model 20 years’ worth of climate misinformation – more than 250,000 articles from 20 prominent conservative think-tank websites and 33 blogs. It’s the largest content analysis to date on climate misinformation, making it possible to construct a two-decade history of climate misinformation.
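
To give a sense of how such a content analysis scales, the sketch below continues the hypothetical classifier above, applying it to a dated corpus and tallying predicted claims per year – the made-up file and column names are assumptions for illustration.

```python
# Sketch: run the trained classifier over a dated corpus, then aggregate
# predicted claims by year to trace how misinformation shifts over two decades.
import pandas as pd

# Hypothetical corpus with columns "date" and "text"; "model" is the
# classifier trained in the earlier sketch.
corpus = pd.read_csv("think_tank_and_blog_articles.csv")
corpus["claim"] = model.predict(corpus["text"])
corpus["year"] = pd.to_datetime(corpus["date"]).dt.year

claims_per_year = (
    corpus[corpus["claim"] != "no_claim"]
    .groupby(["year", "claim"])
    .size()
    .unstack(fill_value=0)
)
print(claims_per_year)  # rows: years, columns: claim categories
```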

The results weren’t what I expected at all.

The erosion of public trust in climate scientists

During the past 15 years, I’ve been debunking scientific climate misinformation – the type of myths that fell under the categories “it’s not happening”, “it’s not us”, or “it’s not bad”.

It turns out these were not the most common forms of climate misinformation. Instead, climate misinformation was dominated by attacks on scientists, and on climate science itself.

Climate misinformation isn’t about providing its own alternative explanation of what’s happening to our climate. Instead, it’s focused on casting doubt on the integrity of climate science, and eroding public trust in climate scientists.

This has important implications for educators, scientists, and fact-checkers. The majority of our efforts have focused on debunking scientific myths such as “global warming isn’t happening” or “climate change is caused by the sun”.

But that’s not where misinformation is focused – the focus is on attacking scientists and science itself. There’s a dearth of research into understanding and countering this type of misinformation, let alone public engagement and education campaigns to counter its damage.

Another strong trend was the growing prevalence of misinformation targeting climate solutions – claims that climate policies are harmful, attacks on renewables, or spruiking of fossil fuels. This category of climate misinformation is growing fastest among conservative think-tanks, which tend to be more concerned with climate policy than with science denial.

The overall pattern in our data is clear – solutions denial is the future of climate misinformation.

Our research, recently published in the Nature journal Scientific Reports, is an important first step in our quest for the fact-checking holy grail. Next, we will synthesise this machine learning research with critical thinking research into deconstructing and analysing climate misinformation.

This task requires bringing together disparate disciplines, from computer science to the philosophy of critical thinking. It’s a difficult challenge, but interdisciplinary solutions are essential when dealing with complex, interconnected issues like misinformation.

We still have a long way to go, but for now it’s important to recognise the lessons already learnt while pursuing this quest. Not to mention the many friends made along this journey.
