
Detecting Fake News to Combat Misinformation and Promote Plurality Using AI

Project completed!


Background

The rapid increase in global internet access has democratized information, but it has also fueled the proliferation of false and biased content. This phenomenon erodes trust in news, increases polarization, and creates algorithmic echo chambers, particularly on social media platforms. Events such as the 2016 U.S. elections and the spread of COVID-19 misinformation illustrate how these online dynamics translate into offline harm.

Objective

The primary goal was to develop a highly explainable AI model that evaluates the trustworthiness of online news articles and claims. The model would also link articles presenting opposing perspectives to encourage balanced viewpoints and transparency in the news.

Approach

This two-month challenge united 50 AI changemakers to tackle the problem through the following methods:

  1. Claim Extraction and Matching: Identifying key claims from news articles and matching them with related content.
  2. Sentiment Analysis: Conducting both document-level and entity-level sentiment analysis to surface bias.
  3. Stance Classification: Determining whether pairs of articles or claims agree, disagree, or are unrelated.
  4. Named Entity Recognition: Recognizing entities to assess credibility and relevance.
  5. Data Utilization: Leveraging a mix of labeled and unlabeled data from Newsroom’s database and open-source datasets, including political articles and COVID-related content.
  6. Multi-Model Approach: Building a set of models addressing specific components of the trust evaluation process rather than a single monolithic system (a minimal sketch of such a pipeline follows this list).
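The sketch below illustrates how the component models listed above could fit together, assuming off-the-shelf Hugging Face pipelines and a sentence-transformers encoder. The specific model names, the zero-shot stance formulation, and the example texts are illustrative assumptions, not the challenge teams' actual models or data.

```python
# Illustrative sketch of the component models described above; model choices
# and the zero-shot stance setup are assumptions, not the project's published
# implementation.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# 1. Claim matching: embed claims and retrieve the most similar counterpart.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

def match_claims(claim: str, candidates: list[str]) -> tuple[str, float]:
    """Return the candidate claim most similar to `claim` and its cosine score."""
    claim_vec = embedder.encode(claim, convert_to_tensor=True)
    cand_vecs = embedder.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(claim_vec, cand_vecs)[0]
    best = int(scores.argmax())
    return candidates[best], float(scores[best])

# 3. Stance classification approximated with zero-shot NLI (agree / disagree / unrelated).
stance_clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def classify_stance(claim_a: str, claim_b: str) -> dict:
    """Score how claim B relates to claim A using candidate stance labels."""
    result = stance_clf(
        f"Claim A: {claim_a} Claim B: {claim_b}",
        candidate_labels=["agree", "disagree", "unrelated"],
    )
    return dict(zip(result["labels"], result["scores"]))

# 2. Document-level sentiment and 4. named entity recognition.
sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

if __name__ == "__main__":
    article = "The new vaccine was shown to be safe in a peer-reviewed trial."
    counter = "Experts dispute that the new vaccine is safe."
    print(match_claims(article, [counter, "Inflation rose last quarter."]))
    print(classify_stance(article, counter))
    print(sentiment(article))
    print(ner(article))
```

In this arrangement each component emits its own signal (similarity score, stance probabilities, sentiment, entities), which is what allows the overall trust evaluation to stay modular and explainable rather than monolithic.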

Results and Impact

  • Developed an explainable trust scoring model that provides users with insight into the criteria influencing a score (see the illustrative aggregation sketch after this list).
  • Incorporated tools that identify opposing perspectives, encouraging diverse and pluralistic news consumption.
  • Promoted accountability and reduced polarization by combating algorithmic echo chambers.
  • Showcased the potential of AI in identifying misinformation, contributing to safer online spaces and more informed societies.
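To make the notion of an "explainable" trust score concrete, here is a hypothetical sketch of how per-criterion signals could be aggregated while preserving the per-criterion breakdown shown to users. The criteria names and weights are illustrative assumptions, not the project's published scoring scheme.

```python
# Hypothetical sketch of an explainable trust score; criteria names and weights
# are illustrative assumptions, not the project's scoring scheme.
from dataclasses import dataclass

@dataclass
class TrustReport:
    score: float                 # overall trust score in [0, 1]
    breakdown: dict[str, float]  # weighted contribution of each criterion

# Assumed weights over the component signals described in the Approach section.
WEIGHTS = {
    "source_credibility": 0.35,
    "stance_consistency": 0.30,
    "sentiment_neutrality": 0.20,
    "entity_verifiability": 0.15,
}

def trust_score(signals: dict[str, float]) -> TrustReport:
    """Combine per-criterion signals (each in [0, 1]) into one score,
    keeping the per-criterion contributions so users can see why."""
    contributions = {k: WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS}
    return TrustReport(score=sum(contributions.values()), breakdown=contributions)

# Example: a well-sourced but one-sided article.
report = trust_score({
    "source_credibility": 0.9,
    "stance_consistency": 0.4,
    "sentiment_neutrality": 0.5,
    "entity_verifiability": 0.8,
})
print(round(report.score, 2), report.breakdown)
```

Exposing the weighted breakdown alongside the final number is one simple way to give users the kind of criteria-level insight described above.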


Future Implications

The outcomes of this project pave the way for broader applications in combating misinformation. These models could influence the design of future news distribution algorithms, enhance policy frameworks for content regulation, and encourage further research on mitigating confirmation bias in digital ecosystems.


