Combating the Infodemic: How NLP is Revolutionizing the Fight Against Fake News and Misinformation

May 16, 2024



In this article, we explore how Natural Language Processing (NLP) techniques can be leveraged to combat the growing problem of misinformation and fake news. We will discuss a comprehensive project undertaken by Omdena in collaboration with The Newsroom, where a team of 45+ data scientists and machine learning experts developed a pipeline to identify, classify, and summarize misinformation in news articles. The project culminated in the creation of a browser extension that empowers users to assess the trustworthiness of news content they encounter online.

The Background

The rapid spread of misinformation and fake news has become a global phenomenon, posing significant threats to the integrity of media, political processes, and social stability. In an increasingly digitalized world, where information can be disseminated at an unprecedented pace and scale, the need for effective tools to combat this issue has never been more pressing. Studies have shown that fake news can have a substantial impact on public opinion and even influence the outcome of elections. Social media platforms, in particular, have become a breeding ground for the proliferation of false and misleading information, making it crucial to develop sophisticated techniques to identify and mitigate the spread of such content.

The Goal

We had three main goals for this project:

1. Assign a trust score to news articles based on the extent and types of misinformation they contain. To ensure transparency, we decided to build separate models for specific attributes of misinformation: hate speech, clickbait, and political bias. This goal was further divided into two parts:

    • Prepare three labeled datasets from the unlabeled data provided by The Newsroom, each focusing on one of the attributes.
    • Build classification models for hate speech, clickbait, and political bias using both open-source and our newly labeled datasets.

2. Build models for claim detection in news articles. Given our two-month timeline, we focused solely on the claims detection problem, leaving claim verification for future work.

3. Develop a minimum viable product (MVP) in the form of a Google Chrome extension to demonstrate the practical application of our models.

The workflow of this project is visualized below:

Goals and deliverables

Our Approach

In-house dataset preparation

One of the primary goals of the project was to prepare in-house datasets from unlabeled news articles provided by The Newsroom. The resulting datasets were used to address diverse misinformation-related problems: hate speech detection, political bias identification, clickbait detection, and claims detection and verification. The following subsections provide an overview of the in-house dataset generation process and a summary of the resulting datasets.

Dataset labeling process

We developed a generic approach to labeling datasets for hate speech, clickbait, and political bias. The dataset labeling life cycle starts with selecting a labeling tool; we chose HumanFirst for its speed and ease of use.

Next, we prepared problem-specific guidelines to ensure consistent labeling. The remaining steps involve The Newsroom's unlabeled data. Given its vast size, we used supervised and unsupervised techniques to subsample a small, representative dataset that could be labeled at the sentence level within one to two weeks.

We then crowdsourced the labeling, aiming for three independent labels per sentence (3x) but mostly achieving 2x. Despite the guidelines, labeling conflicts arose; we resolved them by assigning additional annotators until consensus was reached, producing our final in-house dataset(s).
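As an illustration, the consensus step can be expressed as a simple majority vote. The sketch below is hypothetical (in practice, conflicts were resolved by assigning additional annotators through HumanFirst), and the function name and input format are our own:

```python
from collections import Counter

def resolve_labels(annotations):
    """Resolve multiply-labeled sentences by majority vote.

    annotations: dict mapping each sentence to the list of labels
    it received from different annotators.
    """
    resolved, conflicts = {}, []
    for sentence, labels in annotations.items():
        top_label, top_count = Counter(labels).most_common(1)[0]
        if top_count > len(labels) / 2:   # strict majority = consensus
            resolved[sentence] = top_label
        else:                             # tie: route to an extra annotator
            conflicts.append(sentence)
    return resolved, conflicts
```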

Case Study: Clickbait

With HumanFirst selected as the labeling tool, we started by writing guidelines that define ‘clickbait’ and provide examples, as shown in Table 1.

Next, we parsed headlines from Newsroom articles and trained a Universal Sentence Encoder (USE)-based model on an independent dataset to predict a clickbait probability score for each headline. We randomly sampled 10,000 headlines spanning the full range of clickbait scores for uniform representation, converted them to the HumanFirst format, divided them into five datasets, and uploaded them for labeling.
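As a rough sketch of how this scoring-and-sampling step could be implemented, the snippet below trains a logistic-regression probe on top of pretrained USE embeddings and then samples uniformly across score bins. The variable names (`train_headlines`, `train_labels`, `newsroom_headlines`) are placeholders, not names from the project code:

```python
import numpy as np
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression

# Load the pretrained Universal Sentence Encoder from TF Hub.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Train a lightweight probe on an independent labeled clickbait dataset.
clf = LogisticRegression(max_iter=1000)
clf.fit(use(train_headlines).numpy(), train_labels)

# Score the unlabeled headlines, then sample ~1,000 per score decile so that
# low-, mid-, and high-probability headlines are all represented.
scores = clf.predict_proba(use(newsroom_headlines).numpy())[:, 1]
bins = np.minimum(np.digitize(scores, np.linspace(0, 1, 11)), 10)
rng = np.random.default_rng(42)
sample_idx = np.concatenate([
    rng.choice(np.where(bins == b)[0],
               size=min(1000, int((bins == b).sum())), replace=False)
    for b in range(1, 11) if (bins == b).any()
])
sampled_headlines = [newsroom_headlines[i] for i in sample_idx]
```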

Each dataset was then labeled independently by two collaborators (2x) using HumanFirst. We exported the results, resolved conflicts, and prepared the final in-house labeled clickbait dataset of 9,954 article headlines.

Summary of in-house datasets

We summarize our three independently prepared datasets using the dataset labeling lifecycle:

1. The hate speech dataset is the most imbalanced (1% hate vs 99% no hate), likely due to the underrepresentation of hate speech in mainstream news and limitations of our shortlisting approach.

2. The clickbait dataset has the best quality and representation, partly because clickbait detection is relatively easier. We consistently ensured 2x labeling.

3. The political bias dataset, labeled last, suffered from limited coverage (only half of it was labeled) and lower quality, as most examples were single-labeled, despite the significant effort spent finding a good candidate unlabeled dataset.

We also prepared a smaller in-house labeled dataset (1,000 examples) for claim detection; it did not follow the full lifecycle and was single-labeled due to limited capacity. Extending and further exploring this dataset remains future work.

Claims Detection Modeling

We defined a claim as “a statement about the world that can be verified”. The claims detection models are binary classifiers that group input sentences into Check-Worthy Factual Sentences (CFS) and Non-Factual Sentences (NFS). This labeling convention and the model code we tested come from the open-source ClaimSpotter publication and its GitHub repository.

Baseline models (a BiLSTM and an SVM) were proposed, along with the transformer models BERT, DistilBERT, and RoBERTa. Of the baselines, only the BiLSTM was tested and integrated into the MVP, achieving an F1-score of ~0.74 for CFS after fine-tuning.

The transformer models from the ClaimSpotter publication (BERT, DistilBERT, and RoBERTa) were also tested, with and without adversarial perturbations. When weighing detection accuracy against model training time, the BERT-based model without adversarial perturbations outperformed all others, with an F1-score of 0.8338 for CFS.
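The exact model code is available in the ClaimSpotter repository; as a minimal sketch, fine-tuning a plain BERT classifier (without adversarial perturbations) on the CFS/NFS task might look like the following, with `sentences` and `labels` standing in for the ClaimBuster training data:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Binary claims detector: 0 = NFS (non-factual), 1 = CFS (check-worthy factual).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

enc = tokenizer(sentences, truncation=True, padding=True,
                max_length=64, return_tensors="tf")
train_ds = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(32)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=3)
```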

Claims Detection on the ‘Claimbuster’ dataset

Transparency Modeling

Transparency modeling covers the preparation of classifiers for hate speech, clickbait, and political bias based on published datasets. Collaborators built many independent models for each problem. We benchmarked the models and selected the best one(s) based on the F1-score of the positive class (hate, clickbait, or politically biased). Finally, we evaluated the selected models on the in-house datasets prepared earlier.

Hate Speech Classification

Hate speech classification is a binary problem where sentences are labeled as ‘hate’ or ‘no-hate’. We used two datasets: StormFront (forum-based) and Crowdflower (tweet-based), focusing mostly on StormFront as it’s binary by nature.

We prepared two datasets from StormFront: one with the full, imbalanced dataset and another with a balanced subsample. On the balanced dataset, a BERT + CNN model achieved the best F1-score of 0.812, closely followed by a USE-based model. Traditional machine learning algorithms (Naive Bayes, Random Forest, and SVC) provided comparable performances.
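One plausible realization of such a "BERT + CNN" model runs a 1D convolution over BERT's token embeddings and pools the result before classification. The sketch below is our own reconstruction under that assumption, not the team's exact architecture:

```python
import tensorflow as tf
from transformers import TFAutoModel

MAX_LEN = 64
bert = TFAutoModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32,
                                name="attention_mask")

# Convolve over BERT's per-token embeddings, then max-pool and classify.
token_embeddings = bert(input_ids, attention_mask=attention_mask).last_hidden_state
x = tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu")(token_embeddings)
x = tf.keras.layers.GlobalMaxPooling1D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(hate)

model = tf.keras.Model([input_ids, attention_mask], out)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```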

Hate Speech classification on ‘StormFront’ dataset

Clickbait Classification

Clickbait is a binary classification problem where we predict if an article headline is clickbait or not. We used a combined dataset from Kaggle.

We built several models, including xgboost, BERT, and USE-based classifiers. By F1-score, an xgboost model with a comprehensive feature set performed best, but it was inefficient in both training time and memory. Its gain over a simpler xgboost model with fewer features was minimal (0.905 vs. 0.902), so the simpler xgboost is likely the most practical solution.
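As a sketch of the simpler variant, an xgboost classifier over TF-IDF n-gram features could look like the following; the team's actual feature set is not reproduced here, and `train_headlines`/`train_labels` are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# TF-IDF word and bigram features feeding a gradient-boosted tree classifier.
pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=20000, sublinear_tf=True),
    XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                  eval_metric="logloss"),
)
pipeline.fit(train_headlines, train_labels)          # 1 = clickbait, 0 = not
probs = pipeline.predict_proba(test_headlines)[:, 1]
```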

Clickbait classification on the ‘combined’ dataset

Political Bias Classification

We investigated classifying political bias as either a binary (biased or not) or three-label problem (left, center, right). Due to poor performance on the three-label task, we focused on binary classification.

Different datasets label bias at either the article level, like DeepBlue and Baly et al., or the sentence level, like the IBC dataset. We present results on the Baly et al. and IBC datasets.

For article-level classification on Baly et al., we built tree-based (Random Forest, xgboost) and transformer-based (RoBERTa, LongFormer) models. RoBERTa performed best with an F1 of 0.79. For sentence-level classification on IBC, a USE-based model (F1 0.90) outperformed Naive Bayes. Our results suggest that article-level classification is considerably more challenging than sentence-level.
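USE-based sentence classifiers of this kind typically freeze the pretrained encoder and train a small dense head on top. The sketch below assumes that setup, with `train_sentences` and `train_labels` standing in for the IBC data:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Frozen USE embeddings with a small trainable head for sentence-level bias.
use_layer = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",
    input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    use_layer,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = politically biased
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(tf.constant(train_sentences), tf.constant(train_labels),
          validation_split=0.1, epochs=10)
```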

Political bias classification on ‘article’ level and ‘sentence’ level datasets

In-house data modeling

We evaluated our in-house labeled datasets on the three transparency modeling problems. Models trained on the published datasets performed poorly when transferred to this data, so we modeled the in-house datasets separately.

For hate speech, balancing the dataset improved the F1-score from 0.31 to 0.51 using a USE-based approach, though this remained well below the results on the StormFront dataset.
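Balancing here can be as simple as undersampling the majority class; a minimal sketch, with hypothetical file and column names:

```python
import pandas as pd

# Undersample 'no-hate' examples to match the number of 'hate' examples.
df = pd.read_csv("inhouse_hate_speech.csv")        # hypothetical file name
hate = df[df["label"] == "hate"]
no_hate = df[df["label"] == "no-hate"].sample(n=len(hate), random_state=42)
balanced = (
    pd.concat([hate, no_hate])
    .sample(frac=1, random_state=42)               # shuffle
    .reset_index(drop=True)
)
```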

For clickbait, a USE-based model outperformed an xgboost model, but both performed worse than on the combined dataset, with the best F1-score dropping from ~0.90 to 0.42.

For political bias, we merged the left and center labels for binary classification. A USE-based model only achieved an F1-score of 0.17.

Performance of in-house datasets

Minimum Viable Product: NewsScore


To show how the models can be used, a basic browser extension called NewsScore was developed, in line with The Newsroom’s vision. When a user visits a news article, the extension sends it to the back-end for analysis. Clicking the extension icon displays a report and provides options for further interaction.

The web extension, NewsScore, in practice. Source: Omdena

NewsScore offers:

  • An initial report on the article’s clickbait, bias, and hate speech, with detailed information on each section’s score. The reliability section currently lists only detected CFS; this will later enable features such as claim verification and reporting. (Users can also highlight sentences to apply these tools, though for now this simply adds the text to the detected-claims list for demonstration purposes.)
  • A user feedback section to improve the extension’s scoring over time through methods like active learning.
  • A (disabled) related articles section for potentially showing similar articles with better scores on the same topic in the future.
A browser extension MVP

Some features were delivered to The Newsroom team in a nearly complete state, allowing direct end-user use. Future enhancements could include integrating the various modeling approaches into the MVP back-end and adding helpful data visualizations to the front-end. The finished Chrome extension will provide an article summary with an overall news score, transparency scores for hate speech, clickbait, and political bias, and a score for claim verification (reliable information).
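To make the architecture concrete, the back-end the extension calls could expose a single scoring endpoint along these lines. This is a hypothetical sketch assuming FastAPI; the model objects and helpers (`clickbait_model`, `sentence_score`, `detect_claims`, and friends) are placeholders for the components described in the earlier sections:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Article(BaseModel):
    url: str
    headline: str
    sentences: list[str]

@app.post("/score")
def score_article(article: Article) -> dict:
    # Each transparency model scores the article independently; the
    # extension front-end renders the returned report.
    return {
        "clickbait": float(clickbait_model.predict_proba([article.headline])[0, 1]),
        "hate_speech": sentence_score(hate_model, article.sentences),
        "political_bias": sentence_score(bias_model, article.sentences),
        "detected_claims": detect_claims(claims_model, article.sentences),
    }
```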

Key Achievements

Labeled Datasets: Successfully prepared three labeled datasets from the provided unlabeled news data, each focusing on a specific attribute of misinformation: hate speech, clickbait, and political bias. These datasets enable the training of accurate classification models.

Transparency Models: Developed separate machine learning models to detect hate speech, clickbait, and political bias in news articles. Building attribute-specific models allows for greater transparency in assigning overall trust scores to articles.

Claims Detection: Built models focused on identifying claims made within news articles. While claim verification was left for future work due to the project’s timeline, the ability to detect claims is a crucial first step in combating misinformation.

Minimum Viable Product: Created a functional Google Chrome extension called NewsScore that demonstrates the practical application of the developed models. The extension provides users with a trust score for news articles based on detected levels of hate speech, clickbait, political bias, and claims.

Future Steps

Our work could be further extended and enhanced in several areas:

  • Dataset Improvement: Preparing higher-quality datasets by ensuring 3x labeling for all in-house datasets, and extensively exploring model training and evaluation on these improved datasets.
  • Domain-Specific Approaches: Using domain-specific approaches like NewsBERT to model the data, which we did not have time to explore in this project.
  • Aggregated Trust Score: Exploring different approaches to generate an aggregated news trust score from the results of transparency and claim detection models.
  • Additional Transparency Models: Implementing models for other attributes of misinformation, such as detection of machine-generated text, which was initially explored but halted due to project timeline constraints.
  • Efficient Storage Strategies: Designing storage strategies to efficiently process each news article for users by reusing previous visits to the article.
  • Common Feature Representation: Designing a common feature representation to allow models to reuse these features across each score generation.
  • Model Deployment: Exploring different approaches to model deployment on the MVP, particularly for transformer models.
  • Scalable MVP Technologies: Porting the MVP extension code to more scalable and robust technologies like Vue for increased performance.
  • In-Article Predictions: Applying each prediction directly to the news article text in the form of highlighted text, allowing users to easily spot occurrences of clickbait, bias, hate speech, etc.
  • Sentence Segmentation Optimization: Further calibrating the sentence segmentation process, which is used across all classification problems, to optimize it specifically for news article sentences, e.g. by considering the role of social media citations within the text.
  • Active Learning: Incorporating active learning practices to allow user feedback to help modeling algorithms improve their predictions over time.

Potential Applications

The NewsScore methodology and underlying NLP models have broad potential for application across various industries, including:

  • Social Media Moderation: Social media platforms can leverage these techniques to automatically detect and flag posts containing hate speech, political bias, or misinformation, helping to maintain a healthier online discourse.
  • Brand Reputation Management: Companies can monitor news and social media for mentions of their brand, using these models to quickly identify and respond to any negative or misleading coverage that could harm their reputation.
  • Political Campaign Monitoring: Campaign teams and election monitoring organizations can use these tools to track media coverage of candidates and issues, detecting biased or false reporting that could unduly influence voters.
  • Educational Media Literacy: Educators can incorporate these technologies into media literacy curricula, equipping students with the skills to critically evaluate the news and information they encounter online.
  • Scholarly Publishing: Academic publishers and research institutions can apply these methods to vet submissions for signs of bias, sensationalism, or unsupported claims, upholding the integrity of scientific communication.
