Detecting Harmful Video Content and Children's Behavior on Online Platforms
Background
Children are increasingly drawn to online challenges shared on social media platforms, often without understanding the risks involved. These challenges, designed to boost engagement and visibility, can encourage unsafe behavior, as seen in the Blackout Challenge, the Tide Pod Challenge, and similar trends. Tragically, some of these trends have resulted in severe injuries or fatalities. To address this growing concern, the US-based startup Preamble collaborated with 50 Omdena AI engineers and data scientists to build an AI-powered solution for detecting harmful content, specifically video challenges that promote dangerous activities.
Objective
The primary goal of this project was to develop an efficient, scalable computer vision system for detecting harmful video content. The system identifies and filters dangerous challenge videos, reducing children's exposure to harmful trends.
Approach
To achieve this, the team combined data gathering, labeling, and modeling in the following steps:
- Data Collection and Preparation:
  - Over 240 videos containing potentially harmful content were sourced from platforms such as YouTube, TikTok, and VK, with Python libraries assisting the manual collection effort (a download sketch follows this list).
  - Frames were extracted from each video at regular intervals to build an image dataset for analysis (see the frame-extraction sketch below).
- Labeling:
  - Frames were categorized as harmful, ambiguous, or not harmful through a combination of manual review and automated processes.
- Model Development:
  - A range of image classification models was trained and tested on the labeled frames (a fine-tuning sketch follows this list).
  - The best-performing model was selected to determine whether video content posed potential harm.
- Evaluation:
  - The models were rigorously evaluated for reliability and accuracy in classifying harmful content (see the metrics example below).
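As a rough illustration of the collection step, the sketch below uses the yt-dlp library to download videos from a list of URLs. The URL, output layout, and format options are placeholders, not the project's actual configuration.

```python
# Hypothetical collection script; URLs and options are placeholders.
from yt_dlp import YoutubeDL

video_urls = [
    "https://www.youtube.com/watch?v=EXAMPLE_ID",  # placeholder, not a real source
]

options = {
    "format": "mp4",                         # prefer an mp4 download
    "outtmpl": "raw_videos/%(id)s.%(ext)s",  # save files under raw_videos/
    "quiet": True,                           # suppress per-file progress output
}

with YoutubeDL(options) as ydl:
    ydl.download(video_urls)
```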
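Extracting frames at regular intervals can be done with OpenCV along these lines; the one-frame-per-second interval and file layout are assumptions, since the write-up does not state the sampling rate used.

```python
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n_seconds: float = 1.0) -> int:
    """Save one frame every `every_n_seconds` from `video_path` into `out_dir`."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if frame_idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{frame_idx:06d}.jpg", frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved
```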
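For the modeling step, a common baseline is fine-tuning a pretrained image classifier on the labeled frames. The sketch below fine-tunes a torchvision ResNet-18; the architecture, folder layout, and hyperparameters are illustrative guesses, since the write-up does not name the winning model.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: labeled_frames/{harmful,ambiguous,not_harmful}/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("labeled_frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Replace the final layer to match the three frame categories.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count is illustrative
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```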
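Frame-level evaluation typically relies on standard classification metrics. A minimal scikit-learn example, using toy labels in place of the project's real validation data:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Toy labels for illustration; real evaluation would use a held-out split.
y_true = ["harmful", "not_harmful", "ambiguous", "harmful", "not_harmful"]
y_pred = ["harmful", "not_harmful", "harmful", "harmful", "not_harmful"]

print(classification_report(y_true, y_pred, zero_division=0))
print(confusion_matrix(y_true, y_pred,
                       labels=["harmful", "ambiguous", "not_harmful"]))
```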
Results and Impact
The project delivered a robust, AI-powered model capable of detecting harmful content in online videos with high accuracy. Key outcomes include:
- A curated dataset of over 240 challenge videos with labeled frames.
- A best-fit computer vision model trained to identify harmful content, helping platforms preemptively filter dangerous videos (a video-level decision sketch follows this list).
- Enhanced awareness of how computer vision for harmful content detection can safeguard vulnerable audiences like children.
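The write-up does not specify how frame-level scores were combined into a video-level verdict. One simple hypothetical rule is to flag a video when enough of its frames exceed a probability threshold; both thresholds below are illustrative, not tuned values from the project.

```python
def video_is_harmful(frame_probs, threshold=0.8, min_fraction=0.1):
    """Flag a video if at least `min_fraction` of frames score above `threshold`.

    `frame_probs` holds the per-frame probability of the 'harmful' class.
    """
    flagged = sum(p >= threshold for p in frame_probs)
    return flagged / max(len(frame_probs), 1) >= min_fraction

# Example: 3 of 20 frames strongly flagged -> video marked harmful
print(video_is_harmful([0.9, 0.85, 0.95] + [0.1] * 17))
```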
This solution not only reduces the risk of exposure to dangerous online trends but also sets a benchmark for future safety-focused AI applications.
Future Implications
This project lays the groundwork for further innovation in detecting harmful content and child safety online. Future steps could include:
- Expanding the dataset and improving the model’s accuracy under diverse conditions.
- Integrating the model into content moderation systems on social media platforms.
- Influencing policymaking by demonstrating the effectiveness of AI in protecting children from digital harm.
By continuing to advance these technologies, we can foster a safer digital environment for children worldwide.