
Detecting Harmful Video Content and Children's Behavior Through Computer Vision


US-based startup Preamble collaborated with 50 Omdena AI engineers and data scientists to develop a cost-effective solution for detecting harmful situations in online video challenges. Using computer vision, the team was able to classify videos as harmful or not harmful.

The results from this project are intended as a baseline to help Preamble build solutions for safer online platforms. 

The problem

Children are more susceptible to acting impulsively and participating in online internet challenges. These challenges can encourage kids to replicate unsafe behaviors in pursuit of engagement and visibility on social media. Some of these outrageous challenges have led to severe bodily harm and even death. To protect children from this kind of dangerous peer pressure, we built a model to filter out such content.

(Image source: AsiaOne)

Some prior internet challenges that have proven dangerous to participants, especially children:

  • Blackout challenge 
  • Eating Tide detergent pods
  • Cinnamon challenge (can cause scarring and inflammation)
  • Super gluing their lips together 
  • Power outlet challenge

The project outcomes

The process

The team divided the work among contributors according to their expertise, following this process:

  • Select and download videos with harmful content from social media platforms.
  • Extract frames (images) from the videos at regular intervals.
  • Label each frame as harmful, ambiguous, or not harmful.
  • Train a supervised image classification model on the labeled frames.
  • Evaluate the image classification model.
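
The frame-extraction step above boils down to picking which frame indices to keep, given the video's frame rate and a sampling interval. The write-up includes no code, so the sketch below is a minimal, hypothetical illustration of that interval-sampling logic; in practice the frames themselves would be read with a library such as OpenCV's `cv2.VideoCapture`.

```python
def frame_indices(total_frames: int, fps: float, interval_s: float) -> list[int]:
    """Return the indices of frames sampled every `interval_s` seconds.

    Illustrative helper only; the project's actual extraction code
    is not shown in the write-up.
    """
    step = max(1, round(fps * interval_s))  # frames between samples
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 30 fps, sampled every 2 seconds:
# frame_indices(300, 30.0, 2.0) -> [0, 60, 120, 180, 240]
```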

The data

The bulk of the team's work was concentrated on data collection and labeling. The team developed scripts to retrieve videos and metadata from social media platforms; specifically, existing Python libraries were used to download videos from YouTube, VK, and TikTok. Through this process, the team manually collected more than 240 challenge videos.
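
The write-up does not name the specific Python libraries the team used. As one hypothetical example, `yt-dlp` is a widely used library for downloading YouTube videos along with their metadata, and a retrieval helper built on it might look like this:

```python
def download_options(out_dir: str) -> dict:
    """Build a yt-dlp options dict: mp4 output plus a metadata JSON file.

    Assumes the third-party yt-dlp library (pip install yt-dlp); the
    team's actual tooling is not named in the write-up.
    """
    return {
        "format": "mp4/best",
        "outtmpl": f"{out_dir}/%(id)s.%(ext)s",  # one file per video id
        "writeinfojson": True,  # save title, uploader, etc. alongside the video
        "quiet": True,
    }

def download_videos(urls: list[str], out_dir: str = "videos") -> None:
    """Download each URL in `urls` into `out_dir`."""
    from yt_dlp import YoutubeDL  # imported lazily; requires yt-dlp installed
    with YoutubeDL(download_options(out_dir)) as ydl:
        ydl.download(urls)
```

Saving the metadata JSON next to each video keeps the challenge name and source platform available for the later labeling step.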

Figure 1. Distribution of collected challenge videos

The model

After a manual and partly automated process of labeling frames and challenges as harmful, ambiguous, or not harmful, the team tested several computer vision models. As the outcome of this eight-week project, the best-fitting model was able to detect whether a video is harmful using the labeled data set. The next steps will be to improve the model's performance and extend its applicability to a broader set of conditions.
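
Since the model classifies individual frames, a video-level verdict requires aggregating the per-frame predictions. The write-up does not describe the team's aggregation rule; the sketch below shows one simple, hypothetical policy, where a video is flagged as harmful once the share of harmful frames passes a threshold (the 20% value is illustrative only).

```python
from collections import Counter

def video_label(frame_labels: list[str], harmful_threshold: float = 0.2) -> str:
    """Aggregate per-frame labels ('harmful' / 'ambiguous' / 'not_harmful')
    into a single video-level label.

    Hypothetical policy, not the project's documented method: flag the
    video as harmful if at least `harmful_threshold` of its sampled
    frames were classified as harmful.
    """
    if not frame_labels:
        return "not_harmful"
    counts = Counter(frame_labels)
    if counts["harmful"] / len(frame_labels) >= harmful_threshold:
        return "harmful"
    if counts["ambiguous"] > 0:
        return "ambiguous"
    return "not_harmful"
```

A threshold-based rule like this makes the detector robust to a few misclassified frames, at the cost of a tunable sensitivity parameter.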
