Violence Detection Between Children and Caregivers Using Computer Vision

A team of 50 Omdena AI changemakers collaborated with Israel-based startup EyeKnow AI to build a deep learning computer vision model for violence detection. The model can help not only detect but, in the future, also prevent violent behaviour by caregivers toward children.

 

The problem

Child maltreatment presents a substantial public health concern. Estimates using Child Protective Service (CPS) reports from the National Child Abuse and Neglect Data System (NCANDS) suggest that 678,810 youth were subjected to maltreatment in 2012, with 18% of these experiencing physical abuse. Additionally, a large proportion of cases go undetected by CPS, suggesting that even more youth are likely subjected to abusive or neglectful behavior. Most seriously, maltreatment was responsible for an estimated 1,640 youth fatalities in 2012.

 

The project outcomes 

The data

The team worked with two datasets. The first is a caregiver-to-senior violence dataset made up of 500 clips sourced entirely from YouTube. The second comprises 500 clips of caregiver-to-child aggression/violence, drawn from YouTube clips and unique data obtained through EyeKnow’s partners.

 

The machine learning models

The contributors to the challenge defined several approaches to build a model that detects violent or otherwise relevant interaction between the entities (caregivers, elderly, children). The first step was to locate these entities, which the team did using object detection.

The team applied frame-level entity annotation to label the caregivers, children, and elderly. After this step, the collaborators trained an object detection model and implemented an ML pipeline. This pipeline ingests video recordings from CCTV or other sources and outputs frame-level information about the number and type of entities in each frame. In addition, the pipeline includes a bounding box overlap analysis that flags frames potentially containing high-intensity (potentially violent) interaction.
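
The overlap heuristic is not described in detail; the sketch below assumes detections arrive as (x1, y1, x2, y2) boxes with entity labels and uses intersection-over-union (IoU) as the overlap measure.

```python
# Minimal sketch of the bounding-box overlap analysis (the exact heuristic is an
# assumption): flag frames where a caregiver box and a child/elderly box overlap
# more than a chosen IoU threshold.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def flag_frame(detections, threshold=0.3):
    """detections: list of (label, box) pairs for one frame.
    Returns True if a caregiver box strongly overlaps a child/elderly box."""
    caregivers = [box for label, box in detections if label == "caregiver"]
    vulnerable = [box for label, box in detections if label in ("child", "elderly")]
    return any(iou(c, v) >= threshold for c in caregivers for v in vulnerable)

# Example: one frame with two strongly overlapping boxes gets flagged for review.
frame = [("caregiver", (100, 80, 260, 320)), ("child", (180, 120, 300, 340))]
print(flag_frame(frame))  # True
```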

Alongside this pipeline, the team applied video classification modeling using deep neural networks. This approach combined pre-trained models for feature extraction with sequence modeling to capture temporal relationships.
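
The exact backbone and sequence model are not named, so the following is a minimal sketch of this pattern, assuming a pre-trained ResNet-18 as the per-frame feature extractor and an LSTM over the frame features (PyTorch).

```python
# Sketch of the video-classification approach (backbone and sequence model are
# assumptions): per-frame features from a pre-trained CNN, followed by an LSTM
# that models the temporal relationship across frames.
import torch
import torch.nn as nn
from torchvision import models

class ViolenceClassifier(nn.Module):
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.feature_dim = backbone.fc.in_features   # 512 for ResNet-18
        backbone.fc = nn.Identity()                  # keep pooled features only
        self.backbone = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (hidden, _) = self.lstm(feats)            # last hidden state per clip
        return self.head(hidden[-1])

# Example: a batch of 2 clips with 16 frames each, scored as violent / non-violent.
model = ViolenceClassifier()
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```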

All the developed models and approaches run in a Python application. The application is highly modular and serves multiple purposes: by modifying a configuration file (a parameters JSON file), the user can train the component models or run inference and process video files.
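
The configuration schema is not published, so the following is a hypothetical sketch of how such a parameters JSON file could drive the application; the keys and helper functions are illustrative, not the project's actual ones.

```python
# Hypothetical configuration-driven entry point: one params JSON file selects
# whether the application trains a component model or runs the inference pipeline.
import json

def train_model(params):
    # Placeholder for component-model training (object detector, classifier, ...).
    print("training with", params)

def process_videos(params):
    # Placeholder for the inference pipeline over a list of video files.
    print("processing", params.get("input_videos", []))

def main(config_path="params.json"):
    with open(config_path) as f:
        params = json.load(f)

    mode = params.get("mode", "inference")   # illustrative key name
    if mode == "train":
        train_model(params)
    elif mode == "inference":
        process_videos(params)
    else:
        raise ValueError(f"Unknown mode: {mode}")

if __name__ == "__main__":
    main()
```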

 

Digitizing Floor Plan Layouts using AI

50 AI engineers collaborated in this high-impact two-month innovation project to identify and construct digital objects from floor plans using computer vision.

 

The problem

Recognizing floor plan elements in a layout requires manual labor to draw the different elements over the image. The goal of this project was to improve the efficiency of this manual effort by automatically identifying the relevant types of objects using state-of-the-art deep learning and computer vision approaches.

 

The project outcomes

The team built computer vision models to digitize floor plans from architectural blueprints. The team successfully applied the following methods:

  • Object detection
  • Image segmentation using Mask R-CNN (see the sketch after this list)
  • Improved optical character recognition (OCR) using the provided datasets
  • Identifying languages other than English on floor plans
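
The segmentation component is named (Mask R-CNN) but not shown; below is a minimal inference sketch using torchvision's stock COCO-pretrained Mask R-CNN weights. The project's model would have been fine-tuned on floor-plan classes (walls, doors, windows, etc.), and the input file name is hypothetical.

```python
# Minimal Mask R-CNN inference sketch with torchvision; only illustrates the
# segmentation interface, not the project's fine-tuned floor-plan model.
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("floor_plan.png", mode=ImageReadMode.RGB)  # hypothetical bitmap
with torch.no_grad():
    prediction = model([preprocess(image)])[0]

# Keep confident detections: bounding boxes, per-pixel masks, and class labels.
keep = prediction["scores"] > 0.5
print(prediction["boxes"][keep].shape, prediction["masks"][keep].shape)
```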

 

Data

Archilyse provided a large set of bitmap images of different sizes and dimensions, along with manually drawn bounding boxes for the relevant element types. Examples include walls, columns, railings, kitchen furniture, showers, windows, doors, bathtubs, bedroom areas, and kitchen areas.
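
The OCR component is not detailed beyond "improved OCR" and multi-language support; a minimal sketch, assuming Tesseract via pytesseract with the relevant language packs installed (the file name and language choice are illustrative):

```python
# Minimal OCR sketch (pytesseract is an assumption; the project's OCR component
# is not specified in detail). Tesseract language packs let the same call read
# floor-plan labels in languages other than English, e.g. German.
from PIL import Image
import pytesseract

image = Image.open("floor_plan.png")  # hypothetical input bitmap
text = pytesseract.image_to_string(image, lang="eng+deu")
print(text)
```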

 

Smart Electric Vehicle (EV) Charging Network Management Using Machine Learning

The Energy Tech Hub and the Centre for New Energy Technologies collaborated with a team of Omdena AI engineers to analyze EV charging data and apply machine learning to optimize Electric Vehicle (EV) Distributed Network Service planning.

 

The outputs from this project can help to derive insights on when and how much EVs get recharged as well as showing where the consumption peaks are. Knowing these charging patterns can inform the system operators to manage their networks and optimize the efficiency of EV charging operations. 

 

The problem

The increasing uptake of electric vehicles and their charging poses challenges to today’s energy networks, e.g. unexpected peak load and voltage problems in the distribution network. There is a growing interest in understanding how and when EVs are charged in order to inform the design of charging incentives and energy management schemes. However, it is challenging to keep track of EV charging at a large scale in a cost-effective way. Smart meter data opens the door to using AI and machine learning techniques to detect EV charging, providing an effective yet non-intrusive way for Distributed Network Service Providers (DNSPs) to know how EVs are charged on their networks and to inform network planning, upgrades, and operations.

 

The project outcomes

 

Exploratory Data Analysis to understand consumption patterns

 

The overall approach taken in this project was as follows:

1. Data Preparation: A good portion of the time was spent on data preparation to make a usable dataset for modeling. 

2. Exploratory Data Analysis (EDA): This step was focused on understanding consumption patterns for different types of users (EV, producers, peak consumers, etc). 

 

The EDA process helped to derive insights on what information is most relevant for the modeling part.
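
The smart-meter schema is not shown, so the sketch below uses hypothetical column names to illustrate how an average hourly profile like Figure 1 can be computed with pandas.

```python
# Sketch of the EDA behind Figure 1 (file and column names are hypothetical):
# average hourly consumption profile for EV vs. non-EV users from smart-meter
# readings.
import pandas as pd

# Expected columns: meter_id, timestamp, kwh, has_ev (True/False)
df = pd.read_csv("smart_meter_readings.csv", parse_dates=["timestamp"])

df["hour"] = df["timestamp"].dt.hour
profile = df.groupby(["has_ev", "hour"])["kwh"].mean().unstack(level=0)
print(profile.head())   # one average consumption curve per user group

# profile.plot() would reproduce a chart like Figure 1.
```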

 

Figure 1: Average hourly consumption for one day for EV and non-EV users

 

AI modeling to detect EV charging patterns

The team implemented a machine learning clustering algorithm that groups similar consumption timestamps from the smart meters. Initially, the team chose three clusters, meaning that each time slot (for every smart meter) is assigned to one of three buckets: low, medium, or high. Low indicates low consumption, medium indicates moderate consumption that likely comes from non-EV appliances, and high indicates potential EV charging.
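
The specific clustering algorithm is not named; a minimal sketch of this bucketing step, assuming scikit-learn's KMeans on the per-slot consumption values (the input data here is synthetic).

```python
# Sketch of the three-bucket clustering step (KMeans is an assumption): each
# smart-meter time slot is assigned to a low / medium / high consumption bucket.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical input: one row per (meter, time slot) with the metered kWh.
readings = pd.DataFrame({"kwh": np.random.exponential(scale=1.5, size=1000)})

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
readings["cluster"] = kmeans.fit_predict(readings[["kwh"]])

# Order clusters by centroid so labels read low < medium < high;
# "high" slots are candidates for EV charging.
order = np.argsort(kmeans.cluster_centers_.ravel())
labels = {cluster: name for cluster, name in zip(order, ["low", "medium", "high"])}
readings["bucket"] = readings["cluster"].map(labels)
print(readings["bucket"].value_counts())
```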


The model and insights were built into an Excel tool and dashboard.

 

Detecting Harmful Video Content and Children Behavior through Computer Vision

US-based startup Preamble collaborated with 50 Omdena AI engineers and data scientists to develop a cost-effective solution for detecting harmful situations in online video challenges. Using computer vision, the team was able to detect whether a video is harmful or not.

 

The results from this project are intended as a baseline to help Preamble build solutions for safer online platforms. 

 

The problem

Children are more susceptible to acting impulsively and participating in online internet challenges. Internet challenges can encourage kids to replicate unsafe behaviors in pursuit of engagement and visibility on social media. Some of these outrageous challenges have led to severe bodily harm and even death. To protect children from these types of dangerous ideas and peer pressure, the team built a model to filter out this content.


Source: AsiaOne

 

Some past internet challenges that have been dangerous to participants, especially children:

  • Blackout challenge 
  • Eating Tide detergent pods
  • Cinnamon challenge (can cause scarring and inflammation)
  • Super gluing their lips together 
  • Power outlet challenge

 

The project outcomes

 

The process

The team divided the tasks across contributors according to their expertise, following this process:

  • Select and download videos with harmful content from social media platforms
  • Extract frames (images) from the videos at regular intervals (see the sketch after this list)
  • Label each image as harmful, ambiguous, or not harmful
  • Train an image classification model on the labeled frames
  • Evaluate the image classification model
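
As referenced above, a minimal sketch of the frame-extraction step, assuming OpenCV and a fixed sampling interval; the file paths are illustrative and the output directory is assumed to exist.

```python
# Frame-extraction sketch (cv2 is an assumption): save roughly one frame every
# `interval_s` seconds from a video file.
import cv2

def extract_frames(video_path, out_prefix, interval_s=1.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unknown
    step = max(1, int(round(fps * interval_s)))
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example: roughly one frame per second from a downloaded challenge video.
print(extract_frames("challenge_clip.mp4", "frames/clip01"))
```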

 

The data

The bulk of the teamwork was concentrated on data collection and labeling. The team developed scripts to facilitate video and metadata retrieval from social media platforms. Specifically, existing Python libraries were used to download videos from YouTube, VK, and TikTok. Through this process, the team manually collected more than 240 challenge videos.
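
The specific libraries are not named; as one example, a hedged sketch of the YouTube download step using pytube (the URL is a placeholder, and similar community libraries exist for VK and TikTok).

```python
# YouTube download sketch (pytube is an assumption; the clip URL is hypothetical).
from pytube import YouTube

def download_video(url, out_dir="videos"):
    yt = YouTube(url)
    stream = (yt.streams
                .filter(progressive=True, file_extension="mp4")
                .order_by("resolution")
                .desc()
                .first())
    return stream.download(output_path=out_dir)

path = download_video("https://www.youtube.com/watch?v=EXAMPLE_ID")
print(path)
```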

Figure 1: Challenge distribution

 

The model

After a manual and partly automated process of labeling images and challenges as harmful, ambiguous, or not harmful, the team tested several computer vision models. As an outcome of this eight-week project, the best-fit model was able to detect whether a video is harmful or not using the labeled dataset. The next steps will be to improve the model’s performance and extend its applicability to a broader set of conditions.
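
The best-fit model is not identified, so the following is only an illustrative transfer-learning sketch: fine-tuning the head of a pre-trained ResNet-18 on the three frame labels. The directory layout is hypothetical.

```python
# Frame classifier sketch (the project's actual model is not named here):
# transfer learning on a pre-trained ResNet with three classes:
# harmful, ambiguous, not harmful.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: labeled_frames/{harmful,ambiguous,not_harmful}/*.jpg
train_set = datasets.ImageFolder("labeled_frames", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 3)   # new head for the three labels

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```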

 

Modeling Triggers and Symptoms of Hashimoto’s Disease

An Omdena team of 30 AI changemakers collaborated with the rising impact startup Hashiona to better understand the relationship between triggers and symptoms of Hashimoto’s disease. The team extracted user data from Hashiona’s app, consolidated the data, did extensive data exploration, and applied statistical analysis and clustering models to triggers and symptoms of Hashimoto’s disease. The models were deployed and visualized in a Streamlit app.

 

Results from this project are intended to be used in Hashiona’s app to help users cope better with Hashimoto’s disease.

 

The problem

Hashimoto’s disease is an autoimmune disorder that causes hypothyroidism. Hashimoto’s and other common autoimmune diseases are incurable and hard to treat. Hashimoto’s is associated with at least 45 medical symptoms, which are hard to detect. As in the treatment of diabetes, patients have to implement comprehensive lifestyle changes to mitigate the autoimmune response in their body and feel better (i.e. go into remission). Patients with Hashimoto’s disease may experience persisting symptoms despite normal hormone levels.

Hashiona is the first mobile application dedicated to people with Hashimoto’s disease and/or hypothyroidism. Hashiona’s user base exceeds 10,000, and their approach has been validated by Stanford, Draper University, MIT, and more.

 

Source: Hashiona

 

 

The project outcomes

 

The data

The first step was to apply Exploratory Data Analysis (EDA) to discover the hidden patterns and anomalies in the entire dataset. Next, the team divided the data into subsets for analysis:

  • Profiles who suffer from Hashimoto’s
  • Profiles who suffer from hypothyroidism
  • Profiles with hypothyroidism symptoms
  • Profiles with any triggers, or mind or body symptoms

 

Next, top triggers and symptoms were identified. Examples include:

  • Overtraining
  • Cold
  • Caffeine consumption
  • Relationship with family
  • Heat

 

The team identified user profiles who either suffer from Hashimoto’s or hypothyroidism and have recorded their hypothyroidism symptoms. Among these profiles, correlation analysis was performed between triggers and body, mind, and thyroid symptoms.

Next, clustering analysis and modeling were applied to show the most frequently occurring triggers for the various clusters. After grouping the data by cluster, the share of each cluster’s population affected by a given trigger or symptom was calculated.
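
The feature set and cluster count are not given; a minimal sketch of this per-cluster summary, assuming one-hot trigger columns and scikit-learn KMeans (the data and column names are illustrative, not the project's).

```python
# Per-cluster trigger/symptom summary sketch: assign each profile to a cluster,
# then compute the share of the cluster reporting each trigger.
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical one-hot data: one row per profile, 1 if a trigger was logged.
profiles = pd.DataFrame({
    "overtraining": [1, 0, 1, 0, 1, 0],
    "cold":         [0, 1, 1, 0, 0, 1],
    "caffeine":     [1, 1, 0, 0, 1, 0],
    "heat":         [0, 0, 1, 1, 0, 1],
})

profiles["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Mean of a 0/1 column per cluster = fraction of that cluster affected.
prevalence = profiles.groupby("cluster").mean()
print(prevalence)
```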

 

The models and dashboard

To visualize the models and outcomes, the team chose Streamlit. Streamlit allows users to quickly build highly interactive web applications and is commonly used for deploying machine learning models.

For the demo, static charts have been implemented.
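
A minimal sketch of such a Streamlit page with a static chart (the numbers shown are illustrative, not the project’s results); it would be launched with `streamlit run app.py`.

```python
# Minimal Streamlit demo sketch: a static bar chart of trigger prevalence per cluster.
import pandas as pd
import streamlit as st

st.title("Hashimoto's triggers and symptoms")

# Illustrative prevalence values, one row per cluster.
prevalence = pd.DataFrame(
    {"overtraining": [0.6, 0.2], "cold": [0.3, 0.7], "caffeine": [0.5, 0.4]},
    index=["cluster 0", "cluster 1"],
)
st.subheader("Share of profiles reporting each trigger")
st.bar_chart(prevalence)
```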

 

Screenshot: Streamlit Dashboard