Identifying Malnourished Children through Computer Vision
In this real-world project with Child Growth Monitor, 61 technology changemakers have been building a machine learning solution for identifying malnutrition in children, a problem affecting more than 200 million children.
Hunger is one of the most pressing global social challenges of our time. One of its by-products, malnutrition, is the leading global cause of mortality in children under age 5. Around 200 million children under age 5 worldwide suffer from malnutrition, and ~3 million of them die annually. These deaths are preventable with timely diagnosis and treatment of malnutrition. Child Growth Monitor (CGM) is a game-changer in this space: it replaces traditional methods of anthropometric measurement, which are complex, slow, and expensive, and frequently result in poor data and incorrect assessments of the situation. CGM predicts the height, weight, and mid-upper arm circumference (MUAC) of children under age 5 using its open-source, state-of-the-art neural network algorithms to determine whether a child is malnourished.
The project goals
The goal of this challenge is to increase the accuracy of CGM's neural networks' predictions, so that 90% of children get a height measurement with less than 1 cm of error.
Demo of Child Growth Monitor's application
Predicting the height of a child
Prediction on manually cleaned and on real-world datasets
Evaluation of the results into 5 max-error categories (<0.2 cm, <0.4 cm, <0.6 cm, <1 cm, and >1 cm = rejected).
Currently, CGM produces good-enough results for ~60% of children after manual cleaning.
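The bucketing described above can be sketched as follows. This is a minimal illustration, not CGM's actual evaluation code; the function and bucket names are our own.

```python
from bisect import bisect_right
from collections import Counter

# Upper bounds (cm) of the accuracy buckets; errors of 1 cm or more
# fall into the "rejected" bucket.
BUCKETS = [0.2, 0.4, 0.6, 1.0]
LABELS = ["<0.2cm", "<0.4cm", "<0.6cm", "<1cm", "rejected"]

def categorize(predicted_cm, actual_cm):
    """Map a single height prediction to its max-error category."""
    error = abs(predicted_cm - actual_cm)
    return LABELS[bisect_right(BUCKETS, error)]

def evaluate(pairs):
    """Count predictions per category and return the fraction of
    children whose measurement beats the 1 cm rejection threshold."""
    counts = Counter(categorize(p, a) for p, a in pairs)
    accepted = sum(n for label, n in counts.items() if label != "rejected")
    return counts, accepted / len(pairs)
```

With this, the project goal reads as: `evaluate` over the real-world dataset should return an accepted fraction of at least 0.9.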
Selecting scan artifacts
One picture / point cloud is called an artifact (analogy: a movie consists of multiple frames).
Input files in JPG, PCD, or depthmap format for result generation
Each child is scanned in three steps: front, back, and 360° view of the child. The scanning process is continuous and produces ~6-12 JPGs (RGBs) and exactly three depthmaps or point clouds per second for each step.
The challenge is to determine which of those scan artifacts should be used as input for our result generation to reliably generate accurate results.
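One plausible shape for this selection step is sketched below: group artifacts by scan step and keep the top-k per step according to a quality score. The `Artifact` type, the score, and the selection heuristic are all our assumptions; in practice the score would come from checks such as blur detection or pose-based visibility.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Artifact:
    path: str      # e.g. a JPG, PCD, or depthmap file
    step: str      # "front", "back", or "360"
    score: float   # assumed quality score in [0, 1], higher is better

def select_artifacts(artifacts, k=3):
    """Keep the k highest-scoring artifacts per scan step, so result
    generation only sees the most reliable frames of each pass."""
    by_step = defaultdict(list)
    for artifact in artifacts:
        by_step[artifact.step].append(artifact)
    return {step: sorted(items, key=lambda a: a.score, reverse=True)[:k]
            for step, items in by_step.items()}
```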
Ensuring reliability on real-world data
How can we ensure that no child gets a measurement result with an error of more than 1 cm? We are developing technical and organizational measures to ensure the safety of our solution. In clinical trials and impact studies in 5 countries starting in 2021, we want to prove that no child will receive an incorrect measurement, or worse, an incorrect diagnosis, from the product.
We want to collaborate on technical measures that will ensure the following:
Tested with or without manual inspection and cleaning
Automatic detection of hard error cases such as:
A child not fully visible
Lighting conditions too poor
A confidence interval for every measurement prediction that correlates with the error, and a statistically sound estimate of the maximum error, based on:
Age, predicted height, and weight of the child, and the training targets that the model has seen
Region of measurement
Pose predictions and visibility of the child’s body parts
Selection for manual inspection of all scans whose predictions don't satisfy the error margins
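The gating logic above can be sketched as a simple routing function. All names here are hypothetical stand-ins: the hard-error checks and the confidence half-width would come from the detection and calibration steps listed above.

```python
from dataclasses import dataclass

MAX_ERROR_CM = 1.0  # rejection threshold from the evaluation criteria

@dataclass
class Prediction:
    height_cm: float
    ci_halfwidth_cm: float     # assumed confidence-interval half-width
    child_fully_visible: bool  # from automatic hard-error detection
    lighting_ok: bool          # from automatic hard-error detection

def route(pred):
    """Return 'accept' or 'manual_inspection' for one scan: accept only
    if the hard-error checks pass and the confidence interval stays
    inside the 1 cm error margin."""
    if not (pred.child_fully_visible and pred.lighting_ok):
        return "manual_inspection"
    if pred.ci_halfwidth_cm >= MAX_ERROR_CM:
        return "manual_inspection"
    return "accept"
```

The design intent is fail-safe: any scan the model cannot vouch for within the error margin is routed to a human rather than reported to the field worker as a measurement.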