Detecting Pathologies Through Computer Vision in Ultrasound
Envisionit Deep AI is an innovative medical technology company that uses Artificial Intelligence to transform medical imaging diagnosis and democratize access to healthcare. In this two-month Omdena Challenge, 50 technology changemakers have been building an ultrasound solution that detects the type and location of different pathologies. The solution works with 2D images and can also process a video stream.
The problem
Healthcare services in Africa are under-resourced and overused. Africa is the youngest continent in the world, and pneumonia is the leading cause of death in children younger than five. Breast cancer is the most frequently diagnosed cancer among women, affecting over two million women worldwide each year and causing the greatest number of cancer-related deaths among women. While breast cancer rates are higher among women in more developed regions, rates are increasing in nearly every region globally, with some of the most rapidly rising incidence rates found in African countries.
Ultrasound is a relatively inexpensive and portable modality for diagnosing life-threatening diseases and for point-of-care use. The procedure is non-invasive and quickly gives doctors the information they need to make a diagnosis. Sonography machines keep getting smaller, making them increasingly accessible to developing countries.
An AI solution integrated with a mobile ultrasound tool aims to achieve radiologist-level diagnostic performance on ultrasound images. This will help deliver impactful and feasible medical solutions in countries facing significant resource challenges.
The project outcomes
The AI solution is split into the following components:
1. Image preprocessing and normalization
Envisionit Deep AI has access to several ultrasound datasets that will be provided. These datasets come in a number of different formats, resolutions, and quality settings, as will also be the case in production environments: different practices and/or hospitals use different ultrasound equipment that stores images in a variety of formats and quality settings. The most common storage format is DICOM, archived in a practice/hospital PACS (picture archiving and communication system). Envisionit Deep AI already has the ability to interface with PACS platforms to retrieve and exchange images. However, for this project, an additional image normalization routine has been developed to ensure the consistency of images across the training, testing, and production datasets.
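The snippet below is a minimal sketch of what such a normalization step can look like, assuming pydicom and OpenCV are available; the function name, target resolution, and grayscale handling are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
import pydicom
import cv2

def normalize_dicom(path, target_size=(512, 512)):
    """Load a DICOM ultrasound frame and normalize it for training.

    Illustrative only: the real routine, formats, and target
    resolution used in the project may differ.
    """
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)

    # Collapse colour Doppler / RGB frames to a single grey channel.
    if pixels.ndim == 3:
        pixels = pixels.mean(axis=-1)

    # Rescale intensities to [0, 1] regardless of the scanner's bit depth.
    lo, hi = pixels.min(), pixels.max()
    pixels = (pixels - lo) / (hi - lo + 1e-8)

    # Resize so every image in the dataset shares the same resolution.
    return cv2.resize(pixels, target_size, interpolation=cv2.INTER_AREA)
```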
2. Model training
Although the title of this component refers to training an AI model, training must be preceded by algorithm selection and design. The chosen algorithm must be capable of fulfilling the following requirements (a hedged detector sketch follows the list):
- Identification of pathologies from a set of pre-defined pathologies on a given image/video frame. Initially, we'd look at 10+ pathologies/labels, but the algorithm should be capable of identifying more.
- Identification of the location of the pathologies identified above – object detection rather than classification alone.
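As an illustration of the object-detection requirement, the sketch below adapts a standard detector (torchvision's Faster R-CNN) to a configurable number of pathology labels. The library choice and label count are assumptions made for this example; the challenge did not prescribe a specific architecture here.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_PATHOLOGIES = 10  # assumption: 10+ labels, excluding background

def build_detector(num_classes=NUM_PATHOLOGIES + 1):
    """Return a detector that predicts both pathology class and location."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the classification head so the detector outputs our label set.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# During training the model takes images plus target boxes/labels and returns
# a dict of losses; at inference it returns boxes, labels, and scores per image.
```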
3. Model validation, including field specialist review
This step involves all the relevant tooling to extract model performance metrics, as well as a UI for a Field Specialist (radiologist) to perform an independent model validation with their own dataset; this dataset may or may not overlap with the test set used during model training.
The aim of this validation step is to ensure adequate model performance. Envisionit Deep AI has set a performance threshold of 95% accuracy or above before a model is considered ready for field pilots and eventual deployment. Only models with a combined (automated and Field Specialist validation) accuracy of 98% or above are considered for production environments.
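A minimal sketch of such an acceptance gate is shown below, assuming per-image labels and predictions are already available and that the combined score is a simple average; the actual metrics, threshold wiring, and Field Specialist UI are part of the project's internal tooling and are not reproduced here.

```python
PILOT_THRESHOLD = 0.95       # ready for field pilots
PRODUCTION_THRESHOLD = 0.98  # combined automated + specialist accuracy

def accuracy(predictions, ground_truth):
    """Fraction of images whose predicted labels match the reference labels."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def release_stage(automated_acc, specialist_acc):
    """Map validation accuracies to a deployment decision."""
    combined = (automated_acc + specialist_acc) / 2  # assumption: simple average
    if combined >= PRODUCTION_THRESHOLD:
        return "production"
    if automated_acc >= PILOT_THRESHOLD:
        return "field pilot"
    return "needs further training"
```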
4. Field deployment, including collecting concordance/discordance feedback
With all AI models deployed by Envisionit Deep AI, users are given the ability to provide concordance/discordance feedback. These statistics are collected not merely as status feedback (Agree/Disagree) but also by allowing users to augment AI predictions: adjusting the locations of identified pathologies as well as adding or removing pathologies from the identified set.
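As an illustration of what such a feedback record might contain, the dataclasses below capture the agree/disagree status together with the reviewer's adjusted boxes and labels. The field names are assumptions for this sketch, not Envisionit Deep AI's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PathologyBox:
    label: str          # e.g. "pneumonia"
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class FeedbackRecord:
    study_id: str
    agrees: bool                                              # concordance / discordance flag
    predicted: List[PathologyBox] = field(default_factory=list)   # model output shown to the user
    corrected: List[PathologyBox] = field(default_factory=list)   # user-adjusted pathology set
```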
Envisionit Deep AI predominantly uses NVIDIA-based GPUs, and all cloud and on-premise Docker hosts expose NVIDIA GPU support to the containers themselves.
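As a quick sanity check, a snippet like the one below can verify from inside a training container that the GPU is actually visible; it assumes PyTorch is installed and is not part of the project's deployment tooling.

```python
import torch

# Confirm the container was started with GPU support (e.g. `--gpus all`).
if torch.cuda.is_available():
    print(f"CUDA devices visible: {torch.cuda.device_count()}")
    print(f"Using: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU visible inside the container; training will fall back to CPU.")
```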
5. Concordance/discordance field specialist review
This is an important step to ensure that the model is not fed incorrect data for further training. Concordance/discordance feedback is therefore validated by a radiologist before it is added to the training set (i.e., uploaded to the AWS S3 bucket that the training containers use).
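A minimal sketch of that final upload step, assuming boto3 is available; the bucket name and key layout are placeholders chosen purely for illustration.

```python
import json
import boto3

s3 = boto3.client("s3")
TRAINING_BUCKET = "envisionit-training-data"  # placeholder name, not the real bucket

def publish_validated_feedback(record: dict, study_id: str):
    """Upload radiologist-approved feedback so the next training run can use it."""
    key = f"validated-feedback/{study_id}.json"
    s3.put_object(
        Bucket=TRAINING_BUCKET,
        Key=key,
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )
```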
A certain level of automation should be available to ensure that Envisionit Deep AI internal staff, as well as any Field Specialist consultants, are not overwhelmed with feedback validation.
This includes automated quorum testing using a number of different models (different training stages, different input datasets, perhaps limited to a specific type of image, image quality, or image content, i.e., a specific body part). Only images whose discordance feedback deviates significantly from the expected norm should be forwarded for human validation.
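A sketch of such a quorum filter is shown below: several models vote on an image, and only feedback that contradicts a clear model consensus is queued for a human reviewer. The voting rule and threshold are illustrative assumptions, not the project's configured values.

```python
from collections import Counter
from typing import List

def needs_human_review(user_label: str, model_labels: List[str], quorum: float = 0.75) -> bool:
    """Forward feedback only when it contradicts a strong model consensus.

    model_labels holds the top prediction of each model in the quorum
    (different training stages / datasets); the 0.75 threshold is an
    assumption for this sketch.
    """
    if not model_labels:
        return True  # no consensus available, ask a human
    top_label, votes = Counter(model_labels).most_common(1)[0]
    consensus = votes / len(model_labels)
    # Feedback that agrees with a strong consensus needs no manual check.
    if consensus >= quorum and user_label == top_label:
        return False
    return True
```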
Your benefits
Join a thriving AI community in 88 countries
Connect with changemakers from around the world
Address a real-world problem with your skills
Apply your skill-set while setting the stage for a meaningful career
Requirements
Good English
A good/very good grasp of computer science and/or mathematics
Student, (aspiring) data scientist, (senior) ML engineer, data engineer, or domain expert (no need for AI expertise)
Programming experience with C/C++, C#, Java, Python, JavaScript, or similar
Understanding of ML and Deep learning algorithms