Developing a Multimodal Model for Chest Disease Detection Using Radiology Images and Text Data
Challenge Background
In this project, collaborators will develop a multimodal model that combines chest X-rays with electronic health record (EHR)/clinical data to detect pneumonia and tuberculosis. The goal is to leverage both the visual information in the X-rays and the textual information in the EHR/clinical data to improve the accuracy of disease detection. Multimodal data is now commonly collected and used in diagnosing diseases; the technique is relatively new, and this project will give participants a feel for the methodology.
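One common way to combine the two modalities is late fusion: train a separate model per modality, then merge their output probabilities. The sketch below is a minimal, illustrative example of that idea only; the function names, weights, and threshold are assumptions for demonstration, not the project's actual code.

```python
# Hypothetical late-fusion sketch: combine an X-ray model's probability
# with an EHR/clinical-text model's probability via a weighted average.
# All names and numbers here are illustrative assumptions.

def late_fusion(image_prob: float, text_prob: float,
                image_weight: float = 0.6) -> float:
    """Weighted average of the two models' disease probabilities."""
    if not 0.0 <= image_weight <= 1.0:
        raise ValueError("image_weight must be in [0, 1]")
    return image_weight * image_prob + (1.0 - image_weight) * text_prob

def diagnose(image_prob: float, text_prob: float,
             threshold: float = 0.5) -> str:
    """Turn the fused probability into a simple positive/negative label."""
    fused = late_fusion(image_prob, text_prob)
    return "positive" if fused >= threshold else "negative"

# Example: the image model is confident, the text model less so.
# fused = 0.6 * 0.9 + 0.4 * 0.4 = 0.70
print(diagnose(0.9, 0.4))  # prints "positive"
```

In practice, participants would replace these scalar inputs with the outputs of a CNN (for the X-rays) and an NLP model (for the clinical text), and could also explore early fusion, where the two models' feature vectors are concatenated before a final classifier.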
The Problem
Chest diseases are prevalent in many countries. Combining two modalities can yield more accurate diagnoses and richer contextual information, leading to better patient outcomes.
Goal of the Project
Learn what multimodal models are and how to build them, and become more familiar with NLP and/or computer vision. This project touches on both topics and more.
Project Timeline
Project Setup and Data Exploration
NLP and Computer Vision: Basics to Advanced
CNNs for Computer Vision
Natural Language Processing (NLP) for EHR/Clinical Data
Multimodal Learning Basics
Advanced Multimodal Learning and Project Wrap-Up
What you'll learn
NLP, Computer Vision, Multimodal Learning, Leadership & Public Speaking Skills, Teamwork and Collaboration
First Omdena Local Chapter Project?
Beginner-friendly, but also welcomes experts
Education-focused
Duration: 4 to 8 weeks
Open-source
Your Benefits
Address a significant real-world problem with your skills
Build your project portfolio
Access paid projects (as an Omdena Top Talent)
Get hired at top organizations
Requirements
Good English
Suitable for AI/data science beginners as well as more senior collaborators
Learning mindset
Application Form
This Challenge is hosted by:
Become an Omdena Collaborator

