In this project, collaborators will develop a multimodal model that uses chest X-rays together with electronic health records (EHR)/clinical data to detect pneumonia and tuberculosis. The goal is to leverage both the visual information in X-rays and the textual information in EHR/clinical data to improve the accuracy of disease detection. Multimodal data is now routinely collected in clinical practice, but combining modalities for diagnosis is a relatively new technique, and this project will give participants hands-on experience with the methodology.
Chest diseases are prevalent worldwide, and combining the two modalities supports more accurate diagnosis and provides richer contextual information, leading to better patient outcomes.
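To illustrate the core idea of combining modalities, below is a minimal late-fusion sketch in plain Python. All names, weights, and probabilities here are hypothetical placeholders; in the actual project the two inputs would come from a trained CNN (X-ray branch) and an NLP model (EHR/clinical-text branch).

```python
# Late fusion: each modality produces its own disease probability,
# and the two are combined into a single fused prediction.
# (Hypothetical weights and thresholds, for illustration only.)

def fuse_predictions(xray_prob: float, ehr_prob: float,
                     xray_weight: float = 0.6) -> float:
    """Weighted average of the two modality probabilities (late fusion)."""
    return xray_weight * xray_prob + (1 - xray_weight) * ehr_prob

def diagnose(xray_prob: float, ehr_prob: float, threshold: float = 0.5) -> str:
    """Return a positive/negative call based on the fused probability."""
    fused = fuse_predictions(xray_prob, ehr_prob)
    return "positive" if fused >= threshold else "negative"

# Example: the X-ray branch is fairly confident and the EHR notes
# add supporting signal, so the fused score crosses the threshold.
print(diagnose(0.8, 0.6))  # fused = 0.6*0.8 + 0.4*0.6 = 0.72 -> "positive"
```

A real system would typically fuse learned feature vectors (e.g. concatenating CNN and text embeddings before a joint classifier) rather than final probabilities, but late fusion is the simplest way to see how two modalities contribute to one decision.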
Project Setup and Data Exploration
NLP and Computer Vision: Basics to Advanced
CNNs for Computer Vision
Natural Language Processing (NLP) for EHR/Clinical Data
Multimodal Learning and Project Wrap-Up
NLP, Computer Vision, Multimodal Learning, leadership and public speaking, and teamwork and collaboration