Building a Smart Assistant for the Deaf and Dumb Using Deep Learning

Local Chapter: Jordan Chapter

Coordinated by: Jordan

Status: Completed

Project Duration: 20 Apr 2023 - 09 Jun 2023

Open Source resources available from this project

Project background.

Deaf and mute individuals have used sign language as a means of communication for centuries. It is a visual language that combines hand gestures, facial expressions, and body language to convey meaning. Sign language is complex and nuanced, with its own grammar and syntax, and it is used by millions of people around the world.

There are many different sign languages in use throughout the world, each with its own characteristics and regional variations. Among the most widely used are American Sign Language (ASL), British Sign Language (BSL), and Australian Sign Language (Auslan). Each sign language has its own set of signs and gestures for conveying meaning, and many sign languages are not mutually intelligible.

Sign language has played an important role in the deaf community, providing a means of communication that is accessible to deaf and mute individuals. It has also been recognized as an official language in many countries around the world, including the United States, Canada, and New Zealand.

The problem.

Despite the importance of sign language in the deaf community, many hearing individuals are not familiar with the language and may struggle to communicate with deaf and mute individuals. This is where tools like the Deaf & Dumb Assistant come in, providing a means of bridging the communication gap between deaf and mute individuals and others who may not be familiar with sign language.

Project goals.

In this project, the Omdena Jordan Chapter team aims to develop a Deep Learning model trained to recognize sign language gestures and translate them into text. This involves feeding labeled data into the model, tweaking parameters, and iterating until the desired level of accuracy is reached. The project's primary goal is to recognize Arabic sign language gestures in real time and render them in text format.

With a duration of 8 weeks, this project aims to cover:

● Data Collection and Exploratory Data Analysis
● Preprocessing
● Feature Extraction
● Model Development and Training
● Model Evaluation
● App development
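The pipeline above (preprocess → extract features → train → evaluate) can be sketched end to end on synthetic data. This is a minimal, hedged illustration only: the synthetic "hand landmark" arrays, the three hypothetical Arabic letter labels, and the nearest-centroid classifier are all stand-ins for the project's actual data and Deep Learning model, which the summary does not detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hand-landmark data: 21 (x, y) points per sample.
# A real pipeline would obtain these from a hand detector (assumption).
def make_samples(center, n=40):
    return center + 0.05 * rng.standard_normal((n, 21, 2))

classes = ["alef", "baa", "taa"]  # hypothetical Arabic sign labels
centers = [rng.random((21, 2)) for _ in classes]
X = np.concatenate([make_samples(c) for c in centers])
y = np.repeat(np.arange(len(classes)), 40)

# Preprocessing: translate each sample so landmark 0 (the wrist) is the
# origin, then scale to unit norm -- removes position and size variation.
def preprocess(samples):
    s = samples - samples[:, :1, :]
    scale = np.linalg.norm(s.reshape(len(s), -1), axis=1)[:, None, None]
    return s / np.maximum(scale, 1e-8)

# Feature extraction: flatten normalized landmarks into one vector per sample.
def features(samples):
    return preprocess(samples).reshape(len(samples), -1)

# "Training": a nearest-centroid classifier (one mean vector per class),
# standing in for the Deep Learning model used in the actual project.
F = features(X)
centroids = np.stack([F[y == k].mean(axis=0) for k in range(len(classes))])

def predict(samples):
    d = np.linalg.norm(features(samples)[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1)

# Evaluation on the training set (a real project would hold out a test split).
acc = (predict(X) == y).mean()
print(f"accuracy: {acc:.2f}")
```

The same four stages apply whether the classifier is this toy centroid model or a trained neural network; only the "model development" step changes.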

Project plan.

  • Week 1


  • Week 2

    ● Data Collection and Exploratory Data Analysis

  • Week 3

    ● Feature Extraction

  • Week 4

    ● Model Development and Training

  • Week 5

● Model Evaluation

  • Week 6

● App development
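For the app-development stage, one recurring problem in real-time gesture recognition is that per-frame predictions flicker. A common remedy is to emit a symbol only when it wins a majority vote over a short sliding window. The sketch below is an illustrative assumption, not the team's documented design; the function name and window size are hypothetical.

```python
from collections import deque

def stream_to_text(frame_labels, window=5):
    """Debounce noisy per-frame gesture predictions into stable text.

    A label is emitted only when it holds a strict majority over the last
    `window` frames and differs from the previously emitted label.
    """
    recent = deque(maxlen=window)
    out = []
    for label in frame_labels:
        recent.append(label)
        if len(recent) == window:
            votes = list(recent)
            winner = max(set(votes), key=votes.count)
            if votes.count(winner) > window // 2 and (not out or out[-1] != winner):
                out.append(winner)
    return "".join(out)

# Noisy per-frame predictions: stray 'b' and 'a' frames are smoothed away.
frames = list("aaaaabaaabbbbbbcccccc")
print(stream_to_text(frames))
```

In the real app the labels would be the model's per-frame gesture classes and the output would be rendered as Arabic text on screen.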
