Projects / AI Innovation Challenge

Improving Driver Voice Assistance Using Machine Learning to Reduce Road Accidents

Challenge Completed!



Consenz has set a mission to empower drivers worldwide and dramatically reduce fatal road traffic accidents.

The problem

Each year, over 1.3 million people die in traffic accidents and many more are seriously injured. Over 90% of these accidents happen in developing countries, where modern driver support technology is far less accessible.

Enzo is an appealing, connected, and fully voice-assisted head-up display that gives each driver a safe and comfortable experience. As an affordable retrofit solution for any car, Consenz enables smart, interactive, and dynamic traffic planning: not only for the select few in new, connected cars, but for the entire existing car fleet.

With a few recent exceptions, the user experience of voice assistance in cars has left a lot to be desired. With the latest developments in voice technology, not least from the increasingly common use of home assistants, the time is right for voice assistance in cars. With an intuitive voice assistance experience that feels more like a normal conversation, and with a broad but accurate understanding of the driver's intents, many lives can be saved simply by removing the need for physical contact with mobile phones while driving.

The project outcomes

The goal was to use machine learning to develop Enzo, the Consenz driver assistant, into an ideal driver assistant that combines voice technology with visual input from a head-up display (HUD). Since voice is the only way to communicate with Enzo, each driver should find that speaking with Enzo and receiving instructions from him is intuitive and a good conversational experience.

This means that Enzo should be able to adapt to different driver behaviors and preferences in everything from how often he interacts to the balance between voice and visual output and how commands are given. The HUD comprises a System on Chip with capable processing power, four microphones, a loudspeaker, two cameras, and a screen for projection. Dedicated connectivity enables cloud computing, so the necessary learning and personalization can combine edge and cloud processing.

The accomplishments of this project include defining the project pipeline, building models for filtering out the assistant's voice when the driver and the assistant speak simultaneously, and developing models for music and restaurant recommendation. In addition, the team built models for driver mood and personality classification, as well as a conversational agent that recognizes a wide range of intents and entities.
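To illustrate what intent and entity recognition means here, the sketch below shows a minimal rule-based recognizer of the kind a conversational agent like Enzo's would replace with trained models. All intent names, patterns, and entities are hypothetical examples, not the project's actual implementation.

```python
import re

# Hypothetical intents and their trigger patterns (illustrative only).
INTENT_PATTERNS = {
    "play_music": re.compile(r"\b(play|put on)\b.*\b(song|music|track)\b"),
    "find_restaurant": re.compile(r"\b(find|book|recommend)\b.*\b(restaurant|food)\b"),
    "navigate": re.compile(r"\b(navigate|directions|route)\b"),
}

# Hypothetical entity pattern: a music genre mentioned in the utterance.
GENRE = re.compile(r"\b(jazz|rock|pop|classical)\b")

def recognize(utterance: str) -> dict:
    """Return the first matching intent plus any extracted entities."""
    text = utterance.lower()
    intent = next(
        (name for name, pattern in INTENT_PATTERNS.items() if pattern.search(text)),
        "unknown",
    )
    entities = {}
    genre = GENRE.search(text)
    if genre:
        entities["genre"] = genre.group(1)
    return {"intent": intent, "entities": entities}

print(recognize("Enzo, play some jazz music"))
# {'intent': 'play_music', 'entities': {'genre': 'jazz'}}
```

A production agent would swap the regular expressions for a trained classifier and a sequence-labeling entity extractor, but the interface, mapping a free-form utterance to an intent label plus structured entities, is the same.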

Voice assistance HUD in car, while driving – Omdena

First Omdena Project?

Join the Omdena community to make a real-world impact and develop your career

Build a global network and get mentoring support

Earn money through paid gigs and access many more opportunities



Your benefits

Address a significant real-world problem with your skills

Get hired at top companies by building your Omdena project portfolio (via certificates, references, etc.)

Access paid projects, speaking gigs, and writing opportunities



Requirements

Good English

A very good grasp of computer science and/or mathematics

Student, (aspiring) data scientist, (senior) ML engineer, data engineer, or domain expert (no need for AI expertise)

Programming experience with Python

Understanding of data analysis and machine learning







Related Projects

Detecting Fault Location within Power Distribution Systems in Iraq using AI
Developing a Platform for Detecting Dis/Misinformation in Mongolia with AI
Developing a Conversational AI-Powered Child Protection Dashboard

Become an Omdena Collaborator
