
Top 5 New Computer Vision Real-World Applications and Trends for 2023

July 12, 2022


In this article, we look at a few very interesting applications, use cases, and algorithms of computer vision in AI that are setting trends across several industry domains today. You will see that some of these use cases have high potential and can change the world for the better.

What is Computer Vision? If you don't know it yet 😉

Computer vision has generated a lot of excitement in recent years, supported by growing interest and rigorous research over the last few decades. Computer vision is an artificial intelligence (AI) field that gives machines a sense of sight. Using computer vision algorithms coupled with machine learning and deep learning techniques, computers can derive meaningful information from digital images, videos, and other visual inputs, and use it to make recommendations or take action. In short, computer vision allows machines to see, observe, and understand. In this article, we look at some exciting computer vision real-world applications.

The Future of Artificial Intelligence & Computer Vision

With ongoing research and refinement in AI technologies, industrial computer vision applications show enormous potential, and new use cases keep covering a broader range of functions. Even so, the capabilities of present-day systems are modest compared with what computer vision algorithms and applications could eventually achieve.

Well-known applications of computer vision include Google's reverse image search, which finds similar images on the web, and Apple's Face ID feature, which users rely on to unlock their phones and access personal data.

5 Computer Vision Applications in Our Daily Lives for 2023

1. Computer Vision Applications in the Automotive Industry

Tesla Autopilot

Tesla's Autopilot suite of advanced safety and convenience features assists drivers with the most cumbersome driving tasks. Autopilot enables the car to steer, accelerate, and brake automatically within its lane. The Navigate feature suggests optimal lane changes, and near the destination the car can automatically search for a parking spot and park itself.

Algorithms

The vehicle senses its environment through eight cameras, whose feeds are fused into a single environmental prediction model. Tesla's Neural Net Planner, a collection of AI algorithms, handles the car's actual trajectory routing and behavior. For training, Tesla's AI architecture relies on Dojo, a neural network training computer that processes vast amounts of camera imaging data roughly four times faster than comparable computing systems.
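Tesla's software is proprietary, so purely as an illustration, here is a minimal PyTorch sketch of the general idea of fusing several camera feature maps into one shared representation that a downstream planner could consume. The `MultiCameraFusion` module and all its dimensions are hypothetical, not Tesla's actual architecture.

```python
# Minimal sketch (not Tesla's actual code): fusing per-camera CNN features
# into a single shared representation for a downstream planner.
import torch
import torch.nn as nn

class MultiCameraFusion(nn.Module):
    def __init__(self, num_cameras: int = 8, feat_dim: int = 256):
        super().__init__()
        # One lightweight backbone shared across all cameras (illustrative).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse the per-camera embeddings into one environment vector.
        self.fuse = nn.Linear(num_cameras * feat_dim, feat_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_cameras, 3, H, W)
        b, n, c, h, w = frames.shape
        feats = self.backbone(frames.view(b * n, c, h, w)).view(b, n, -1)
        return self.fuse(feats.flatten(1))  # (batch, feat_dim)

fused = MultiCameraFusion()(torch.randn(1, 8, 3, 128, 128))
print(fused.shape)  # torch.Size([1, 256])
```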

2. Computer Vision Applications in Disaster Relief and Emergency Situations

Predicting a safe path during earthquakes using deep learning

Computer vision can help in natural disasters like hurricanes, earthquakes, wildfires, or floods, where a quick assessment of the damage and the right response are needed. Such disasters strike unannounced, can have a catastrophic impact on human life and the environment, and always trigger a rush to save lives and property.

Algorithms

Fatih, one of the most popular and crowded districts in Istanbul. Source: Mapbox API

Computer vision can offer beneficial solutions, and Omdena has created an innovative computer vision-based solution to help victims in case of an earthquake. 

The team developed a safe and fast route-planning solution for Istanbul in case of a disaster such as an earthquake. 30 AI engineers created a risk heatmap and a pathfinding algorithm using Convolutional Neural Networks (CNNs), which together help find the safest and shortest path between two points.
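To illustrate how a risk heatmap can drive pathfinding, here is a minimal sketch using a random grid of risk scores (standing in for the CNN output) and Dijkstra's algorithm via networkx; the Omdena team's actual pipeline, grid, and cost function may differ.

```python
# Minimal sketch (hypothetical risk values): once a CNN has produced a per-cell
# risk heatmap, a safe short route can be found with a weighted shortest path.
import networkx as nx
import numpy as np

risk = np.random.rand(20, 20)          # stand-in for the CNN risk heatmap (0 = safe, 1 = risky)
G = nx.grid_2d_graph(*risk.shape)      # 4-connected street-like grid

# Edge cost blends distance (1 per step) with the average risk of the two cells.
for u, v in G.edges():
    G[u][v]["weight"] = 1.0 + 5.0 * (risk[u] + risk[v]) / 2.0

path = nx.dijkstra_path(G, source=(0, 0), target=(19, 19), weight="weight")
print(f"{len(path)} cells, risk-weighted cost "
      f"{nx.path_weight(G, path, weight='weight'):.2f}")
```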

Tree identification on satellite images to prevent forest fires

Deforestation, climate change, and the risk of wildfires are all directly linked. Growing deforestation impacts climate change, which increases the chances of the vegetation drying out, which in turn further increases the risk of fires.

According to Greenpeace, around 8 billion tons of CO2 are released by fire every year. This is about half as much as the emissions caused by the burning of coal around the world.

Video: https://www.youtube.com/watch?v=kcLcOsrnxXA

Algorithms

AI startup Spacept worked with 36 Omdena collaborators to build a deep learning model for tree identification. The model helps to prevent power outages and forest fires sparked by falling trees that are too close to power stations.

Identifying trees on satellite images was a tricky task, and the collaborators combined human judgment with machine suggestions. The winning model, a deep U-Net, detected trees with 95% accuracy. This would not have been possible without the community's tremendous effort in data labeling and data augmentation, e.g. by applying GANs. In addition, one task group built an elevation map to show forest cover.
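For readers curious what such an augmentation step can look like, here is a minimal sketch using the albumentations library on a stand-in satellite tile and tree mask; the team's exact augmentation stack (including the GAN-based part) is not shown here.

```python
# Minimal sketch: classic geometric/photometric augmentation for satellite tiles
# and their tree masks (the project also explored GAN-based augmentation, not shown).
import numpy as np
import albumentations as A

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

tile = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)  # stand-in satellite tile
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)       # stand-in tree mask

out = augment(image=tile, mask=mask)     # the mask receives the same spatial transforms
aug_tile, aug_mask = out["image"], out["mask"]
print(aug_tile.shape, aug_mask.shape)
```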

Ultimately, these solutions are being implemented in Spacept's product to drive forward their mission of using AI to save lives, reduce infrastructure costs, and cut CO2 emissions.

Applying deep learning to detect wildfires

Wildfires have cost thousands of lives and are responsible for roughly one-third of global CO2 emissions. Deforestation and agricultural damage contribute a further 17 percent to climate change.

In Brazil, over 8,000 fires were recorded in the Pantanal through 30 October, up 462 percent from the same period the previous year.

It's the job of Sintecsys, a commercial agriculture technology company in Brazil, to monitor 8.7M acres of forest and agricultural land across four biomes, including the Amazon forest. Their system processes images from 360-degree cameras mounted on towers and alerts staff if there appears to be a fire. In three years, they've dramatically reduced fire detection time from an average of 40 minutes to under 5 minutes.

Algorithms

Brazilian startup Sintecsys hosted an eight-week AI project to scale its wildfire detection solution. 47 Omdena collaborators built a deployable model that identifies fires early to save lives and reduce infrastructure costs.

For this eight-week machine learning project, Omdena pulled together a diverse team of 47 data scientists from 22 countries to join Sintecsys’ small internal AI group. Leonardo Sanchez, a data scientist in Brazil, was eager to join the Omdena challenge and address a problem of such significance for his country and the world. You can read about his perspective, and the image processing approaches behind the Omdena solution, in his article “How to Stop Wildfires with Artificial Intelligence”.
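As a rough illustration only (not the deployed Sintecsys model, which is described in the article linked above), the sketch below shows a common transfer-learning pattern for a fire / no-fire classifier on tower camera frames, using a pretrained ResNet-18 from torchvision.

```python
# Minimal sketch (not the deployed Sintecsys model): a transfer-learning
# fire / no-fire classifier for tower camera frames.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: fire, no fire

# Only the new head is trained here; the backbone stays frozen (illustrative choice).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(4, 3, 224, 224)            # stand-in batch of camera frames
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```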

3. Computer Vision Applications in Medicine & Healthcare

Computer vision applications in healthcare are beneficial in diagnosing various illnesses. Computer vision advances medical imaging to aid health professionals in making better-informed decisions on a patient’s health.

Detecting tumors using X-rays 

Project InnerEye open-source deep-learning toolkit: Democratizing medical imaging AI. Source: Microsoft’s InnerEye

Microsoft's InnerEye software reads 2D images and renders them into 3D, and it is capable of detecting tumors and other abnormalities in X-rays. Similar computer vision applications exist across medical imaging, cancer screening, and diagnostics.

Visualizing pathologies using Ultrasound

The ultrasound solution detects the type and location of different pathologies and works with 2D images and video streams.

Omdena has worked in the healthcare space by visualizing pathologies in ultrasound images using OpenCV and Streamlit. 

Source: Omdena. Bounding box and mask outline

Algorithms

The algorithm was built using OpenCV, a computer vision library, and is embedded into Streamlit, an open-source app framework. Heatmaps are overlaid on the processed images with different opacities. The Streamlit app takes the image and mask inputs and displays the bounding box and mask outline on the image, along with a heat map showing the tumor intensity.
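A minimal sketch of that overlay idea is shown below, assuming a hypothetical `ultrasound.png` frame and `mask.png` binary mask as inputs (run it with `streamlit run app.py`); it is not the Omdena team's actual code.

```python
# Minimal sketch of the overlay idea: blend a heat map onto the ultrasound frame
# with an adjustable opacity and draw the bounding box / mask outline (app.py).
import cv2
import numpy as np
import streamlit as st

image = cv2.imread("ultrasound.png")                 # assumed input frame
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # assumed binary mask, same size

alpha = st.slider("Heat map opacity", 0.0, 1.0, 0.4)
heat = cv2.applyColorMap(mask, cv2.COLORMAP_JET)     # pseudo-intensity heat map
overlay = cv2.addWeighted(heat, alpha, image, 1 - alpha, 0)

# Mask outline and bounding box from the largest contour.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)
    cv2.drawContours(overlay, [c], -1, (0, 255, 0), 2)
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(overlay, (x, y), (x + w, y + h), (255, 0, 0), 2)

st.image(cv2.cvtColor(overlay, cv2.COLOR_BGR2RGB), caption="Pathology overlay")
```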

Preventing malaria infections through topography and satellite image analysis

Malaria is a mosquito-borne disease, claiming over 400,000 lives each year, mainly children under 5. By targeting the water bodies where mosquitoes lay eggs, the disease can be controlled or even eliminated completely.

Combining satellite images, topography data, population density, and other data sources, a team of 40 AI changemakers built an algorithm that identifies areas where stagnant water bodies are likely to exist. The model helps identify breeding sites more quickly and accurately.

Source: Omdena. Highlighted grids have a higher risk of containing water bodies

Algorithms

The project falls under the UN's Sustainable Development Goal 3, which aims to “end the epidemics of AIDS, tuberculosis, malaria, and neglected tropical diseases” by the year 2030. Given a region, the task was to automatically identify areas with water bodies. To cover a large area like Ghana or Kenya, governments and other entities need to direct resources to the most susceptible regions in the most cost-efficient manner. Time is limited, since the water bodies need to be analyzed before the wet season arrives and mosquito breeding rises.
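As one illustrative ingredient only (the full Omdena model combined many more data sources than this), the sketch below flags likely water cells using the NDWI spectral index on stand-in green and near-infrared bands; the 0.2 threshold is a common but tunable choice.

```python
# Minimal sketch: flagging pixels likely to contain water using the NDWI
# spectral index (the full Omdena model combined more data sources than this).
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-6)

# Stand-in reflectance bands for one satellite tile.
green = np.random.rand(512, 512).astype(np.float32)
nir = np.random.rand(512, 512).astype(np.float32)

index = ndwi(green, nir)
water_fraction = (index > 0.2).mean()        # 0.2 is a commonly used, tunable threshold
print(f"pixels flagged as likely water: {water_fraction:.1%}")
```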

The dataset for this project covered the Ghana and Amhara regions in particular. The project partner, Zzapp Malaria, was surveying these regions during the project phase, so the data came in periodic batches.

Below you can watch how Zzapp Malaria's app works. For more details on the challenge, read this case study.

Video: https://www.youtube.com/watch?v=DVCxfV33a8Q

4. Computer Vision Applications in Agriculture

Enhancing agricultural crop yield prediction

There are many examples of computer vision applications in agriculture. Crop yield prediction, in particular, has had its fair share of challenges over the past few years. RSIP Vision uses computer vision and deep learning to perform agricultural yield prediction. Estimating seasonal yield before harvesting requires fusing many sources of information: sensor and satellite imagery, soil conditions, moisture, seasonal weather, nitrogen levels, and historical yields. The software uses stationary sensors, multi-spectral imagery from airborne drones, or satellite images, and algorithmic tools can make the yield prediction available in smartphone apps.
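Purely as an illustration of how such fused features can feed a prediction model (this is not RSIP Vision's product), here is a minimal scikit-learn sketch that regresses a synthetic yield from hypothetical per-field features.

```python
# Minimal sketch (not RSIP Vision's product): a yield regressor fed with
# per-field features derived from imagery, soil, and weather data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_fields = 500
# Hypothetical features: mean vegetation index, soil moisture, rainfall,
# nitrogen level, and last season's yield.
X = rng.random((n_fields, 5))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 4] + rng.normal(0, 0.1, n_fields)  # synthetic yield

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out fields: {model.score(X_test, y_test):.2f}")
```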

Detecting weeds through edge computer vision

Source: Omdena. The predicted results of Yolact++ using Weedbot data

Impact-driven startup Weedbot is developing laser weeding machinery for farmers that can localize plants, distinguish between crops and weeds, and remove the weeds with a laser beam. As part of Omdena's Incubator for Impact Startups, 50 technology changemakers built machine learning models to facilitate pesticide-free food production.

Algorithms 

YolactEdge is an instance segmentation approach that runs on small edge devices at real-time speeds. Omdena and Weedbot used YolactEdge for the precise classification and localization of crops and weeds; the solution detects weeds through instance segmentation to support innovative farming initiatives.
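The sketch below illustrates the instance segmentation idea using torchvision's Mask R-CNN as a stand-in, since YolactEdge's own API is not covered here; YolactEdge is the model the team actually used because it runs in real time on edge devices.

```python
# Minimal sketch: instance segmentation inference with torchvision's Mask R-CNN
# as a stand-in for YolactEdge, which the Omdena/Weedbot team used on edge devices.
import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)                 # stand-in camera frame from the weeding machine
with torch.no_grad():
    pred = model([frame])[0]                    # boxes, labels, scores, per-instance masks

keep = pred["scores"] > 0.5
print(f"{int(keep.sum())} instances above 0.5 confidence; "
      f"mask tensor shape: {tuple(pred['masks'][keep].shape)}")
```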

Computer vision technology can also help prevent diseases on poultry farms (through disease detection, weight measurement, behavior analysis, animal welfare monitoring, and egg examination), provide security monitoring for remote agricultural sites, and perform drone-based crop monitoring.

Applying remote sensing and computer vision for farming habitat classification

National datasets on farm-level environmental impacts are normally collated at a governmental level, usually through general calculations based on estimated performance. Farmers rarely use this information to adapt or change for improved performance. Turning the flow of information bottom-up would dramatically improve its quality and accuracy, but it relies on engaging the farming community to deliver the required data. If that succeeds, a new market for farm-level data will emerge, with farmers at the center of this data revolution and benefiting both in the supply chain and through government supports.

Algorithms 

In this two-month Omdena Challenge, 50 AI changemakers built an open-source Earth Observation reference dataset for classifying commercial crops and peripheral habitats on-farm, which food industry bodies can use to contextualize farm data self-reported by farmers.

Using satellite imagery to detect and assess the damage of armyworms in farming

Detect and assess the damage of armyworms in farming

Source: Omdena

More than 30 collaborators developed an AI pipeline for generating and preprocessing data and training computer vision classification and clustering algorithms, with a web application connected to the deployed model. The solution helps assess the damage caused by armyworm attacks on crops. The project partner, OKO, is a Google for Startups accelerated company using satellite and mobile technology to bring affordable and simple crop insurance to smallholder farmers.
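As a toy illustration of the clustering side of such a pipeline (the deployed solution relies on trained computer vision models and real satellite data), the sketch below groups tiles described by hypothetical vegetation statistics into three damage levels with k-means.

```python
# Minimal sketch: clustering satellite tiles by simple vegetation statistics to
# group fields into damage levels (the deployed pipeline uses trained CV models).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical per-tile features: mean vegetation index, its variance, bare-soil fraction.
tiles = rng.random((200, 3))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(tiles)
for level in range(3):
    count = int((kmeans.labels_ == level).sum())
    print(f"cluster {level}: {count} tiles")
```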

5. Computer Vision Applications in Insurance

Speeding up insurance appraisal processes

Auto insurer Tokio Marine deployed an AI-based computer vision system, developed by Tractable, for examining and appraising damaged vehicles. The solution uses deep learning for computer vision alongside other machine learning techniques, and the insurer leveraged image recognition to speed up the appraisal process. Benefits included faster and more accurate settlements, leading to increased customer satisfaction.

There are many other computer vision applications across the insurance industry. With relevant and sufficient training data, deep learning and machine learning algorithms can improve without explicit programming. AI-based claims management systems can effectively process HD video or imagery shot by drones, geospatial (GIS) data from satellites, and IoT data sets (including parameters like temperature, pressure, object position, and more).

Conclusion

We have seen examples of computer vision applications in various industries, all relying on image and video data sources. Some of their key advantages are better accuracy, improved efficiency, lower costs, and new automation opportunities.

Ready to test your skills?

If you’re interested in collaborating, apply to join an Omdena project at: https://www.omdena.com/projects

Related Articles

Advancing Health Insurance with AI: Omdena’s Impactful Solutions
MyCover.AI: Elevating Vehicle Insurance with AI
Revolutionizing Insurance Claims Management: A Story of Innovation and Success
AI-Assisted Mapping Tool for Disaster Management