Detecting Wildfires with Artificial Intelligence

How a small commercial team leveraged Omdena’s AI community to detect wildfires in the Amazon forest.

Article written by Laura Clark Murray

 

What can be done about wildfires?

The power of wildfires to destroy has been tragically apparent around the globe. Depending on terrain and conditions, they can double in size every 10 minutes. And every minute that a fire burns makes it harder to contain. With early detection and a quick response, a fire may be quickly extinguished. In contrast, a fire left unchecked can wipe out huge amounts of forest, killing and destroying as it rapidly spreads.

 

How do you stop a fire before it becomes wild?

It’s the job of Sintecsys, a commercial agriculture technology company in Brazil, to monitor 8.7 million acres of forest and agricultural land across four biomes, including the Amazon forest. To identify fire, their system works around the clock to process images from 360-degree cameras mounted on towers distributed throughout that land. If flames or smoke appear, the system alerts the staff. As a result, in the last three years they’ve dramatically reduced fire detection time from an average of 40 minutes to under 5.

 

Sample images from Sintecsys

 

Osmar Bambini, Head of Innovation at Sintecsys, knew that artificial intelligence could reduce that detection time even further. AI also held promise for separating genuine cause for alarm from false alarms: to avoid missing any actual fires, the system triggered a high rate of “false positives”, so the staff needed to validate each fire alert before calling in firefighters. This extra processing delayed the response to real fires by valuable minutes.

 

How can a company apply AI without a large in-house team?

Bambini brought on Omdena. Omdena is a global platform where AI experts and data scientists from diverse backgrounds collaborate to build AI-based solutions to real-world problems. You can learn more here about Omdena’s innovative approach to building AI solutions through global collaboration. 

For this eight-week machine learning project, Omdena pulled together a diverse team of 47 data scientists from 22 countries to join Sintecsys’ small internal AI group. Notably, Leonardo Sanchez, a data scientist in Brazil, was eager to join the Omdena challenge: it gave him the opportunity to address a problem of significance for his country and the world. You can read about his perspective, and the image processing approaches behind the Omdena solution, in his article “How to Stop Wildfires with Artificial Intelligence”.

Yash Mahesh Bangera and Ashish Gupta had their own reasons for becoming Omdena collaborators: both dream of working with organizations that undertake initiatives for social good. Joining this Omdena challenge allowed them to do just that, and they deepened their own AI and machine learning skills in the process, as they explain in their article on the project.

In two months’ time, the team built a system that identifies smoke and flames in daytime images with more than 95% accuracy. Thanks to that accuracy, false positives are dramatically reduced, and firefighters can be called to the scene without delay. Bambini is thrilled with the results: “Outstanding! The Omdena challenge provided the Sintecsys team an intense and accurate deep dive into AI with amazing results.” By early March, the AI system will be fully deployed.

 

What’s next?

Sintecsys and Omdena are exploring a second project, which will tackle the detection of smoke and fire outbreaks in nighttime images. In addition, we’ll pull in satellite imagery to get a more complete view of what’s happening on the ground.

Bambini has big plans for making the system even smarter with follow-on projects with Omdena. We’ll use AI to identify areas in the forest that are especially high-risk for a fire. Above all, human activity is the most significant indicator of fire risk. “More than 90% of fires are caused by humans, either intentionally or accidentally. The places where farmers have been clearing land and where people are settling are the highest risk spots,” says Bambini. “If we can use AI to pinpoint those areas, we’ll be able to predict where fires are most likely to happen.” 

If Bambini is looking in the right place, he just might be able to detect a fire the moment it breaks out.

“Speed, accuracy, and power sum up my perception of Omdena”, says Osmar Bambini, Sintecsys Head of Innovation. “For Sintecsys, from now on Omdena is the official AI partner.”

 

Learn More

Keep up with our work with Sintecsys to refine their fire detection system with AI here.
This is Omdena’s second fire-related challenge. Read about our work with Swedish AI startup Spacept to prevent fires sparked by falling trees near power lines.
Want to work with us? Tell us about your project here.
The video sums up the project. You can find it on LinkedIn here.

 


 
 

About Omdena

Building AI solutions through global collaboration

Omdena is a collaborative platform where organizations work with a diverse AI community to build solutions for real world problems.

Learn more about us and Collaborative AI.

How to Stop Wildfires through Flame and Smoke Detection

How Brazilian company Sintecsys leveraged Omdena’s AI community to build a fire detection algorithm that stops wildfires before they can spread.

Article written by Leonardo Sanchez

 

The Problem

2019 was marked by devastating fires.

Not only the Notre-Dame cathedral in Paris and the National Museum in my country, Brazil, but entire complex ecosystems burned: the Amazon forest and, more recently, Australia.

Before we dive into how to detect and stop wildfires early on with our community-built AI tool, let us understand how forest fires start.

  • Natural fires: Generally, natural fires are started by lightning, with a small portion originating from spontaneous combustion.
  • Human-caused fires: Humans start fires in many ways, such as smoking, recreation, soil preparation for agriculture, and so on. Human-caused fires account for the largest share of fires, but natural fires burn larger areas of land. This is because human-caused fires tend to be detected earlier, while natural fires can take hours to be identified by the competent authorities.

Regardless of the cause, when a forest like the Amazon starts to burn, the fire can spread at speeds of up to 23 km/h and reach temperatures of 800 °C (1470 °F), destroying plant and animal life within a few hours (sometimes even contributing to species extinction).

Even worse, fires damage the planet by releasing CO2 that contributes to global warming.

In addition to disrupting the climate, the smoke affects the sky and the air quality of a huge metropolitan city like São Paulo, the most important economic and productive center of my country.

At 3 pm on August 19th, 2019, the sky turned black when a cold front met the fire particulates coming from the Amazon and midwest fires in my country.

The day became night, and the feeling was that we were living through a biblical plague as described in the Old Testament. Really scary!

Pictures of São Paulo’s sky at 3 pm on August 19th, 2019

 

Among much misinformation, one post from NASA stood out by shedding the fundamental light of science on the matter.

In the image below, you can see a high-resolution color satellite image showing how smoke from the fires spread to the southeast states of my country.

By NOAA/NASA’s Suomi NPP using the VIIRS (Visible Infrared Imaging Radiometer Suite) on August 20th, 2019

 

In Brazil and many other places in the world, fires have left thousands of people homeless, caused many deaths and much property damage, and unfortunately this will not be the last time in human history that devastating fires spread.

 

How can AI help to stop wildfires?

Is it possible to help my country (and other countries)? Is there a way to use the power of community and of AI to achieve this?

 

The Omdena Challenge

According to the Brazilian wildfire detection company Sintecsys: yes!

Sintecsys’s growing base of farm and forest clients can confirm it. The company installs cameras on top of communication towers to capture images that are sent to a monitoring center. Once fire (or smoke) is detected in the images, the center sends alerts and triggers firefighting actions. This saves lives and infrastructure costs.

Sintecsys is not alone in its mission to stop wildfires; many other companies around the world are dedicated to the same goal.

So far, the company has installed 50 towers across Brazil (2019 data).

This is where Omdena’s AI capabilities come into play: to extend Sintecsys’ customer reach and scale its business model to thousands of cameras capable of detecting wildfire outbreaks quickly and accurately.

Omdena is a global platform where organizations collaborate with a diverse AI community to build solutions for real problems in a faster and more effective way.

 

How the team solved the problem

#1 Scoping the problem

To stop wildfires early on, before further damage is caused, Omdena and Sintecsys agreed to deal with daytime images in their first joint challenge and to improve the solution with nighttime images in a second challenge.

The main difference between day and night images for fire detection is that daytime images usually show smoke, while nighttime images show live fire. Sunset and dawn, when smoke and live fire coexist in the same images, represent the boundary conditions of the problem.

 

#2 Working on the dataset

The dataset was really big, comprising footage and images from different cameras, with and without fire outbreaks. Combining the original images provided, our team had almost 7,600 images of 1920 x 1080 pixels to start labeling: daytime images without fire outbreaks, daytime images with fires, and some nighttime images (around 16%).

Samples from the datasets

 

To add even more images, Gary Diana built an algorithm to extract frames from the footage while avoiding images showing the same landscape (de-duplication). This initiative added another 1,150 images of 1280 x 720 pixels to our dataset.
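As a rough illustration only (not Gary Diana’s actual algorithm), frame extraction with de-duplication could look something like the sketch below, which samples every Nth frame from a video and skips frames whose perceptual hash is too close to one already saved. The libraries (OpenCV, Pillow, imagehash) and thresholds are assumptions.

```python
# Illustrative sketch -- not the original de-duplication algorithm.
# Extract every Nth frame and skip near-duplicate landscapes via perceptual hashing.
import cv2                      # OpenCV for video decoding
import imagehash                # perceptual hashing
from PIL import Image

def extract_unique_frames(video_path, out_dir, every_n=30, max_hash_dist=8):
    cap = cv2.VideoCapture(video_path)
    saved_hashes, frame_idx, saved = [], 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)          # BGR -> RGB for PIL
            h = imagehash.phash(Image.fromarray(rgb))
            if all(h - prev > max_hash_dist for prev in saved_hashes):
                cv2.imwrite(f"{out_dir}/frame_{frame_idx:06d}.jpg", frame)
                saved_hashes.append(h)
                saved += 1
        frame_idx += 1
    cap.release()
    return saved
```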

 

#3 Labeling with Labelbox

With the datasets prepared for labeling, we gathered around 20 people dedicated to the task and set up the environments on Labelbox, the best tool available for computer vision labeling, since it lets you label data, manage label quality, and operate a production training data pipeline. Then, at last, we ran tests and labeled the final datasets.

I managed the task, but I received huge support from Alyona Galyeva, who helped the whole team not only by labeling but also by reviewing and managing everyone’s work.

In her own words:

It always starts with a mess when a group of people collaborates on a labeling project. In our case, Labelbox saved us a lot of time and effort by not allowing multiple users to label the same data. On top of that, it made our lives easier by proposing 4 roles: Labeler, Reviewer, Team Manager, and Admin. So, nobody was able to mess with data sources, data formats, and, of course, the labels made by other people.

Labelbox interface for labeling, managing and reviewing labels

 

With both datasets labeled, the data pipeline team generated the train, validation, and test files.
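For readers curious about this step, a minimal sketch of generating stratified train/validation/test splits is shown below. The split ratios and scikit-learn usage are assumptions, not the team’s exact pipeline.

```python
# Minimal sketch (assumed ratios and tooling, not the team's exact pipeline):
# stratified train/validation/test split of labeled image paths.
from sklearn.model_selection import train_test_split

def make_splits(image_paths, labels, val_size=0.15, test_size=0.15, seed=42):
    train_x, rest_x, train_y, rest_y = train_test_split(
        image_paths, labels, test_size=val_size + test_size,
        stratify=labels, random_state=seed)
    rel_test = test_size / (val_size + test_size)
    val_x, test_x, val_y, test_y = train_test_split(
        rest_x, rest_y, test_size=rel_test,
        stratify=rest_y, random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```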

 

#4 Building the models

From the start, the team searched for and studied several top-notch papers on different techniques that could be applied to solve the problem.

The challenge team split into several task groups, each focused on trying a different approach: MobileNet, semantic segmentation, and Convolutional Neural Networks (CNNs), from simple architectures to more sophisticated ones.

Another great testimony of this step comes from Danielle Paes Barretto:

It was inspiring to see people eager to use their skills to stop wildfires and make an impact. I tried to help in all tasks; from labeling the data to building CNN models and testing them on our dataset. We also had frequent discussions which in my opinion is one of the greatest ways of learning. All in all, it was an amazing opportunity to learn and to use my knowledge for the good while meeting great people!

In addition, different techniques were successfully applied to improve results, such as creating patches of different sizes from the original images and training on those patches, data augmentation (e.g. horizontal and vertical flipping), and denoising images.
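To make the patching and flipping concrete, here is a minimal sketch with an assumed patch size; it is purely illustrative and not the project’s training code.

```python
# Illustrative sketch (assumed patch size, not the project's exact code):
# split a full-resolution camera image into square patches and flip them.
import numpy as np

def make_patches(image, patch_size=256):
    """Split an H x W x C array into non-overlapping square patches."""
    h, w = image.shape[:2]
    return [image[y:y + patch_size, x:x + patch_size]
            for y in range(0, h - patch_size + 1, patch_size)
            for x in range(0, w - patch_size + 1, patch_size)]

def augment(patch):
    """Return the original patch plus its horizontal and vertical flips."""
    return [patch, np.fliplr(patch), np.flipud(patch)]
```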

 

#5 Results

The final solutions reached a recall between 95% and 97% with a false positive rate between 20% and 33%, which means they caught 95% to 97% of the real fire outbreaks. While the challenge partner Sintecsys is extremely happy with the results, in our second challenge we will improve the current models by adding nighttime images.
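For reference, recall and false positive rate can be computed from binary predictions as in the short sketch below (these are the standard metric definitions, not the team’s evaluation script).

```python
# Standard metric definitions (not the team's evaluation code):
# recall = share of real fires caught, FPR = share of no-fire images wrongly flagged.
import numpy as np

def recall_and_fpr(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)
```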

 

Learning Process

As with every challenge at Omdena, it was a rich and epic journey of learning.

There is no tool more powerful to learn than getting your hands dirty in the real world.

In the words of Collaborator, Iliana Vargas:

When I heard about Omdena, I did not think twice and I applied for the challenge. The experience I had in the project in general was very gratifying for me, not only in the technical part, but also in terms of being part of a community. We had an excellent team of professionals, but above all, we had people willing to contribute with their knowledge for a project that has a social benefit.

 

What’s next?

The next natural step is to improve the model and achieve even better results: a path of building solid, cutting-edge technology that not only strengthens Sintecsys’ position but will also allow the company to move even further in its business model and value proposition.

As part of the Omdena community, I am excited to build a better world, move the human spirit forward, and help organizations build AI solutions to real problems.

 


 

The Collaborators

I couldn’t end this article without thanking each of my colleagues in this challenge who made everything possible:

Iliana Vargas, Joon Sung Park, Sanyam Singh, Rohith Paul, Temitope Kekere, Ashish Gupta, Avikant Srivastava, Danielle Paes Barretto de Arruda Camara, Eric Massip, Kent Mok, Kritika Rupauliha, Leona Hammelrath, Nazgul Mamasheva, Nithiroj Tripatarasit, Rajashekhar Gugulothu, Rizki Fajar Nugroho, Robin Familara, Salil Mishra, Sam Masikini, Tanya Dixit, Shaun Damon, Yash Bangera, Abhishek Unnam, Alexandr Laskorunsky, Amun Vedal, Angelo Manzatto, Billy Zhao, Carson Bentley, Lukasz Murawski, Sahand Azad, Yang Gao, Mikko Lähdeaho, Alyona Galyeva, Ana Maria Lopez Moreno, Brian Cerron, Gary Diana, Kennedy Kamande Wangari, Lasse Bøhling, Poonam Ligade, Serhiy Shekhovtsov, Kumar Mankala, François-Guillaume Fernandez and Yemissi Kifouly.

You can connect with me via LinkedIn.

 


 
 

About Omdena

Building AI solutions through global collaboration

Omdena is a collaborative platform where organizations work with a diverse AI community to build solutions for real world problems.

Learn more about us and Collaborative AI.

Using AI for Earthquake Response

From a broad problem statement to a functional AI prototype in only two months to unite families in the aftermath of an earthquake.

Article written by Laura Clark Murray.

 

The Issue

Since the devastating 1999 earthquake in Istanbul that killed more than 18,000 people and left half a million homeless, experts have been warning about the need to prepare for another major earthquake. 

Late in 2019, a group of changemakers in the heart of Istanbul were looking to embark on a new project, in which they could apply innovative AI technology to solve a major issue for their local community. Well aware of the predictions that a major quake could hit at any time, these members of Impact Hub Istanbul, part of a global community focused on social innovation, decided to concentrate on response efforts in the immediate aftermath of an earthquake.

 

Earthquake response — that’s a broad problem.

Impact Hub Istanbul pulled in Omdena for our proven expertise in quickly transforming vague problem definitions into well-defined AI use cases. Omdena is a global platform where AI experts, engaged citizens and aspiring data scientists from diverse backgrounds collaborate to build AI-based solutions to humanity’s toughest problems. In our eight-week challenges, a community of 40 to 50 collaborators from around the world delivers deployable solutions using real-world data. 

You can learn more about Omdena’s innovative approach to building AI solutions through global collaboration here. Our partners include the UN, non-profit organizations and private corporations. In less than a year since our founding, over 700 AI enthusiasts from 70 countries have come together on Omdena challenges to solve social problems related to hunger, PTSD, sexual harassment, gang violence, wildfire prevention, and energy poverty.

With the Omdena collaborators on board, the team considered the human experience of surviving an earthquake. They recognized a simple but significant dilemma. If family members are in different locations when an earthquake hits, as is the case when children are at school and parents are at work, reuniting is their highest priority. And that can be a terrifying prospect when it’s unclear how to safely navigate a city amid damage and destruction.

Could an AI system be built, using machine learning techniques, that would provide safe and fast route planning to reunite families in the hours after an earthquake hits? 

 

Is machine learning applicable to this problem?

We find that the best way to know if machine learning can solve a problem is to get out of the lab. By brainstorming with a diverse group of collaborators who bring different perspectives and expertise, potential solutions to a problem emerge. Modeling these alternative approaches using real-world data tells us what works. 

Together with Impact Hub Istanbul, we explored many alternate strategies for route planning. It was clear that parents and children should take paths that are likely to be safe after an earthquake. However, we needed to define what “safe” means in this context and then find the data required to assess it.

Collaborator Nguyen Tran shares the internal debates about framing the problem in his article on the challenge. 

As is often the case in real-world situations, the ideal data didn’t exist. When we uncover data roadblocks — nonexistent, incomplete or inaccurate data —  we invent ways to get around them. The domain expertise of our partners and the creativity of our diverse collaborators come into play. Semih Boyaci, Co-Founder of Impact Hub Istanbul, sees the benefit of working with an inclusive team: “As different community members contribute to the solutions, a significant level of diversity is integrated into the solutions. This not only prevents potential errors in a timely manner but also brings a higher level of creativity to the challenge process.”  

Through this collaborative and iterative process, we are able to start with a vague question and quickly refine it to one that is clear and actionable. And in the process, build a solution.

 

A prototype points to the path forward.

The goal was to create a prototype that accurately predicts and verifies safe routes between schools, hospitals, workplaces, and homes in one region of Istanbul. Focusing on the city’s Fatih District, the team pulled together and crafted data on the district’s buildings and streets and built a model that calculates a risk score for each section of the district. Using various algorithms, the model identifies the shortest and safest path between two locations.

 

Fatih, one of the most popular and crowded districts in Istanbul.

 

The pathfinder algorithm

 

You can learn more about the data science behind this project from collaborator Sergio Ramírez Gallego. 

Building a prototype is a low-risk way to assess the applicability of AI techniques to a broad problem area and establish a direction for further investment and resources. Collaborator David Tran shares, “In a span of two months, we’ve explored the boundaries of the solution space, navigated through the limitations in data availability and technical complexity, and arrived at a functional solution.”

With a proof-of-concept in hand, Impact Hub Istanbul has defined a concrete way to have an impact on earthquake response in their city using AI.

 

What’s next?

Just as the project wrapped up in the last weeks of January, its relevance was underscored when a 6.7 magnitude earthquake shook eastern Turkey, tragically killing at least 30 people and injuring more than 1,600. The tragedy further motivated the Impact Hub Istanbul team to turn the prototype into a deployable application that covers all the districts of Istanbul. Before the next earthquake hits.

“The result of this stunning process with Omdena was an AI-powered tool that helps families reunite in case of an earthquake by calculating the shortest and safest roads. It was really incredible to witness the productivity level when a group of talented people comes together around common motivations to solve problems for the social good. I believe that this is the way of working that all companies need to achieve in order to work with new generations.”

Semih Boyaci, Co-Founder, Impact Hub Istanbul

 


 
 

About Omdena

A global platform where AI experts, engaged citizens, and aspiring data scientists collaborate in two-month AI challenges to solve humanity’s toughest problems.

Join our community.

Artificial Intelligence for Predicting The Safest Path After an Earthquake

Uniting families through Artificial Intelligence and Open Street Map data by finding safe routes in the aftermath of an earthquake.

Article written by Anju Mercian.

There has been a lot of research on predicting earthquakes and on increasing safety during an earthquake. A question that has remained relatively unexplored is: what happens after an earthquake? And how can Artificial Intelligence help?

The aftermath of an earthquake comes with a lot of challenges:

  • Trying to get to a safe spot
  • Organizing rescue missions
  • How do parents get to their kids at school?

The last question is a big issue to tackle, which we focused on in this challenge.

Scientists predict that a major earthquake will hit Istanbul in the near future, since the city sits on a fault line, but the exact date is impossible to pin down. This means there is a possibility of major destruction.

Keeping families together and safe is the number one priority.

The problem to be solved was proposed by Impact Hub Istanbul, which wanted to collaborate with Omdena to leverage its AI capabilities and community to move from a broad problem statement to a functional AI solution.

Here is a non-technical summary of the challenge. 

The data

We used data from OpenStreetMap, accessed through the OSMnx Python library. We also sourced data from Google Maps to get 3D images of the buildings and roads, and did the modeling in Unity.
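As a small illustration of what pulling the street network with OSMnx might look like (the place query and network type here are assumptions, not the team’s exact call):

```python
# Illustrative sketch: download the walkable street network of the Fatih district
# with OSMnx; the query string and network_type are assumptions.
import osmnx as ox

G = ox.graph_from_place("Fatih, Istanbul, Turkey", network_type="walk")
nodes, edges = ox.graph_to_gdfs(G)   # GeoDataFrames of intersections and street segments
print(len(G.nodes), "intersections,", len(G.edges), "street segments")
```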

3D modeled representation of Istanbul, built in Unity

 

Using Artificial Intelligence for earthquake response

 

  • Getting all the building coverage details for the district, working with the Fatih district
  • Finding the width of the streets
  • Using the identified buildings and street widths to calculate a risk score: the more buildings and the narrower the streets, the lower the safety after an earthquake. Combining the building coverage and the street map, we calculated the risk score (a sketch of such a score follows this list).
  • Finding out whether districts are adjacent by querying the OSM database
  • Mapping the streets
  • Deriving the risk score and the walking distance to the destination
  • Using Dijkstra’s algorithm combined with Q-learning to get the shortest path from source to destination
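The exact scoring formula is not spelled out in this article; the sketch below shows one illustrative way to combine the two factors, with made-up weights and a made-up width normalization.

```python
# Illustrative only -- not the team's formula. Denser buildings and narrower
# streets produce a higher risk score.
import numpy as np

def risk_score(building_coverage, street_width_m, w_buildings=0.5, w_streets=0.5):
    """building_coverage: fraction of area covered by buildings (0..1).
    street_width_m: average street width in meters."""
    coverage_risk = np.clip(building_coverage, 0.0, 1.0)
    width_risk = 1.0 - np.clip(street_width_m / 20.0, 0.0, 1.0)  # assume 20 m counts as "wide"
    return w_buildings * coverage_risk + w_streets * width_risk
```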

How did we go about this?

We used all the data from OSMnx: the latitude and longitude data, the edges, and the nodes.

  • Calculate the building coverage and street width map.
  • Using the OSMnx data: interpolate the points making up each line, extract the lon/lat coordinates of those points, use the generated dictionary to convert each point to x/y positions in a NumPy matrix, extract the NumPy values of all the points making up the edge/line, aggregate those values by min, max, mean, and median, and return the aggregated values.
  • Calculate the risk scores based on the two factors, building coverage and street width. Residential areas had a higher risk score, so when they are overlaid on the heat map it shows that these areas are probably not very safe after an earthquake.
  • Get the district-to-district adjacency to map out the streets and assign a risk level to each street in order to find the shortest path.
  • Use Dijkstra’s algorithm to find the shortest path from source to destination (see the sketch after this list).
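A minimal sketch of that last step on a NetworkX/OSMnx street graph is shown below; the edge attribute names ("length", "risk") and the weighting factor are assumptions for illustration.

```python
# Illustrative sketch (assumed edge attributes, not the project's code):
# combine street length and risk into one cost and run Dijkstra over it.
import networkx as nx

def safest_path(G, source_node, target_node, risk_weight=100.0):
    for u, v, data in G.edges(data=True):
        data["cost"] = data.get("length", 1.0) + risk_weight * data.get("risk", 0.0)
    # shortest_path with a weight key uses Dijkstra under the hood
    return nx.shortest_path(G, source_node, target_node, weight="cost")
```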

Problems we faced

  • OSMnx is not very fast and we had a lot of buffering issues, so we had to move only the Istanbul data to AWS S3 to get better performance.
  • OSMnx doesn’t have building coverage data, so we had to scrape geo data from Google Maps and process it through Unity to get the 3D images of the buildings.

 

The output

 

For a more technical understanding of the challenge check out this article.

How did I get involved in the challenge?

I am neither an expert in managing the aftermath of an earthquake nor an expert in network analysis using Artificial Intelligence.

I got to know Omdena through a friend on social media.

I was inspired by their vision of Building AI for Good through a collaborative environment and wanted to be a part of their challenge.

This project was a great learning experience. I joined this challenge not knowing how to solve this problem but the collaborative learning environment made it possible.

I knew that Artificial Intelligence and Machine Learning is a vast field, but this project gave me the opportunity to experience it first-hand.

Collaborating with my teammates across time zones helped me coordinate my schedule more effectively, as well as learn from diverse perspectives.

Thanks to a very supportive team.


 
 

About Omdena

A global platform where AI experts, engaged citizens, and aspiring data scientists collaborate in two-month AI challenges to solve humanity’s toughest problems.

Join our community.

Improving Safety in an Earthquake Aftermath with Machine Learning

How to use Machine Learning for identifying the safest route after an earthquake.

Article written by Nguyen Tran.

Close your eyes for a moment. Imagine a nice day like any other nice day. The Sun is shining on the sea, and birds are singing in the trees. Suddenly, the trees start to shake, and the chair you’re sitting on is also shaking. You look around and begin to notice the confusion on the faces of your colleagues transitioning to dread: an earthquake is happening.

 

 

Remembering the drills, you take refuge under the cover of your work desk as objects around you fall from their places onto the ground. You pray for your life and the lives of those you love.

Seconds feel like minutes, and minutes feel like hours.

But at last, it is over. You are thankful for your life, but you feel an overwhelming need to ask for more than just your own safety as a surge of anxiety takes hold of you: Is your family OK? Your mind fixates on one thing: you MUST get to them, and that takes priority over anything else.

 

The Problem: Reaching your family and loved ones

Well, you, the reader, are probably right: the scenario above is a bit overly dramatic. But it is hard to overstate the level of distress natural disasters such as an earthquake can cause.

And all participants in the Istanbul Earthquake Challenge share a similar sense of urgency and seriousness in finding ways to improve the situation for the earthquake victims.

That’s why, early on in the challenge, we decided to focus on applying machine learning to the aftermath of an earthquake.

 

Machine Learning: Earthquake prediction

 

The aftermath of an earthquake might see communication cut off or made extra difficult, causing problems with coordination between family members. Roads might be damaged or blocked by debris falling from buildings. Traveling in the post-earthquake scenario will be chaotic and dangerous, but it won’t deter people from trying to reach their loved ones.

 

Machine Learning for earthquake response

It took our community of collaborators a fair bit of meandering exploration and a few fierce internal debates to finally boil the problem down to three main components: a) the representation of riskiness, b) the path-finding component, and c) the integration of a) and b).

 

Quantifying riskiness

 

 

This part proved tricky from the very start. Many team members devoted much time to gathering data that might contain information about riskiness in the aftermath of an earthquake. We looked into various types of heatmaps: soil composition, elevation, population, seismic hazard… but we faced two problems here.

First is the problem of data sparsity. The resolutions of the heatmaps are low (meaning a single value covers a very large area), resulting in low precision, and the maps are mismatched, forcing us to interpolate and introducing a high level of uncertainty. Second is the problem of accurately modeling the level of risk. Not unlike trying to predict the earthquake itself, we could not find enough labeled data to comfortably turn riskiness prediction into a supervised learning exercise. After a lot of persistent but fruitless effort, we finally realized that this component of the problem needed a fresh approach that does not follow the well-trodden path of seismology.

 

Our breakthrough: Building coverage

A breakthrough came when a small group of members discussed the idea of using something concrete as a proxy for the abstract idea of safeness (the inverse of the abstract concept of riskiness). Safeness is a much more intuitive concept to the human mind. Safeness is also deducible from the observable surroundings: after an earthquake, it’s safer to stick to larger roads and to keep to open-air areas.

 

Performing Distance-Transform on street segmentation mask (above). The numeric result (below) can be used to calculate the average width of the streets.

And so, we arrived at the two proxies for safeness:

  • Density of buildings,
  • Width of roads.

Areas with lower building density are safer areas, so paths that run through these areas should be less risky. Similarly, wide streets are safer streets, so paths that move along these streets are also less risky. Two task groups were quickly formed to collect satellite images of the city and extract from them street masks and building masks using two newly trained image segmentation models.

After this, a distance transform is applied to the prediction mask, giving us a rasterized map in which each pixel represents either the distance to a nearby building (in other words, how open the space is) or the distance to the side edge of the road (which, after some data aggregation, gives the road’s width).
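A minimal sketch of this distance-transform step, assuming SciPy and a binary street mask, might look like this (the pixel-to-meter scale is a placeholder):

```python
# Illustrative sketch: distance transform on a binary street mask. Each street
# pixel gets its distance to the nearest non-street pixel; doubling that value
# approximates the local street width. The meters_per_pixel scale is a placeholder.
from scipy.ndimage import distance_transform_edt

def street_width_map(street_mask, meters_per_pixel=1.0):
    """street_mask: 2D boolean array, True where a pixel belongs to a street."""
    dist_to_edge = distance_transform_edt(street_mask)    # distance in pixels
    return 2.0 * dist_to_edge * meters_per_pixel          # approximate width
```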

You can read more about the segmentation models here.

 

Building Density Heatmap for Fatih District. Blue-Cyan spectrum represents areas with clusters of buildings. Green-Yellow spectrum represents open areas with few buildings.


 

The Path-finding component

This part was a great deal more straightforward than the previous one. Path-finding is a very well-established and well-tuned area of computer science.

So, very early in the challenge, multiple demos were uploaded, leveraging the existing graphs of streets and intersections made available by the Open Street Map project. The initial demos used randomly generated dummy risk data and attempted to find paths based on a single objective: either minimizing length or minimizing risk. Q-learning, Dijkstra, and A-star were the algorithms used. Later, the Graph ML team also successfully explored a multi-objective path-finding algorithm based on A-star, with some tweaks that check for Pareto efficiency on every search iteration.
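As an illustration of single-objective A-star on an OSM-style graph (not the original demo code), the sketch below minimizes street length with a great-circle heuristic over the node coordinates that OSMnx stores as "x"/"y":

```python
# Illustrative single-objective A* sketch (not the original demo code).
# Minimizes edge "length" (meters, as stored by OSMnx) with an admissible
# straight-line (great-circle) distance heuristic.
import math
import networkx as nx

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def astar_shortest(G, source_node, target_node):
    def heuristic(n1, n2):
        return haversine_m(G.nodes[n1]["y"], G.nodes[n1]["x"],
                           G.nodes[n2]["y"], G.nodes[n2]["x"])
    return nx.astar_path(G, source_node, target_node,
                         heuristic=heuristic, weight="length")
```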

 

Illustration for Pareto Efficiency. Rather than optimizing for either the single objective on Y-axis or the single objective on X-axis, we look for a collection of solutions that are compromises between the two objectives.

 

A problem arose when testing the algorithm on large graphs made of multiple city districts: computation time increased dramatically as the number of nodes and edges ballooned.

A solution was proposed to prune nodes and edges using a hierarchical approach to path-finding. Rather than ingesting the entire city graph, a helper algorithm finds a path at the district level; the district-to-district pathing is fed to another helper algorithm that looks for a path at the neighborhood level. At the last level, the street-level graph is constructed at a much-reduced size compared to the non-hierarchical approach. The smaller graph, with fewer nodes and edges, keeps computation time reasonable even when finding a path across the entire city.

 

An illustration of the concept of Hierarchical Pathfinding. Compared to the low-level graph, the high-level graph is much simpler and could be calculated quickly. The result is used to construct the low-level graph ONLY in the relevant area, filtering out unnecessary parts of the low-level graph to help speed up computation time.

 

Integration: Machine Learning for earthquake response

A working solution needs to be more than the sum of its parts, and integration is the process of realizing that. During the final weeks of the project, we found ourselves with two components that are not obviously compatible. On the one hand, there’s the path-finding algorithm that runs on Open Street Map graphs; on the other, there’s the rasterized heatmap whose values are encoded on individual pixels. One deals with vectors and vectorized shapes that are meant to represent real-world objects regardless of scale, and the other deals with numeric values that are scale-dependent.

If you’re familiar with graphic design, imagine transferring selective pixel values from a Photoshop output to an Illustrator output while maintaining the integrity of the vector file.

After much trial and error, the team came up with a way to map both outputs onto a shared space, in the form of an algebraic matrix. Interpolations are made between vector coordinates to simulate the presence of a vector line, while a moving average filter runs along that line to collect the values of nearby pixels. The aggregated value is then mapped to the corresponding street vector in the graph. And voila, we were able to encode the riskiness values from the heatmap onto the OSM street graph.
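A simplified version of that sampling-and-aggregation step might look like the sketch below; `to_pixel`, which converts a node’s lon/lat to heatmap pixel coordinates, is a hypothetical helper, and a plain mean stands in for the moving-average filter.

```python
# Illustrative sketch (not the project's code). Sample points along a street
# edge, read the heatmap value under each sample, and store the mean as the
# edge's "risk" attribute. `to_pixel` is a hypothetical lon/lat -> (row, col) helper.
import numpy as np

def edge_risk_from_heatmap(heatmap, p_start, p_end, n_samples=50):
    rows = np.linspace(p_start[0], p_end[0], n_samples).round().astype(int)
    cols = np.linspace(p_start[1], p_end[1], n_samples).round().astype(int)
    rows = np.clip(rows, 0, heatmap.shape[0] - 1)
    cols = np.clip(cols, 0, heatmap.shape[1] - 1)
    return float(heatmap[rows, cols].mean())

def attach_risk(G, heatmap, to_pixel):
    for u, v, data in G.edges(data=True):
        data["risk"] = edge_risk_from_heatmap(
            heatmap, to_pixel(G.nodes[u]), to_pixel(G.nodes[v]))
```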

 

 

 

From top to bottom: the OSM graph of streets and intersection of the area; the building density heatmap of the area; and the combined result.

 

The Results

Thanks to the flexibility of Google Colaboratory, we were able to create a demo notebook that allows the user to test out the solution.

The demo is currently limited to a single district of Istanbul, namely Fatih. Users can test on a single-case basis with one pair of FROM-TO addresses, or upload a CSV file, select the FROM and TO columns, and have the algorithm run searches in bulk.

Check out this colab notebook.

 

Comparison of pathfinding results. In the absence of the safeness data, a pathfinder will attempt to find a path that cuts across the city, going through many areas with dense buildings and small streets. With the help of the safeness data, the algorithm can opt for paths that are longer but go through safer areas.

 

Conclusion

In this project, we looked into reducing the risk of traveling after an earthquake using suitable machine learning approaches. For the urban environment of Istanbul, we assumed that safety correlates with low building density and with wider roads and streets.

Using state-of-the-art image machine learning models, we were able to represent said assumptions in the forms of rasterized heatmaps. We were also able to integrate the heatmaps with existing street graphs made available by the Open Street Map project. The final result allows the user to find the safest path from one place to another that favors going through open areas on wide streets and roads.

More importantly, all of this work was done by a team of volunteer collaborators across the world. Yet in a span of two months, we’ve explored the boundaries of the solution space, navigated through the limitations in data availability and technical complexity, and arrived at a functional solution.

I am very proud of having been a part of this challenge and, personally, thoroughly convinced that collaboration done right can bring about amazing results.

Thank you, Omdena, for this opportunity to learn and make a real impact on the world!

 


 

About Omdena

A global platform where AI experts, engaged citizens, and aspiring data scientists collaborate in two-month AI challenges to solve humanity’s toughest problems.

Join our community.