The Ultimate Guide on Using AI & Open Source Satellite Imagery to Address Global Problems

How Omdena uses AI and open-source satellite imagery to fight poverty, hunger, disasters, climate displacement, wildfires, and advance space research.

April 17, 2025

14 minutes read


Omdena delivered six global satellite AI projects that achieved up to 99% crop classification accuracy, 95% tree detection accuracy, and insights supporting UNHCR decisions for 2.6M displaced people. These results show how open-source satellite imagery and AI can drive real impact in hunger reduction, energy access, disaster response, and space exploration.

Introduction

Satellite imagery from a rapidly growing network of Earth-observing satellites now allows us to monitor our planet in near real time, revealing changes in agriculture, environmental health, urban growth, economic activity, and human movement. When transformed into insights, this data becomes a powerful tool for guiding policy, strengthening humanitarian response, and supporting global development. The European Space Agency has already shown how satellite data contributes to all seventeen Sustainable Development Goals.

Despite expanded open access from ESA, NASA, and the US Geological Survey, many organizations still struggle to use this data effectively. High-resolution imagery is costly, and processing large volumes of unstructured data requires specialized expertise. This limits the ability of institutions working in critical areas to fully benefit from Earth observation.

To bridge this gap, Omdena carried out six collaborative projects demonstrating how artificial intelligence can turn publicly available, low-resolution satellite imagery into practical, on-the-ground insights. Working with partners such as UNHCR, the World Food Programme, Impact Hub, and multiple startups and NGOs across regions including Somalia, Nigeria, Nepal, Brazil, Sweden, Turkey, and even Mars, the teams developed scalable solutions for food security, renewable energy access, disaster response, conflict and displacement analysis, wildfire prevention, and space exploration.

The Six Case Studies Below Provide Examples of Our Work

1. Addressing Global Poverty with Renewable Energy: 

Omdena partnered with Renewable Africa 365 to map areas of severe energy poverty in Nigeria and identify where solar microgrids could have the greatest impact. By integrating satellite imagery, population density, night-time light patterns, and power grid coverage, the team created a data-driven framework for prioritizing remote communities lacking reliable electricity. The insights now support renewable energy planning for both local authorities and national decision-makers.

2. Reducing Hunger with Crop Classification:

Working with the World Food Programme in Nepal, Omdena developed machine learning models that analyze multispectral satellite imagery to classify rice and wheat fields with high accuracy. The project revealed detailed crop boundaries, seasonal patterns, and field conditions, helping the WFP plan procurement, anticipate shortages, and improve food distribution strategies. The team also produced a best-practices guide for using open-source satellite data in agricultural monitoring.

3. Improving Disaster Response:

In collaboration with ImpactHub Istanbul, Omdena addressed a challenge common in earthquake-prone regions: enabling families to reconnect and navigate damaged terrain safely. The team combined satellite imagery, building density data, and street maps to generate a safety heatmap of Istanbul. This powered a routing tool that suggests the safest and shortest paths between two points, avoiding narrow streets, high-risk buildings, and potential hazards. The approach can be replicated in other at-risk cities worldwide.

4. Humanitarian Aid – Insights on Climate Change, Conflict & Forced Displacement:

With UNHCR in Somalia, Omdena examined how climate pressures, resource scarcity, and conflict drive forced displacement. By analyzing satellite-derived indicators such as vegetation loss, water availability, and urban expansion and correlating them with UNHCR displacement and conflict records, the team built predictive models that reveal emerging hotspots. These insights support better field deployment, early-warning systems, and long-term planning to protect vulnerable communities.

5. Preventing Wildfires with Tree Identification: 

Omdena collaborated with Swedish startup Spacept to reduce wildfire and outage risks near power lines. Using deep learning and high-quality satellite imagery, the team trained a tree-detection model capable of distinguishing trees from shadows, buildings, and terrain with 95 percent accuracy. This solution significantly cuts the cost and time required for manual inspections, enabling energy companies to perform preventive maintenance more efficiently and reduce fire-related CO₂ emissions.

6. Advancing Space Exploration through Anomaly Detection: 

In partnership with the University of Bern, Omdena analyzed thousands of satellite images from Mars to detect and classify eight types of geological and man-made anomalies. The team built a custom Python package for large-scale data processing and tested multiple AI architectures before selecting a U-Net model that achieved precision scores between 90 and 99 percent. The system supports scientific research into planetary surfaces, helps identify potential technosignatures, and lays groundwork for future Mars exploration missions.

Omdena Satellite Imagery Case Studies 2019-2021

1. Addressing Global Poverty with Renewable Energy

Location: Nigeria
Project Partner: Renewable Africa 365 (RA365)
Project Goal: Address energy poverty in Nigeria by identifying those areas most suited for solar power stations.
Results: Built an interactive map indicating the Nigerian regions best suited for solar power installations, along with a spreadsheet ranking the opportunities.
Approach: Omdena’s AI community used satellite imagery, energy grid analysis, population analysis, and AI to identify where the energy poverty crisis is most dire and where solar power is most likely to be effective.

Methods

The team addressed challenges associated with incomplete and inconsistent public datasets by correlating multiple data sources. These included satellite imagery and Google Earth data, combined with population datasets from programs such as Demographic and Health Surveys (DHS), WorldPop, and GRID3.

Because the best candidates for solar power adoption are mid- to large-sized communities far from the existing power grid, Omdena’s data scientists first used satellite imagery to identify regions of Nigeria that are completely dark at night. They then used cluster analysis on population data to locate mid- to large-sized communities, including schools and hospitals in need of reliable power. Finally, they analyzed the energy grid coverage to identify communities located far from the national electricity network.
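As a rough illustration of this kind of prioritization, the sketch below clusters populated points into candidate communities and ranks them by distance to the power grid. The file names, column names, and thresholds are hypothetical, and the project’s actual pipeline combined more data sources than this.

```python
# Minimal sketch (not the project's actual code): cluster populated points into
# candidate communities and rank them by distance to the existing grid.
# Input files and column names are hypothetical stand-ins for WorldPop/GRID3 data.
import geopandas as gpd
import pandas as pd
from sklearn.cluster import DBSCAN

points = pd.read_csv("nigeria_population_points.csv")            # lat, lon, population
grid = gpd.read_file("power_grid_lines.geojson").to_crs(epsg=32632)

# Group nearby populated cells into candidate communities (eps is in degrees).
points["community"] = DBSCAN(eps=0.02, min_samples=5).fit_predict(points[["lat", "lon"]])

communities = (
    points[points["community"] >= 0]
    .groupby("community")
    .agg(population=("population", "sum"), lat=("lat", "mean"), lon=("lon", "mean"))
    .reset_index()
)

# Keep mid- to large-sized communities and rank by distance to the grid.
gdf = gpd.GeoDataFrame(
    communities,
    geometry=gpd.points_from_xy(communities.lon, communities.lat),
    crs="EPSG:4326",
).to_crs(epsg=32632)
gdf["km_to_grid"] = gdf.geometry.apply(lambda p: grid.distance(p).min()) / 1000
candidates = gdf[gdf.population > 5000].sort_values("km_to_grid", ascending=False)
print(candidates[["population", "km_to_grid"]].head())
```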

Figure 1: Interactive Map of Top Candidates for Renewable Energy in Nigeria

Impact

Expanding access to electricity is a crucial step in improving education, healthcare, economic development, and overall quality of life. Renewable Africa 365 is currently using the tools developed by Omdena collaborators to prioritize sites for renewable microgrid deployment. The data has also been shared with Nigeria’s Rural Electrification Agency (REA), a key funding authority for rural electrification.

The outcomes from this project support data-driven investments and policy decisions with the potential to improve the lives of millions in Nigeria. The tools and methodology are reproducible and can also be applied to other regions experiencing high energy poverty.

2. Reducing Hunger with Crop Classification 

Location: Nepal
Project Partner: UN World Food Programme (WFP)
Project Goal: Address food scarcity and hunger in Nepal with data-driven insights on the location and growth of food staples such as rice and wheat.
Results: Developed an ML model to analyze publicly available satellite images, classify crops, and discern the demarcations between individual crop fields, with accuracy reaching 99%. Insights from the process were aggregated into a guide for data scientists using satellite imagery data for agricultural applications.
Approach: Omdena’s data scientists enhanced low-resolution, open-source satellite imagery and then used neural networks on images from multiple spectral bands to identify rice and wheat crop fields with an accuracy approaching 99%.

Methods

The project team used publicly available imagery from the European Space Agency’s Copernicus Sentinel-2 satellite, which provides 27,000 labeled and geo-referenced images across 13 spectral bands. Because Sentinel-2 images are relatively low resolution, Omdena’s data scientists improved the image quality using two super-resolution techniques: Deep Image Prior and Decrappify.

After enhancement, the team created an image classification model using a ResNet-like deep learning architecture and multispectral input data. This achieved an overall accuracy of 98.69%. Additional data augmentation using MixUp increased the accuracy to 99%.
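MixUp blends pairs of training images and their labels, which regularizes the classifier and often improves accuracy on small datasets. The snippet below is a minimal sketch in PyTorch, assuming one-hot labels and 13-band input patches; it is illustrative rather than the project’s exact implementation.

```python
# Minimal MixUp sketch for a multispectral image batch (illustrative only).
import torch

def mixup(images, labels, alpha=0.2):
    """Blend each image/label with a randomly permuted partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(images.size(0))
    mixed_images = lam * images + (1 - lam) * images[index]
    mixed_labels = lam * labels + (1 - lam) * labels[index]
    return mixed_images, mixed_labels

# Usage inside a training loop: labels should be one-hot so they can be mixed.
x = torch.randn(8, 13, 64, 64)                  # batch of 13-band Sentinel-2 patches
y = torch.eye(10)[torch.randint(0, 10, (8,))]   # one-hot crop-type labels (10 classes assumed)
x_mix, y_mix = mixup(x, y)
```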

Figure 2: Before and after super-resolution

Impact

Accurate, up-to-date information about the distribution and size of smallholder rice and wheat fields is essential to improving food security in Nepal. These insights enable the World Food Programme to allocate resources more effectively, support crop planning efforts, and help accelerate the production of essential food staples—ultimately reducing hunger.

Additionally, the guide produced by Omdena provides data scientists around the world with practical methods for using satellite imagery in agricultural applications, extending the impact of this work far beyond Nepal. Similar AI-enhanced satellite workflows are being applied to monitor crop health and forecast yields globally.

3. Improving Disaster Response

Location: Turkey
Project Partner: ImpactHub Istanbul
Project Goal: Identify solutions to help mitigate disaster in earthquake-prone Istanbul.
Results: Omdena team members focused on the challenge of reuniting families in the aftermath of an earthquake by helping individuals identify the safest and shortest routes through a damaged city. They developed a prototype tool to map routes between two points that optimize for both safety and distance. Users can identify routes one at a time or search for routes between multiple addresses in bulk.
Approach: Correlating “safety” with areas of low building density as well as wide roads and streets, Omdena’s data scientists combined data from satellite images and street maps, using image segmentation models, raster and vector maps, and other tools to identify the shortest and “safest” path from one part of Istanbul to another.

Methods

The team selected Istanbul’s Fatih District as the initial testing area. Using satellite images of the region, Omdena data scientists applied machine learning segmentation models and tools such as Mapbox to extract street and building data. This allowed them to determine street width and building density, which were then used to create a rasterized safety heatmap.

Next, they calculated shortest and safest routes using vector map data from OpenStreetMap. By integrating the safety heatmap values into the street network graph, the team created a routing system that allows users to choose the safest available path between two locations—favoring open spaces and wider roads.
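Conceptually, the routing step amounts to adding a safety penalty to each street segment’s length and running a standard shortest-path search over the weighted graph. The sketch below illustrates the idea with OSMnx and NetworkX; the safety_score function is a hypothetical stand-in for sampling the project’s heatmap, and the coordinates are arbitrary points in the district.

```python
# Minimal sketch of safety-weighted routing on a street graph (illustrative only).
import osmnx as ox
import networkx as nx

G = ox.graph_from_place("Fatih, Istanbul, Turkey", network_type="walk")

def safety_score(data):
    """Placeholder: the project sampled the rasterized safety heatmap along
    each edge; here we simply penalize narrow residential streets."""
    return 0.8 if data.get("highway") == "residential" else 0.2

# Combine physical length with a safety penalty into one edge weight.
for u, v, k, data in G.edges(keys=True, data=True):
    data["combined"] = data["length"] * (1 + 4 * safety_score(data))

orig = ox.distance.nearest_nodes(G, X=28.9497, Y=41.0055)   # lon, lat of start point
dest = ox.distance.nearest_nodes(G, X=28.9662, Y=41.0165)
shortest = nx.shortest_path(G, orig, dest, weight="length")
safest = nx.shortest_path(G, orig, dest, weight="combined")
```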

Figure 3: Path comparison — shortest and safest

Impact

This prototype provides ImpactHub Istanbul with a valuable tool to help protect residents during and after an earthquake event. The approach is reproducible and can be adapted for other earthquake-prone cities around the world.

See also AI for Disaster Response: Predicting Relief During Cyclones here

4. Humanitarian Aid – Insights on Climate Change, Conflict and Forced Displacement

Location: Somalia
Project Partner: UNHCR
Project Goal: Map the relationship between forced displacement, violent conflicts, and climate change in Somalia. Develop findings and solutions that can inform UNHCR decision-making and streamline the delivery of support to people and communities in need.
Results: Correlating changes in vegetation, water, and land use with data on internal displacement caused by conflict, Omdena’s data scientists produced tools and visualizations that provide the UNHCR with quantified, actionable insights. These findings illustrate how climate shocks such as drought or floods can lead to conflict, violence, and forced displacement. They also demonstrate how the process is cyclical: forced displacement and migration can lead to resource strain and environmental damage that in turn fosters conflict and damages communities.
Approach: Omdena data scientists leveraged a range of AI approaches, including machine learning, image classifiers, neural networks, and data visualization, to analyze environmental data from weather satellite images and document changes to land use and the environment over time. This information was then correlated with UNHCR displacement statistics from 2016–2019. Results document how climate change can influence conflict and forced displacement, as well as how conflict and forced displacement impact the environment.

Methods

The team focused on the Banadir region surrounding Mogadishu, which experienced severe disruptions in 2016–2017. Using satellite images, they applied classification models to detect changes in vegetation, water, and building density by district.

To identify land use changes, such as increased construction, the team used Landsat 8 satellite imagery from the USGS. Spectral signatures and a Random Forest algorithm were employed to distinguish between land and built-up areas. The team also used environmental indicators such as the Vegetation Health Index (VHI), the Normalized Difference Vegetation Index (NDVI), and the Normalized Difference Water Index (NDWI) to measure environmental conditions and change over time.
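Both NDVI and NDWI are simple band ratios, so they can be computed directly from the Landsat 8 green, red, and near-infrared bands. The sketch below shows the calculation with rasterio and NumPy; the file names are illustrative.

```python
# Minimal sketch of NDVI and NDWI from Landsat 8 bands (file names are illustrative).
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

green = read_band("LC08_B3.TIF")   # Band 3: green
red   = read_band("LC08_B4.TIF")   # Band 4: red
nir   = read_band("LC08_B5.TIF")   # Band 5: near infrared

eps = 1e-6  # avoid division by zero
ndvi = (nir - red) / (nir + red + eps)       # vegetation vigor, range -1..1
ndwi = (green - nir) / (green + nir + eps)   # open water / moisture, range -1..1

print("mean NDVI:", float(np.nanmean(ndvi)))
print("mean NDWI:", float(np.nanmean(ndwi)))
```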

Figure 4: Main hypothesis

Results were validated against ground-truth data from OpenStreetMap. The team also found that k-means clustering produced results similar to those from OpenStreetMap, which is especially valuable for rural areas with limited digital mapping. Finally, neural networks were used to correlate environmental shifts with displacement and conflict data from UNHCR.
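For readers who want to reproduce the unsupervised step, the sketch below applies k-means to pixel spectra to produce a coarse land-cover map. The band stack and cluster count are illustrative rather than the project’s exact settings.

```python
# Minimal sketch of unsupervised land-cover clustering via k-means on pixel spectra.
import numpy as np
from sklearn.cluster import KMeans

# bands: array of shape (n_bands, height, width), e.g. a stacked Landsat 8 scene.
bands = np.random.rand(6, 256, 256).astype("float32")   # stand-in data
n_bands, h, w = bands.shape

pixels = bands.reshape(n_bands, -1).T                    # (n_pixels, n_bands)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
cluster_map = labels.reshape(h, w)                       # per-pixel cluster map

# Each cluster can then be compared against OpenStreetMap ground truth to decide
# which corresponds to built-up area, vegetation, water, etc.
```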

Impact

Insights from this project support more effective allocation of UNHCR staff and resources to assist the 2.6 million people currently displaced in Somalia. The predictive models can also help identify future conflict or displacement hot spots, enabling proactive intervention. In addition, these findings support the creation of policies and programs designed to reduce conflict, displacement, and environmental degradation.

Read more in the article “Using Neural Networks to Predict Droughts, Floods and Conflict Displacements in Somalia” here

5. Preventing Wildfires with Tree Identification  

Location: Sweden
Project Partner: Spacept
Project Goal: Build an AI solution that identifies trees growing too close to power lines, helping reduce the risk of power outages and forest fires.
Results: Used satellite imagery, deep learning, and computer vision to build a model that identifies the location of trees with 95% accuracy. The model successfully identifies trees in new images, even distinguishing between forest shadows and trees, and exceeds the accuracy of manual labeling.
Approach: Challenge participants tested a variety of ML approaches for training AI algorithms to distinguish between trees and other landscape features. The best approach, a deep U-Net model, detected trees with 95% accuracy.

Methods

The training dataset included approximately 200 satellite images from Australia, covering diverse landscapes such as forests, arid regions, farms, and urban environments. Omdena’s data scientists manually labeled the images using Labelbox to create high-quality training data.

They tested several neural network architectures for computer vision, including Mask R-CNN and multiple U-Net variations, to determine which model best identified and segmented trees. The most successful model—a deep U-Net convolutional neural network—achieved 94% accuracy. Additional data augmentation improved performance to 95%. The model demonstrated strong generalization capability, accurately distinguishing tree shapes from shadows in new imagery.
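To make the architecture concrete, here is a minimal U-Net-style encoder-decoder for binary tree masks in PyTorch. It is a simplified sketch, not the team’s actual network, which was deeper and trained with additional augmentation.

```python
# Minimal U-Net-style segmentation sketch for binary tree masks (illustrative only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)   # one channel: tree vs. not-tree

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # raw logits; apply sigmoid for masks

model = MiniUNet()
masks = torch.sigmoid(model(torch.randn(1, 3, 256, 256)))  # shape (1, 1, 256, 256)
```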

Impact

The resulting model is now being integrated into Spacept’s product. Automating tree identification enables faster and more cost-effective inspections of vegetation near power lines. These insights help prevent outages and reduce the risk of fires caused by falling trees—ultimately saving lives and lowering CO₂ emissions.

Figure 5: Sample results — trees in white, non-tree areas in black.

Learn more about Detecting Wildfires Using CNN Model with 95% Accuracy here

6. Advancing Space Exploration with Anomaly Detection

Location: Mars
Project Partner: University of Bern, Switzerland
Project Goal: Design a model that can detect and classify anomalies on the Martian surface, with the aim of identifying terrestrial (or extraterrestrial) “technosignatures” – measurable properties that provide scientific evidence of past or present technology.
Results: Omdena collaborators created an AI tool capable of analyzing satellite images of Mars and identifying and labeling anomalies, distinguishing between seven types of “natural anomalies” and a “terrestrial technosignature” with precision scores of 90–99%.
Approach: Team members created a custom Python package to efficiently process large satellite image datasets from the Mars Orbital Data Explorer. They trained and tested a variety of ML models for identifying and classifying anomalies across eight classes – seven natural and one “terrestrial,” i.e., caused by humans. A U-Net model yielded the best scores, with precision between 90% and 99%.

Methods

Figure 6: Initial satellite image and labeled result

Omdena collaborators first created a Python package called mars-ode-data-access to retrieve and process large-scale imagery from the Mars Orbital Data Explorer. They then identified eight anomaly classes, using a publicly available Mars orbital dataset from Zenodo:

  • Seven natural classes: craters, dark dunes, bright dunes, slope streaks, impact ejecta, spiders, and “swiss cheese” terrain

  • One terrestrial technosignature class: debris or remnants from past Mars lander missions

The team classified and labeled approximately 300–400 images to create a training dataset. They experimented with four machine learning models — SSD, Mask R-CNN, U-Net, and AnoGAN — representing both supervised and unsupervised approaches. The U-Net model produced the highest classification precision, exceeding 90% across all anomaly types.
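Per-class precision for the eight anomaly types can be computed with standard tooling once predictions are available. The sketch below uses scikit-learn with stand-in labels; the class names follow the list above.

```python
# Minimal sketch of per-class precision for the eight anomaly classes
# (ground truth and predictions here are random stand-ins).
import numpy as np
from sklearn.metrics import precision_score

CLASSES = [
    "crater", "dark_dune", "bright_dune", "slope_streak",
    "impact_ejecta", "spider", "swiss_cheese", "technosignature",
]

rng = np.random.default_rng(0)
y_true = rng.integers(0, len(CLASSES), size=500)          # stand-in ground truth
flip = rng.random(500) < 0.05                              # simulate ~5% errors
y_pred = np.where(flip, rng.integers(0, len(CLASSES), size=500), y_true)

per_class = precision_score(y_true, y_pred, average=None, labels=list(range(len(CLASSES))))
for name, p in zip(CLASSES, per_class):
    print(f"{name:>16}: precision = {p:.2f}")
```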

Impact

The pipeline developed through this project enables scientists to efficiently access and analyze space satellite data. The final computer vision model represents a state-of-the-art approach to identifying technosignatures and provides a foundation for evaluating how such detection methods might be used in future space exploration missions.

Figure 7: Sample Output

Conclusion

These six projects show the transformative potential of combining AI with open satellite imagery to solve some of the world’s most urgent challenges. Across food security, renewable energy, disaster response, climate-driven displacement, wildfire prevention, and even space exploration, each initiative turns raw geospatial data into practical insights that help organizations act faster and more effectively. Together, they demonstrate how collaborative, mission-driven AI can unlock new levels of accuracy, reduce operational barriers, and empower governments, humanitarian agencies, and innovators to make smarter decisions that improve lives at scale.

Accelerate your mission with AI solutions that convert Earth observation data into real-world impact across agriculture, energy, disasters and beyond. Connect with Omdena today.

FAQs

Why does Omdena combine AI with satellite imagery?
To extract meaningful insights from large-scale Earth observation data and support informed decision-making for global challenges.

Why rely on open-source satellite data?
Open-source data is free and widely accessible, making it easier for NGOs, researchers, and developing regions to use for real-world impact.

What problem areas do these projects cover?
Energy poverty, hunger, disaster response, climate-driven displacement, wildfire prevention, urban development, and space research.

How are projects selected?
Projects are selected based on societal impact, aligned partners, and potential for scalable, ethical, real-world deployment.

What role does machine learning play?
ML automates tasks like classification, segmentation, object detection, and pattern analysis, turning raw imagery into actionable insights.

Why is ground-truth data important?
Ground-truth data validates satellite-based predictions, ensuring reliability before solutions are deployed in real-world contexts.

Can these solutions work in low-resource settings?
Yes. Many solutions are designed to run on lightweight systems and use global open data sources for scalability across regions.

How can an organization start a project with Omdena?
Organizations can propose a challenge, provide context and goals, and work with Omdena’s global AI community to co-build the solution.