| Demo Day Insights | AI for Disaster Response: World Food Programme Project

Helping affected populations most effectively during a disaster through AI. A collaborative Omdena team of 34 AI experts and data scientists worked with the World Food Programme to build solutions that predict affected populations and create customized relief packages for disaster response.

The entire data analysis and details about the relief package tool including a live demonstration can be found in the demo day recording at the end of the article.

 

The problem: Quick disaster response

When a disaster strikes, the World Food Programme (WFP), as well as other humanitarian agencies, need to design comprehensive emergency operations. They need to know what to bring and in which quantity. How many shelters? How many tons of food? These needs assessments are conducted by humanitarian experts, based on the first information collected, their knowledge, and experience.

The project goal: Building a disaster relief package tool for cyclones (applicable to other use cases and disaster categories)

 


Cyclones

 

Use Case: Cyclones (Solution applicable to other areas)

Tropical cyclones claim about 10,000 human lives a year. Many more people are injured, and destroyed homes and buildings result in financial damage of several billion USD. Due to climate change and extreme weather events, the impact is growing steadily.

 


Long Beach after Hurricane Katrina. Estimated damage of 168 billion dollars (Source: Wikipedia).

 

The data

The Omdena team gathered data from several sources:

  • IBTrACS – Tropical cyclone data that provides climatological speed and directions of storms (National Oceanic and Atmospheric Administration)
  • EmDAT – Geographical, temporal, human, and economic information on disasters at the country level. (Université Catholique de Louvain)
  • Socio-Economic Factors from World Bank
  • The Gridded Population of the World (GPW) collection – Models the distribution of the human population (counts and densities) on a continuous global raster surface

Missing data was collected manually or in a partially automated way by scraping Wikipedia or cyclone reports.
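To make the aggregation step concrete, the snippet below is a minimal sketch of how per-event records from the different sources could be joined with pandas. The file names and column names are hypothetical placeholders, not the team's actual schema.

```python
import pandas as pd

# Hypothetical extracts of the sources listed above
tracks = pd.read_csv("ibtracs_cyclones.csv")      # e.g. storm_id, country, year, max_wind_kt, hours_over_land
impacts = pd.read_csv("emdat_disasters.csv")      # e.g. storm_id, total_deaths, total_affected, total_damage
socio = pd.read_csv("worldbank_indicators.csv")   # e.g. country, year, gdp_per_capita, hdi, rural_population

# One row per storm with climatological, impact, and socio-economic features
events = (
    tracks.merge(impacts, on="storm_id", how="inner")
          .merge(socio, on=["country", "year"], how="left")
)
print(events.shape)
```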

 

Data exploration: Determining affected populations

The data sets were aggregated into a single table covering more than 1,000 events and 45 features characterizing cyclones and affected populations.

 


Data Exploration

 


Impact Cyclones (Landing vs. No-landing)

 

Important correlation factors to determine affected populations:

  • Rural Population
  • Human Development Index
  • GDP per capita
  • Landing
  • Wind Speed
  • Exposed population
  • Total hours over land
  • Total damage
  • Total deaths

 

The team mapped the correlation factors to determine which populations are most in need. In the example below, income level is correlated with the number of people affected. Using this historical data, the model predicts affected populations.
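As an illustration of the prediction step, here is a minimal sketch of a regression model trained on the correlated factors listed above. The feature names and the aggregated `events` table (from the earlier data sketch) are placeholders; the team's actual model and features may differ.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical feature columns mirroring the correlation factors above
features = ["rural_population", "hdi", "gdp_per_capita", "landing",
            "wind_speed", "exposed_population", "hours_over_land"]
X = events[features].fillna(0)
y = events["total_affected"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```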

 


Predicting affected populations based on income level

 

The tool: Calculating relief packages

Once an affected population has been identified, humanitarian actors need to design comprehensive emergency operations, including how much food and what type of food is needed. The project team built a food basket tool, which facilitates calculating the needs of affected populations. The tool takes into account features such as the number of days to be covered, the number of affected people, pregnancies, children, etc.
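As a rough idea of what such a calculation looks like, the function below sketches a food basket estimate from the inputs mentioned above. The per-person daily quantities are illustrative placeholders, not official WFP ration figures.

```python
def food_basket(people: int, days: int, pregnant: int = 0, children: int = 0) -> dict:
    # Placeholder per-person daily rations in grams (illustrative, not WFP figures)
    CEREAL_G, PULSES_G, OIL_G = 400, 60, 25
    SUPPLEMENT_G = 100  # extra ration for pregnant women and children (placeholder)

    person_days = people * days
    return {
        "cereals_kg": person_days * CEREAL_G / 1000,
        "pulses_kg": person_days * PULSES_G / 1000,
        "oil_kg": person_days * OIL_G / 1000,
        "supplement_kg": (pregnant + children) * days * SUPPLEMENT_G / 1000,
    }

print(food_basket(people=50_000, days=30, pregnant=1_200, children=8_000))
```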

 


The relief package tool

 

The entire data analysis and details about the relief package tool including a live demonstration can be found in the video.

 

 

The team: Collaborators from 19 countries

This Omdena project, hosted by the WFP Innovation Accelerator, united 34 collaborators and changemakers across four continents. All team members worked together for two months on Omdena's innovation platform to build AI solutions with the mission to improve disaster response. To learn more about the project, check out our project page.

 


Omdena Collaborators

 

All changemakers: Ali El-Kassas, Alolaywi Ahmed Sami, Anel Nurkayeva, Arnab Saha, Beata Baczynska, Begoña Echavarren Sánchez, Chinmay Krishnan, Dev Bharti, Devika Bhatia, Erick Almaraz, Fabiana Castiblanco, Francis Onyango, Geethanjali Battula, Grivine Ochieng, Jeremiah Kamama, Joseph Itopa Abubakar, Juber Rahman, Krysztof Ausgustowski, Madhurya Shivaram, Onassis Nottage, Pratibha Gupta, Raghuram Nandepu, Rishab Balakrishnan, Rohit Nagotkar, Rosana de Oliveira Gomes, Sagar Devkate, Sijuade Oguntayyo, Susanne Brockmann, Tefy Lucky Rakotomahefa, Tiago Cunha Montenegro, Vamsi Krishna Gutta, Xavier Torres, Yousof Mardoukhi

 

More about Omdena

Omdena is an innovation platform for building AI solutions to real-world problems through the power of bottom-up collaboration.

 

 

Enhancing Satellite Imagery Through Super Resolution


By James Tan

 

The power of deep learning paired with collaborative human intelligence: enhancing satellite imagery through super resolution to support crop cultivation analysis.

 

The Problem

In order to accurately locate crop fields from satellite imagery, images of a certain quality are required. Although deep learning is known for pulling off near-miracles, we human beings will have a very hard time labeling the data if we cannot clearly make out the details within an image.

The following is an example of what we would like to achieve.

 

Semantic segmentation example (source: https://www.frontiersin.org/articles/10.3389/feart.2017.00017/full)

 

 

If we can clearly identify areas within a satellite image that correspond to a particular crop, we can easily extend our work to evaluate the area of cultivation of each crop, which will go a long way in ensuring food security.
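For context, once a segmentation mask is available, the area estimate itself is straightforward: count the pixels per crop class and multiply by the ground area of one pixel. The sketch below assumes 10 m pixels (as in the Sentinel-2 bands used later) and arbitrary class ids.

```python
import numpy as np

# Placeholder mask: 0 = background, 1 = rice, 2 = wheat (class ids are arbitrary)
mask = np.random.randint(0, 3, size=(512, 512))
PIXEL_AREA_M2 = 10 * 10  # one 10 m x 10 m pixel

for class_id, name in [(1, "rice"), (2, "wheat")]:
    area_ha = (mask == class_id).sum() * PIXEL_AREA_M2 / 10_000  # m^2 -> hectares
    print(f"{name}: {area_ha:.1f} ha")
```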

 

The Solution

To get our hands on the required data, we explored a myriad of sources of satellite imagery. We ultimately settled on using images from Sentinel-2, largely because the mission offers the best image quality among open-source options.

 

 

Is the best good enough?

 

Original Image

 

 

While I am no expert in satellite imagery, I believe that, having seen the above image, we can all agree that its quality is not quite up to scratch.

A real nightmare to label!

It is completely unfeasible to discern the demarcations between individual crop fields.

Of course, this isn’t completely unreasonable for open-source data. Satellite image vendors have to be especially careful when it comes to the distribution of such data due to privacy concerns.

How outrageous it would be if simply anyone could look up what our backyards look like on the Internet, right?

However, this inconvenience comes at a great detriment to our project. In order to clearly identify and label crops in an image that is relevant to us, we would require images of much higher quality than what we have.

Super-Resolution

Deep Learning practitioners love to apply what they know to solve the problems they face. You can probably see where I am going with this. If the quality of an image isn't good enough, we try to enhance it, of course! This process is called super-resolution.

 

Deep Image Prior

This is one of the first things that we tried, and here are the results.

 

Results of applying Deep Image Prior to the original image.

 

Quite noticeably there has been some improvement: the model has done an amazing job of smoothing out the rough edges in the photo. The pixelation problem has been pretty much taken care of and everything blends in well.

However, in doing so the model has neglected finer details and that leads to an image that feels out of focus.

 

Decrappify

Naturally, we wouldn’t stop until we got something completely satisfactory, which led us to try this instead.

 

Results of applying Decrappify to the original image.

 

 

Now, it is quite obvious that this model has done something completely different than Deep Image Prior. Instead of attempting to ensure that the pixels blend in with each other, this model instead places great emphasis on refining each individual pixel. In doing so it neglects to consider how each pixel is actually related to its surrounding pixels.

Although it succeeds in injecting some life into the original image by making the colors more refined and attractive, the pixelation in the image remains an issue.

 

The Results

 

Results of running the original image through Deep Image Prior and then Decrappify.

 

When we first saw this, we couldn’t believe what we were watching. We have come such a long way from our original image! And to think that the approach taken to achieve such results was such a silly one.

Since each of the previous two models was no good individually, but each was clearly good at getting a different thing done, what if we combined the two of them?

So we ran the original image through Deep Image Prior, and subsequently fed the results of that through the Decrappify model, and voila!

Relative to the original image, the colors of the current image look incredibly realistic. The lucid demarcations of the crop fields will certainly go a long way in helping us label our data.

 

Our Methodology

The way we pulled this off was embarrassingly simple. We used Deep Image Prior, which can be found at its official GitHub repository. As for Decrappify, given our objectives, we figured that training it on satellite images would definitely help. With the two models set up, it's just a matter of feeding images into them one after the other.
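Schematically, the pipeline looks like the sketch below. The two helper functions are placeholders standing in for the official Deep Image Prior optimization and a trained Decrappify inference call; they are not real APIs from those repositories.

```python
from PIL import Image

def run_deep_image_prior(img: Image.Image) -> Image.Image:
    """Placeholder for the per-image Deep Image Prior optimization (smooths pixelation)."""
    return img  # stand-in; the real step runs the official DIP code on this image

def run_decrappify(img: Image.Image) -> Image.Image:
    """Placeholder for inference with a Decrappify model fine-tuned on satellite images."""
    return img  # stand-in; the real step runs the trained Decrappify model

original = Image.open("sentinel2_tile.png")    # hypothetical input tile
smoothed = run_deep_image_prior(original)      # stage 1: remove pixelation
enhanced = run_decrappify(smoothed)            # stage 2: refine colors and fine details
enhanced.save("super_resolved_tile.png")
```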

 

A Quick Look at the Models

For those of you that have made it this far and are curious about what the models actually are, here’s a brief overview of them.

Deep Image Prior

This method hardly conforms to conventional deep learning-based super-resolution approaches.

Typically, we would create a dataset of low- and super-resolution image pairs, following which we train a model to map a low-resolution image to its high-resolution counterpart. However, this particular model does none of the above and, as a result, does not have to be pre-trained prior to inference time. Instead, a randomly initialized deep neural network is trained on one particular image. That image could be one of your favorite sports stars, a picture of your pet, a painting that you like, or even random noise. Its task is then to optimize its parameters to map the input image to the image that we are trying to super-resolve. In other words, we are training our network to overfit to our low-resolution image.

Why does this make sense?

It turns out that the structure of deep networks imposes a 'naturalness prior' over the generated image. Quite simply, this means that when overfitting to/memorizing an image, deep networks prefer to learn the natural/smoother concepts first before moving on to the unnatural ones. That is to say, the convolutional neural network (CNN) will first 'identify' the colors that form shapes in various parts of the image and then proceed to materialize the various textures. As the optimization process goes on, the CNN will latch on to finer details.

When generating an image, neural networks prefer natural-looking images as opposed to pixelated ones. Thus, we start the optimization process and allow it to continue to the point where it has captured most of the relevant details but has not yet learned any of the pixelation and noise. For super-resolution, we train it to the point where the resulting image closely resembles the original image when they are both downsampled. There exist multiple super-resolution images that could have produced a given low-resolution image.

And as it turns out, the most plausible image is also the one that doesn't appear to be highly pixelated; this is because the structure of deep networks imposes a 'naturalness prior' on generated images.
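To make the idea tangible, here is a heavily condensed PyTorch sketch of the Deep Image Prior setup: a small, randomly initialized CNN is optimized so that its output (from a fixed noise input) reproduces a single target image, and the optimization is stopped early. The architecture and step count are toy choices, not the official implementation.

```python
import torch
import torch.nn as nn

target = torch.rand(1, 3, 128, 128)   # stand-in for the single image being fitted
z = torch.rand(1, 32, 128, 128)       # fixed random input; it is never updated

net = nn.Sequential(                  # tiny stand-in for the real encoder-decoder network
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):               # stop early: natural structure is learned before noise
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(z), target)
    loss.backward()
    opt.step()

reconstruction = net(z).detach()      # a smoother, "naturalness-prior" version of the target
```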

We highly recommend this talk by Dmitry Ulyanov (who was the main author of the Deep Image Prior paper) to understand the above concepts in depth.

The super-resolution process of Deep Image Prior

 

 
Decrappify

In contrast with the previous model, this one is all about learning as much as possible about satellite images beforehand. As a result, when we give it a low-quality image as input, the model is able to bridge the gap between the low- and high-quality versions of it by using its knowledge of the world to fill in the blanks.

The model has a U-net architecture with a pre-trained ResNet backbone. But the part that is really interesting is the loss function, which has been adapted from this paper. The objective of this model is to produce an output image of higher quality such that, when it is fed through a pre-trained VGG16 model, it produces minimal 'style' and 'content' loss relative to the ground truth image. The 'style' loss is relevant because we want the model to create a super-resolved image with a texture that is realistic for a satellite image. The 'content' loss is responsible for encouraging the model to recreate intricate details in its higher-quality output.
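As a rough illustration of the 'content' part of such a loss, the sketch below compares output and target in the feature space of a frozen, pre-trained VGG16. The layer cut-off and the L1 distance are illustrative choices, not the exact Decrappify configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Frozen feature extractor: the first convolutional blocks of a pre-trained VGG16
features = vgg16(pretrained=True).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def content_loss(output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Compare images in VGG feature space rather than pixel space
    return nn.functional.l1_loss(features(output), features(target))

pred = torch.rand(1, 3, 224, 224)    # placeholder model output
truth = torch.rand(1, 3, 224, 224)   # placeholder ground-truth image
print(content_loss(pred, truth))
```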

 

 

 

More About Omdena

Omdena is an innovation platform for building AI solutions to real-world problems through the power of bottom-up global collaboration.

A Practical Guide for Creating A Quality Satellite Imagery Dataset for Agricultural Applications


By Łukasz Murawski, Alexander Epifanov, and Jayasudan Munsamy.

This article is the result of working in Omdena’s AI project to estimate crop yield and crop classification with the UN World Food Program in Nepal by using satellite imaging analysis. The problem was tough, challenges were huge, and resources scarce. Still, a community of 36 collaborators managed to build a solution with 89% accuracy. This article focuses on the dataset creation.

 

The characteristics of a ‘good’ dataset

'Rubbish in — rubbish out' is a popular phrase referring to poor ML model results caused by poor-quality datasets.

No matter how good a model architecture is, it's useless if not trained with appropriate, good data. At the same time, the opposite holds true — sometimes even a basic model architecture is enough to get the job done if fed with a proper dataset for training.

And while so much has been written about ML model building, we think that dataset preparation has been somewhat neglected. This is understandable because it isn't the sexiest of subjects and takes forever to get right. Unfortunately, our path to realizing the importance of creating a good dataset was painful and agonizing.

So what really is a 'good' dataset? Every project is different and has unique requirements for its dataset.

Let's start with the characteristics of the expected outcome — our ML model. A good model is one that achieves the best results on a variety of input data — we say the model generalizes well and call it 'robust'.

In short, to achieve this, we need to train the model with high quality, precisely labeled datasets representing the full spectrum of input possibilities. The importance of data in ML can be understood from the fact that ‘typically 80% of an ML project is spent on data — analysis, gathering & engineering’.

The key characteristics of a good dataset are listed below:

  • Data distribution — data should cover all or most of the possible spectrum of the input
  • Data coverage — every class should have enough representation in the dataset
  • Data accuracy — data should be highly relevant to the task at hand and be as close as possible to the data used at inference time, in terms of quality, format, etc.
  • Feature engineered — data should enable the ML model to learn what we intend it to learn (appropriate features)
  • Data transformation — almost always data acquired cannot be used as-is and an appropriate data transformation pipeline can simplify the model architecture
  • Data volume — depending on whether the ML model is built from scratch or learning transferred from another model, availability of data is critical
  • Data split — typically data is split into 3 chunks: training (75%), validation (15%) & test (10%), and it's important to ensure there is no 'duplicate/same' data across these chunks and that the samples are distributed properly (see the sketch after this list)
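As mentioned in the last point, avoiding 'duplicate/same' data across splits matters most when several samples come from the same source (e.g., tiles cut from the same crop field). Below is a minimal, hypothetical sketch of a leakage-aware split using grouping; field ids and proportions are illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_tiles = 1000
field_id = np.random.randint(0, 120, size=n_tiles)   # which crop field each tile comes from
indices = np.arange(n_tiles)

# First split off ~25% of the fields as a holdout, keeping each field in one split only
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, holdout_idx = next(gss.split(indices, groups=field_id))

# Then split the holdout into validation (~15% overall) and test (~10% overall)
gss2 = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
val_rel, test_rel = next(gss2.split(holdout_idx, groups=field_id[holdout_idx]))
val_idx, test_idx = holdout_idx[val_rel], holdout_idx[test_rel]

print(len(train_idx), len(val_idx), len(test_idx))
```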

 

One of the key challenges with ML projects is that the exact requirements for data are NOT known at the time of data analysis & gathering and are sometimes known only after the model is built and its shortcomings understood. So, an iterative approach to creating and refining datasets in order to improve model metrics is a safe bet.

 

 

The Solution: Seven recommendations for creating satellite imaging analysis datasets for crop classification

 

Earth seen through satellite

Photo by NASA on Unsplash

 

Satellite images can be in visible colors (RGB) and in other spectra, i.e. data within specific wavelength ranges across the electromagnetic spectrum such as near-infrared. There are also elevation maps, usually derived from radar images, which can be used to estimate vegetation growth rates, etc. Normally, interpretation and analysis are conducted using specialized remote sensing software, but advances in AI have made autonomous, large-scale analysis of satellite imagery, such as crop classification, possible. We have listed some of the key points to consider below.

  • Ground truth data — know the ground truth
  • Source of satellite images — decide the source
  • Spatial distribution — know the terrain
  • Temporal distribution — know the crop growth cycle
  • Image quality — know what’s in the images
  • Vegetation indices — know the right indices
  • Labeling and Masking — know what is what & where in the images

 

#1 Ground truth data — know the truth

Almost every stage of the project depends on the Ground Truth (GT) data provided. In satellite imaging analysis for crop classification, it should contain all the details about the crop fields needed to identify them individually, so that the information can be fed to ML models via appropriate datasets for correct feature extraction. In many cases, GT data comes as a file containing the required information gathered during field surveys. Usually, it's a simple spreadsheet filled with a wide variety of field information that can be directly used for labeling. But in practice, we may not have all the required information and should pay close attention to the content of the file, for two main reasons:

  • Acquiring comprehensive field survey results is expensive, if possible at all
  • Preparing proper Ground Truth data file requires an understanding of all the details required for creating robust ML models

In short, the goal of the Ground Truth data should be to provide complete, well-balanced, and properly distributed data. It should also serve as a reference on how to recognize different objects of interest to facilitate complete and reliable labeling.

It should have the following characteristics:

  • Contain all the required details of the data (crop parcels in the agriculture case) — for example, in the case of crop type identification, the GT should specify crop field ids, their dimensions, GPS locations, shape, size, crop cultivated, seasonal crops for the field, crop cycle details, land use patterns, etc.
  • A well-balanced number of classes (Ex: equal or similar number of samples for each class)
  • Well distributed spatially (various terrains in area of interest) and temporally (covering various time periods like seasons/crop cycles, etc.), representing the full range of possible scenarios
  • The number of data points must exceed the expected number of images, since for some/many of them there won't be good satellite images (due to weather conditions, pollution, etc.)
  • Should contain samples from a few different years, so that in case one year's satellite images cannot be used due to bad weather conditions or other reasons, we can use images from the other years mentioned in the ground truth data to compensate for the data loss
  • Should highlight the periods when the objects of interest (say, different crops) are easiest to recognize. For example, the months when the crops are fully grown help with easy labeling (from there, we can propagate masks knowing the vegetation specifics)
  • Should include examples of similar kinds of crops/vegetation (not just visually but also in terms of VIs, if possible) in the regions around the area of interest. For example, if rice fields are the class of interest, examples of grass, cornfields, etc. in the surrounding area which look similar to rice fields should also be included

The key point to remember is that the quality of the Ground Truth data will have a big impact on the dataset created, the labeling/masking done on it, and ultimately the results of the solution.

 

#2 Source of satellite images — decide the source

With the advent of satellites launched by many countries and private organizations, satellite imagery has become more accessible to the general public for a variety of applications, in our case crop classification. Some of the more popular programs are Landsat (by USGS & NASA, 30m resolution since the early 1980s), MODIS (by NASA, near-daily satellite imagery of earth in 36 spectral bands since 2000), Sentinel (by ESA, imagery of earth every 5 days in 16 spectral bands since 2016) and ASTER (by NASA, detailed maps of land surface temperature, reflectance, and elevation).

 

Organizations selling satellite imagery

Several private organizations sell raw & processed satellite imagery, with data customized as required by customers. A few popular ones are GeoEye (since Sep 2008, images with a ground resolution of 0.41 meters (16 inches) in panchromatic (black-and-white) mode, plus multispectral or color imagery at 1.65-meter resolution, or about 64 inches), DigitalGlobe (imagery with 0.46m & 0.6m panchromatic-only spatial resolution, as well as images with 0.31m spatial resolution), the OneAtlas platform (by Airbus, optical & radar Earth observation), Spot Image (images with 1.5m for the panchromatic channel, 6m for multi-spectral, and 0.50 meters or about 20 inches) and ImageSat International (also known for its "EROS" satellites; images can be used for mapping, border control, infrastructure planning, agricultural monitoring, environmental monitoring, disaster response, training, simulations, etc.).

Key decision points for choosing the satellite imagery source for crop classification:

  • Raw or processed datasets — raw satellite images can't be used directly, and processing them for crop classification is an involved activity requiring various tools: processed images that are readily available as part of existing datasets are a good choice to start with
  • Image quality — sharp images with clear differentiation of the objects we are interested in are critical: the higher the resolution, the better the results with the ML model
  • The spatial resolution of images — the area on the ground covered by a single pixel in the satellite image: the lower this value (commercial imagery goes down to 15cm), the better. Some organizations improve the spatial resolution of final images by applying scaling techniques, which can degrade image quality and hence needs to be watched out for (ex: a few bands of the Sentinel2 dataset are resampled/scaled to a constant Ground Sampling Distance depending on the native resolutions of the bands and hence can have a spatial resolution of 10m, 20m, or 60m, but the corresponding images will be of lower quality due to the resampling)
  • Free or paid — may seem like an easy choice, but aspects like quality, the processing done, completeness of data, etc. in the provided datasets depend on the effort spent by the provider: the source of images used at inference time and the accuracy/other metrics required of the ML models mainly drive the decision
  • Temporal image coverage — depending on where, when & why the satellites were launched, imagery may be available only for certain geographies and periods (ex: the Sentinel2 Level1C dataset has images only from June 2015 onwards): the temporal data required for the task will help to decide
  • Spectral bands to use — depending on the sensors in the satellites, various spectral data will be available in the images (ex: Sentinel2 imagery contains 13 spectral bands): the vegetation indices needed for the type of remote sensing task will help to decide
  • Number of images per day/week/month — depending on the frequency of the satellite orbiting over the area of interest, the number of images may vary (ex: Sentinel2 revisits a location every 5 days, so in a month there will be 4 to 6 images of a particular location): the volume of images required will help to decide
  • Image processing done — image quality decreases with factors like cloud cover, haze, pollution, etc., and many organizations apply processing techniques to remove such distractions from the images: the image quality requirements for the task at hand will help to decide

 

#3 Spatial distribution — know the terrain

 

Nepal’s agro-ecological zones in different terrains

 

While RGB bands in satellite images can show the crop fields, the terrain of these fields also plays an important role. For example, crop fields in plains tend to be large, with regular shapes, and similar crops are usually in the neighborhood; in hilly areas, crop fields tend to be small, with varying shapes and altitudes, and a mix of other vegetation may surround the fields; in forested areas, crop fields tend to be surrounded by thick trees without a clear visual delineation of the fields and their borders. Hence, understanding the various terrains the crop fields are in becomes important.

The recommendation is to look at satellite images from different time periods/seasons to understand the terrain of the area of interest, include images from various terrains in the dataset, consider the challenges with images from different areas while labeling/masking, and address those challenges as much as possible with appropriate labeling.

 

#4 Temporal distribution — know the growth cycle

 

Photo of corn fields

Photo by Kai Pilger on Unsplash

 

Satellite images from all the months covering various growth stages of crops should be added to the dataset. These images will help the ML model to generalize well and be able to accurately identify crops irrespective of the growth stage of crops. A better understanding of the crops’ growth cycle and seasonal crops cycle can help to find satellite images of crops at different stages of growth.

While looking for temporal data, there is a possibility that a few months in a specific year do not have any images due to bad weather or climatic conditions. In such cases, consider choosing images from other years for these months. An important assumption that needs to be validated here is that the crop cycle & seasonal crops for those years are the same as the year with ground truth. In some cases, the same crop can have a different growth cycle in different regions.

In our case with Nepal, there were 10 to 12 varieties of rice widely adopted by farmers, having two main growing seasons depending on rice variety: 1) Spring rice (February/March to June/July): Chaite 2, Chaite 4, Ch 45, Bindeswar, etc. and; 2) Main season rice (June/July to October/November): Mahsuri, Savitri, etc. (Source).

Another important point to consider regarding temporal data is the land use pattern of cultivated fields. Though there might be a defined crop cycle for each crop, there can be scenarios where the same crop fields are used for different crop cultivation in different seasons (ex: seasonal short-term crops may be cultivated in the same fields after the main crop's harvest and before the next sowing). Missing satellite images from the months representing these seasonal crops will result in an incomplete dataset and, in turn, inaccurate ML models.

Talking to the subject matter experts/farmers in the area of interest to understand the temporal data to be captured is critical at this stage of dataset creation.

 

#5 Image quality — know what’s in the images

The resolution of satellite images is relatively high and image processing is time-consuming. Likewise, depending on the sensor from which the imagery was created, appropriate processing is required before consuming the images. Weather (rain, clouds, etc.) and environmental (pollution, haze, etc.) conditions can also affect image quality. For these reasons, publicly available satellite image datasets are typically pre-processed for visual or scientific use by third parties.

Just like any other digital image, the resolution of satellite images is critical for the purpose and varies depending on the instrument used and the altitude of the satellite’s orbit. There are four types of resolution when discussing satellite imagery in remote sensing: spatial (pixel size of an image representing the size of the surface area being measured on the ground), spectral (wavelength interval size and number of intervals), temporal (amount of time/days that passes between imagery collection periods for a given surface location) and radiometric (levels of brightness/contrast).

 

Satellite image spatial resolution vs quality

 

Though there are open satellite imagery datasets available to the public free of cost, quality images to be used for specific purposes like crop growth detection, crop classification, crop type identification, etc. are expensive. The higher the quality, the higher the cost. As simple as that. But high-quality images are not always required. For example, for a project aimed at identifying building structures in satellite images, we may not require images with a spatial resolution as fine as 30cm or so. So, there's no standard guideline or single rule suggesting the minimum or maximum image quality required for a project. It all depends on the objective of the project. However, there are two main factors which need to be considered before deciding on the quality of images to use:

  • ML model's accuracy/performance metrics — will the chosen image quality help to meet the performance requirements of the project?
  • Data labeling potential — is the chosen image quality good enough for labeling, given the kind of objects to be identified in the images?

The decision on point 1 is quite obvious — we can test the ML models using images of different quality and choose the lowest quality that still meets the project's requirements/metrics. The decision on point 2 is more subjective in terms of what objects are to be identified in the images and should be agreed upon at the beginning of the project by carefully assessing the labeling capabilities. The image quality must be good enough for labeling, so people can easily see and draw masks around all objects/classes of interest. Solutions that require small objects to be identified, where distinguishing edges is more important than estimating the overall coverage, require very high-quality images. That was the case with Omdena's Trees Recognition Challenge, where the goal was to identify trees close to electricity lines in order to prevent power outages and fires. Here, extremely accurate masking, close to the trees' edges, was necessary. For that, images with 0.5m spatial resolution had to be used. Thus, not only trees but also little bushes and shadows had to be precisely annotated. And that paid off. With only 150 original images and very basic transformations, using a Deep UNet model, the team achieved around 95% accuracy.

 

From trees identification project at Omdena

 

For our crop identification project in Nepal, such a high resolution was not necessary as crop field shapes were mostly regular (except for the ones in hilly areas), with borders being mostly straight lines. So, in this project, the ability to distinguish similar objects (like rice vs. grass) at the labeling stage was the main factor in deciding on image quality. We ended up using Sentinel2 Level-1C satellite images with 13 spectral bands from the Copernicus program (European Space Agency), with a maximum spatial resolution of 10m per pixel for certain bands. Unfortunately, it turned out that the maximum zoom level that still gave clear RGB images was only 100m. And that zoom level was not good enough for labeling, since the crop field areas appear too small in the agricultural setting, as seen in the images below (the actual images used were 500×500 pixels and yet many fields appeared too small/unclear to recognize).

 

 

Crop fields satellite images (left: rice, right: wheat) at 100m zoom level

 

Since data from the RGB channels alone is not enough for our AI model to identify crops, we could use data from the other spectral bands in Sentinel2 imagery to calculate different vegetation indices, include them in the images, and then train ML models with that dataset. As suggested in this paper, the dataset can be based on images with other bands, including RGB and appropriate vegetation indices.

However, without good RGB images, the problem of proper labeling/masking persists. And it seems that there are basically very few options as listed below:

  • Assuming that the final solution can't be based on commercially obtained high-quality satellite images, such images might still be necessary at least for one-time masking/labeling at the model creation phase. Those masks can then be used to train the model using the multi-spectral bands and appropriately calculated vegetation indices as bands in the images for crop classification.
  • Make sure the ground truth data defines the dimensions of every field precisely, using GPS coordinates, to help the labeling/masking team mark crop fields accurately.
  • Draw masks on satellite images automatically using GIS software, by physically being in the field with teams of people carrying GPS devices.

 

#6 Vegetation indices — know the right indices

As seen in the above section, for specialized tasks like differentiating vegetation types, it is required to analyze data contained in other spectral bands (ranging from 3 to 16 bands) of satellite images other than just RGB bands. This is where Vegetation Indices (VIs) play a critical role. Vegetation Indices are combinations of surface reflectance at two or more wavelengths designed to highlight a particular property of vegetation.

They are derived using the reflectance properties of vegetation, and each VI is designed to accentuate a particular vegetation property. Satellite images from different organizations have a varying number of spectral bands containing data useful for VI calculations. For example, Sentinel 2 satellite imagery from ESA comes from a wide-swath, high-resolution, multi-spectral imaging mission supporting land monitoring studies (vegetation, soil, water cover, inland waterways & coastal areas) and has 13 spectral bands containing top-of-atmosphere (TOA) reflectance levels which can be used for a variety of VI calculations.
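As a simple example, NDVI (Normalized Difference Vegetation Index) is computed from the red and near-infrared reflectance bands, which for Sentinel-2 are B04 and B08. The sketch below uses random arrays as stand-ins for the actual band rasters.

```python
import numpy as np

red = np.random.rand(512, 512).astype(np.float32)   # stand-in for Sentinel-2 B04 reflectance
nir = np.random.rand(512, 512).astype(np.float32)   # stand-in for Sentinel-2 B08 reflectance

ndvi = (nir - red) / (nir + red + 1e-6)              # small epsilon avoids division by zero
print(ndvi.min(), ndvi.max())
```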

More than 150 VIs have been published in scientific literature, but only a small subset have a substantial biophysical basis or have been systematically tested. Many tools/platforms provide support for calculating various VIs. For example, the SNAP platform part of Sentinel Toolboxes supports around 21 VI calculations and the ENVI Image Analysis platform provides 27 VIs to use to detect the presence and relative abundance of pigments, water, and carbon as expressed in the solar-reflected optical spectrum (400 nm to 2500 nm). VIs can be broadly categorized into the following groups — Broadband Greenness, Narrowband Greenness, Light Use Efficiency, Canopy Nitrogen, Dry or Senescent Carbon, Leaf Pigments, and Canopy Water Content.

The table below shows a few vegetation indices applicable to differentiating rice, wheat & other crops, with respect to the Sentinel2 Level1C dataset for Nepal's specific cultivation areas. The analysis was based on data from 30+ sample crop fields of each category. Unfortunately, VI values for crops vary from region to region based on climatic conditions, temperature, soil conditions, etc., so an analysis done for crops in one region cannot be reused for the same crops in other regions and needs to be redone for each region/area of interest.

 

Vegetation Indices analysis for crops (rice and wheat) differentiation

 

Specifically, in the case of crop type identification, calculating the chosen vegetation indices, adding them as bands/channels in the image dataset, and then feeding them to ML models will help the model learn correlations between crop type and vegetation indices and ultimately identify different crop types with high accuracy. So, do not forget to explore the various VIs suitable for the task at hand.
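A minimal sketch of what 'adding indices as channels' can look like in practice, assuming the index raster has already been computed and resampled to the same grid as the RGB tile:

```python
import numpy as np

rgb = np.random.rand(512, 512, 3).astype(np.float32)       # placeholder RGB tile
ndvi = np.random.rand(512, 512).astype(np.float32)         # placeholder precomputed index for the same tile

sample = np.concatenate([rgb, ndvi[..., None]], axis=-1)   # (512, 512, 4) model input: RGB + NDVI
print(sample.shape)
```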

 

#7 Labeling/ Masking — know what & where

Depending on the ML model architecture, classification or segmentation, the objectives of labeling will change:

  • For classification — diversified dataset of pictures labeled with one class per image
  • For segmentation — every parcel belonging to each class of interest needs to be annotated/masked in every image, along with other objects around them

Since labeling for the segmentation task is more demanding, here are a few points to get it right:

  • Every crop field belonging to every class of interest should be masked. Omitting/mislabeling and not being accurate enough will result in false negatives/positives at the model training stage and will impact the model performance
  • Labeling needs to be accurate with regard to the objects of interest in the image — say, crop field/parcel borders: ideally, every parcel should have its own, clearly outlined border
  • If labeling is required for temporal data (satellite images of a location taken at various time intervals, e.g. every week, month, or year), as in a crop type identification or crop classification project, we can manually label only the images in which the objects are at their easiest-to-recognize stage. That means, in a crop type identification project, we can label only images with crops at the fully grown stage. Since the location of the fields does not change across temporal images, the masks from those images can be propagated to the other temporal images, saving a lot of time & effort (see the sketch after this list)
  • To ensure our model learns to differentiate crop types cultivated at the very same fields during different seasons (Ex: rice-wheat-fallow, rice-winter maize-fallow), it is important to annotate not only the main seasonal crops but also samples of other crops cultivated in the same field
  • Labeling should enable ML models to differentiate the objects of interest from other objects in the images that look similar. So, it is a good idea to have samples of similar objects in the dataset and to label them appropriately. For example, grass & little bushes might look similar to rice fields in RGB images, so creating a separate label category for these similar-looking objects is a good idea
  • Similarly, the areas/environments surrounding the main object of interest in an image should also be labeled to help highlight differences. For example, in the case of a crop identification project, images with rice fields near river banks or on a river bed should be annotated with the river and riverbank as well. Since a rice field in the initial 1 to 5 weeks or so will be filled with water, the model should be able to differentiate rice fields during the first few weeks from an actual river or a nearby pond.
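A tiny, hypothetical sketch of the mask propagation mentioned above: a mask drawn on the acquisition where crops are fully grown is copied to every other acquisition date of the same tile (valid only because field boundaries do not move between dates). Paths and the file layout are made up for illustration.

```python
import shutil
from pathlib import Path

# Mask drawn manually on the date when the crops were easiest to recognize (hypothetical paths)
labeled_mask = Path("masks/tile_042_2019-09.png")

for image in sorted(Path("images/tile_042").glob("*.png")):   # all acquisition dates of the tile
    date = image.stem.split("_")[-1]                           # e.g. "2019-03", "2019-06", ...
    shutil.copy(labeled_mask, Path("masks") / f"tile_042_{date}.png")
```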

 

 

About Omdena

Omdena is an innovation platform for building AI solutions to real-world problems through the power of bottom-up collaboration.

 
