Monitoring Reforestation Success using Machine Learning

Published
Sep 13, 2022

A comparative case study on using machine learning techniques on satellite data, drone imagery, and field data to monitor reforestation success.

Why we need reforestation

The world is facing a climate crisis. Global temperatures are rising, the ice caps are melting, and the oceans are warming. The Earth's natural resources are being depleted, and we need to act now, before it is too late. One of the most effective actions we can take right now is reforestation, which can help mitigate the climate crisis.

The project partner

Our partner Bôndy runs an incredible reforestation program aimed at mitigating climate change: planting thousands of trees in Madagascar, one of the world's biodiversity hotspots, to create social and environmental impact and to help the rural population earn more sustainable revenues. Bôndy's community-based reforestation program in Madagascar spans 5 regions, covering a range of climate zones and land uses from agroforestry to mangroves.

The goal of this Omdena challenge was to build an artificial intelligence algorithm that uses satellite and drone imagery to monitor reforestation success over the first 5 years of tree growth and survival, since the greatest risk of tree mortality falls within the first 5 years after planting.


Project pipeline


Data sources

The following data sources have been used in the challenge.

A. Satellite Data Sources:

1. Planet satellite monthly data: Norway’s International Climate & Forests Initiative (NICFI) mosaics contain both monthly and biannual collections. (Biannual collections are generated every 6 months). For this challenge, the team used monthly data. 

  • Red Green Blue and Near InfraRed bands
  • 4.7m resolution
  • May 2020 – May 2022

2. Sentinel 2 satellite cloud-free L3A data: Level 3A products for Sentinel-2 are monthly, cloudless, surface reflectance syntheses.

  • Red Green Blue and Near InfraRed bands
  • 10m resolution
  • May 2020 – May 2022     

Satellite Data Sources

B. Drone Image Sources: 

These are the drone images provided by Bôndy.

1. Bôndy-collected data

  • Red Green Blue bands
  • Range of ground sampling distance 5cm +
  • May 2020 – July 2022

Georeferenced drone image of a Bôndy field

C. Meteorological Data: 

1. ERA-5 data: ERA5-Land is a reanalysis dataset providing a consistent view of the evolution of land variables over several decades at an enhanced resolution compared to ERA5.

  • Temperature, Precipitation, and Evapotranspiration
  • 10km (0.1 degree) spatial resolution
  • May 2020 – July 2022

ERA 5 Land Reanalysis Data

D. Field Data:

Bôndy collected field information: This is the information Bôndy provided about their 150 fields and their outlines. The data was delivered as KMZ files covering the 5 regions.

  • 150 Bôndy field outlines
  • Tree planting data per field
  • 1800 Tree locations and tree images

Field Data

Data preprocessing and analysis

The team used the Python library Rasterio to read the satellite data, retrieve geographic metadata, transform coordinates, crop images, merge multiple images, and save the data in other formats. The Python libraries EarthPy and Matplotlib were used for data visualization.
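At the core of the cropping step is the affine geotransform that libraries like Rasterio attach to each raster, mapping map coordinates to pixel indices. A minimal sketch of that mapping, using a GDAL-style geotransform tuple and hypothetical coordinate values (Rasterio itself exposes the same information via `src.transform`):

```python
def world_to_pixel(transform, x, y):
    """Map geographic coordinates (x, y) to (row, col) pixel indices.

    `transform` is a GDAL-style affine geotransform:
    (x_origin, pixel_width, 0, y_origin, 0, -pixel_height).
    """
    x_origin, pixel_w, _, y_origin, _, neg_pixel_h = transform
    col = int((x - x_origin) / pixel_w)
    row = int((y - y_origin) / neg_pixel_h)
    return row, col

def crop_window(transform, bounds):
    """Pixel window (row_start, row_stop, col_start, col_stop) for a
    bounding box (min_x, min_y, max_x, max_y) in map coordinates."""
    min_x, min_y, max_x, max_y = bounds
    r0, c0 = world_to_pixel(transform, min_x, max_y)  # top-left corner
    r1, c1 = world_to_pixel(transform, max_x, min_y)  # bottom-right corner
    return r0, r1, c0, c1

# Hypothetical 10 m/pixel raster anchored at (100, 200):
transform = (100.0, 10.0, 0, 200.0, 0, -10.0)
window = crop_window(transform, (100.0, 100.0, 200.0, 200.0))  # -> (0, 10, 0, 10)
```

The returned window can then be used to slice the band arrays that Rasterio reads.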

The field data (KMZ files) provided by Bôndy were cleaned using the open-source software QGIS with the KML Tools plugin to extract valid polygon parcels for Bôndy's reforestation sites.

The team used the Python library PIL to extract the metadata stored in the EXIF tags of the drone images, recovering the image centroid, date and time of collection, flight altitude, and camera information such as sensor height, width, and focal length. The images were then georeferenced using the OpenDroneMap (WebODM) software, which turns plain 2D images into georeferenced orthorectified maps (orthomosaics), georeferenced digital surface models (DSMs), and 3D textured models. The Ground Sample Distance (GSD) is also extracted with OpenDroneMap; it is very useful in modeling for estimating the number of trees in an image.
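The GSD that OpenDroneMap reports follows the standard photogrammetric relation between sensor geometry and flight altitude. A sketch of that formula, with illustrative camera values (the numbers below are hypothetical, not from Bôndy's drones):

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground sample distance in cm/pixel.

    Standard photogrammetric relation: one pixel on the sensor projects
    to (sensor_width / (focal_length * image_width)) * altitude metres
    on the ground; the factor 100 converts metres to centimetres.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical 1-inch sensor (13.2 mm wide, 8.8 mm focal length),
# 5472 px image width, flown at 100 m: roughly 2.74 cm/pixel.
gsd = ground_sample_distance(13.2, 8.8, 100.0, 5472)
```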

Data Preprocessing and Analysis

Applying super-resolution on images

The team tested a number of pre-trained super-resolution models on the drone and satellite data, and built an SRCNN model, following the paper Image Super-Resolution Using Deep Convolutional Networks by Dong et al., for super-resolving both satellite and drone images. The team primarily worked with a reimplementation of the model by WarrenGreen, whose published weights were themselves trained on satellite imagery, and started model training via transfer learning from those weights.

Training proceeded in stages. First, the model was trained on the relatively high-resolution drone images provided by Bôndy. After reaching good accuracy, it was re-trained on medium-resolution images and finally tested on the lower-resolution images. The model takes 400×400 images as input; larger training images were cropped in a convolutional manner into 400×400 tiles, augmented by rotation, and then used for training. Before training, the team created low-resolution inputs by scaling each image down to half its size and immediately scaling it back up, producing a blurred low-resolution version. These degraded images were the training input, and the model's output was compared with the original images for the loss calculation.

After training was complete, the model was exported to the ONNX format for faster inference. Used as a kind of convolutional filter, the model can convert images of any shape to a super-resolved state, sharpening them and filling in missing detail in the upscaled images.
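The downscale-then-upscale degradation used to build the training pairs can be sketched in a few lines of NumPy. This is a simplified stand-in (block averaging and nearest-neighbour repeat) for whatever interpolation the actual pipeline used:

```python
import numpy as np

def make_lowres_pair(img):
    """Create a blurred low-resolution version of a single-band image
    `img` (H, W) by scaling it down 2x (block averaging) and
    immediately scaling it back up (nearest-neighbour repeat),
    mirroring the degradation used to build SRCNN training inputs.
    H and W must be even."""
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    low = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    return low  # same shape as img, but fine detail is lost
```

During training, `low` is the model input and the original `img` is the target for the loss.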

Super Resolution of images

Vegetation indices

Three vegetation indices were calculated: the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Modified Soil-Adjusted Vegetation Index (MSAVI2). The index calculations were done with the Python library Rasterio.

Vegetation Indices (NDVI, NDWI and MSAVI2)

Time series for meteorological data

To monitor the impact of the environment on tree health, three environmental parameters were chosen: temperature, precipitation, and evaporation. Time series models for each parameter were built for 3 regions of Madagascar using Facebook's Prophet library. The data, retrieved from the ERA5-Land hourly dataset (1950 to present), was processed with the Python library xarray at the grid point closest to each parcel.
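Prophet expects a two-column dataframe with a `ds` timestamp column and a `y` value column, so the hourly ERA5-Land series must first be aggregated and reshaped. A hedged sketch of that preparation step with pandas (Prophet itself is not invoked here; the fit would be `Prophet().fit(df)`):

```python
import pandas as pd

def prepare_for_prophet(hourly: pd.Series) -> pd.DataFrame:
    """Aggregate an hourly series (DatetimeIndex) at one grid point to
    daily means and rename columns to the ds/y schema Prophet expects."""
    daily = hourly.resample("D").mean()
    return daily.rename_axis("ds").reset_index(name="y")

# Two days of synthetic hourly values:
idx = pd.date_range("2021-01-01", periods=48, freq="h")
df = prepare_for_prophet(pd.Series(range(48), index=idx))
# df now has columns ["ds", "y"], one row per day
```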

Temperature and evaporation time series

Precipitation Trend

Modeling

The UNet++ architecture was used for training. It is a semantic segmentation architecture based on UNet and was originally applied to medical images. We chose UNet++ for this challenge because it outperforms UNet by using connected, nested decoder sub-networks that enhance the processing of extracted features. EfficientNet is used as the backbone, also known as the feature extractor, in our model.

Modeling Steps

Postprocessing:

The model outputs raw masks: pixel-wise detections of where trees are located. It cannot count trees directly, so the outputs are post-processed, given the ground sampling distance (GSD) in centimeters and the average tree size in the area (in square centimeters), to produce the approximate number of trees and the vegetation percentage for a georeferenced image. Although the model cannot detect individual trees, it can detect tree patches and approximate the number of trees in the image. The count may carry a large error margin, but it still gives insight into how vegetation in the area has changed over time. The vegetation percentage and the tree count approximation come from basic mathematical operations:

Vegetation Percent

Number of trees
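The two post-processing operations described above can be sketched directly from a binary mask, the GSD, and an average tree footprint (the parameter values in the usage example are illustrative, not Bôndy's):

```python
import numpy as np

def vegetation_percent(mask):
    """Share of pixels the model flagged as vegetation, in percent."""
    return 100.0 * mask.sum() / mask.size

def approx_tree_count(mask, gsd_cm, avg_tree_cm2):
    """Approximate tree count: vegetated ground area (tree pixels times
    GSD squared, in cm^2) divided by the average canopy footprint of
    one tree in the area (cm^2)."""
    vegetated_area_cm2 = mask.sum() * gsd_cm ** 2
    return int(vegetated_area_cm2 / avg_tree_cm2)

# Illustrative: a 10x10 mask with half its pixels vegetated,
# 10 cm/pixel GSD, 2500 cm^2 (0.25 m^2) average canopy per tree.
mask = np.zeros((10, 10)); mask[:5, :] = 1
pct = vegetation_percent(mask)            # 50.0
trees = approx_tree_count(mask, 10.0, 2500.0)  # 2
```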

Dashboard deployment

The main objective of the dashboard is to identify the number of trees in a given region from a given drone image. The UNet++ model is integrated into the dashboard to detect trees and estimate their number. The vegetation index time series and the meteorological forecasting time series are also integrated into the dashboard, and the whole application is packaged as a Docker container.

Deployment with docker container
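Dockerizing a Python dashboard of this kind typically needs only a short Dockerfile. The sketch below is a generic template, not the project's actual configuration: the entry point `app.py`, the `requirements.txt` file, and port 8501 are all assumptions.

```dockerfile
# Generic sketch of a dashboard container (file names and port are assumptions)
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["python", "app.py"]
```

Building with `docker build -t dashboard .` and running with `docker run -p 8501:8501 dashboard` would then serve the dashboard from the container.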

Results and Insights 

After trying a couple of deep learning architectures, the team chose the UNet++ architecture, a semantic segmentation model, for training. It is a very robust classification model and could identify the areas where trees are planted. Although it cannot detect every small individual sapling, it will certainly be useful once the saplings grow into more mature trees, helping to detect and estimate the number of trees in a particular parcel.

Four metrics were used to evaluate the model: IoU (Intersection over Union), precision, recall, and loss.
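For binary segmentation masks, the first three metrics reduce to counting true positives, false positives, and false negatives per pixel. A minimal NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """IoU, precision, and recall for binary masks (1 = tree pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # predicted tree, really tree
    fp = np.logical_and(pred, ~truth).sum()  # predicted tree, not tree
    fn = np.logical_and(~pred, truth).sum()  # missed tree pixel
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, precision, recall
```

IoU penalizes both kinds of error at once, which is why it is the headline metric for segmentation.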

As training progresses, IoU increases over the number of batches/steps on the training data; a similar trend is seen on the test data.

Intersection over Union (IoU) on test and training data

The loss function tells you how good the model's predictions are: the closer the predictions are to the actual values, the lower the loss; the further away they are, the higher the loss. Our model shows a decreasing loss as training proceeds.

Loss function on test and training data

As training continues, precision increases over the batches, though on the test set it is a bit bumpy.

Precision on test and training data

Usually, recall should increase during training, but from batches 10 to 25 our model showed an unusual decrease. We kept training, and from batch 25 onward the expected trend reappeared. Although the test recall is a bit bumpy, it shows an increasing trend and has improved over the test set.

Recall on test and training data

Dashboard: From a drone image, the model predicts tree patches, the approximate number of trees, and vegetation health (as a percentage). The user can choose the model checkpoint with the best precision, best loss, or best recall, and can adjust the ground sampling distance, tree size, and confidence threshold.

Model Prediction Using Dashboard

The vegetation indices page lets users choose from NDVI, NDWI, and MSAVI2 for a particular date and particular region. It also shows the time series data for that index.

Vegetation Indices Dashboard

Dashboard with Vegetation Indices and Time Series

The meteorological data page shows the time series forecasts for temperature, precipitation, and evaporation for 3 regions of Madagascar.

Time series for meteorological data

Future scope

  • Launch a survey for all sites to collect valid geometry parcels per site.
  • For drone imagery, Bôndy could automate flight paths in advance. A constant flight height is recommended so that a single Ground Sampling Distance (GSD) applies, which would help model prediction. At least 5 images per plot are required (maximum 65, recommended 32) to strike a good balance between processing time and accuracy of the result. An image overlap of 65% or more is required (72% recommended); for 3D reconstruction, images must overlap by 83% or more.
  • Ground Control Points (GCPs) can be established at one site (only) to check the processing results for different automatic flight-path settings until the best one is found.
  • In the future, Bôndy could try a multi-spectral camera with NIR band, point cloud (LiDAR) data, or commercial satellite data.
  • LiDAR data may be too precise for this purpose: it may not detect the very small saplings in the first year or two of growth, and may be better suited to trees 4-5+ years post-planting.
  • An alternative to LiDAR would be to calculate tree height and topography from photogrammetry principles using a Phantom 4.
  • Another alternative solution would be to rent a drone to fly LiDAR once per field site, for example, GLOBHE (https://globhe.com/) offers crowdsourced drone data collection for a low cost.

Collaborators

Sanjiv Chemudupati, Alhamdou Jallow, Mehul Sethi, YOUNKAP NINA Duplex, Fred Mensah, Jeremy, Md. Safirur Rashid, Malitha Gunawardhana, Lu Htoo Kyaw, Deepali Bidwai, Joan Vlasschaert, Aldrin Lambon, Ashwath Salimath, Maha Haj Meftah, Paolo Thomas Peralta, Gijs van den Dool

References

  • Galar, M., Sesma, R., Ayala, C., Albizua, L., & Aranda, C. (2020). Super-resolution of Sentinel-2 images using convolutional neural networks and real ground truth data. Remote Sensing, 12(18), 2941.
  • Lanaras, C., Bioucas-Dias, J., Galliani, S., Baltsavias, E., & Schindler, K. (2018). Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS Journal of Photogrammetry and Remote Sensing, 146, 305-319.
  • Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., & Liang, J. (2018). UNet++: A Nested U-Net Architecture for Medical Image Segmentation. arXiv:1807.10165.
  • Peng, D., Zhang, Y., & Guan, H. (2019). End-to-End Change Detection for High-Resolution Satellite Images Using Improved UNet++. Remote Sensing, 11(11), 1382.
  • Dong, C., Loy, C. C., He, K., & Tang, X. (2015). Image Super-Resolution Using Deep Convolutional Networks. https://arxiv.org/abs/1501.00092

Deepali Bidwai