Identifying Optimal Locations for Floating Solar Installations Using Satellite Data

A team of more than 30 Omdena AI engineers collaborated with Norwegian cleantech startup Glint Solar to use AI to augment its solar site assessment tool for floating solar panels.

The project goal was to apply remote sensing techniques to satellite imagery to infer the depth of inland water bodies. This information can be added to Glint Solar's solar site assessment tool to identify suitable sites and accelerate the green energy revolution.

 

The problem

As global warming continues, ways to tackle the problem are needed. One such method is to install solar panels and harness energy from the sun.

But because land on which to install solar panels is scarce, an increasing number of installers turn to inland water bodies and install floating panels on the water surface. This is especially attractive in places where land availability is low. Interesting locations include drinking water reservoirs, water treatment facilities, and hydropower reservoirs.


Source: Glint Solar

 

Bathymetry (the depth map) and vertical water level variation over time are essential to evaluate when choosing the best locations for solar installations, as both have a significant impact on the number of panels that can be installed on a given area as well as on the overall cost. Bathymetry can be derived from multispectral satellite images, but all commonly used techniques require calibration data.

Furthermore, today’s techniques are susceptible to noise from varying bottom conditions and particles in the water (such as silt and algae). In inland water bodies, calibration data is seldom available, and the water often carries a high particle load.

 

The project outcomes

The three main objectives of the project were as follows:

  • Identifying the preprocessing steps to denoise the data for better model performance,
  • Building AI models to infer the depth of inland water bodies, and
  • Integrating the most suitable model into Glint Solar's solar site assessment tool.

The data

The datasets collected in this challenge include the Water Bodies Dataset (Sample_1), the Bathybase Dataset, and the Global Reservoir Dataset. Several other available datasets were also identified for Glint Solar to consider further.

Three preprocessing steps were built to denoise the satellite data: a general pipeline that converts raster data from the Bathybase dataset into a modeling-ready format, a cloud cover removal process using Sentinel-2 Level-2 images, and an algal bloom detection process using MODIS data. While these processes showed excellent results in removing noise from the satellite image data, the algal bloom detection process was not integrated into the pipeline because it relies on a different image source; a proof of concept was delivered for further development.
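
As an illustration of the cloud cover removal step, the sketch below masks cloudy pixels in a Sentinel-2 Level-2A tile using its Scene Classification Layer (SCL). This is one plausible implementation, not the project's exact pipeline; the file names are placeholders and the rasterio library is assumed to be available.

```python
# Sketch only: mask clouds in a Sentinel-2 L2A tile via the Scene
# Classification Layer (SCL). File names below are hypothetical.
import numpy as np
import rasterio

# SCL classes treated as cloud: 3 = cloud shadow, 8/9 = cloud medium/high
# probability, 10 = thin cirrus.
CLOUD_CLASSES = [3, 8, 9, 10]

with rasterio.open("S2_L2A_B02_10m.jp2") as band_src:
    blue = band_src.read(1).astype("float32")
with rasterio.open("S2_L2A_SCL_20m.jp2") as scl_src:
    scl = scl_src.read(1)

# The SCL ships at 20 m resolution; repeat pixels to match the 10 m band grid.
scl_10m = np.repeat(np.repeat(scl, 2, axis=0), 2, axis=1)

# Set cloudy pixels to NaN so downstream depth models can ignore them.
blue[np.isin(scl_10m, CLOUD_CLASSES)] = np.nan
```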

The models and deployment phase

Several models were tested, and the best-performing model was identified.
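
The write-up does not name the tested models. As one illustrative baseline for satellite-derived bathymetry, the sketch below fits a band-ratio regression in the spirit of Stumpf et al. (2003), regressing depth on the log-ratio of blue and green reflectance; the arrays are synthetic placeholders standing in for real calibration data.

```python
# Illustrative baseline only, not the project's final model: a band-ratio
# (Stumpf-style) regression of depth on ln(blue)/ln(green) reflectance.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
blue = rng.uniform(0.01, 0.20, 1000)   # blue-band water reflectance (placeholder)
green = rng.uniform(0.01, 0.20, 1000)  # green-band water reflectance (placeholder)
depth = rng.uniform(0.5, 20.0, 1000)   # calibration depths in metres (placeholder)

# The constant 1000 keeps both logarithms positive for typical reflectances.
ratio = np.log(1000 * blue) / np.log(1000 * green)

model = LinearRegression().fit(ratio.reshape(-1, 1), depth)
predicted_depth = model.predict(ratio.reshape(-1, 1))
```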

For deployment, the code was organized into modularized Python scripts for production purposes, all required dependencies were included, and all files were stored in a DAGsHub repository. The challenge successfully identified preprocessing steps to denoise the satellite data and a well-performing model to predict the depth of water bodies from satellite images. The current modeling process is based on a single lake or water body but can be extended in the future to accept data from multiple water bodies. The result of this challenge is a preliminary preprocessing and modeling pipeline, a minimum viable product that will be further developed and scaled up for integration into Glint Solar's solar site assessment tool.

 

Smart Electric Vehicle (EV) Charging Network Management Using Machine Learning


The Energy Tech Hub and the Centre for New Energy Technologies collaborated with a team of Omdena AI engineers to analyze EV charging data and apply machine learning to optimize Electric Vehicle (EV) distribution network service planning.

 

The outputs from this project can help derive insights into when and how much EVs are recharged, as well as where the consumption peaks occur. Knowing these charging patterns can help system operators manage their networks and optimize the efficiency of EV charging operations.

 

The problem

The increasing uptake of electric vehicles and their charging poses challenges to today’s energy networks, e.g. unexpected peak load and voltage problems in the distribution network. There is growing interest in understanding how and when EVs are charged, to inform the design of charging incentives and energy management schemes. However, it is challenging to keep track of EV charging at a large scale in a cost-effective way. Smart meter data offers a way in: AI and machine learning techniques can detect EV charging from the meter data, providing an effective yet non-intrusive solution for Distribution Network Service Providers (DNSPs) to know how EVs are charged on their networks, which in turn informs their network planning, upgrades, and operations.

 

The project outcomes

 

Exploratory Data Analysis to understand consumption patterns

 

The overall approach taken in this project was as follows:

1. Data Preparation: A good portion of the time was spent on data preparation to make a usable dataset for modeling. 

2. Exploratory Data Analysis (EDA): This step was focused on understanding consumption patterns for different types of users (EV users, producers, peak consumers, etc.).

 

The EDA process helped identify which information is most relevant for the modeling part.
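
As a minimal sketch of how the average hourly consumption in Figure 1 below can be computed, assuming half-hourly smart meter readings and a flag marking EV households (column names and values are placeholders):

```python
# Sketch: average consumption per hour of day, split by EV ownership.
# The DataFrame below is a tiny placeholder for the real meter data.
import pandas as pd

readings = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=96, freq="h"),
    "kwh": [0.4, 0.6] * 48,
    "has_ev": [True] * 48 + [False] * 48,
})

hourly_profile = (
    readings
    .assign(hour=readings["timestamp"].dt.hour)
    .groupby(["has_ev", "hour"])["kwh"]
    .mean()
)
print(hourly_profile)
```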

 


Figure 1: Average hourly consumption for one day for EV and non-EV users

 

AI modeling to detect EV charging patterns

The team implemented a machine learning clustering algorithm that groups similar timestamps of consumption in smart meters. The team initially chose three clusters, meaning that each time slot (for every smart meter) is grouped into one of three buckets: low, medium, or high. Low indicates low consumption, medium indicates moderate consumption and potentially non-EV appliances, and high indicates potential EV charging.
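
The exact algorithm and features are not specified in the write-up; k-means with three clusters, as sketched below, is one plausible reading of the approach (the consumption values are placeholders).

```python
# Sketch: cluster per-timestamp consumption into low/medium/high buckets.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder: one month of half-hourly readings for a single meter (kWh).
consumption = rng.gamma(shape=2.0, scale=0.3, size=48 * 30)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(consumption.reshape(-1, 1))

# Relabel clusters by centroid so 0 = low, 1 = medium, 2 = high consumption.
order = np.argsort(kmeans.cluster_centers_.ravel())
rank = np.empty(3, dtype=int)
rank[order] = np.arange(3)
buckets = rank[labels]  # bucket 2 flags time slots with potential EV charging
```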


The model and insights were built into an Excel tool and dashboard.

 

Detecting Harmful Video Content and Children's Behavior through Computer Vision


US-based startup Preamble collaborated with 50 Omdena AI engineers and data scientists to develop a cost-effective solution for detecting harmful situations in online video challenges. Using computer vision, the team was able to detect whether a video is harmful.

 

The results from this project are intended as a baseline to help Preamble build solutions for safer online platforms. 

 

The problem

Children are more susceptible to acting impulsively and participating in online internet challenges. Internet challenges can encourage kids to replicate unsafe behaviors to increase engagement and visibility on social media. Some of these outrageous challenges have led to severe bodily harm and even death. To protect children from these types of dangerous ideas and peer pressure, the team built a model to filter out this content.


Source: AsiaOne

 

Some prior internet challenges that are dangerous to participants and especially children:

  • Blackout challenge 
  • Eating Tide detergent pods
  • Cinnamon challenge (can cause scarring and inflammation)
  • Super gluing their lips together 
  • Power outlet challenge

 

The project outcomes

 

The process

The team divided the tasks across contributors according to their expertise, following this process:

  • Select and download videos with harmful content from social media platforms.
  • Extract frames (images) from the videos at regular intervals (see the sketch after this list).
  • Label each image as harmful, ambiguous, or not harmful.
  • Train an image classification model supervised by the labeled frames.
  • Evaluate the image classification model.
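
The sketch below illustrates the frame extraction step with OpenCV, saving roughly one frame per second; the input path is hypothetical and the team's actual scripts may differ.

```python
# Sketch: extract about one frame per second from a video with OpenCV.
import os
import cv2

os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("challenge_video.mp4")  # hypothetical input path
fps = video.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing

frame_index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % int(fps) == 0:  # keep roughly one frame per second
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
```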

 

The data

The bulk of the teamwork was concentrated on data collection and labeling. The team developed scripts to facilitate video and metadata retrieval from social media platforms. Specifically, existing Python libraries were used to download videos from YouTube, VK, and TikTok. Through this process, the team manually collected more than 240 challenge videos.
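
The write-up does not name the specific libraries. As one example of this kind of retrieval, yt-dlp is a widely used Python library for downloading videos and their metadata; the URL below is a placeholder.

```python
# Sketch: download a video plus its metadata with yt-dlp (one possible library).
from yt_dlp import YoutubeDL

options = {
    "outtmpl": "videos/%(id)s.%(ext)s",  # save files under videos/
    "writeinfojson": True,               # store metadata next to each video
}
with YoutubeDL(options) as downloader:
    # Placeholder URL; the real lists of challenge videos were curated manually.
    downloader.download(["https://www.youtube.com/watch?v=EXAMPLE_ID"])
```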


Figure 1. Challenges distribution

 

The model

After a manual and partly automated process of labeling images as harmful, ambiguous, or not harmful, the team tested several computer vision models. As an outcome of this eight-week project, the best-fit model was able to detect whether a video is harmful using the labeled dataset. The next steps will be to improve the model's performance and extend its applicability to a broader set of conditions.
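
The final model is not specified in the write-up. A common baseline for this kind of frame classification is transfer learning from a pretrained CNN; the sketch below fine-tunes torchvision's ResNet-18 for the three label classes on a placeholder batch.

```python
# Illustrative only: transfer learning for harmful/ambiguous/not-harmful frames.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)  # 3 label classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a placeholder batch of 8 RGB frames (224x224).
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```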

 

Modeling Triggers and Symptoms of Hashimoto’s Disease


An Omdena team of 30 AI changemakers collaborated with the rising impact startup Hashiona to better understand the relationship between triggers and symptoms of Hashimoto’s disease. The team extracted user data from Hashiona's app, consolidated the data, did extensive data exploration, and applied statistical analysis and clustering models to triggers and symptoms of Hashimoto’s disease. The models were deployed and visualized in a Streamlit app.

 

Results from this project are intended to be used in Hashiona's app to help users cope better with Hashimoto’s disease.

 

The problem

Hashimoto’s disease is an autoimmune disorder that causes hypothyroidism. Hashimoto’s and other common autoimmune diseases are incurable and hard to treat. Hashimoto’s is described by at least 45 medical symptoms, which are hard to detect. As in the treatment of diabetes, patients have to implement comprehensive lifestyle changes to mitigate the autoimmune response in their body and feel better (i.e. go into remission). Patients with Hashimoto’s disease may experience persisting symptoms despite normal hormone levels.

Hashiona is the first mobile application dedicated to people with Hashimoto’s disease and/or hypothyroidism. Hashiona's user base exceeds 10,000, and its approach has been validated by Stanford, Draper University, MIT, and more.

 

Source: Hashiona

 

 

The project outcomes

 

The data

The first step was to apply Exploratory Data Analysis (EDA) to discover the hidden patterns and anomalies in the entire dataset. Next, the team divided the data into subsets for analysis:

  • Profiles who suffer from Hashimoto’s
  • Profiles who suffer from thyroidism
  • Profiles with thyroidism symptoms
  • Profiles with any triggers, or mind or body symptoms

 

Next, top triggers and symptoms were identified. Examples include:

  • Overtraining
  • Cold
  • Caffeine consumption
  • Relationship with family
  • Heat

 

The team identified user profiles who either suffer from Hashimoto’s or thyroidism and have recorded their hypothyroidism symptoms. Among these profiles, correlation analysis was performed between triggers and body, mind, and thyroid symptoms.

Next, clustering analysis and modeling were applied to surface the most frequently occurring triggers in each cluster. After grouping the data by cluster, the average share of users in each cluster affected by a given trigger or symptom was calculated.
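
A minimal sketch of that per-cluster aggregation, assuming one row per user with a cluster label and binary trigger/symptom columns (column names and values are hypothetical):

```python
# Sketch: share of each cluster affected by a given trigger or symptom.
import pandas as pd

# Placeholder: one row per user, a cluster label, binary trigger columns.
records = pd.DataFrame({
    "cluster":      [0, 0, 1, 1, 2, 2],
    "overtraining": [1, 0, 1, 1, 0, 0],
    "caffeine":     [0, 0, 1, 0, 1, 1],
})

# The mean of a binary column is the share of the cluster it affects.
shares = records.groupby("cluster")[["overtraining", "caffeine"]].mean()
print(shares)
```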

 

The models and dashboard

To visualize the models and outcomes, the team chose Streamlit. Streamlit allows users to quickly build highly interactive web applications and is widely used for deploying machine learning models.

For the demo, static charts have been implemented.
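
A minimal sketch of such a Streamlit page with a static chart (the trigger names and values shown are placeholders):

```python
# Sketch: a static Streamlit chart in the spirit of the demo dashboard.
import pandas as pd
import streamlit as st

st.title("Hashimoto's Triggers and Symptoms")

# Placeholder values; the demo used static charts like this one.
top_triggers = pd.DataFrame(
    {"share_affected": [0.42, 0.35, 0.28]},
    index=["Overtraining", "Caffeine", "Cold"],
)
st.bar_chart(top_triggers)
```

Saving this as app.py and running `streamlit run app.py` serves the page locally.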

 

Screenshot: Streamlit Dashboard

 

 

Optimizing Delivery Routes in LATAM using AI Planning


In this high-impact eight-week challenge, 50 AI engineers collaborated to build a route optimization tool using Google's OR-Tools and OpenStreetMap, using AI to optimize the division of deliveries between delivery vehicles.

The results produced optimal routes for the city of Bogota, and the techniques are reproducible in other locations and cities as well.

 

The problem

Bogota, Lima, Mexico City, and Rio de Janeiro are among the most congested cities in the world. General mobility trends, together with population growth in urban areas, suggest that the levels of congestion currently seen in the region's cities will not be reversed soon. The logistics industry, particularly last-mile logistics, bears a disproportionate amount of this burden, especially with the growth of e-commerce in the wake of the COVID-19 pandemic.

Optimized deployment of existing resources (vehicles and drivers) can improve the level of service for customers, reduce carbon footprint, enrich the well-being and livelihood of drivers, and lessen the industry’s overall impact on urban congestion. With the opportunity to create such an impact, the Omdena collaborators and partner Carryt came together to address the issue at hand.

Carryt, the Colombian technology company and Omdena partner, wants to optimize routes to improve logistics using artificial intelligence and route planning. Originally a field-services solution provider, Carryt has recently become a last-mile logistics provider, with a technology product that empowers the gig economy, provides drivers with a livable wage, and offers delivery services to customers. Carryt conducts operations in Mexico, and soon in Brazil, with more than 200K deliveries per month in 2021 and the aim of scaling up to 1 million deliveries per month in 2022.

The project outcomes

 

The data

The Omdena collaborators thoroughly studied the dataset provided by Carryt (in Shapefile and OSM file formats). They researched relevant knowledge resources; conducted data preparation, including wrangling, preprocessing, and exploratory data analysis; worked on modeling and algorithms; and explored deployment options.

The team started by exploring multiple alternative modeling approaches and later narrowed them down to an open-source software suite for route optimization, OR-Tools from Google AI, which offers the flexibility to account for delivery restrictions and transportation regulations while aiming for reduced time and shorter routes. The challenge was addressed by individual task teams that also cooperated on dependent tasks. The structure of Omdena collaborator teams allowed the team to explore multiple alternative routes and solutions before converging on the final one.
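
The sketch below shows the core of an OR-Tools vehicle routing setup in the spirit of this approach: a distance matrix, a two-vehicle fleet, and a cheapest-arc first solution. The matrix values are placeholders; the real model layers the dataset's restrictions on top as constraints.

```python
# Sketch: a minimal vehicle routing problem with Google OR-Tools.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Placeholder distance matrix in metres for 4 stops (stop 0 is the depot).
distances = [
    [0, 2500, 4000, 3000],
    [2500, 0, 1500, 2000],
    [4000, 1500, 0, 1000],
    [3000, 2000, 1000, 0],
]
num_vehicles, depot = 2, 0

manager = pywrapcp.RoutingIndexManager(len(distances), num_vehicles, depot)
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    # Convert solver indices back to node ids before the matrix lookup.
    return distances[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
solution = routing.SolveWithParameters(params)

# Walk vehicle 0's route in the solution.
index = routing.Start(0)
route = []
while not routing.IsEnd(index):
    route.append(manager.IndexToNode(index))
    index = solution.Value(routing.NextVar(index))
print(route)
```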

The data preparation team explored how to better understand, clean, and analyze the partner-provided datasets. They identified data quality issues and prepared a separate guide for special handling of harder-to-incorporate data. They also worked hand in hand with the modeling and deployment teams to help create a robust optimization solution.

Google Operations Research tool and deployment

The team customized the OR-Tools model to map exact routes while respecting the restrictions in the dataset. Finally, a high-performance deployment web service using the Flask web framework was implemented on AWS.
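
As a hedged sketch of what such a service can look like (the endpoint name and payload shape are hypothetical, not Carryt's actual API):

```python
# Sketch: a Flask endpoint wrapping the route optimizer.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/optimize", methods=["POST"])
def optimize():
    stops = request.get_json()["stops"]  # list of delivery stops
    # ... call the OR-Tools solver here and collect per-vehicle routes ...
    routes = [{"vehicle": 0, "stops": stops}]  # placeholder response
    return jsonify({"routes": routes})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```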

Despite the complexity of the challenge, the contributors showed strong teamwork and remote collaboration of global talent in achieving the project goals. The quality of the available data and the limited domain expertise in the field challenged the team along the way. Given the limited time and resources, the results produced optimal routes for the city of Bogota, and the techniques are reproducible in other locations and cities. Future work should aim to prepare higher-quality datasets and refine the restrictions so that routes for multiple vehicles can be optimized for shorter times and distances.