Crop Disease Prediction for Improving Food Security
November 3, 2021
In this article, I explore two ways to approach crop disease prediction using Convolutional Neural Networks (CNNs) for image classification. I joined one of the Omdena Local Chapters in Malaysia, where the challenge goal was to Improve Food Security and Crop Yield in Malaysia Through Machine Learning. As you might expect, images of a certain quality are required for this to work.
The Problem
Crop diseases cannot be accurately predicted by merely analyzing individual disease causes; only by building a comprehensive analysis system can users be given predictions of the most probable diseases. The first and foremost limitation in image classification is gathering data of proper quality, as the image background may contain elements that disturb the training process, especially if those elements appear in multiple samples. The other problem is the availability of data.
The following is a pipeline that helps us better visualize our approach.
If we are able to build a proper AI model for crop disease prediction, it will help us curb the severity of the diseases that cause heavy food and monetary losses and threaten food security.
Let’s get started
The data used in this work originated from several heterogeneous sources: the Plant Village dataset by SPMohanty (Github_link), a banana dataset showing Bacterial Wilt (Banana_dataset), and a paddy dataset from Kaggle. To gather this data, thorough research was conducted on sites like Kaggle and GitHub, as well as in research papers related to crop diseases. The datasets differed in volume and were represented in different types and formats.
Is the best good enough?
Although the data was hard to find, we were lucky that what we found was well structured and annotated. The quantity of images, however, was a problem: in image classification, not only the quality but also the quantity of images matters.
Our Methodology
The approach here was:
- Find the best research papers and see what worked for their authors, which gives us a general sense of which direction to go in.
- Decide where to gather data from, and survey the most commonly used approaches along with their pros and cons.
A Quick Look at the Models
For those of you that have made it this far and are curious about what the models actually are, here’s a brief overview of them.
Forecast model based on feature extraction using Transfer learning and Logistic Regression with Incremental Learning:
When performing feature extraction, we treat the pre-trained network as an arbitrary feature extractor: we let the input image propagate forward and stop it at a pre-specified layer. We then take the output of that layer as our features.
The basic steps that we follow are:
- Arrange our dataset, i.e., put it into a proper structure for pre-processing.
- Extract features using our pre-trained CNN.
- Train a Logistic Regression model (a linear model) on the extracted features.
When treating a network as a feature extractor, we essentially stop the network at our pre-specified layer (typically just before the fully connected layers, although it really depends on your specific dataset).
If we stop propagation before the fully connected layers in VGG16, the last layer in the network becomes the max-pooling layer shown in the figure below, which has an output shape of 7 x 7 x 512. Flattening this volume into a feature vector, we obtain a list of 7 x 7 x 512 = 25,088 values; this list of numbers is the feature vector used to quantify the input image. We then repeat the process for our entire dataset of images.
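To make this concrete, here is a minimal sketch of this kind of feature extraction using Keras' pre-trained VGG16. It illustrates the general idea rather than our exact project code; the image path in the usage comment is hypothetical.

```python
# Minimal feature-extraction sketch with a pre-trained VGG16 (illustrative only).
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image

# Load VGG16 without its fully connected head; the last layer is now
# the final max-pooling layer with output shape (7, 7, 512).
model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def extract_features(img_path):
    """Forward-propagate one image and return its flattened 25,088-dim feature vector."""
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))
    features = model.predict(x)   # shape: (1, 7, 7, 512)
    return features.reshape(-1)   # shape: (25088,)

# Example usage (hypothetical path):
# vec = extract_features("data/train/banana_wilt/img_001.jpg")
```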
Given a total of 160K images, our dataset would then be represented as a list of 160K vectors, each 25,088-dimensional (around 18 GB of CSV files).
Are we stuck now?
Nope. Incremental Learning to the rescue!
This is where incremental learning comes into the picture, as the extracted features are too large to fit in memory. Using incremental/online machine learning, we can train our model on small subsets of the training data called batches. In fact, neural networks themselves are good examples of incremental learners.
Why does using a linear model make sense?
The reason we use a linear model is that our CNN has already learned non-linear features, and with feature vectors of such high dimensionality, linear models are very fast to train.
In our case we have used Logistic Regression.
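As a rough illustration of what this looks like in practice, the sketch below trains an online logistic regression with scikit-learn's SGDClassifier, feeding the extracted feature vectors in chunks so the full 18 GB never has to sit in memory. The file name, column layout, and number of classes are assumptions for the sake of the example, not our actual pipeline.

```python
# Incremental (online) logistic regression on chunks of extracted features.
# Assumptions: "train_features.csv" (hypothetical) holds one flattened
# 25,088-dim feature vector per row plus an integer-encoded "label" column.
import numpy as np
import pandas as pd
from sklearn.linear_model import SGDClassifier

CLASSES = np.arange(38)  # hypothetical number of disease classes

# SGDClassifier with log loss is logistic regression trained by SGD;
# partial_fit lets us learn from one chunk at a time (out-of-core learning).
# Note: scikit-learn versions before 1.1 use loss="log" instead of "log_loss".
clf = SGDClassifier(loss="log_loss")

for chunk in pd.read_csv("train_features.csv", chunksize=512):
    y_batch = chunk["label"].to_numpy()
    X_batch = chunk.drop(columns=["label"]).to_numpy()
    clf.partial_fit(X_batch, y_batch, classes=CLASSES)
```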
Forecast model based on Transfer Learning:
While in the previous case we did not re-train our CNN and used it as a feature extractor, fine-tuning requires that we not only update our architecture but also re-train it to identify new image classes.
In this case we are using AlexNet (a CNN architecture that competed in the ImageNet challenge in 2012 and whose paper is considered one of the most influential in computer vision).
Fine Tuning steps are:
- Remove the fully connected nodes at the end of the network (i.e., where the actual class label predictions are made).
- Replace the fully connected nodes with newly initialized ones.
- Freeze the earlier CONV layers in the network (ensuring that any robust features the CNN has already learned are not destroyed).
- Start training, but only train the FC layer heads.
- Optionally, unfreeze some or all of the CONV layers in the network and perform a second pass of training.
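Below is a minimal sketch of these steps for AlexNet using PyTorch/torchvision (a recent torchvision is assumed). The number of output classes and the optimizer settings are assumptions for illustration, not values taken from our actual notebook.

```python
# Fine-tuning sketch for AlexNet with torchvision (illustrative assumptions:
# 38 output classes, plain SGD, and only the first training pass shown).
import torch.nn as nn
import torch.optim as optim
from torchvision import models

NUM_CLASSES = 38  # hypothetical: depends on the dataset's disease classes

# 1) Load AlexNet pre-trained on ImageNet.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# 2) Freeze the earlier CONV layers so their learned features are preserved.
for param in model.features.parameters():
    param.requires_grad = False

# 3) Replace the final fully connected layer with a newly initialized head
#    sized for our classes (the original head predicts 1000 ImageNet classes).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# 4) Train only the (unfrozen) classifier head first; optionally unfreeze
#    some CONV layers later for a second, lower-learning-rate pass.
optimizer = optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
```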
The dataset we used here was collected from Kaggle and is an augmented version of the Plant Village dataset mentioned earlier.
Results of Our Model
This is one of the first things that we tried, and here are the results.
Challenges
- After data gathering, the main problem is training the model, since training on a CPU takes a lot of time. In this case we can use Google Colab to get a GPU.
- When doing feature extraction for transfer learning, the extracted feature files are quite large. In my case they were around 18 GB (training, test, and validation).
Closing Words
Recently I had been thinking about improving my skills on real-world data through impactful projects, and Omdena provided me with this opportunity. I applied for mentorship at Omdena and was contacted by Lucie Schnitzer (Head of Community) for a talk. She guided me towards the local chapters to build my confidence before tackling problems on a global scale. This is how I joined the Malaysia Local Chapter – Improving Food Security and Crop Yield in Malaysia Through Machine Learning. Having previously done a research internship in crop yield prediction, I must say this chapter held a certain charm for me.
Being a task leader in my very first Omdena project was a unique experience, and I am really glad that I managed to get reasonably good results! Aside from that, I'm elated to have learned about the applications of machine learning to real-world problems, and about how to work in a collaborative environment.
One thing that comes to mind at the end is that no matter the problem, there is always a solution; we just need a positive mindset to find it.
This article is written by Apoorv Masta.