Vehicle Image Analysis and Insurance Fraud Prevention through EDA Techniques

April 21, 2023


In this article, we dive into EDA techniques for processing image data and show how they were applied during the AI Innovation Challenge, in which we built an Artificial Intelligence (AI) solution for validating vehicle images, identifying damage, and classifying its severity to determine a restoration price.

Introduction

Exploratory Data Analysis (EDA) is a methodology (e.g., cluster analysis, box plots, etc.) for analyzing datasets to summarize their main characteristics [1]. It provides a foundation for further pertinent data collection, and for suggesting hypotheses to test rather than to confirm. It is an important step when applying a data-centric AI approach in Machine Learning (ML) projects, as improving data quality can be as effective as doubling the training set with new samples [2].

The advancement of Deep Learning (DL) techniques has brought further life to the field of computer vision by revolutionizing domains such as medicine, security, and remote sensing. Most state-of-the-art models use DL as their backbone to extract features from input images (or videos) [3]. However, although Convolutional Neural Networks (CNNs) are inspired by the human visual cortex, EDA on images is not as intuitive as on tabular data [4].

This article shares recommendations for extracting as much information as possible when analyzing images, and discusses how some of them were applied during the AI Innovation Challenge [5].

Problem statement

Insurance can be defined as a policy in which a person or entity (the insurer) protects another person or entity against losses from a specific occurrence or eventuality, for example, insuring a car in the event of an accident. The insurance industry comprises the people and organizations that develop, sell, administer, and regulate insurance policies.

After an insured property (e.g., a car) has been damaged in an accident, an insurance claim can be made. The insurer then takes several important steps, including assessing the damage to determine whether repair or replacement is necessary, the goal being to restore the property to its state at the time it was insured. Furthermore, insurers take several measures to mitigate fraud, which has been a persistent problem over time.

The project focused on building an Artificial Intelligence (AI) solution for validating vehicle images, identifying damage, and classifying its severity to determine a restoration price.


Figure 1. Car insurance policies [6].

I. Project Pipeline

The pipeline has 2 stages (Figure 2):

  • Stage 1: Fraud model

Consists of 3 models: the first model (YOLOv8) detects license plates. That detection is then cropped and passed to a second model (OCR), which recognizes and reads the text on the license plates.

A third model (Vision Transformer + cosine similarity) is used to compare how similar two car images are (a minimal sketch of the comparison follows below).

  • Stage 2: Car damage

Consists of 2 models: the first detects which of the car's exterior parts show defects, and the second evaluates the severity of those damages. Based on that, a total price estimate for the insurance claim is calculated.
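
To illustrate the similarity check in Stage 1, here is a minimal sketch of cosine similarity over two embedding vectors. It assumes a feature extractor (e.g., a ViT backbone) has already produced the embeddings; the vectors below are random placeholders:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In the pipeline, emb_a and emb_b would come from the same feature extractor
# (e.g., a Vision Transformer backbone). Random placeholders for illustration:
emb_a = np.random.rand(768)
emb_b = np.random.rand(768)
print(f"similarity: {cosine_similarity(emb_a, emb_b):.3f}")
```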


Figure 2. Project pipeline. (Source: Omdena)

 

II. Steps to do EDA on image data 

EDA is a mandatory part of the development process in AI to achieve optimal inference results. It is part of a continuous process: analyze the data, formulate hypotheses, build models, make inferences, validate, compare, and return to data analysis until the results are satisfactory [7].

The strategy consists of the following steps:

  • Data analysis: studying the characteristics of the datasets (e.g., image acquisition process, labeling quality, size and area of bounding boxes, image quality, number of samples, etc.).
  • Data cleaning: the process of identifying incomplete, incorrect, or inaccurate data, and then replacing, modifying, or deleting it.
  • Data splitting: when splitting the dataset (train, validation), it is important to take into account the equitable distribution of classes, e.g., stratified k-fold splitting (see the sketch after this list).
  • Data augmentation: most Deep Learning algorithms need huge quantities of data to achieve good results. Data augmentation can improve generalization and reduce the over-fitting of models by making different variations of the same image, e.g., flipping, rotation, padding, cropping, Gaussian noise injection, random erasing, etc.
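
As a sketch of stratified splitting with scikit-learn, preserving class proportions across folds (the file names and labels below are hypothetical):

```python
from sklearn.model_selection import StratifiedKFold

# Hypothetical example: one class label per image, with an imbalanced distribution
image_paths = [f"img_{i}.jpg" for i in range(10)]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(image_paths, labels)):
    # Each fold keeps roughly the same 60/40 class ratio as the full dataset
    print(f"fold {fold}: train={train_idx.tolist()}, val={val_idx.tolist()}")
```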

Some recommended EDA steps for image data are:

STEP 1: Assessing data quality

For image data, the simplest method of EDA is by visualizing a sample of images from each class (Figure 3). This is very useful to get familiar with the data and consequently adapt it to the algorithm to achieve better performance.

Recommendations:

  • Visualize multiple images at the same time. Focus on size, orientation, brightness, background variations, etc.
  • Make sure there are no corrupted files, e.g., images that cannot be opened (see the sketch after this list).
  • Take note of the different extensions: jpg, jpeg, jpe, png, tif, tiff, bmp, ppm, pbm, pgm, sr, ras, webp.
  • Verify that all images share the same color model: RGB, grayscale, etc.
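
A minimal sketch of these checks with Pillow and matplotlib, assuming the dataset sits in a hypothetical images/ folder:

```python
from pathlib import Path

import matplotlib.pyplot as plt
from PIL import Image

image_dir = Path("images")  # hypothetical location of the dataset
paths = sorted(p for p in image_dir.glob("*") if p.is_file())

# Flag files that cannot be opened
valid = []
for path in paths:
    try:
        with Image.open(path) as img:
            img.verify()  # raises on corrupted files
        valid.append(path)
    except Exception:
        print(f"corrupted or unreadable: {path}")

# Visualize a grid of samples to spot size, orientation, and brightness issues;
# each title also shows the file format, color model, and dimensions
fig, axes = plt.subplots(3, 3, figsize=(9, 9))
for ax, path in zip(axes.flat, valid):
    with Image.open(path) as img:
        ax.imshow(img)
        ax.set_title(f"{img.format}, {img.mode}, {img.size}", fontsize=8)
    ax.axis("off")
plt.tight_layout()
plt.show()
```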

Applied recommendations:

  • From the scraped images, only those with sufficiently high quality that also showed economically repairable damage were selected. Duplicates were removed, and all images were converted to the chosen format: JPG.

Figure 3. Visualization of multiple images. (Source: Omdena)

STEP 2: Visualize image size and aspect ratio

In the real world, datasets rarely contain images of the same size and shape. Furthermore, it is common to combine multiple datasets to acquire more samples for a given AI task.

Recommendations [8]:

  • Make a histogram to visualize the distribution of image size and aspect ratio (Figure 4); a sketch of how to prepare one follows below.
    • If most images share the same dimensions (a single narrow peak), it is up to you to decide how much to alter them. You can start from the average size and aspect ratio, or from the minimum image size accepted by your chosen algorithm.
    • If the distribution is bimodal (has two peaks), then you can alter the images by adding some padding. 
    • If the distribution is random (images very wide and very narrow), it is better to use advanced techniques to avoid altering the aspect ratio.
  • Pick a consistent image size: large enough to keep features distinguishable, but not too large to run out of memory.

Figure 4. Different types of distributions.
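
A sketch of how such histograms could be prepared with Pillow and matplotlib, again assuming a hypothetical images/ folder:

```python
from pathlib import Path

import matplotlib.pyplot as plt
from PIL import Image

# Collect the dimensions of every image in the (hypothetical) dataset folder
widths, heights = [], []
for path in Path("images").glob("*.jpg"):
    with Image.open(path) as img:
        w, h = img.size
    widths.append(w)
    heights.append(h)

aspect_ratios = [w / h for w, h in zip(widths, heights)]

# Side-by-side histograms of width, height, and aspect ratio
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4))
ax1.hist(widths, bins=30)
ax1.set(title="Width (px)")
ax2.hist(heights, bins=30)
ax2.set(title="Height (px)")
ax3.hist(aspect_ratios, bins=30)
ax3.set(title="Aspect ratio (w/h)")
plt.tight_layout()
plt.show()
```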

Applied recommendations:

  • A histogram of the size and aspect ratio of all images was prepared (Figure 5). Based on this visualization, it was decided to resize the images to 640×480. No further techniques were applied.

Figure 5. Resolution and aspect ratio histogram. (Source: Omdena)

STEP 3: Verify that all images have been annotated

Recommendations:

  • Everything that has not been annotated will be treated as background. Therefore, leaving images unannotated only sends conflicting signals to the model during training.

Applied recommendations:

  • A script was developed to validate the correspondence between the collected images and their respective annotation files. The results displayed the number of images in each group, as well as the files that had to be deleted or fixed (Figure 6).
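
A minimal sketch of such a compliance check, assuming YOLO-style annotations where each image has a same-named .txt file (the folder names are hypothetical):

```python
from pathlib import Path

# Match images to annotations by file stem (img_001.jpg <-> img_001.txt)
image_stems = {p.stem for p in Path("images").glob("*.jpg")}
label_stems = {p.stem for p in Path("labels").glob("*.txt")}

images_without_labels = image_stems - label_stems  # annotate or delete these
labels_without_images = label_stems - image_stems  # orphaned annotation files

print(f"annotated images: {len(image_stems & label_stems)}")
print(f"images missing annotations: {sorted(images_without_labels)}")
print(f"annotations missing images: {sorted(labels_without_images)}")
```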

Figure 6. Results after executing file compliance script. (Source: Omdena)

STEP 4: Class imbalance

This is a very common problem. When training a model, classes are expected to be uniformly distributed [9]; otherwise, the classes with more data points tend to bias the model.

Recommendations:

  • Performance metrics [10]

To avoid misinterpreting biased models as performing well, you can use metrics such as the F1 score (Dice coefficient) [11], the Jaccard index (Intersection over Union, IoU) [12], or mean Average Precision (mAP) [13].
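
For reference, a minimal sketch of the Intersection over Union computation behind the Jaccard index, for two axis-aligned boxes in (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Overlap of 25 px² over a union of 175 px² -> IoU ≈ 0.143
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```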

  • Data augmentation

This is the most widely used regularization technique. However, when applying certain transformations (e.g., cropping, rotation) to the images, there is a high probability of altering their annotated bounding boxes as well. That is why these transformations also have to be applied to the respective annotations (Figure 7).
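
One way to keep pixels and boxes in sync is to use a library that transforms both together. A minimal sketch with Albumentations, which is an assumption here (the project's actual augmentation tooling is not specified); the image and box below are placeholders:

```python
import albumentations as A
import numpy as np

# Transformations that move the pixels AND the annotated boxes together
transform = A.Compose(
    [A.HorizontalFlip(p=1.0), A.Rotate(limit=15, p=1.0)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder image
boxes = [[30, 40, 120, 160]]                     # (x_min, y_min, x_max, y_max)

out = transform(image=image, bboxes=boxes, class_labels=["damage"])
print(out["bboxes"])  # boxes updated to match the transformed image
```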


Figure 7. a) Original image, b) Rotated image, c) Rotated image and bounding boxes [14].

  • Merging classes [15]

Ideally, depending on the problem to solve, this technique should be applied by domain experts. The high-resolution image in Figure 8 clearly consists mostly of buildings. So if you want to detect buildings, trees, cars, buses, and trucks, you will have a huge class imbalance. To solve the issue, you can take zoomed-in tiles and merge similar classes (e.g., cars, trucks, buses) into one category, 'cars'. This reduces the number of classes (from five to three) and increases the number of 'cars' labels.
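
In practice, a merge can be as simple as remapping labels before training; a hypothetical sketch for the example above:

```python
# Hypothetical remapping: collapse vehicle-like classes into one category
merge_map = {"car": "cars", "truck": "cars", "bus": "cars"}

labels = ["building", "car", "bus", "tree", "truck"]
merged = [merge_map.get(label, label) for label in labels]
print(merged)  # ['building', 'cars', 'cars', 'tree', 'cars']
```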


Figure 8. a) Satellite image, b) Zoomed-in image [15].

Applied recommendations:

  • The agreed evaluation metric was mean Average Precision (mAP), in particular mAP50 [16], as it is a standard metric for object detection. The best performance was achieved by the YOLOv8 model (Figure 9).

Figure 9. Trained models. (Source: Omdena)

  • The distribution of classes was highly imbalanced (Figure 10). The overrepresented classes mostly corresponded to parts located at the front of the car. Therefore, an additional image search targeting the underrepresented classes was carried out, and the resulting images were augmented with horizontal and vertical flips.

Figure 10. Imbalance bar chart. (Source: Omdena)

STEP 5: Verify the size and shape of annotated bounding boxes

Most computer vision models are anchor-based (Figure 11) [17]. In other words, there is a stage in which these anchors have to match with the ground truth bounding boxes (the annotations). Consequently, if these anchor boxes have not been tuned, the neural network will not know that a certain object exists [18].

Also, complex problems often require executing multiple models in sequence: the output of one model A is cropped and used as the input of a model B. This again highlights the importance of knowing the distribution of bounding box sizes and shapes, as these are used to tune the anchors of the next algorithm [19].


Figure 11. a) RetinaNet's anchor boxes, b) Anchors vs. ground truth box vs. predicted box [13][18].

Recommendations:

  • Prepare a histogram to visualize the size, shape, and aspect ratio of the annotated bounding boxes [8]. This is useful to get a rough estimate of the smallest and biggest bounding boxes you want to detect (specify a threshold considering the range of expected objects of interest). Another option is to learn the anchor box configuration [20]. A sketch follows below.
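
A sketch of such a histogram for YOLO-format annotations (class x_center y_center width height, normalized to 0-1), assuming a hypothetical labels/ folder and a reference image size of 640×480:

```python
from pathlib import Path

import matplotlib.pyplot as plt

IMG_W, IMG_H = 640, 480  # image size the normalized YOLO coordinates refer to

widths, heights = [], []
for label_file in Path("labels").glob("*.txt"):  # hypothetical annotations folder
    for line in label_file.read_text().splitlines():
        if not line.strip():
            continue
        # YOLO format: class x_center y_center width height (all normalized)
        _, _, _, w, h = map(float, line.split())
        widths.append(w * IMG_W)
        heights.append(h * IMG_H)

ratios = [w / h for w, h in zip(widths, heights)]
plt.hist(ratios, bins=30)
plt.xlabel("Bounding box aspect ratio (w/h)")
plt.ylabel("Count")
plt.show()
```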

Applied recommendations:

  • The project pipeline consisted of 2 sequential models: a) the fraud model (YOLOv8 and OCR) and b) the car damage model (YOLOv8 and MobileNetV2) (Figure 2). For each one, it was essential to know the average size and aspect ratio of the bounding boxes in order to achieve good performance. Therefore, annotated images were visualized and a histogram was prepared (Figure 12).

Figure 12. Car damage model: a) Ground truth annotations and b) Bounding boxes aspect ratio. (Source: Omdena)

Conclusion

EDA is a crucial step in any ML project [1]. However, when dealing with images, the methodology differs somewhat from the one used for tabular data. In this project, the AI pipeline for preventing fraudulent insurance claims consists of 2 stages:

a) Fraud model.

b) Car damage model.  

In this article, some EDA techniques for processing image data have been discussed, and how they were applied to the AI Innovation Challenge. Some of the presented recommendations are:

  • Assess data quality: plot multiple images at the same time, identify incomplete and incorrect data to then replace or delete it.
  • Visualize image size and aspect ratio: prepare a histogram and accordingly decide on a consistent image size to be fed into the chosen algorithm.
  • Verify that all images have been annotated: unannotated images are considered as background, which will only send conflicting signals to the training model.
  • Ensure there is no class imbalance: to avoid bias in the model, use appropriate performance metrics, augment the data, or merge classes.
  • Check the size and shape of annotated bounding boxes: most computer vision models are anchor-based. By visualizing the size, shape, and aspect ratio of bounding boxes, you will be able to tune the anchors and achieve better model performance.

References

