Detecting Weeds Through YolactEdge Instance Segmentation to Support Smart Farming

Published: Mar 24, 2021

Authors: Thomas Joseph, Marjan Ghobadian

Introduction

To achieve precise classification and localization of crops and weeds for smart farming, several methods such as instance and semantic segmentation are applied. The goal is to reach not only high accuracy in localization and classification but also the highest possible speed.

In this article, we discuss the implementation and results of YolactEdge, a fully convolutional model for real-time instance segmentation. The underlying approach reaches real-time speeds on MS COCO when evaluated on a single Titan Xp, which makes it very fast.

Real-time object detection started with one-stage object detectors such as SSD (Single Shot MultiBox Detector) and YOLO (You Only Look Once). Reaching real-time instance segmentation required improving on two-stage instance segmentation methods, which depend on feature localization to produce masks. Two-stage object detectors use a Region Proposal Network to generate regions of interest in the first stage and then send the region proposals down the pipeline for object classification and bounding-box regression [6]. These methods achieve better accuracy but are too slow for real-time detection.

Fig.1 shows a single-stage detector architecture. This approach treats detection as a simple regression problem: the network takes an input image and directly learns class probabilities and bounding-box coordinates. Because it detects an object and its pose in a single forward pass, relying only on convolutions, it is faster than two-stage detectors.

Fig.1. Architecture of a convolutional neural network with a single-stage detector
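
To make this concrete, here is a minimal, hypothetical single-stage prediction head in PyTorch: one convolutional pass maps a feature map directly to per-anchor class scores and box offsets, with no region-proposal stage. The layer names and sizes are illustrative, not taken from any specific detector.

```python
import torch
import torch.nn as nn

class SingleStageHead(nn.Module):
    """Minimal single-stage detection head: one conv pass maps a feature
    map to per-anchor class scores and bounding-box offsets."""
    def __init__(self, in_channels=256, num_anchors=3, num_classes=3):
        super().__init__()
        # One branch per output: class probabilities and box coordinates.
        self.cls_conv = nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1)
        self.box_conv = nn.Conv2d(in_channels, num_anchors * 4, 3, padding=1)

    def forward(self, features):
        # Both predictions come from the same forward pass: no region
        # proposal stage, which is why single-stage detectors are faster.
        return self.cls_conv(features), self.box_conv(features)

head = SingleStageHead()
feats = torch.randn(1, 256, 69, 69)          # e.g. a P3-level feature map
cls_scores, box_offsets = head(feats)
print(cls_scores.shape, box_offsets.shape)   # (1, 9, 69, 69) (1, 12, 69, 69)
```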

Yolact then entered the stage as a real-time instance segmentation method: it generates a dictionary of non-local prototype masks over the entire image and predicts a set of linear combination coefficients per instance. For each instance, it linearly combines the prototypes using the corresponding predicted coefficients and then crops the result with the predicted bounding box [3].

Fig.2. Yolact architecture consists of four stages: feature backbone, feature pyramid, prediction head, and Protonet [3]
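
As a rough illustration of this prototype-combination step (a sketch of the idea, not the authors' code): suppose the Protonet outputs k prototype masks and the prediction head outputs k coefficients per instance.

```python
import torch

# Sketch of Yolact's mask assembly [3]: each instance mask is a sigmoid of
# a linear combination of the k prototype masks, then cropped with the
# predicted box. Sizes below are illustrative.
k, h, w, n = 32, 138, 138, 5             # prototypes, proto resolution, instances
prototypes = torch.randn(h, w, k)        # Protonet output P
coefficients = torch.randn(n, k)         # per-instance coefficients C

# M = sigmoid(P @ C^T): one (h, w) mask per instance.
masks = torch.sigmoid(prototypes @ coefficients.t())   # (h, w, n)

# Crop each mask to its predicted box (x1, y1, x2, y2 in proto coordinates).
boxes = torch.tensor([[10, 10, 100, 120]] * n)
for i in range(n):
    x1, y1, x2, y2 = boxes[i].tolist()
    crop = torch.zeros(h, w)
    crop[y1:y2, x1:x2] = 1.0
    masks[..., i] *= crop
```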

Yolact++, however, incorporates deformable convolutions [2] into the backbone network, which provide more flexible feature sampling and strengthen the model's ability to handle instances of different scales, aspect ratios, and rotations. The network is further improved with an efficient and fast mask rescoring branch that re-ranks mask predictions according to their quality. With deformable convolutions, feature sampling aligns better with instances, which results in a better backbone detector and more precise mask prototypes.

This network is further improved to YolactEdge, a novel real-time instance segmentation approach that runs accurately on edge devices at real-time speeds [3].

Model Architecture

YolactEdge is a video instance segmentation model, designed to exploit successive frames rather than processing each frame independently like standard segmentation models. It is based on Yolact and shares the same architecture without altering it, yet delivers about a 5x speedup over Yolact while keeping roughly the same accuracy (mAP).

YolactEdge is the optimal choice for instance segmentation on the edge, as it achieves real-time speeds (>30 FPS) on the Jetson AGX Xavier. The improvement comes from two levels of optimization, systematic and algorithmic.

  1. Systematic level: the TensorRT inference engine. TensorRT is NVIDIA's deep learning inference optimizer; it converts the weights from FP32 to a combination of INT8 and FP16, which greatly improves inference time while preserving accuracy even though the network parameters are quantized to fewer bits.
  2. Algorithmic level: exploiting temporal redundancy in video. This is where YolactEdge truly shines, taking advantage of having a video rather than a single frame; it is what we explore in detail in this part.

Architecture

Fig.2 presents the original Yolact architecture, which consists of four stages:

  • A backbone
  • A feature pyramid network
  • A prediction head
  • A Protonet

The first two handle feature extraction and can be seen in other models; the last two are Yolact-specific. We will therefore group the backbone and the FPN as the feature extraction part, and the prediction head and the Protonet as the prediction part.

The prediction part is explained in detail in the Yolact paper [3]; it uses the feature maps produced by the feature extraction part to generate the final masks.

It should also be noted that feature extraction is more computationally expensive than prediction, accounting for about 60% of the computation cost.

Feature extraction

Fig.3. Feature extraction part of the Yolact network architecture.

The feature extraction consists of a ResNet backbone plus an FPN. The ResNet progressively down-samples the input frame, producing the feature maps C1-C5, while the FPN down-samples and up-samples the ResNet output, taking C5, C4, and C3 as skip connections for P5, P4, and P3 respectively.

The FPN feature maps are what the later stages of the architecture use for prediction.

It must be noted that the ResNet is the most resource-hungry part of this network, with C4 and C5 alone taking 41.84% of the whole network's computation.
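
To illustrate the skip connections, here is a minimal FPN top-down pathway in PyTorch, assuming standard ResNet-50/101 channel counts for C3-C5; the layer names and shapes are ours, not YolactEdge's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniFPN(nn.Module):
    """Illustrative FPN top-down pathway with lateral (skip) connections."""
    def __init__(self, channels=(512, 1024, 2048), out_ch=256):
        super().__init__()
        # 1x1 lateral convs project C3, C4, C5 to a common channel count.
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in channels])

    def forward(self, c3, c4, c5):
        # C5 -> P5, then repeatedly upsample and add the lateral connection.
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        return p3, p4, p5

c3 = torch.randn(1, 512, 64, 64)
c4 = torch.randn(1, 1024, 32, 32)
c5 = torch.randn(1, 2048, 16, 16)
p3, p4, p5 = MiniFPN()(c3, c4, c5)
```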

Why temporal redundancy?

Since this is a video, it is highly probable that each frame is similar to the previous one with only small changes, and the higher feature maps (C4, C5) are more resilient to change than the lower ones (C1-C3). Extracting the same features for every frame therefore seems computationally wasteful: there must be shared features that can be 'transformed' from one frame to another, and that is exactly what YolactEdge does, as we explore next.

Two types of frames

YolactEdge extends Yolact [3] to video by transforming a subset of the features from keyframes (left side of Fig.4) to non-keyframes (right side of Fig.4), reducing expensive backbone computation.

Fig.4. YolactEdge network architecture.

We separate frames into:

  • Keyframes: frames for which the whole feature extraction is computed
  • Non-keyframes: frames for which only some features are computed and the rest are 'transformed'

As we have seen, C4 and C5 are the most computationally expensive, so we compute them only for keyframes. For non-keyframes they are not computed; instead, P5 and P4 are warped from the previous keyframe, letting us share, or propagate, features across frames.

Feature Warping

It starts with the question: how can we predict what the features will look like in the next non-keyframes?

The intuition: if I showed you a video of someone throwing a ball, paused it, and asked where the ball will be after I press resume, you would probably ask in which direction the ball is moving and, based on that, predict its next position.

It is the same here: to warp features we need to know how objects move between frames. In computer vision this is called 'optical flow estimation', and there are neural networks designed specifically for it. Given two successive frames as input, such a network outputs a 2-D flow field that estimates the direction in which every object in the picture is moving. In YolactEdge, the network that computes it is inspired by FlowNet [7].

An example of a flow field given two frames is shown in Fig.5.

Fig.5. An example of a flow field

A smart implementation of FlowNet

Fig.6. FlowNet structure, consisting of two parts: a) FlowNetS, b) FeatFlowNet

FlowNet stacks two images together and runs them through convolutions to extract features; these features are then refined by recursively up-sampling and concatenating them, and the final features are used to predict the flow field.

To avoid the overhead of recalculating features, YolactEdge reuses the features C1-C3 already computed by the ResNet: it concatenates them, passes them through a few convolution layers, refines them the same way FlowNet does, and uses the result to predict the flow field.

P4 and P5 are then warped using the flow field, the values of P4 and P5 from the previous keyframe, and bilinear interpolation.
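
A sketch of that warping step in PyTorch, using a flow field and bilinear interpolation via grid_sample; the actual YolactEdge implementation may differ.

```python
import torch
import torch.nn.functional as F

def warp_features(feat_key, flow):
    """Warp keyframe features with a 2-D flow field and bilinear interpolation.
    feat_key: (N, C, H, W) features from the keyframe.
    flow:     (N, 2, H, W) per-pixel (dx, dy) displacements in pixels."""
    n, _, h, w = feat_key.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    # Shift the grid by the flow, then normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(feat_key, norm_grid, mode="bilinear", align_corners=True)

p5_key = torch.randn(1, 256, 17, 17)
flow = torch.zeros(1, 2, 17, 17)   # zero flow returns the input unchanged
w5 = warp_features(p5_key, flow)
```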

Key-frames selection

How do we decide which frames become keyframes and which become non-keyframes?

Ideally, keyframes should be frames that show a significant change over the previous keyframe, but YolactEdge does not go that far, as the authors explain [1]:

‘It is not guaranteed that the randomly selected keyframes are free from motion blur. A smart way to select keyframes would be interesting future work.’[1]

For now, YolactEdge simply takes a keyframe every constant number k of frames; e.g., if k equals 10, every 10th frame is taken as a keyframe and the rest as non-keyframes.

Pseudocode

The pseudocode below uses the following notation:

  • Nfeat_all: compute the whole backbone and FPN, i.e. (C1-C5) and (P3-P7)
  • Nfeat_partial_1: compute (C1-C3)
  • Flow: calculate the optical flow between the keyframe and the current non-keyframe using backbone-extracted features
  • Warp: transform P5 and P4 from the previous keyframe to get (W5, W4)
  • Nfeat_partial_2: compute (P3, P6, P7) using (W5, W4, C3)
  • Ntask: the rest of the network (the prediction head and the Protonet)
  • _k: belonging to a keyframe; _i: to a non-keyframe
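
With that notation, a Python-flavored sketch of the inference loop might look as follows; the component functions are placeholders for the network parts defined above, and k is the keyframe interval.

```python
def yolact_edge_inference(frames, k=10):
    """Sketch of YolactEdge's per-frame computation (pseudocode)."""
    results = []
    for i, frame in enumerate(frames):
        if i % k == 0:
            # Keyframe: run the full backbone + FPN.
            C1_C5_k, P3_P7_k = Nfeat_all(frame)
            C3_k = C1_C5_k[2]                    # cache C3 for flow estimation
            P4_k, P5_k = P3_P7_k[1], P3_P7_k[2]  # cache P4, P5 for warping
            results.append(Ntask(P3_P7_k))
        else:
            # Non-keyframe: compute only the cheap C1-C3 features.
            C1_C3_i = Nfeat_partial_1(frame)
            # Estimate motion between the keyframe and the current frame.
            flow = Flow(C3_k, C1_C3_i[2])
            # Warp the expensive P5, P4 from the keyframe instead of recomputing.
            W5, W4 = Warp(P5_k, P4_k, flow)
            # Complete the pyramid (P3, P6, P7) from (W5, W4, C3).
            P3_P7_i = Nfeat_partial_2(W5, W4, C1_C3_i[2])
            results.append(Ntask(P3_P7_i))
    return results
```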

Comprehensive documentation and installation instructions for the YolactEdge network are provided in [1] and in the GitHub repository: https://github.com/haotian-liu/yolact_edge

Dataset

The input data for training Yolact++ and YolactEdge is in MS COCO format with JSON annotations.

The dataset used in this article was provided by the Weedbot team and contained 750 images of size 3008 x 3008. For each image, a JSON annotation label was created.

The dataset came in two folders, each with its own annotation file. We first merged the images into one dataset and combined the annotation files accordingly.

To prepare the images, we rotated them with an image editor (using the Windows built-in "rotate" function from the context menu), uploaded them as new tasks, and then uploaded the COCO JSON to check whether the annotations were rotated accordingly.

To improve the results, we also increased the number of annotation classes and re-annotated the images using the CVAT tool. This was applied to 10,000 annotations across the 750 images.

YolactEdge down-samples images from their native resolution to the network resolution of 550 x 550, and doing this on the fly slows down training. To solve this, we resized the images and annotations to network resolution beforehand; training then runs much faster and we end up with more images (10,462) to train the model, which leads to better results. A Python script crops and resizes the images to network resolution, using the carrot annotations as reference points; the principle is new_coordinate = resize_ratio x old_coordinate.
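
A minimal sketch of that resizing step, assuming COCO-style JSON annotations and Pillow; the file names and paths below are illustrative.

```python
import json
from PIL import Image

NET_RES = 550  # YolactEdge network resolution

with open("annotations.json") as f:
    coco = json.load(f)

ratios = {}  # image_id -> (ratio_x, ratio_y)
for img in coco["images"]:
    ratios[img["id"]] = (NET_RES / img["width"], NET_RES / img["height"])
    Image.open(img["file_name"]).resize((NET_RES, NET_RES)).save(img["file_name"])
    img["width"] = img["height"] = NET_RES

# new_coordinate = resize_ratio * old_coordinate, for boxes and polygons alike.
for ann in coco["annotations"]:
    rx, ry = ratios[ann["image_id"]]
    x, y, w, h = ann["bbox"]
    ann["bbox"] = [x * rx, y * ry, w * rx, h * ry]
    ann["segmentation"] = [
        [v * (rx if i % 2 == 0 else ry) for i, v in enumerate(poly)]
        for poly in ann["segmentation"]   # polygons are [x1, y1, x2, y2, ...]
    ]

with open("annotations_550.json", "w") as f:
    json.dump(coco, f)
```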

With the new annotations, we obtained four additional classes. To resolve inconsistencies caused by the annotators' differing views and by border cases, two more classes (Class 2-3 and Class 3-4) were introduced. The distribution of the classes in the dataset is shown in Fig.7.

Fig.7. Distribution of the different plant classes

Results and Conclusion

We tried several real-time object recognition architectures that could potentially be used for weed-crop segmentation, including Bonnet, U-Net, and Yolact++. In general, all of these object detection models struggle with the trade-off between speed and accuracy.

Here, we discuss these instance segmentation frameworks on the Weedbot dataset.

In the first attempt, we trained on our dataset with Yolact++. Installation, setup, and configuration are described in the GitHub repository: https://github.com/dbolya/yolact

In order to use Yolact++, compiling the DCNv2 code is necessary. We modified the configuration for our dataset and set up training with a ResNet-50 backbone.
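
Adding a custom dataset follows the pattern documented in the repository's data/config.py; the class names and paths in this sketch are illustrative rather than our exact configuration.

```python
# Sketch of a dataset entry for yolact's data/config.py, following the
# pattern in the repository README. Paths and class names are illustrative.
weedbot_dataset = dataset_base.copy({
    'name': 'Weedbot',
    'train_images': './data/weedbot/train/',
    'train_info': './data/weedbot/annotations/train.json',
    'valid_images': './data/weedbot/valid/',
    'valid_info': './data/weedbot/annotations/valid.json',
    'class_names': ('carrot', 'weed'),
})

yolact_plus_weedbot_config = yolact_plus_resnet50_config.copy({
    'name': 'yolact_plus_weedbot',
    'dataset': weedbot_dataset,
    # yolact counts the background as an extra class.
    'num_classes': len(weedbot_dataset.class_names) + 1,
})
```

The resulting prediction is shown in Fig.8.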

Fig.8. Predicted results of Yolact++ on Weedbot data.

The accuracy results were acceptable; however, inference took about 400 ms per image, far from our target of 12 ms.

Therefore, we adjusted the data and trained the YolactEdge network, using the model setup provided in the GitHub repository: https://github.com/haotian-liu/yolact_edge

Installation, setup, and documentation covering the TensorRT installation are provided in the repository: https://github.com/haotian-liu/yolact_edge/blob/master/INSTALL.md

NVIDIA TensorRT provides INT8 and FP16 optimization modes for inference, which significantly reduce latency with minimal to no loss in accuracy. The different precision modes of TensorRT provided a huge speedup, as seen in Fig.9. We took this approach for training the YolactEdge model, an instance segmentation model optimized for edge devices (Fig.10).
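
YolactEdge wires TensorRT in through its own scripts; as a standalone illustration of what the FP16 and INT8 modes look like with NVIDIA's torch2trt converter (the model constructor and calibration dataset here are hypothetical):

```python
import torch
from torch2trt import torch2trt

model = build_model().eval().cuda()        # hypothetical model constructor
x = torch.randn(1, 3, 550, 550).cuda()     # dummy input at network resolution

# FP16: halve the precision of weights/activations for a large latency win.
model_fp16 = torch2trt(model, [x], fp16_mode=True)

# INT8: quantize further; requires calibration data (hypothetical here).
model_int8 = torch2trt(model, [x], int8_mode=True,
                       int8_calib_dataset=calib_dataset)

with torch.no_grad():
    out = model_fp16(x)                    # inference now runs through TensorRT
```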

Fig.9. Comparing different precision modes of TensorRT

Fig.10. YolactEdge training on multi-class data: 3008 x 3008 images cropped to 1920 x 1200 and then resized to 550 x 550

To measure the performance of YolactEdge and compare its predictions with other studies, we used standard evaluation metrics for object detectors on the Weedbot dataset: F1-score, mean Intersection over Union (IoU), precision, and recall. To compute them, a confusion matrix is calculated between the prediction and the ground truth, from which true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are derived. The formulas are as follows:
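
Here, precision is the fraction of predicted positives that are correct, recall is the fraction of ground-truth positives that are recovered, and (for segmentation) IoU can be written pixel-wise in the same terms:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{IoU} = \frac{TP}{TP + FP + FN}
```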

Fig.13 shows benchmark inference times on the Jetson Xavier platform for YolactEdge models with different precision modes, given an input size of 550 x 550 px. The results show promising speeds: we get almost a 1.5x speedup just by converting the PyTorch model (FP32) to FP16 precision, and a 2.5-3x speedup by converting it to INT8 precision. The best model achieved with this approach runs at 58 ms, roughly a 7x improvement over the ~400 ms we obtained with Yolact++.

Fig.11. Results of YolactEdge training on Weedbot data, on a server with 32 GB of RAM and 24 GB per GPU (48 GB total)

The following points could further improve accuracy and speed:

  • Experiments around temporal redundancy, observing how it affects the benchmark times.
  • Experiments with different training sizes and new datasets, which could yield further mAP improvements.
  • As with Bonnet, running inference in C++ could provide further speed improvements.

References

  • [1] Liu, Haotian, et al. “YolactEdge: Real-time Instance Segmentation on the Edge (Jetson AGX Xavier: 30 FPS, RTX 2080 Ti: 170 FPS).” arXiv preprint arXiv:2012.12259 (2020).
  • [2] Bolya, Daniel, et al. “YOLACT++: Better real-time instance segmentation.” arXiv preprint arXiv:1912.06218 (2019).
  • [3] Bolya, Daniel, et al. “Yolact: Real-time instance segmentation.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
  • [4] NVIDIA Deep Learning TensorRT Documentation
  • [5] Stein, Elias, Siyu Liu, and John Sun. “Real-Time Object Detection on an Edge Device.”
  • [6] Soviany, Petru, and Radu Tudor Ionescu. “Optimizing the trade-off between single-stage and two-stage deep object detectors using image difficulty prediction.” 2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE, 2018.
  • [7] Dosovitskiy, Alexey, et al. "FlowNet: Learning optical flow with convolutional networks." Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2015.
