Building a Content Communication Prediction Environment
Brand Love intelligence (BLi) builds Marketing Intelligence tools that help others create more meaningful digital experiences.
A team of 50 technology changemakers developed a reinforcement model and dashboard to address the following problem.
Audiences today face information overload: they are flooded with daily content communication online from all sides, which naturally reduces their attention span. As a result, content is no longer absorbed, and organizations and brands lose their impact. To reverse this tendency toward content avoidance, BLi's partners, such as organizations (with their various types of brands) and reputable persons such as athletes and artists, need highly relevant content that captures the audience's attention while remaining attractive to them in the long run.
Two factors, taken together, are especially important in addressing this problem: 1. the moment at which the audience is receptive to specific content (receptivity), and 2. the content match (likeability in the broadest sense). To obtain applicable data, we propose the following technical solution:
An interactive hyper-local dashboard (world map) that visualizes the output of the reinforced ML model, providing users with (near) real-time insights and notifications about audience receptivity. The dashboard will empower users to make optional model customizations to adapt the solution to their needs, while transparency by design will let users understand why the output is presented as it is. This serves as a launch platform for further research.
The reinforcement model will receive input from various data sources and libraries needed to track (near) real-time hyper-local indicators that influence behavior. The indicators are as follows: hyper-local influencers (weather, news, vacation, live events, Google Trends, Share of Search, etc.), journey analysis, mood analysis (anger, joy, confidence, etc.), and behavior analysis (context analysis with NLP, image classification, etc.). For the latter two, Tweet data from Twitter timelines will be used.
The indicators listed above are examples; the final set can be defined later during the project.
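To make the indicator inputs concrete, here is a minimal sketch of how the listed signals might be assembled into a single feature vector for the model. The indicator names, the neutral default of 0.5, and the assumption that each signal is already normalized to [0, 1] are illustrative choices of this sketch, not part of the BLi specification.

```python
# Hypothetical sketch: combine hyper-local indicator signals into one
# feature vector the model could consume. Names and defaults are
# illustrative assumptions, not the actual BLi schema.

def build_feature_vector(indicators: dict, expected_keys: list) -> list:
    """Assemble indicators in a fixed order, defaulting missing
    signals to a neutral 0.5 (assumes values are scaled to [0, 1])."""
    return [indicators.get(key, 0.5) for key in expected_keys]

# Fixed ordering so the model always sees the same layout.
EXPECTED = ["weather", "news", "live_events", "google_trends",
            "share_of_search", "mood_joy", "mood_anger"]

sample = {"weather": 0.8, "share_of_search": 0.35, "mood_joy": 0.6}
vector = build_feature_vector(sample, EXPECTED)
```

In practice each indicator would come from its own pipeline (weather API, trends scraper, NLP mood classifier); the fixed key order is what lets the downstream model treat the inputs consistently.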
A particular focus of this challenge is the Share of Search (SoS) construct (SoS can be defined as the portion of overall online search interest in a particular keyword that a brand captures), and specifically the operational and practical quality of SoS within the model. The idea is to strengthen the model's ability to predict which personality types intend to engage with, or act on, brands, branches, product categories, and products or services, and at what moment (momentum). SoS is believed to have substantial predictive value when connected to more of the available data points.
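The SoS definition above reduces to a simple ratio. A minimal sketch, with made-up search volumes (e.g. as obtained from Google Trends):

```python
# Share of Search: the fraction of total category search volume that
# a single brand keyword captures. Volumes below are invented numbers.

def share_of_search(brand_volume: float, category_volumes: list) -> float:
    """Return the brand's share of total category search volume, in [0, 1]."""
    total = sum(category_volumes)
    if total == 0:
        return 0.0  # avoid division by zero when no data is available
    return brand_volume / total

# Example: a brand with 120 searches in a category totalling 480 searches.
volumes = [120, 200, 100, 60]  # the brand plus three competitors
sos = share_of_search(120, volumes)  # 120 / 480 = 0.25
```

Within the model, this ratio would be computed per keyword and time window, so changes in SoS over time can be linked to the receptivity and momentum signals described above.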
An example would be NGOs using SoS to nudge the donation behavior of their audience.
In addition, a connection needs to be made to the BLi infrastructure to obtain the survey audience's personality profiles and answers. We expect to expand our data set every few months at first; later, the interval will shrink to weeks or even days. Therefore, the model needs an auto-retrain capability.
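A minimal sketch of what such an auto-retrain trigger could look like, assuming retraining is driven by the volume of newly collected survey responses and by elapsed time. The thresholds are hypothetical placeholders; the 90-day default mirrors the "every few months" interval and would be shortened as data arrives faster.

```python
# Hypothetical auto-retrain trigger: retrain when enough new survey data
# has accumulated, or when the current model has grown too old.

from datetime import datetime, timedelta

def should_retrain(new_samples: int,
                   last_trained: datetime,
                   now: datetime,
                   min_samples: int = 500,
                   max_age: timedelta = timedelta(days=90)) -> bool:
    """True when either the data-volume or the model-age threshold is hit.

    `min_samples` and `max_age` are placeholder values; in production
    they would be tuned as the survey cadence moves from months to days.
    """
    return new_samples >= min_samples or (now - last_trained) >= max_age
```

A scheduler (e.g. a daily cron job) would call this check and, when it returns `True`, kick off the retraining pipeline and reset the counters.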
In summary, the representative outcomes of the project are as follows:
A reinforcement ML model that predicts the Audience Cluster Receptivity Score (%) and the Audience Cluster Brand Receptivity Score (%);
A reinforcement ML model that maps the receptivity stage of worldwide audience clusters. The model architecture will be decided by the community together with the company; the current recommendation is an LSTM;
A real-time interactive dashboard (world map) using the D3 library that visualizes the size and receptivity stage of audience clusters;
The (sub-)models developed will be stand-alone and can be used independently or combined with future models or code yet to be defined;
Integrated REST-APIs to get data from and push output to the BLi tool;
In-depth, detailed documentation will be delivered explaining, among other things, why particular choices were made, how the code, features, and models work, and what input is needed for optimal performance.
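To illustrate how the model output and the D3 world-map dashboard listed above could connect, here is a hedged sketch of a possible data contract: per-cluster scores serialized as a GeoJSON FeatureCollection, a format D3 can render directly. The field names and the cluster records are illustrative assumptions, not a defined BLi interface.

```python
# Hypothetical data contract between the model and the D3 world map:
# serialize audience clusters (location, size, receptivity %) as GeoJSON.

import json

def clusters_to_geojson(clusters: list) -> str:
    """Serialize cluster dicts with lat/lon, size, and receptivity_pct
    as a GeoJSON FeatureCollection string (coordinates are [lon, lat])."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [c["lon"], c["lat"]]},
            "properties": {"size": c["size"],
                           "receptivity_pct": c["receptivity_pct"]},
        }
        for c in clusters
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})

# Example payload for a single (invented) cluster near Amsterdam.
payload = clusters_to_geojson([
    {"lat": 52.37, "lon": 4.90, "size": 1200, "receptivity_pct": 68.5},
])
```

A payload like this could be pushed to the BLi tool through the REST APIs mentioned above and polled by the dashboard for (near) real-time updates.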