A leader in collaborative AI development, Omdena believes that truly great startups should build solutions that positively change the world by uplifting the vulnerable, improving health, and protecting the environment. This means building products that maximize positive impact rather than shareholder profit.
To make this vision a reality, Omdena has launched a first-of-its-kind incubator program, where 50 impact startups will get support to build and launch their AI solutions in 2021.
Omdena uses best-in-class tools, well-defined processes, a knowledge base, and its global, diverse community to build such high-impact solutions efficiently, ethically, and effectively. Since Omdena’s inception 18 months ago, over 1700 AI engineers from 86 countries have collaborated on developing solutions for world-renowned organizations like UNHCR, the World Resources Institute, the World Energy Council, the World Food Programme, UNDP, and Save the Children, tackling land conflicts, gang violence, hunger, climate change, sexual assault, and other significant problems.
From January 2021, Omdena will give access to its collaborative AI platform to impact-driven startups through the incubator program. A pre-launch version of the program has already successfully provided AI capabilities to several impact startups such as prestigious XPRIZE contestant Zzapp Malaria, AI for Good Innovation Factory finalist Child Growth Monitor, Engie Factory, and many more. Partnerships with well-known tool providers like Activeloop (YC-19), Miro, Labelbox, Spell.run, Apify, etc. have been established for the program too.
In a recent Forbes article on why Omdena is here to lead, a partner organization describes their experience in the following words:
“The power of a collaborative team of 50 AI engineers brought us a deployable AI model within two months. Something that would have taken us at least six months, using traditional development approaches.”
Artificial Intelligence (AI) is emerging as one of the most important technologies in the new wave of digital innovation and is transforming industries around the world. The Foundations listed below are at the forefront of some of the latest advancements in the AI field. They devote their energy to addressing challenging problems in order to create a better future.
The Research and Applied AI Summit brings together entrepreneurs and researchers who accelerate the application and science of AI technology for the common good. The foundation awards financial grants for open-source AI research and projects anywhere in the world. Grants are awarded on the basis of their potential for a common good impact. In particular, they support communities that would otherwise not have a chance to participate in advancing AI.
This Swiss-based Foundation champions the use of AI and digital technology to improve the well-being and health of children and young people in growing urban environments. To achieve this, they support research and invest in emerging technologies and scalable solutions globally.
The Foundation focuses on how new and emerging technologies, including AI, robotics, and big data, are playing an increasingly important role in shaping the jobs of the future. These technologies shift jobs between sectors, transform the kinds of tasks done in existing jobs, and change how people connect to work. The Future of Work(ers) program seeks to leverage new technologies to help increase human productivity as well as job quality.
Tableau Foundation is committed to empowering organizations with data to address not just the immediate crises, but also the longstanding inequities behind them. The Foundation works across several areas such as climate action, equity, global health, and poverty.
The AI for Good Foundation applies artificial intelligence research for global sustainable development. Current projects are helping to advance the global sustainable development goals (SDGs) — a set of goals adopted by 150 countries to “achieve a better and more sustainable future for all”.
The Cloudera Foundation is based on the belief that responsible use of data is the most powerful way to progress on the world’s most challenging problems. Cloudera, as a leading tech company, through the Cloudera Foundation, is committed to contributing its expertise in big data to find solutions to problems people face around the globe today.
The Wadhwani Institute for Artificial Intelligence is an independent, nonprofit research institute developing AI solutions for social good. Their mission is to develop and apply AI-based innovations and solutions to a broad range of societal domains including healthcare, agriculture, education, and financial inclusion.
Given the pace of innovation in the field of artificial intelligence, many people are looking for orientation, explanation, and classification. The trust for AI will only grow if its development also deals with the ethical, moral, and normative consequences of its actions. The Volkswagen Foundation supports this process with a constantly updated focus — and a funding initiative supporting the close collaboration of technology and social sciences.
The Hewlett Foundation supports much of its AI work through its Cyber Initiative, which seeks to cultivate a field that develops thoughtful, multidisciplinary solutions to complex cyber challenges and catalyzes better policy outcomes for the benefit of societies. It is also a major funder of the Ethics and Governance of Artificial Intelligence Fund.
The Patrick J. McGovern Foundation supports pioneering neuroscience research and explores the ways in which advances in technology, AI, and data science can be used for the betterment of humanity. They believe that artificial intelligence has the potential to improve people’s lives and that the benefits and value of AI should be shared by all.
The Skoll Foundation drives large-scale change by investing in, connecting, and celebrating social entrepreneurs and innovators who aim to solve the world’s most pressing problems. As the use of artificial intelligence grows exponentially each year, there is a possible future where AI and machine learning are used to bolster solutions to difficult challenges and drive scalable innovations.
For these Foundations, technology and AI are more than just supporting players; they are taking a leading role and delivering breakthrough results. Needless to say, we should all collaborate and use AI in the most purposeful and ethical way.
Omdena is a collaborative platform to build innovative, ethical, and efficient AI solutions to real-world problems.
Whether termed cyclone, typhoon, or hurricane, these natural weather events pack a serious punch and are responsible for approximately 10,000 deaths per year, in some cases causing well over $100 billion in damage. “There’s now evidence that the unnatural effects of human-caused global warming are already making hurricanes stronger and more destructive. The latest research shows the trend is likely to continue as long as the climate continues to warm” (Berardelli, 2019).
It is for these reasons that the World Food Programme teamed up with Omdena to more accurately predict the types and amount of aid required when disaster strikes. “Assisting almost 100 million people in around 83 countries each year, the World Food Programme (WFP) is the leading humanitarian organization saving lives and changing lives, delivering food assistance in emergencies and working with communities to improve nutrition and build resilience.”
Omdena gathered a team of 34 collaborators specializing in artificial intelligence and machine learning, spanning 19 different countries, for eight weeks, with the goal of developing a data-driven AI approach to help the WFP and other humanitarian organizations know exactly what resources the people affected by cyclones (or any other disaster) will need, and to expedite deployment as fast as possible. A priority for the team was answering questions such as: How much food and water is required? What sorts of shelters, and how many, are needed? What types and quantities of non-food essentials are appropriate? Before AI models could be developed, relevant data had to be gathered for this disaster response problem.
The team collected data from a variety of sources, such as NOAA, to determine affected populations and critical features of these populations such as income level, injuries, deaths, and more. Important factors were determined about cyclones, including wind speed, total hours on land, damage factors, and whether the populations were rural versus urban. Below we see the correlation mapped between income level and the number of people affected, revealing the populations most in need of assistance.
Understanding the attributes of the people affected by a disaster helps to reveal the types of aid required. So that the WFP and other aid organizations can determine what and how much relief to send with precision, the team used mathematical models to create a tool that calculates the needs of the people in the targeted disaster zones. The tool calculates how much food, non-food items, shelter, etc., the population should need for a determined number of days.
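At its core, such a tool scales per-capita requirements by population and duration. The ration figures and shelter capacity below are hypothetical placeholders for illustration, not the project's actual planning values, which would come from WFP standards:

```python
# Hypothetical per-person daily rations and shelter capacity -- real
# figures would come from WFP planning standards, not from this sketch.
DAILY_RATION = {"food_kg": 0.5, "water_liters": 15.0}
PEOPLE_PER_SHELTER = 5

def estimate_needs(population, days):
    """Scale per-capita daily requirements by affected population and duration."""
    needs = {item: qty * population * days for item, qty in DAILY_RATION.items()}
    # Shelters depend on headcount, not duration; round up with ceiling division
    needs["shelters"] = -(-population // PEOPLE_PER_SHELTER)
    return needs

print(estimate_needs(population=1000, days=7))
```

The real tool layers population attributes (income level, rural versus urban, injuries) on top of this basic scaling to customize the aid mix.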
This exciting AI prototype can be used as the basis to assist disaster response organizations around the world to accurately customize aid resources to the specific needs of the people impacted. The team identified a more precise way to allocate aid in times of disaster. This will allow the World Food Programme and other organizations to respond to the needs of affected people faster and more efficiently than ever before thus reducing suffering and saving lives.
Sentiment Analysis on Energy Transition commissioned by the World Energy Council and carried out by Omdena
The world is in the midst of an energy transition. This massive shift aims to move away from reliance on fuels that are destructive to the climate, the environment, and people’s well-being. The goal established by the UN is to “ensure access to affordable, reliable, sustainable and modern energy for all” by 2030. While governments, energy companies, and activists dominate the headlines, the progress with infrastructure and technology won’t be sufficient. A successful energy transition for the good of all humanity depends on the action of individuals. Together with the World Energy Council, the world’s leading member-based global energy network, Omdena explored the use of AI in understanding how people around the world perceive this energy transition and their role in it.
How each one of us views the steps in this energy transition likely depends on our personal perspective. For instance, while an increase in home fuel bills might be a mere inconvenience for an affluent family, it might push someone in a marginalized community into poverty. A gasoline tax to subsidize renewable energy efforts will be applauded by some and protested by others, as was the case with the “yellow vest movement” that ignited in France in 2018. Though outlawing the use of fossil-fuel-based generators will be irrelevant for someone with reliable access to electricity, it may cast those living in energy poverty into darkness.
Knowledge of these diverging views is critical to guiding the global shift to clean, affordable, and socially-just energy. Are individuals aware of the risks and benefits of the move to clean power? Do they believe their personal choices and behaviors will have an impact? Who do they feel should pay the costs of the transition? The World Energy Council commissioned Omdena to explore the effectiveness of artificial intelligence to grasp the attitudes of the world’s populace on these topics.
Omdena is a global platform where AI experts and data scientists from diverse backgrounds collaborate to build AI-based solutions to real-world problems. You can learn more here about Omdena’s innovative approach to building AI solutions through global collaboration.
For this eight-week machine learning project, the team built numerous AI models to perform natural language processing. Known as NLP, this approach to AI is concerned with understanding human language. Social media conversations and news articles addressing energy-related topics served as the data for the project. The NLP models were trained to gather and categorize public conversations about energy transition topics. In the words of Amardeep Singh:
What sets this challenge apart from the rest was the sheer scale of data collected, social channels scraped and data analyzed.
For example, one set of models gathered and analyzed tweets in more than 20 countries that were related to complaints about “renewable energy cost”.
As seen in the chart, the modeling revealed that technology is the biggest concern in the complaint tweets in Brazil and France. In contrast, relevant tweets in Nigeria were focused solely on policy. Though conclusions cannot be drawn from these isolated collections of data, this exploratory work has led to an understanding of the boundaries of what can be extracted from public online sources. Omdena Collaborator Mahzad Khoshlessan applied various models to filter for relevant tweets to visualize thoughts, concerns, and sentiments of citizens in the USA, UK, Nigeria, and India. Below is an example visualization displaying the most discussed topics in India.
Word Clouds: India
“Here at the World Energy Council, we recognize the opportunity and urgent need to humanize energy transition. Only by working at the human-level, embracing a broader community and addressing the social impacts agenda, will it be possible to achieve and sustain the breakthrough performance required for fast, clean, just, and socially inclusive global energy transition.” — The World Energy Council
This project aimed to provide a proof-of-concept machine-learning-based methodology to identify land conflict events in a geography and match those events to relevant government policies. The overall objective is to offer a platform where policymakers can be made aware of land conflicts as they unfold and identify existing policies that are relevant to the resolution of those conflicts.
Several Natural Language Processing (NLP) models were built to identify and categorize land conflict events in news articles and to match those land conflict events to relevant policies. A web-based tool that houses the models allows users to explore land conflict events spatially and through time, as well as explore all land conflict events by category across geography and time.
The geographic scope of the project was limited to India, which has the most environmental (land) conflicts of any country in the world.
Degraded land is “land that has lost some degree of its productivity due to human-caused process”, according to the World Resources Institute. Land degradation affects 3.2 billion people and costs the global economy about 10 percent of annual gross world product. While dozens of countries have committed to restoring 350 million hectares of degraded land, land disputes are a major barrier to effective implementation. Without streamlined access to land use rights, landowners are not able to implement sustainable land-use practices. In India, where 21 million hectares of land have been committed to restoration, land conflicts affect more than 3 million people each year.
AI and machine learning offer tremendous potential not only to identify land-use conflict events but also to match suitable policies for their resolution.
All data used in this project is in the public domain.
News Article Corpus: Contained 65,000 candidate news articles from Indian and international newspapers from the years 2008, 2017, and 2018. The articles were obtained from the Global Database of Events Language and Tone Project (GDELT), “a platform that monitors the world’s news media from nearly every corner of every country in print, broadcast, and web formats, in over 100 languages.” All the text was either originally in English or translated to English by GDELT.
Annotated Corpus: Approximately 1,600 news articles from the full News Article Corpus were manually labeled and double-checked as Negative (no conflict news) and Positive (conflict news).
Gold Standard Corpus: An additional 200 annotated positive conflict news articles, provided by WRI.
Policy Database: Collection of 19 public policy documents related to land conflicts, provided by WRI.
In this phase, the articles of the News Article Corpus and policy documents of the Policy Database were prepared for the natural language processing models.
The articles and policy documents were processed using spaCy, an open-source library for natural language processing, to achieve the following:
Tokenization: Segmenting text into words, punctuation marks, and other elements.
Part-of-speech (POS) tagging: Assigning word types to tokens, such as “verb” or “noun”.
Dependency parsing: Assigning syntactic dependency labels to describe the relations between individual tokens, such as “subject” or “object”.
Lemmatization: Assigning the base forms of words, regardless of tense or plurality.
Sentence Boundary Detection (SBD): Finding and segmenting individual sentences.
Named Entity Recognition (NER): Labelling named “real-world” objects, like persons, companies, or locations.
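A minimal spaCy pipeline along these lines is sketched below. It uses a blank English pipeline with only a rule-based sentencizer, so it demonstrates tokenization and sentence boundary detection; the full project pipeline would load a trained model (such as en_core_web_sm) to also obtain POS tags, dependencies, lemmas, and named entities:

```python
import spacy

# A blank pipeline gives tokenization out of the box; a trained model
# such as en_core_web_sm would add POS tags, dependencies, lemmas, and NER.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")  # rule-based sentence boundary detection

doc = nlp("Farmers protested near the dam. Police were deployed.")

tokens = [t.text for t in doc]           # tokenization
sentences = [s.text for s in doc.sents]  # sentence boundary detection
```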
Coreference resolution was applied to the processed text data using Neuralcoref, which is based on an underlying neural net scoring model. With coreference resolution, all common expressions that refer to the same entity were located within the text. All pronominal words in the text, such as her, she, he, his, them, their, and us, were replaced with the nouns to which they referred.
For example, consider this sample text:
“Farmers were caught in a flood. They were tending to their field when a dam burst and swept them away.”
Neuralcoref recognizes “Farmers”, “they”, “their” and “them” as referring to the same entity. The processed sentence becomes:
“Farmers were caught in a flood. Farmers were tending to farmers’ field when a dam burst and swept farmers away.”
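The substitution step can be sketched in plain Python. Here the coreference clusters are supplied by hand for illustration; in the actual pipeline, Neuralcoref derives them from its neural scoring model (exposed via doc._.coref_clusters):

```python
import re

def resolve_pronouns(text, clusters):
    """Replace pronominal mentions with the noun they refer to.

    `clusters` maps a replacement noun to the pronouns that corefer with
    it. The clusters are hand-written here; Neuralcoref would produce them.
    """
    for noun, pronouns in clusters.items():
        for pronoun in pronouns:
            # \b word boundaries keep "them" from matching inside e.g. "theme"
            text = re.sub(rf"\b{re.escape(pronoun)}\b", noun, text)
    return text

text = ("Farmers were caught in a flood. They were tending to their field "
        "when a dam burst and swept them away.")
clusters = {"Farmers": ["They"], "farmers'": ["their"], "farmers": ["them"]}
print(resolve_pronouns(text, clusters))
```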
The objective of this phase was to build a model to categorize the articles in the News Article Corpus as either “Negative”, meaning they were not about conflict events, or “Positive”, meaning they were about conflict events.
After preparation of the articles in the News Article Corpus, as described in the previous section, the texts were then prepared for classification.
First, an Annotated Corpus was formed to train the classification model. A 1,600 article subset of the News Article Corpus was manually labeled as “Negative” or “Positive”.
To prepare the articles in both the News Article Corpus and Annotated Corpus for classification, the previously pre-processed text data of the articles was represented as vectors using the Bag of Words approach. With this approach, the text is represented as a collection, or “bag”, of the words it contains along with the frequency with which each word appears. The order of words is ignored.
For example, consider a text article comprised of these two sentences:
Sentence 1: “Zahra is sick with a fever.”
Sentence 2: “Arun is happy he is not sick with a fever.”
This text contains a total of ten words: “Zahra”, “is”, “sick”, “happy”, “with”, “a”, “fever”, “not”, “Arun”, “he”. Each sentence in the text is represented as a vector, where each index in the vector indicates the frequency that one particular word appears in that sentence, as illustrated below.
With this technique, each sentence is represented by a vector, as follows:
“Zahra is sick with a fever.”
[1, 1, 1, 0, 1, 1, 1, 0, 0, 0]
“Arun is happy he is not sick with a fever.”
[0, 2, 1, 1, 1, 1, 1, 1, 1, 1]
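This vectorization can be sketched in a few lines of plain Python (a library implementation such as scikit-learn's CountVectorizer would work equally well; note that the index order here simply follows the first appearance of each word):

```python
from collections import Counter

def bag_of_words(sentences):
    """Return (vocabulary, count vectors), ignoring word order."""
    tokenized = [[w.strip(".,!?") for w in s.split()] for s in sentences]
    vocab = []
    for tokens in tokenized:
        for t in tokens:
            if t not in vocab:
                vocab.append(t)  # vocabulary in order of first appearance
    vectors = [[Counter(tokens)[w] for w in vocab] for tokens in tokenized]
    return vocab, vectors

vocab, vectors = bag_of_words([
    "Zahra is sick with a fever.",
    "Arun is happy he is not sick with a fever.",
])
```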
With the Annotated Corpus vectorized with this technique, the data was used to train a logistic regression classifier model. The trained model was then used with the vectorized data of the News Article Corpus, to classify each article into Positive and Negative conflict categories.
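The vectorize-then-classify step can be sketched with scikit-learn on a toy corpus; the article texts and labels below are invented stand-ins for the 1,600-article Annotated Corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled corpus standing in for the Annotated Corpus
texts = [
    "farmers protest over land acquisition near the dam",   # conflict
    "villagers clash with police over forest land rights",  # conflict
    "local team wins the regional cricket tournament",      # no conflict
    "new recipe festival draws food lovers to the city",    # no conflict
]
labels = [1, 1, 0, 0]  # 1 = Positive (conflict), 0 = Negative

vectorizer = CountVectorizer()           # Bag of Words vectorization
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Classify an unseen article using the same vectorizer
new_article = ["court hears farmers land rights protest case"]
pred = clf.predict(vectorizer.transform(new_article))
```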
The accuracy of the classification model was measured by looking at the percentage of the following:
True Positive: Articles correctly classified as relating to land conflicts
False Positive: Articles incorrectly classified as relating to land conflicts
True Negative: Articles correctly classified as not being related to land conflicts
False Negative: Articles incorrectly classified as not being related to land conflicts
The “precision” of the model indicates how many of those articles classified to be about the land conflict were actually about land conflict. The “recall” of the model indicates how many of the articles that were actually about the land conflict were categorized correctly. An f1-score was calculated from the precision and recall scores.
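These metrics reduce to a few lines of arithmetic over the confusion counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)  # of predicted positives, how many were right
    recall = tp / (tp + fn)     # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. 98 true positives, 2 false positives, 2 false negatives
print(precision_recall_f1(98, 2, 2))
```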
The trained logistic regression model successfully classified the news articles with precision, recall, and f1-score of 98% or greater. This indicates that the model produced a low number of false positives and false negatives.
Categorize by Land Conflict Events
The objective of this phase was to build a model to identify the set of conflict events referred to in the collection of positive conflict articles and then to classify each positive conflict article accordingly.
A word cloud of the articles in the Gold Standard Corpus gives a sense of the content covered in the articles.
A topic model was built to discover the set of conflict topics that occur in the positive conflict articles. To maximize the accuracy of the classification process, we chose a semi-supervised approach: CorEx (Correlation Explanation), a semi-supervised topic model that allows domain knowledge, specified as relevant keywords acting as “anchors”, to guide the topic analysis.
To align with the Land Conflicts Policies provided by WRI, seven relevant core land conflicts topics were specified. For each topic, correlated keywords were specified as “anchors” for the topic.
The trained topic model provided the top three words for each of the seven topics:
Topic #1: land, resettlement, degradation
Topic #2: crops, farm, agriculture
Topic #3: mining, coal, sand
Topic #4: forest, trees, deforestation
Topic #5: animal, attacked, tiger
Topic #6: drought, climate change, rain
Topic #7: water, drinking, dams
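CorEx itself learns topics by maximizing total correlation, with the anchor words nudging topics toward the desired themes. As a rough, hand-rolled illustration of how anchors steer assignment, an article can be scored by its overlap with each topic's anchor words; the topic labels below are illustrative names we assign here, not the project's official categories:

```python
# Anchor keywords per topic, taken from the trained model's top words above;
# the topic labels are illustrative, not the project's official names.
ANCHORS = {
    "Land":        ["land", "resettlement", "degradation"],
    "Agriculture": ["crops", "farm", "agriculture"],
    "Mining":      ["mining", "coal", "sand"],
    "Forestry":    ["forest", "trees", "deforestation"],
    "Wildlife":    ["animal", "attacked", "tiger"],
    "Climate":     ["drought", "climate", "rain"],
    "Water":       ["water", "drinking", "dams"],
}

def assign_topic(tokens):
    """Pick the topic whose anchor words overlap the article's tokens most."""
    tokens = {t.lower() for t in tokens}
    return max(ANCHORS, key=lambda topic: len(tokens & set(ANCHORS[topic])))
```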
The resulting topic model is 93% accurate. This scatter plot uses word representations to provide a visualization of the model’s classification of the Gold Standard Corpus and hand-labeled positive conflict articles.
Identify the Actors, Actions, Scale, Locations, and Dates
The objective of this phase was to build a model to identify the actors, actions, scale, locations, and dates in each positive conflict article.
Typically, names, places, and famous landmarks are identified through Named Entity Recognition (NER). Recognition of such standard entities is built into spaCy’s NER package, with which our model detected the locations and dates in the positive conflict articles. The specialized content of the news articles required further training with “custom entities” — those particular to this context of land conflicts.
All the positive conflict articles in the Annotated Corpus were manually labeled for “custom entities”:
Actors: Such as “Government”, “Farmer”, “Police”, “Rains”, “Lion”
Actions: Such as “protest”, “attack”, “killed”
Numbers: Number of people affected by a conflict
This example shows how this labeling looks for some text in one article:
These labeled positive conflict articles were used to train our custom entity recognizer model. That model was then used to find and label the custom entities in the news articles in the News Article Corpus.
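The manually labeled entities take the character-offset form that spaCy's NER trainer consumes. This sketch, with an invented sentence and spans, shows the format and verifies that the offsets line up with the labeled surface text:

```python
# spaCy-style training example: (text, {"entities": [(start, end, label), ...]})
# The sentence and spans below are invented for illustration.
TRAIN_DATA = [
    ("Police attacked protesting farmers near the dam site",
     {"entities": [(0, 6, "ACTOR"), (7, 15, "ACTION"), (27, 34, "ACTOR")]}),
]

def labeled_spans(example):
    """Recover (surface text, label) pairs from the character offsets."""
    text, annotations = example
    return [(text[start:end], label)
            for start, end, label in annotations["entities"]]

print(labeled_spans(TRAIN_DATA[0]))
```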
Match Conflicts to Relevant Policies
The objective of this phase was to build a model to match each processed positive conflict article to any relevant policies.
The Policy Database was composed of 19 policy documents relevant to land conflicts in India, including policies such as the “Land Acquisition Act of 2013”, the “Indian Forest Act of 1927”, and the “Protection of Plant Varieties and Farmers’ Rights Act of 2001”.
A text similarity model was built to compare two text documents and determine how close they are in terms of context or meaning. The model made use of the “Cosine similarity” metric to measure the similarity of two documents irrespective of their size.
Cosine similarity calculates similarity by measuring the cosine of an angle between two vectors. Using the vectorized text of the articles and the policy documents that had been generated in the previous phases as described above, the model generated a collection of matches between articles and policies.
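The metric itself is a short computation over the document vectors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

Because the cosine depends only on the angle between the vectors, two documents with similar word proportions score near 1 regardless of their lengths, which is why the metric works “irrespective of their size”.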
Visualization of Conflict Event and Policy Matching
The objective of this phase was to build a web-based tool for the visualization of the conflict event and policy matches.
An application was created using the Plotly Python Open Source Graphing Library. The web-based tool houses the models and allows users to explore land conflict events spatially and through time, as well as explore all land conflict events by category across geography and time.
The map displays land conflict events detected in the News Article Corpus for the selected years and regions of India.
Conflict events are displayed as color-coded dots on a map. The colors correspond to specific conflict categories, such as “Agriculture” and “Environmental”, and actors, such as “Government”, “Rebels”, and “Civilian”.
In this example, the tool displays geo-located land conflict events across five regions of India in 2017 and 2018.
By selecting a particular category from the right column, only those conflicts related to that category are displayed on the map. Here only the Agriculture-related subset of the events shown in the previous example is displayed.
News articles from the selected years and regions are displayed below the map. When a particular article is selected, the location of the event is shown on the map. The text of the article is displayed along with policies matched to the event by the underlying models, as seen in the example below of a 2018 agriculture-related conflict in the Andhra Pradesh region.
Here is a closer look at the article and matched policies in the example above.
This overview describes the results of a pilot project to use natural language processing techniques to identify land conflict events described in news articles and match them to relevant government policies. The project demonstrated that NLP techniques can be successfully deployed to meet this objective.
Potential improvements include refinement of the models and further development of the visualization tool. Opportunities to scale the project include building the library of news articles with those published from additional years and sources, adding to the database of policies, and expanding the geographic focus beyond India.
Opportunities to improve and scale the pilot project
Further development of visualization tool
Expand library of articles with content from additional years and sources
Expand the database of policies
Expand the geographic focus beyond India
About the Authors
Laura Clark Murray is the Chief Partnership & Strategy Officer at Omdena. Contact: email@example.com
Nikhel Gupta is a physicist, a Postdoctoral Fellow at the University of Melbourne, and a machine learning engineer with Omdena.
Joanne Burke is a data scientist with MUFG and a machine learning engineer with Omdena.
Rishika Rupam is a Data and AI Researcher with Tilkal and a machine learning engineer with Omdena.
Zaheeda Tshankie is a Junior Data Scientist with Telkom and a machine learning engineer with Omdena.
Omdena Project Team
Kulsoom Abdullah, Joanne Burke, Antonia Calvi, Dennis Dondergoor, Tomasz Grzegorzek, Nikhel Gupta, Sai Tanya Kumbharageri, Michael Lerner, Irene Nanduttu, Kali Prasad, Jose Manuel Ramirez R., Rishika Rupam, Saurav Suresh, Shivam Swarnkar, Jyothsna sai Tagirisa, Elizabeth Tischenko, Carlos Arturo Pimentel Trujillo, Zaheeda Tshankie, Gabriela Urquieta
This project was done in collaboration with Kathleen Buckingham and John Brandt, our partners with the World Resources Institute (WRI).
Omdena is an innovation platform for building AI solutions to real-world problems through global bottom-up collaboration. Omdena is a partner of the United Nations AI for Good Global Summit 2020.
Bas Godska, a former marketing executive of lastminute.com and investor in over a dozen companies with an overall value of €1bn, will help Omdena grow to the next level with his impact investment and his 20 years of expertise in growth hacking.
By Julia Szyndzielorz
Palo Alto, May 5th, 2020 – The Palo Alto-based startup Omdena runs collaborative AI projects in which global teams of 40 or more data scientists and experts build AI solutions to address significant real-world problems. To date, more than 950 people from 80 countries have participated in Omdena’s challenges. Founded by AI-mentor and entrepreneur Rudradeb Mitra in 2019, Omdena is planning to enter a path of steady growth with Godska’s help.
“We look forward to having Bas join our non-executive board and help us grow Omdena to the next level. Since we started 12 months ago, we have run 18 challenges involving more than a thousand collaborators. In the next 12 months, we look forward to executing 30 projects with more than 1500 collaborators,” said Rudradeb Mitra, the founder of Omdena.
Godska: “I’m excited to contribute to Omdena’s success. The power and speed of Rudradeb’s fast-growing global group of over 950 hands-on AI-experts add rocket fuel to the paradigm shift we all see taking place in society. Governments, NGOs, and businesses struggle nowadays to make much-needed fast decisions, as reliable large data sets need teams and time to be collected, analyzed, and processed. Omdena has the unique resource of an immediately deployable big AI-team available to “crack the code”. I believe Omdena is as close as you can get currently to a sustainable, future-proof A.I. initiative. Its lean versatility and ultra-fast turnaround through precise Challenges will make this model attractive for the high-speed transitions imminent. I want to add value where it matters, A.I. for Good helps to deliver a sustainable future.”
Earlier in April, Omdena and its COVID-19 project which studies the impact of lockdowns on the world’s poorest populations was recognized by Nasdaq and Norrsken Foundation: the company’s effort to make a contribution to solving the global crisis through a collaborative AI project was applauded in the form of a video screen on the Nasdaq building at Times Square in New York City.
Omdena Times Square
Omdena, which is a partner of the United Nations AI for Good Global Summit 2020, comes with a track record of successfully completed AI projects. Those efforts include using machine learning to identify the safest routes in Istanbul for earthquake victims to reunite with their loved ones. It has also delivered AI solutions which helped detect the outbreak of fires in the Brazilian rainforest with 95 percent accuracy.
Omdena is a partner of several United Nations organizations, including the UN Refugee Agency and the UN World Food Programme, as well as an official Innovation Partner of the United Nations AI for Good Global Summit 2020.
For media inquiries contact: Laura Clark Murray, Omdena, firstname.lastname@example.org
About Omdena: Founded in May 2019, Omdena is an innovation platform for building AI solutions to real-world problems through global collaboration. The company’s partners include the UN World Food Programme and the UN Refugee Agency. Omdena is an Innovation Partner of the United Nations AI for Good Global Summit 2020. Learn more at Omdena.com.
About Rudradeb Mitra: The India-born Rudradeb Mitra is a graduate of the University of Cambridge, UK, and an international AI expert. He has built six startups in four countries. His primary interest is to build products with social value. He is a mentor and AI advisor at several institutions including Google Launchpad, ImpactHub, MIT Enterprise, Founder Institute and a senior AI advisor of EFMA Banking Group. Mitra founded Omdena in 2019 to address real-world problems through global collaboration.
About Bas Godska: Bas Godska is General Partner at early-stage tech fund Acrobator Ventures: the world’s first growth hack VC and only Western investor with over a decade of local presence in the CIS, CEE and Benelux regions. Rated by TechCrunch and Crunchbase as “the most prolific Western investor in Eastern Europe”, Godska accelerated growth for many global companies like Orbitz.com, Adobe, and his portfolio firms like Harver (HR tech/AI) and Miro (productivity platform) which recently secured a $50 million investment.