Ethical AI Building Blocks: The Interdependence of Emotional and Artificial Intelligence
July 14, 2020
One of my favorite quotes at the moment is from Max Tegmark, MIT professor and author of ‘Life 3.0: Being Human in the Age of Artificial Intelligence’. Tegmark talks about avoiding “this silly, carbon-chauvinism idea that you can only be smart if you’re made of meat” in arguing for a more inclusive definition of intelligence, one that includes artificial as well as biological intelligence. I’d like to double down on that call – or rather, push for an even more inclusive approach to artificial intelligence (AI): an approach where the emphasis is on diversity and collaboration, for meat lovers, vegans, and robots alike.
Outside the tech biosphere, reservations are often expressed about AI, and these moral questions can run even deeper for some of us within the AI sector. Fear that AI will put humans out of a job or learn to wage war against humanity is bounced around the social interwebs at will. But ask a machine learning engineer how the AI she’s been developing actually does what it does, and beyond a certain point in the process you will most often be met with a shrug of the shoulders. The truth is, advanced AI is still a bit of a mystery to us mere humans – even the really smart machine-learning humans.
Armed with this context, I won’t argue that there are no potential downsides. AI is built by people. People decide what data goes into a model. People build the models. People train the models, and ultimately people decide how to productionize them and integrate them into a broader workflow or business.
Because all of this is (for the moment) directed by people, we have choices – up to a point – about how we create AI, what tasks we give it, and ultimately the path we direct it to take. The implications of these choices are clearer now than ever. The power of AI to create a better, healthier, and arguably more equitable world is tangible, and it is being realized at a very rapid pace. But so is the dark alternative – people can just as easily choose to create models that spread Fear, Uncertainty, and Doubt to hack an election or to steal money.
AI is a tool like any other… well, almost.
Beyond The Tech
The pursuit of ‘AI nirvana’ is thought by some to be a pipe dream, cluttered with wasted money and resources along the path to mediocre success. Others hold the view that AI at scale is reserved for the FAANG companies (plus Microsoft, Uber, and the like). Without diving too deeply into the technicalities of data science and machine learning, the reality is that organizations are still struggling to capture the value of their data with the models they build. In fact, 87% of data science projects fail to deliver anything of value in production to the business. Challenges I hear time and again from customers, friends, and colleagues include:
- Competing or out-of-sync business silos
- Lack of cohesion around a data strategy
- Data in various formats and locations
- Lack of clear objectives within the context of broader business transformation
The Importance of Soft Skills and Collaboration
Critically, some of the most important characteristics of data science success relate to soft skill development – the skills that make us uniquely human. Yes, we need great programmers, data wranglers, architects, and analysts for everything from data archeology to model training. But it is just as important (I would argue now more important) to cultivate emotional intelligence if you want to succeed with artificial intelligence. The success of an organization is now judged more heavily on its ability to build and maintain Cultural Empathy, Critical Thinking, Problem Solving, and Agile Initiatives. Importantly, these skills also make it more natural to link data science investment directly to organizational (and social) value.
In other words, instilling a culture of diversity, inclusion, and collaboration is integral to AI success and ultimately business success. As organizational psychologist and professor Tomas Chamorro-Premuzic said in a 2017 Harvard Business Review article, “No matter how diverse the workforce is, and regardless of what type of diversity we examine, diversity will not enhance creativity unless there is a culture of sharing knowledge.” Collaboration is key.
Remove Bias and Enhance Creativity
Of all the soft skills, taking an unbiased and collaborative approach to AI is probably the single most important thing we can do to positively shape AI development. Omdena has quickly become the world leader in Collaborative AI, demonstrating rapid success in solving some of the world’s toughest problems. Experts discuss AI bias at length, but remember that humans create AI. We are not perfect, and we certainly are not all-knowing. Imagine if all AI were produced by programmers in Silicon Valley. Even they would agree that a model to predict landslides in Southeast Asia from drought patterns in satellite imagery would be better built in collaboration with people local to the problem who also understand the farming and economics of the region. Likewise, a model that analyzes mortgage default risk using social sentiment analysis and financial data mining needs to be built by a diverse, collaborative team. As recent history is teaching us, decisions made by the few serve to amplify systemic division and privilege.
Jack Ma, the world’s wealthiest teacher, said in an address to Hong Kong graduates, “Everything we taught our kids over the past 200 years, machines will do better in the future. Educators should teach what machines are not capable of, such as creativity and independent thinking.”
My hope is that schools are adapting to this change, along with all the other changes they must now manage. But most corporate teams have some catching up to do to ensure AI adoption is not only successful but considered a success for all. Let’s start by encouraging a broad, diverse, and collaborative approach to AI. As Tegmark says, “Let’s Build AI that Empowers Us”.
Jake Carey-Rand is a technology executive with nearly 20 years of experience across AI, big data, Internet delivery, and web security. Jake recently joined Omdena as an advisor to help scale the AI social enterprise.
Omdena is the company “Building Real-World AI Solutions, Collaboratively.” I’ve been watching the impact that Omdena and its community of 1,200+ data scientists from more than 82 countries (we call them Changemakers) have been making over the last 12 months. Their ability to solve absolutely critical issues around the world has been inspiring. It has also led to some questions about how these Changemakers have been able to do what so many organizations fail to do time and time again – create real-world AI solutions in such a short amount of time. This has inspired us to explore how we could scale this engine of AIForGood even faster. The Omdena platform can be leveraged by enterprises that, especially during these challenging times, need to accelerate, adapt, and transform “business as usual” through a more collaborative approach to AI.