Can We Stop “Othering” AI?

Steve Johnson
Mar 1, 2021

An AI-guided robotic arm hovers over a box full of canned beans, instant stuffing and jars of cranberry sauce. There’s a moment of what feels like contemplation as it identifies what’s missing; then, with a deft movement, it picks up a jar of gravy and adds it to the box to complete the holiday care package.

This modern warehouse “picking robot” — deployed, in this instance, by my company to pack charity Thanksgiving dinners and allow food bank volunteers to remain socially distanced at a critical time — is a far cry from the threatening AI we see represented in films, from The Stepford Wives to The Terminator to Ex Machina. Robots in pop culture are frequently presented as a foreign, and often menacing, presence, something deeply inimical to humanity.

The reality of smart robots, however, is often far different. They do our vacuuming; they help drive our cars. In factories, they lift and sort. During the Covid crisis, they’ve literally been lifesavers, enabling human workers to stay socially distanced and keeping supply chains running.

Yet the broad fear of AI and smart robots persists. Indeed, the dynamic is often reduced to a zero-sum game: it’s us or them. But this simplistic perspective — humans versus robots — keeps us from engaging with the true potential, and wrestling with the true perils, of this technology. To really appreciate AI and its consequences, we must first stop othering AI.

The cop-out of “othering”

To borrow a term from sociology, “othering” means to treat a group as intrinsically different from oneself — so foreign as to be inscrutable and beyond the pale of rational analysis. Othering shuts down the possibility of engagement and dialogue a priori. It’s used to distance, diminish and alienate.

In many respects, this is how we’ve approached AI to date. The technology is commonly othered as an outside force, disconnected from humanity. The result, predictably, has been widespread fear, misunderstanding and resistance. Some 50 percent of consumers say they’re straight-up fearful of artificial intelligence, while fewer than half say they even understand it.

The problem often starts with a failure to distinguish between “general AI” and “narrow AI.” General AI — the ability for computers or software to learn or absorb any information and, for practical purposes, do anything — is the stuff that keeps Elon Musk and Bill Gates up at night. Yet, for all of our fears of super robots, we are living firmly in an era of “narrow AI” — that is, specific technology learning specific tasks, and performing them better than a human could.

Think: the spam filters that sort your inbox, the product recommendation engines that appear when you’re shopping online, the maps that plot out the quickest route from A to B, or the computer algorithms that help doctors detect cancer.
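To make “narrow” concrete, here’s a minimal sketch of the kind of single-task learner behind a spam filter (in Python with scikit-learn; the training messages are invented purely for illustration):

```python
# A toy "narrow AI": a spam filter that learns one specific task
# from labeled examples and can do nothing else.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training examples: each message is labeled spam (1) or not (0).
messages = [
    "WIN a FREE prize now!!!",
    "Meeting moved to 3pm, see you there",
    "Claim your exclusive reward today",
    "Can you review the Q3 report before Friday?",
]
labels = [1, 0, 1, 0]

# Learn which word frequencies distinguish spam from legitimate mail.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free prize! Claim your reward now"]))  # -> [1] spam
print(model.predict(["See you at the meeting on Friday"]))   # -> [0] not spam
```

Trained on labeled examples, it gets good at exactly one thing: sorting mail. Ask it anything else and it has nothing to say.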

This kind of AI is not coming to get us. It’s already here and adding real value to our daily lives. The tipping point, as it were, has been reached. And the applications and promise of AI and smart robots are only growing. Case in point: the World Economic Forum reports that AI stands to create 58 million new jobs by 2022, with physical tasks being replaced by mechanized labor, allowing humans to focus on more creative or complex assignments. This could contribute up to $15 trillion to global GDP over the next 10 years.

Deployed with vision and foresight, robotics and machine learning have the potential to make us more human in many respects, both in our lives and at our jobs. But seeing AI as a scary, HAL-like monster — inscrutable and to be resisted at all costs — prevents us from getting creative with its true potential, on large and small scales. In the end, this kind of othering only stands to hurt us.

Seeing AI as “us” makes us more responsible

There’s another grave issue with othering AI: labeling the technology as a boogeyman absolves its creators and users of real responsibility for its consequences.

The simple truth is we’re the ones creating and programming these machines and the algorithms that guide them. The robots take their cues from those who teach them. Many forms of AI learn to make decisions or identifications based on training data — photos, words, statistics — that can reflect human bias and perspectives, consciously or subconsciously.

Systematic prejudices against race, gender or sexual orientation can be baked right into seemingly “objective” data sets. For example, facial recognition software learns to read emotions by studying photos tagged as happy, sad, angry, and so on. If the programmers inadvertently label more white faces as “happy” and more Black faces as “angry,” the machine may start to associate emotion with race. According to one recent study, the majority of emotion-analysis technology does exactly that.
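The mechanism is easy to demonstrate. Here’s a minimal sketch (in Python; the data is fabricated purely to illustrate the failure mode) of how a model trained on biased labels learns to treat group membership itself as a predictor:

```python
# Toy illustration of label bias. The "group" feature should be
# irrelevant to the emotion label, but biased annotations make the
# model treat it as predictive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuine (noisy) signal of the facial expression itself.
expression = rng.normal(size=n)
# Feature 1: group membership (0 or 1), unrelated to actual emotion.
group = rng.integers(0, 2, size=n)

# Biased annotation: photos from group 1 get tagged "angry" (1) far
# more often, regardless of the actual expression.
noise = rng.normal(scale=0.5, size=n)
labels = (expression + 0.8 * group + noise > 0.5).astype(int)

X = np.column_stack([expression, group])
model = LogisticRegression().fit(X, labels)

# The learned weight on "group" is substantial even though, by
# construction, group says nothing about the underlying expression.
print(model.coef_)
```

By construction, the group feature carries no real information about emotion; the weight the model assigns to it anyway is the annotators’ bias, faithfully memorized.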

Other cautionary examples aren’t hard to find. Microsoft’s Tay chatbot learned anti-Semitic phrases from Twitter users. Amazon shut down its AI recruitment experiment when it discovered the system had inadvertently learned to be misogynistic. An algorithm used by US hospitals to identify who would need more medical care heavily favored white patients over Black ones.

When we “other” AI, we let ourselves off the hook for these consequences. The technology becomes something beyond our control, and its outcomes are seen as either inevitable or objectively right … and, in any case, aren’t our responsibility.

Nothing could be further from the truth, of course. Ensuring the integrity of the AI we develop and implement, and taking accountability for its consequences, is a responsibility we all need to share going forward — as developers, as consumers and as businesses. Indeed, efforts are already underway, with individual companies adding AI accountability to their corporate policies (IBM is just one example) and global organizations creating international ethics guidelines.

Ultimately, the futures of innovation and AI are so intertwined as to be nearly synonymous. From care-package-packing robot pickers to self-driving cars, every industry stands to be transformed by AI in deep and lasting ways. Ensuring that we reap the benefits, while avoiding the pitfalls — that we use this technology to make us more human, not less — requires seeing AI for what it truly is: not “them” but “us.”

This post was originally featured in Techonomy. Stay up to date with my latest by following me here and on Twitter.


Steve Johnson

President and COO at @BerkshireGrey. Curious, family man, technology veteran, traveler. I help cutting-edge companies scale with purpose.