The role of empathy in the use of AI

AI may be (arguably?) the most fascinating, exciting, breathtaking (any more hyperboles?) development of the past several years. It learns faster than we humans do. It reads, writes, sees, hears; It parses, sorts, chunks, classifies, identifies and analyzes patterns; It offers prognoses. And if that's not sophisticated cognitive capability, here's the kicker: It is still an infant.

BTW, I'm using "It" with a capital 'I' here to denote AI; to denote machine learning, and deep learning.

Phew! With that out of the way, we can get on with some serious business. Wait a minute, It has already done that. How am I so sure? Well, don't take my word for it. Obama (yes, Barack Obama) said it himself. Here: http://bit.ly/2erbSuM. Some excerpts:

“My general observation is that it (AI) has been seeping into our lives in all sorts of ways, and we just don’t notice…. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out…”

Right, so Barack Obama's next job is at MIT or at Google, controlling It? No, but imagine how scary it would be if his successor were actually sitting in their home office (yes, that's what the White House is, isn't it?), controlling It. I mean, imagine if, It forbid, the successor is a malicious, ignorant, greedy, power-hungry, megalomaniacal dimwit like the Ol' Donald, getting excited at this whole deep learning thing. I mean, how exciting, this deep learning thing? Deep learning. That's right. Deep learning. I said, deep learning.

I’ll give you an example, to relate to a later hypothesis: “(Google) had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers”. (http://bit.ly/2e7tLzz).
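To make the mechanics behind that quote a little more concrete, here is a toy sketch of the learn-from-labeled-examples loop that a digit-reading system follows. Everything here is an illustrative assumption, not Google's actual pipeline: real systems run deep convolutional networks over millions of Street View crops, while this stand-in uses hand-made 3x3 "images" and a simple nearest-centroid rule.

```python
# Toy illustration of "learn from labeled images, then read new ones".
# A nearest-centroid rule over 3x3 binary "images" stands in for the
# deep learning networks used on real Street View photos.

# Hand-made 3x3 "images" of the digits 1 and 7, flattened to 9 pixels.
TRAINING = [
    ((0, 1, 0, 0, 1, 0, 0, 1, 0), 1),
    ((0, 1, 0, 1, 1, 0, 0, 1, 0), 1),
    ((1, 1, 1, 0, 0, 1, 0, 0, 1), 7),
    ((1, 1, 1, 0, 1, 0, 0, 1, 0), 7),
]

def centroids(examples):
    """Average the pixel values of each class's training images."""
    sums, counts = {}, {}
    for pixels, label in examples:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(pixels, cents):
    """Pick the class whose centroid is closest in squared distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pixels, c))
    return min(cents, key=lambda lab: dist(cents[lab]))

cents = centroids(TRAINING)
print(classify((0, 1, 0, 0, 1, 0, 0, 1, 0), cents))  # prints 1 (a clean "1")
print(classify((1, 1, 1, 0, 0, 1, 0, 1, 0), cents))  # prints 7 (a noisy "7")
```

The point of the toy is the shape of the process, which is the same at any scale: collect labeled examples, compress them into a learned representation, then apply that representation to images no one has ever labeled.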

OK, now imagine this guy above (the Ol' Donald), in that home-office called the White House, getting an algorithm (deep learning) written to parse through the whole world and identify the exact spots where scummy un-white perverts live. And what if he gets this algorithm connected to the nuclear codes? And what if he then commands the Pentagon to press the codes, to explode and destroy? 4 minutes, that's what it will take from his command to their destruction. No, Hiroshima was a play school. Believe me.

No, I’m not proclaiming doom here. I’m not dissing AI. I’m just reminded again and again, of that cliche of cliches, ‘with great power comes great responsibility’. Now, we know all too well that Mr. Hyde is as much a part of Dr. Jekyll. They are not two, they’re one. Each has the potential to be the other.

That means, Batman has the evil of the Joker in him. And Joker is as much a do-gooder as Batman is. Wait, wait, wait, this is not a woozy schmooze. Harry and Voldemort are connected. It’s just how it is.

Which means It may be capable of some Jekyll and some Hyde? We know It can see, hear, read, write, self-learn. What It cannot do is feel anger. Nor can It feel compassion. At least not yet. It can execute at lightning speed, and deliver far greater precision than the human mind can perceive. Which is what makes It both a benevolent and a malevolent. In Mother Teresa's hands, It would remove all suffering. In the Joker's hands, It could destroy the world.

So how do we protect our planet from It, our own creation? Sure, It can self-learn, and self-multiply. Can we teach It empathy? Can we teach It to 'feel' in some fantastical way? Remember, It's still an infant. And infants are pretty self-centered creatures, who only learn empathy as they grow and live among humans.

This is why Joi Ito’s* observation is worrying:

“(O)ne of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.”

And it’s also why Nick Bostrom’s** illustration gets you worried too:

“Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong — pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make.”

Now if we step back from the exhilaration, and the pessimism, around this 'super power' It, and look at practical answers for leveraging It to human advantage, then it seems obvious that we apply to It a human filter; a filter that's at least more or less unique to humans. A filter that, some argue, sets us apart from our beloved animal kingdom. A filter that could have prevented Hiroshima, Assad, Qaddafi, Kim Jong-il, Hitler, Genghis Khan, Mugabe#… A filter called "empathy".

"Empathy is the ability to 'feel with' another person, to identify with them and sense what they're experiencing. It's sometimes seen as the ability to 'read' other people's emotions, or the ability to imagine what they're feeling, by 'putting yourself in their shoes.'" — Steve Taylor (http://bit.ly/2e8R8pS)

There is, literally, no other way It can be stopped on Its course to Hiroshima. The powers that be will get It. Some of those powers may be a Bertrand Zobrist^, and AI could then become the single most potent weapon of mass destruction.

“Research shows that personal power actually interferes with our ability to empathize. Dacher Keltner, an author and social psychologist at University of California, Berkeley, has conducted empirical studies showing that people who have power suffer deficits in empathy, the ability to read emotions, and the ability to adapt behaviors to other people. In fact, power can actually change how the brain functions, according to research from Sukhvinder Obhi, a neuroscientist at Wilfrid Laurier University in Ontario, Canada.”^^

But there may be some hope, actually. Given machines can be programmed to learn, based on past events or experiences, Steve Taylor’s wisdom may come as a savior: “empathy is seen as a cognitive ability, along the same lines as the ability to imagine future scenarios or to solve problems based on previous experience.”

Take this research, for example:

A new AI programme has been developed to attempt to accurately detect signs of depression using Instagram photos.

The study, carried out by researchers from Harvard University and the University of Vermont, used machine learning tools to identify markers of depression. It was found that the programme was 70 per cent accurate in detecting signs of depression, which was better than previous studies looking at the success rate of GPs diagnosing patients – normally around 42 per cent accurate. (http://bit.ly/2b39jxJ)
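A minimal sketch can show the shape of what such a study does: turn each photo into a few simple statistics, then let a rule learned from labeled examples flag markers. The researchers reported that features like brightness and saturation mattered; the actual study used proper machine learning classifiers over real Instagram data, whereas the features, threshold rule, and data below are all invented for illustration.

```python
# Toy sketch: photos -> simple color statistics -> a learned flagging rule.
# Pixels are (hue, saturation, value) tuples; the "model" is just a
# brightness threshold fit from labeled examples -- a stand-in for the
# statistical classifiers the actual study used.

def photo_features(pixels):
    """Average saturation and brightness over (h, s, v) pixel tuples."""
    n = len(pixels)
    sat = sum(s for _, s, _ in pixels) / n
    bri = sum(v for _, _, v in pixels) / n
    return sat, bri

def fit_threshold(examples):
    """Midpoint between the mean brightness of each labeled class."""
    pos = [photo_features(p)[1] for p, marker in examples if marker]
    neg = [photo_features(p)[1] for p, marker in examples if not marker]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Invented data: darker, desaturated photos labeled "marker present".
dark = [(0.1, 0.2, 0.3)] * 4
bright = [(0.1, 0.8, 0.9)] * 4
threshold = fit_threshold([(dark, True), (bright, False)])

def flags_marker(pixels):
    """Flag a photo whose average brightness falls below the threshold."""
    return photo_features(pixels)[1] < threshold

print(flags_marker([(0.1, 0.3, 0.25)] * 4))  # prints True  (dim photo)
print(flags_marker([(0.1, 0.7, 0.95)] * 4))  # prints False (bright photo)
```

The 70 per cent figure in the study reflects exactly this kind of pipeline at scale: the classifier has no feeling for the person behind the photos, only learned statistical regularities, which is what makes the question of teaching It empathy more than rhetorical.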

We can go on, and on. Quite obviously, AI is ultimately a tool, to be used benevolently, or malevolently, depending on who’s wielding It at any given moment. It is the metaphorical monkey. It’s imperative, therefore, that It is injected with the ‘empathy virus’ before the proverbial monkey turns, well, Kanzi the Chimpanzee. To quote Nick Bostrom again, “Think about it: Machine intelligence is the last invention that humanity will ever need to make.”


__________________________________________________________

*Joi Ito is Director of MIT’s Media Lab.

** Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is currently the founding director of the Future of Humanity Institute at Oxford University. (Wikipedia)

# Timeline sequence of these dictators deliberately jumbled up, as it’s inconsequential to this argument.

^ Bertrand Zobrist: A transhumanist genius scientist who is obsessed with Dante’s Inferno. He is intent on solving the world’s overpopulation problem by releasing a virus. (Wikipedia). From the Hollywood movie Inferno.

^^ Becoming Powerful Makes You Less Empathetic, by Lou Solomon, Harvard Business Review (http://bit.ly/1bgRvKJ).
