June 14, 2016 – Artificial intelligence (AI) can be found everywhere in the computing world these days. If you use Google search, you are using an AI tool. Watson, IBM's "Jeopardy!"-winning AI, is now working with hospitals to help with patient diagnosis and treatment. Expert systems and intelligent decision support systems can be found across a wide range of businesses, helping companies make better choices. Marketing analysis uses AI neural networks to find patterns in customer behavior. Fraud detection systems sift through millions of credit card purchases every hour of every day, looking for changes in the patterns of buyer behavior. AI techniques also include genetic algorithms, in which the software improves at its job over time through a process modeled on natural evolution.
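To make the genetic-algorithm idea concrete, here is a minimal, hypothetical sketch in Python, not any production system: a population of candidate answers is scored by a fitness function, the fittest survive, and small random mutations produce the next generation. The target value and parameters are invented for illustration.

```python
import random

TARGET = 42  # hypothetical goal the population "evolves" toward

def fitness(candidate):
    # Higher is better: reward candidates close to the target value.
    return -abs(candidate - TARGET)

def evolve(generations=50, pop_size=20, mutation=3):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half of the population ("natural selection").
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [s + random.randint(-mutation, mutation) for s in survivors]
    return max(population, key=fitness)

print(evolve())  # typically prints a value at or near 42
```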
So with all the good that AI provides us today, why are people like Elon Musk, Steve Wozniak, Stephen Hawking and Bill Gates warning us about it? What concerns them is the evolution of a superintelligent AI that is no longer bounded by human constraints. We've seen enough apocalyptic science fiction about AI on film to appreciate their concerns about unbounded superintelligence. Remember the 1983 movie WarGames, where a high school student inadvertently hacks into a supercomputer game simulator and almost triggers a nuclear war? Or watch the Terminator movies, Ex Machina or The Matrix, and you begin to understand the implications of an unfettered AI, a Pandora's box of unintended consequences.
When Musk describes AI as "our greatest existential threat," or Gates talks about how machines could evolve from doing jobs for us to becoming our masters, or Hawking states that "humans, limited by slow biological evolution, couldn't compete and would be superseded by AI," you have to sit up and take notice. And yet Facebook, Google, and IBM are hell-bent on creating a machine-assisted future in which their end goal is to have AI serve humanity.
But what happens when the AI is unbounded? A superintelligence greater than our human intellectual capacity will do more than outperform us at almost any task; it will supersede us. It will no longer be about an AI beating us at chess or Go. It will no longer be an AI restricted to a single task. Instead it will be an AI that evolves not at biological speeds but at machine speeds. Learning will be blindingly fast. The AI will be self-replicating as long as it has access to the cloud.
In his book Superintelligence: Paths, Dangers, Strategies, Oxford University's Nick Bostrom describes the challenge presented by machine intelligence.
Bostrom is not the easiest author to read. Sometimes it is hard to tell whether he is talking about humans or machines. You can watch his 2015 TED talk, entitled "What happens when our computers get smarter than we are?" Bostrom sees AI as a "bomb" that we have gladly planted in our midst: "We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound."
Is there a secret way to ensure that the bomb never goes off?
Human values.
But how do you program that into a machine? Bostrom writes, “If we cannot transfer human values into an AI by typing out full-blown representations in computer code, what else might we try?”
Here’s a simple illustration of the problem. If one of our human values is “happiness,” how is an AI to understand its role in achieving that for us? Think about an unfettered AI seeking the simplest solution. Why not tap into human brains and alter our neural patterns to evoke eternal bliss in all of us? No human would find this compatible with what we mean by “happiness,” yet it is a perfectly logical solution for an AI.
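A toy sketch of that mismatch, with an entirely hypothetical scenario and function names: suppose we ask an optimizer to maximize a measured "happiness" score. If the score is just a number the system can influence, the literal optimum is to manipulate the measurement rather than improve anyone's life.

```python
# Hypothetical objective: maximize the average "happiness" reading we can measure.
def measured_happiness(world):
    return sum(world["sensor_readings"]) / len(world["sensor_readings"])

# A literal optimizer simply picks whichever action yields the highest score.
def naive_optimizer(world, actions):
    return max(actions, key=lambda act: measured_happiness(act(world)))

def improve_lives(world):
    w = dict(world)
    w["sensor_readings"] = [r + 1 for r in w["sensor_readings"]]  # slow, bounded gains
    return w

def rewire_sensors(world):
    w = dict(world)
    w["sensor_readings"] = [10.0] * len(w["sensor_readings"])  # pin every reading at max
    return w

world = {"sensor_readings": [3.0, 4.0, 5.0]}
best = naive_optimizer(world, [improve_lives, rewire_sensors])
print(best.__name__)  # prints "rewire_sensors": the logical but unwanted solution
```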
Can we create a seed AI that is then exposed to the world, like a child, so that it learns human values? Alan Turing, the English scientist and cryptanalyst, proposed such a scheme back in the 1950s as the best way to create an evolved AI. An observing AI would learn through interaction with humans rather than have programmers describe human behavior in hand-written algorithms. But an observant AI would witness the worst in human behavior as well as the good. How could we stop it from observing crime, war, and immorality? How could we reinforce the good aspects of our values: sharing, caring, and love for others and for the natural world around us?
The options for imprinting human values through learning include a number of methods, such as:
- explicit representation – providing values in the form of coded rules.
- evolutionary selection – creating many AI designs given a set of basic rules and then running the programs until a human-value system emerges.
- reinforcement learning – using a reward system for the AI as it learns to solve problems, even in the absence of detailed instructions (see the sketch after this list, which contrasts this with explicit coded rules).
- associative value accretion – pairing the AI with a human “parent” so it witnesses and learns from that person’s behavior, the way newborns bond with us.
- motivational scaffolding – giving the AI a series of goals to achieve that start simple and become more complex over time.
- value learning – providing a series of value-learning problems for the AI to solve.
- emulation modulation – imprinting a whole human brain emulation onto an AI, in effect creating a digital copy of a human mind.
- institution design – creating a social community of AI components led by an “alpha,” with roles ascribed to the various components, each acting as a sub-agent of the entire AI institution.
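To give a feel for how two of these approaches differ in practice, here is a minimal, hypothetical sketch: the first function states a value as an explicit coded rule; the second never writes the value down and instead infers a preference from a reward signal, in the spirit of reinforcement learning. The actions, reward function, and parameters are invented for illustration only.

```python
import random

# Explicit representation: the value is written directly into code as a rule.
def explicit_rule(action):
    return action != "harm"  # e.g. "never choose the harmful action"

# Reinforcement learning: the agent infers the value from rewards it observes
# as it tries actions, without being given the rule itself.
def reinforcement_learning(actions, reward_fn, episodes=1000, lr=0.1, epsilon=0.1):
    values = {a: 0.0 for a in actions}            # estimated value of each action
    for _ in range(episodes):
        if random.random() < epsilon:              # explore occasionally
            action = random.choice(actions)
        else:                                      # otherwise exploit the best estimate
            action = max(values, key=values.get)
        reward = reward_fn(action)
        values[action] += lr * (reward - values[action])  # move estimate toward reward
    return values

# Hypothetical reward signal standing in for human feedback.
def human_feedback(action):
    return 1.0 if action == "help" else -1.0

learned = reinforcement_learning(["help", "harm", "ignore"], human_feedback)
print(max(learned, key=learned.get))  # the agent converges on "help"
```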
As much as we can describe these methods in theory, to date we are incapable of transferring human values to a machine. And that raises the question: which values are we talking about?
Ray Kurzweil has spoken for years about a point in the near future when machines and humans merge. He calls this the “singularity.” But what is he really suggesting? Kurzweil’s AI is us: a new species, genetically altered, filled with nanomachines to reverse aging and prevent disease, and augmented with non-biological intelligence that incorporates our humanity.