May 14, 2019 – Elon Musk has warned us. Stephen Hawking has too. Artificial Intelligence (AI) could prove to be an existential threat to our species. But Cassie Kozyrkov, Chief Decision Scientist at Google, has a different take: it is the humans creating AI tools whom we should fear.
She has been a vocal advocate for treating AI as just another tool for business and personal use, a more evolved version of the pen and paper we have used for several thousand years to store knowledge and retrieve it as needed. Should we fear books and libraries? If not, then we shouldn't fear AI.
In a talk given last week at The Next Web Conference, TNW2019, in Amsterdam, Kozyrkov blamed science fiction for teaching us "more fiction than science" when it comes to understanding AI. She pointed to movies like "2001: A Space Odyssey," "Terminator," "Demon Seed," "Ex Machina," and "I, Robot" as the basis for our fears about what robots and AI could do to us.
As much as I appreciate her argument, what Kozyrkov doesn't say is probably the real reason some fear AI. After all, humans have used tools to go down some very dark paths. So many human inventions have turned into instruments of violence: the benign, facilitating wheel became the wheeled war chariot, splitting the atom produced the atom bomb, and rocket propulsion gave us the V2, ICBMs, and the vehicles of mutually assured destruction (MAD).
Kozyrkov has pioneered using AI for decision making because, in her words, "data is not important, decisions are." So the machine learning tool isn't the problem; the problem is the recipes we create to reach the ends we seek. That's where Musk and Hawking are off track: they see in AI itself the potential for evil, when the evil is us.
Google has trained more than 15,000 of its employees in decision intelligence, the company's term for applied AI in business decision making. This is a revolutionary approach because it is no longer about writing general-purpose machine learning code, but about building, from a common set of AI capabilities, a new class of algorithms that can turn neural network computing into recipes suited to today's problems.
Kozyrkov points out that neural networks are great at learning from examples. They can sift through billions of things at a time to arrive at just the right recipe. In some cases a neural network may even recommend more than one equally good recipe for a problem: for global warming, for example, it might suggest several mitigation or adaptation strategies from which humans could then craft policy. That is the real power of today's AI, and it has nothing to do with taking over the world and wiping out biology to replace it with machines of its own making. That scenario is left to science fiction writers and movie makers, Elon Musk, and Stephen Hawking.
But what if the recipe a human requests from an AI has evil intent? Isaac Asimov, the brilliant science and science fiction writer, created three essential laws to guide robots, by which he meant intelligent machines. His laws were:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by a human being except where such orders conflict with the first law.
- A robot must protect its own existence as long as such protection is not in conflict with the first or second law.
Asimov later added an extension to these laws to cover all of humanity, so that AI would never injure our species, or allow us to come to harm through its actions or inaction. A later update to Asimov produced five laws, which stated:
- Robots should not be designed solely or primarily to kill or harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
And last year an Anglican bishop in the United Kingdom went even further, proposing ten commandments for AI, which included:
- AI should be designed for all, and benefit humanity.
- AI should operate on principles of transparency and fairness, and be well signposted [that is, clearly identified as AI, the way road signs identify directions].
- AI should not be used to transgress the data rights and privacy of individuals, families, or communities.
- The application of AI should be to reduce inequality of wealth, health, and opportunity.
- AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse.
- The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
- All citizens have the right to be adequately educated to flourish mentally, emotionally, and economically in a digital and artificially intelligent world.
- AI should never be developed or deployed separately from consideration of the ethical consequences of its applications.
- The autonomous power to hurt or destroy should never be vested in AI.
- Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity.
We humans and our technological evolution created anthropogenic climate change without any help from AI. If Kozyrkov is right, and Musk and Hawking are wrong, then maybe we should be using AI to help us stop the runaway technological train that modern civilization appears to be. We need good recipes to yield the right baked goods for our future, and with the AI tools we have built, we might just get them.
Ray Kurzweil, the noted futurist, believes that through the merger of AI and human intelligence we will save our planet and ourselves. He calls that merger "The Singularity," the point when natural evolution becomes one with technological evolution. Is this something to fear, or a vision of a better future? If asked, would Kozyrkov believe the latter? I've reached out to her and will let you know her answer.