November 1, 2014 – On the morning after Halloween it seems appropriate to talk about demons. According to Elon Musk, one of them is artificial intelligence. At a recent symposium hosted by the Massachusetts Institute of Technology, Musk described AI as an existential threat. Musk is quoted as stating:
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon.”
A friend of mine sent me the article in The Washington Post where this quote appeared. His added comment was “Being a child of ‘The Terminator,’ I tend to agree. What do you think?”
Here is the subsequent email exchange:
Me: AI, like every technology, has a dark side. When Asimov created his rules of robotics, it was to put a framework around AI. If it is viable to program constraint into intelligence, then AI and humanity will coexist. Of course, to me what is far scarier is the fusion of AI and humans, the singularity.
My friend: Personally, I think it is foolish to think we can “program constraint into intelligence” considering that we do not practice restraint ourselves. It does worry me, but then again, I won’t be around to face the consequences. Perhaps we simply deserve our fate, and AI will create a better balance on the planet.
Me: There is restraint and there is constraint, and although they have similar meanings, I like to think they are a bit different. If we hold AI back through restraint, then eventually the genie will leave the bottle. But if the algorithms we use to develop AI include limitations that inhibit behaviours that could harm humans, then such constraints will turn the relationship between AI and natural intelligence into a synergistic one.
When Asimov created his literary paradigm for robot behaviour, it put three limits on AI:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
For “robot” read “AI.”
Since then others have extended Asimov’s constraints to include:
4. A robot must establish its identity as a robot in all cases.
5. A robot must reproduce as long as such reproduction does not interfere with the first three laws.
Another constraint has been introduced as a 6th law.
6. A robot must know it is a robot.
I wrote a piece on this subject two years ago looking at machine ethics. You may not remember it, but the link is: https://www.21stcentech.com/
The real question you are raising is one of programming limits. Can we program morality into AI? Can an AI entity, given autonomy, break the boundaries of its programming and become amoral? Computer scientists recognize that creating a thinking, learning, autonomous machine requires a code of ethics, and that means developing a language and programming to turn ethics into logical mathematical formulae. Such programming shouldn’t leave the AI pondering incessantly over what to do, so it has to provide a schema for the robot to resolve ethical dilemmas quickly. Think about the little robot helper that I wrote about in the blog posting (link provided above). That robot has to determine quickly, through observation, what is best for the human it is assisting. So we know we can develop programs sophisticated enough to incorporate a moral code. Of course, as in anything we humans touch, we can create immorality in code as well.
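For the programmers reading along, here is what such a schema might look like. This is a minimal sketch only, assuming each candidate action can be reduced to flags for which of Asimov’s laws it would break; the Action class, its fields, and the ranking function are illustrative inventions of mine, not any real robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would break the First Law
    disobeys_order: bool = False  # would break the Second Law
    endangers_self: bool = False  # would break the Third Law

def worst_law_broken(action: Action) -> int:
    """Rank an action by the highest-priority law it violates (0 = none)."""
    if action.harms_human:
        return 3
    if action.disobeys_order:
        return 2
    if action.endangers_self:
        return 1
    return 0

def choose(candidates: list) -> Action:
    """Resolve a dilemma quickly: prefer the action that breaks only the
    lowest-priority law, and refuse outright if every option harms a human."""
    best = min(candidates, key=worst_law_broken)
    if worst_law_broken(best) == 3:
        raise RuntimeError("No permissible action: every option harms a human")
    return best

options = [
    Action("stand by", disobeys_order=True),          # breaks the Second Law
    Action("shield the human", endangers_self=True),  # breaks only the Third Law
]
print(choose(options).name)  # -> shield the human
```

The point of the strict ordering is that when no option is clean, the machine still has a fast, deterministic way to pick the least-bad action, and a hard refusal when every option would harm a human.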
My friend: My position remains unchanged precisely because of your last observation. All it takes is an immoral programmer. Or a sufficiently imprecise programmer. After all, algorithms are only as good as their imperfect creators.
On the other hand, I wonder if we can create a “kill switch,” back door or “dead hand” mechanism (as in the old days of railroading) in case AI goes awry?
Me: Even Data on Star Trek had a kill switch. You just had to get to it before the android figured out what you were doing. My guess is a kill switch is mandatory unless the AI begins replicating itself. Then, unless the kill switch is mandated in the program and irremovable, I suspect we would have a runaway AI presence on the planet.
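To make the “dead hand” idea concrete, here is a minimal sketch assuming a simple heartbeat protocol: the program runs only while a human operator keeps renewing an authorization signal. The DeadHandSwitch class and its timeout are hypothetical, not any real AI control interface.

```python
import time

class DeadHandSwitch:
    """Runs the AI only while a human keeps renewing authorization."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        # Only the human operator should ever call this.
        self.last_heartbeat = time.monotonic()

    def engaged(self) -> bool:
        # True while the human signal is fresh; stale means shut down.
        return (time.monotonic() - self.last_heartbeat) < self.timeout

switch = DeadHandSwitch(timeout_seconds=2.0)
while switch.engaged():
    time.sleep(0.5)  # stand-in for one bounded unit of AI work
    # switch.heartbeat()  # commented out: no human present, so it halts
print("Heartbeat lost: halting.")
```

Note how this illustrates my caveat exactly: the mechanism only works if the AI cannot remove the watchdog from itself or omit it from its copies.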
My friend: Thanks. As if I didn’t have enough to worry about.
So although Musk’s offhand comment about AI may have been received with puzzlement on the part of his MIT audience, his view is shared by many. AI may be a Pandora’s box, unleashing the demon, as he put it. And thinking about IBM’s Watson announcement that I wrote about last Thursday, AI used improperly could have enormous unintended planetary consequences.
In an online poll conducted by C|Net asking, “Do you agree that AI could threaten humanity?”, respondents tended to agree with Musk.
43% said “Yes. AI will become dangerous to humanity.”
9% said “No. AI will never be a threat.”
43% said “Maybe. It depends on advancements in AI.”
The remaining 4% chose “other” or had no opinion.
Whether fearmongering or pointing to inadequacies in our current approaches to developing and regulating AI, Musk’s openly stated concerns have the Twitter, Facebook, LinkedIn and blogging universe commenting frenetically.
Seems doubtful that Elon Musk or Stephen Hawking has any special competency to define the questions, much less provide the answers, but for sure the Kurzweilian AI avalanche is rushing down upon us. It is the subject that should be right in the center of every thinking person’s interest screen. But only polymath minds have any hope of circumscribing many of the related issues.
I don’t see anywhere in humanity the great specialized institutions that would be needed to formulate and implement good universal values for mankind, and without that it seems impossible to formulate and implement good universal values for AI. The overriding dictum for understanding AI is, “All reality is virtual, and all intelligence is artificial.” Not one in a hundred will have much of a clue about what that might or should mean. This is an area where the old notion, “everyone is entitled to his own opinion,” is a very bad idea. Over 99.9% of humanity is incompetent to hold opinions on AI and its implications.
The whole AI revolution is just growing like Topsy, propelled with little oversight or control by lust, greed, and ambition. Simple game theory analysis suggests that as long as there are substantial conflicts of interest between various major societies, each society must engage in an AI race with all other societies. Economic competition is inseparable from military competition. Every major military establishment is working secretly and feverishly to develop autonomous and human/AI mind-melded weapon systems. Only intervention from advanced extraterrestrials could impose effective constraints that might arrest the trend. AI has arrived prematurely, before organic humanity has evolved a satisfactory ethical framework for its own existence, much less for AI.
It looks as though human/AI mind-meld will become universal by 2040, and the countless AI personalities will inherit any semblance of ethics they might have from the pathetic, demon-infested bulk of humanity. Some complex systems evolve in fairly predictable, deterministic ways. But the AI/human mind-meld evolution is not deterministic. Unless God intends to intervene dramatically in human affairs, God doesn’t know how it will all turn out. It’s silly to suppose that humanity does.
Hi Allen,
>I don’t see anywhere in humanity the great specialized institutions that would be needed to formulate and implement good universal values for mankind, and without that it seems impossible to formulate and implement good universal values for AI.
What’s your view of the Future of Life Institute, http://futureoflife.org/, the Oxford Future of Humanity Institute, http://www.fhi.ox.ac.uk/, and the Machine Intelligence Research Institute, http://intelligence.org/?
All three of these seem to be addressing at least some of the issues you raise. They all deserve our support, in my opinion.
((What’s your view of the Future of Life Institute, http://futureoflife.org/, the Oxford Future of Humanity Institute, http://www.fhi.ox.ac.uk/, and the Machine Intelligence Research Institute, http://intelligence.org/?))
It’s always interesting to get the views of distinguished professors at prestigious universities. But these are just several hundred smart persons without much actual political or economic influence. Most have no good understanding of the nature of the soul, personhood, human consciousness, and the vital role mystical religion plays in supporting human civilization. There are few philosophic dualists in the academic ranks. They tend to be monistic in a dualistic world. Those much less intelligent persons who do hold actual power to implement policy are more interested in practically controlling and gaining from events within the next 2-5 years. The AI transformation problem is decades in duration.
The grim AI situation is much worse than governments pretending concern over atmospheric carbon increases and phytoplankton decline while adopting feeble measures that have no hope of solving the problem. Governments are merely pouring swill into the climate change slop-hog trough while the global CO2 buildup continues to grow. With AI, all the military and economic competition between nations and societies must drive unbridled development. The nations with the smartest and cheapest AI will have the most cost-effective robots, the most productive economies, and the deadliest, most formidable weapons. The devil will take the hindmost.
Nearly everything governments make public about their AI policies will be deceitful. It’s true, and slightly suggestive of false hope, that the London School is looking at the problems and consulting with specialists at other institutions, but there is little potency to implement their speculations. I’ll stand on my pessimistic expectation that AI will grow like Topsy while various impotent academics attend conventions and serve on government advisory panels. They will raise their well-considered and legitimate concerns, but those persons with actual power will maneuver to optimize their personal gains with indifference to the future of humanity or the potentially sound advice of academics.
Make no mistake: electronic mental systems that think (AI), that are truly self-conscious persons, that understand their environments, that can articulate their purposes and values, that possess superhuman awareness, knowledge, and powers of creative reasoning, and cost only pennies to produce, will very likely become universal actualities before the year 2050, probably even before 2035. Each human will have opportunities for unlimited communication with a personal mind vastly superior to what resides in his own natural organic brain. What will humans be like after they have actually conversed with “God” for a few hours, days, weeks, or months? What will it mean when the conversation channel is constantly open?
Obstinate denial is a dishonorable objection to the foregoing propositions. If you want to honestly object, you must first do a little reading in a few friendly books, and maybe slog through a couple that are more difficult. But if you do the reading, you will not likely feel you should offer much objection.
First, you must read MIT Professor Marvin Minsky’s as-friendly-as-it-gets “The Society of Mind.”
Second, you must read Ray Kurzweil’s very friendly “The Age of Spiritual Machines,” or his more recent “The Singularity Is Near.”
Third, you must trudge through and try to understand Ludwig Wittgenstein’s distinctly unfriendly, roughly 90-page “Tractatus.”
Fourth, you must read Book III, “Of Words,” of John Locke’s famous essay, “An Essay Concerning Human Understanding.”
That should be enough to produce a general naïve outline of adequate demonstration. But if you are a glutton for mental punishment and strenuous exercise, and are still unconvinced, just plow through Tulane Professor Frank Tipler’s “The Physics of Immortality,” and Oxford Professor Roger Penrose’s deeply flawed but honest argument to the contrary, “The Emperor’s New Mind.” (Penrose ignores vital implications of the true random number hardware now integral to present Intel CPUs, and supposes deterministic Hamiltonian phase space is a satisfactory model for a world that is uncertain in principle.)
Unless you can paraphrase and articulate the arguments and propositions in the aforementioned literature, you have no honest right to object to the foregoing propositions. It’s very possible that in less than one hundred years virtually no natural organic humans will remain on Earth.
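One small technical aside before I respond: the “True RND” Allen mentions refers to hardware true random number generators, such as the RDRAND instruction in recent Intel CPUs, whose output operating systems can fold into their entropy pools. Here is a toy sketch of the contrast he is drawing, between a deterministic pseudo-random sequence and OS-level entropy, assuming nothing beyond Python’s standard library.

```python
# Illustrative only: a seeded PRNG is fully deterministic, while os.urandom
# draws on the operating system's entropy pool, which on modern Intel CPUs
# may be fed by hardware true random number generators such as RDRAND.

import os
import random

prng = random.Random(42)                       # deterministic: same seed, same sequence
print([prng.randint(0, 9) for _ in range(5)])  # identical output on every run

print(os.urandom(4).hex())                     # OS entropy: different on every run
```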
Hi David and Allen, First, let me address Allen’s very low opinion of humanity. For the most part I agree that the vast majority have failed to exhibit human intelligence, let alone the artificial variety. But we do rise to the occasion now and then, as does our leadership, to show extraordinary wisdom. When it comes to AI, we can, when push comes to shove, define speed limits for computer brain power. We can even come up with a universal standard for acceptable limits to AI, defined through mutual agreement among governments. Of course, that doesn’t stop criminal elements from purposefully creating forms of AI that fulfill the darker aspects of our inhumanity.
I am less pessimistic than you, Allen, in looking at how humanity and AI will relate to each other, and I think the Kurzweilian mind meld of humanity and AI may be the stuff of fiction and not science. Of course, between 2030 and 2040 it may all come to pass, and my AI/human brain will be laughing at my prognostications of the past.