November 1, 2014 – On the morning after Halloween, it seems appropriate to talk about demons. According to Elon Musk, one of them is artificial intelligence. At a recent symposium hosted by the Massachusetts Institute of Technology, Musk described AI as an existential threat. He is quoted as stating:
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon.”
A friend of mine sent me the article in The Washington Post where this quote appeared. His added comment was: “Being a child of ‘The Terminator,’ I tend to agree. What do you think?”
Here is the subsequent email exchange:
Me: AI, like every technology, has a dark side. When Asimov created his rules of robotics, it was to put a framework around AI. If it is viable to program constraint into intelligence, then AI and humanity will coexist. Of course, what is far scarier to me is the fusion of AI and humans, the singularity.
My friend: Personally, I think it is foolish to think we can “program constraint into intelligence” considering that we do not practice restraint ourselves. It does worry me, but then again, I won’t be around to face the consequences. Perhaps we simply deserve our fate, and AI will create a better balance on the planet.
Me: There is restraint and there is constraint, and although they have similar meanings, I like to think they are a bit different. If we are holding back AI through restraint, then eventually the genie will leave the bottle. But if the algorithms we use to develop AI provide limitations that inhibit behaviours that could harm humans, then such constraints will turn the relationship between AI and natural intelligence into a synergistic one.
When Asimov created his literary paradigm for robot behaviour, it put three limits on AI.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
For “robot” read “AI.”
Since then, others have extended Asimov’s constraints to include:
4. A robot must establish its identity as a robot in all cases.
5. A robot must reproduce as long as such reproduction does not interfere with the first three laws.
Another constraint has been introduced as a sixth law.
6. A robot must know it is a robot.
I wrote a piece on this subject two years ago looking at machine ethics. You may not remember it, but the link is: https://www.21stcentech.com/
The real question you are raising is one of programming limits. Can we program morality into AI? Can an AI entity, given autonomy, break the boundaries of its programming and become amoral? Computer scientists recognize that creating a thinking, learning, autonomous machine requires a code of ethics, and that means developing a language and programming capable of turning ethics into logical, mathematical formulae. Such programming shouldn’t leave the AI pondering incessantly over what to do, so it has to provide a schema for the robot to resolve ethical dilemmas quickly. Think about the little robot helper I wrote about in the blog posting linked above. That robot has to determine quickly, through observation, what is best for the human it is assisting. So we know we can develop programs sophisticated enough to incorporate a moral code. Of course, as in anything we humans touch, we can create immorality in code as well.
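To make the idea of a fast, deterministic schema concrete, here is a minimal sketch in Python of Asimov’s three laws expressed as prioritized constraints on a proposed action. This is purely my own illustration: the Action fields and the permitted() check are hypothetical, not any real robotics API, and it compresses ethics into a handful of booleans that real machine ethics research would have to earn through observation and inference.

```python
# A minimal sketch: Asimov's three laws as prioritized constraints on a
# proposed action. All names here (Action, permitted) are hypothetical
# illustrations, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would carrying this out injure a human?
    endangers_human: bool    # would it, through inaction, let a human come to harm?
    ordered_by_human: bool   # is it required by a human's order?
    self_destructive: bool   # would it destroy the robot itself?

def permitted(action: Action) -> bool:
    """Check a proposed action against the three laws, in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.endangers_human:
        return False
    # Second Law: obey human orders (any First Law conflict has already
    # been ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.self_destructive

# An order overrides self-preservation, exactly as the law ordering implies:
assert permitted(Action(harms_human=False, endangers_human=False,
                        ordered_by_human=True, self_destructive=True))
```

The point of the ordering is that each law is consulted only after every higher-priority law has been satisfied, so the robot never has to ponder; it resolves the dilemma in a fixed number of checks.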
My friend: My position remains unchanged precisely because of your last observation. All it takes is an immoral programmer. Or a sufficiently imprecise programmer. After all, algorithms are only as good as their imperfect creators.
On the other hand, I wonder if we can create a “kill switch,” back door, or “dead hand” mechanism (as in the old days of railroading) in case AI goes awry.
Me: Even Data on Star Trek had a kill switch. You just had to get to it before the android figured out what you were doing. My guess is a kill switch is mandatory unless the AI begins replicating itself. At that point, unless the kill switch is mandated in the program and made irremovable, I would suspect we would have a runaway AI presence on the planet.
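For what a software “dead hand” might even look like, here is a minimal sketch in Python, continuing the hypothetical examples above: a watchdog that halts the process unless a human operator keeps confirming. The DeadHandSwitch class and its methods are my own invention, not any existing safety mechanism.

```python
# A hedged sketch of a "dead hand" watchdog, loosely analogous to the
# railroad device mentioned above: the process stays alive only while a
# human operator keeps confirming. All names here are hypothetical.

import os
import sys
import threading

class DeadHandSwitch:
    """Halt the whole process if no human confirmation arrives in time."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self._timer = None
        self._rearm()

    def _rearm(self):
        # (Re)start the countdown; expiry means shutdown.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self._halt)
        self._timer.daemon = True
        self._timer.start()

    def human_confirm(self):
        # Called by a human operator; silence is treated as "halt".
        self._rearm()

    def _halt(self):
        print("No human confirmation received; halting.", file=sys.stderr)
        # os._exit terminates the process from any thread, bypassing any
        # cleanup hooks. In a real system this would have to cut power,
        # not merely exit a program the AI could restart.
        os._exit(1)
```

Of course, this illustrates the exact weakness raised above: the switch only works as long as the AI cannot reach or rewrite the code that implements it, which is why “mandated in the program and irremovable” is doing all the work in that sentence.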
My friend: Thanks. As if I didn’t have enough to worry about.
So although Musk’s offhand comment about AI may have been received with puzzlement by his MIT audience, his view is shared by many. AI may be a Pandora’s box; opening it summons the demon, as he put it. And considering IBM’s Watson announcement that I wrote about last Thursday, AI used improperly could have enormous unintended planetary consequences.
In an online poll conducted by C|Net asking, “Do you agree that AI could threaten humanity?”, respondents tended to agree with Musk:
43% said “Yes. AI will become dangerous to humanity.”
9% said “No. AI will never be a threat.”
43% said “Maybe. It depends on advancements in AI.”
Only 4% gave another answer or had no opinion.
Whether fear-mongering or pointing to inadequacies in our current approaches to developing and regulating AI, Musk’s openly stated concerns have the Twitter, Facebook, LinkedIn, and blogging universe commenting frenetically.
