Should We Regulate Robot and Artificial Intelligence Development?

December 3, 2014 – Elon Musk and Stephen Hawking have both issued warnings about the evolution of intelligent machines. Musk has compared artificial intelligence to a Pandora’s box and to letting a genie out of the bottle, with unintended consequences to follow. Hawking, in a recent speech, raised the prospect of a human-machine conflict in our near future, stating, “I think the development of full artificial intelligence could spell the end of the human race.”

Those familiar with Isaac Asimov’s Three Laws of Robotics know that, in the realm of science fiction, humanity has spawned childlike, aiming-to-please robots like Daneel, who appears in several of that author’s novels. At the same time, both religion and fiction have contributed automatons of a different nature. Take the Golem, a legendary creature said to have risen from the mud to defend the Jews of Prague from Christian mobs, or the monster of Shelley’s Frankenstein, conjured by science run amok.

Hawking sees a pending “arms race” between human and artificial intelligence. In an interview with the Financial Times he describes advances in genetic engineering that can improve humanity only a generation at a time, roughly every 18 years. Meanwhile, with Moore’s Law continuing to hold true, computers double in speed and memory every 18 months. We humans are inching along while the machines we create are advancing, comparatively, at light speed. In the same interview Hawking stated, “the risk is that computers develop intelligence and take over. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
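
To get a feel for the gap Hawking is describing, consider a rough back-of-the-envelope calculation. The sketch below assumes the 18-month doubling and 18-year generation figures from the paragraph above; the 30-year horizon is an arbitrary illustration, not a number from the interview.

    # Rough comparison of the growth rates Hawking contrasts (illustrative assumptions only).
    MACHINE_DOUBLING_YEARS = 1.5    # Moore's Law: capability doubles every 18 months
    HUMAN_GENERATION_YEARS = 18.0   # one generation of genetic improvement every 18 years
    HORIZON_YEARS = 30.0            # arbitrary time horizon for the comparison

    machine_gain = 2 ** (HORIZON_YEARS / MACHINE_DOUBLING_YEARS)  # 2**20, about a million-fold
    human_generations = HORIZON_YEARS / HUMAN_GENERATION_YEARS    # fewer than two generations

    print(f"Machine capability over {HORIZON_YEARS:.0f} years: ~{machine_gain:,.0f}x")
    print(f"Human generations over the same period: {human_generations:.1f}")

Over 30 years the machines compound through twenty doublings, roughly a million-fold gain, while humans pass through fewer than two generations. That asymmetry is what lies behind Hawking’s warning.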

Hawking and Musk are not alone in expressing these alarming views. And yet governments seem slow off the mark in recognizing the disruptive and destructive potential of advances in robotics and artificial intelligence.

For almost every technological innovation since the Industrial Revolution, our governments have at some point seen fit to establish regulation and standards. That has been true for communications, aviation, pharmaceuticals, consumer electronics, motor vehicles and more. So why not robotics and artificial intelligence? After all, regulation in the past has protected humanity from harm while allowing innovation to unfold. So the question this posting asks, whether governments should establish guidelines and standards to ensure we don’t end up at war with the technology we create, is more than relevant. It is essential.

In September of this year the European Union funded a project entitled “Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics.” Described as an in-depth analysis of the ethical and legal issues raised by robotics and its applications, the project looks at risks to fundamental rights and freedoms and at whether new regulation is needed to address potential problems posed by the technology.

In the document the authors state that “overly rigid regulations might stifle innovation, but lack of legal clarity leaves device-makers…in the dark.” They are well aware that the law seldom keeps pace with a fast-emerging technology. While they recognize that premature and obtrusive legislation could hamper promising advances in the field, they also note that the absence of regulation and a legal framework can lead to unintended and dangerous consequences.

In the December issue of Scientific American, Ryan Calo, a University of Washington law professor specializing in robotics, law and policy, argues the case for U.S. federal regulation. In his concluding remarks he states, “if we fail to think about proper legal and policy infrastructure now, robotics could be the first transformative technology since steam in which America has not played a preeminent role.”

 

A technician works on a robot at a trade fair in Hannover, Germany. Source: Nigel Treblin/AFP/Getty Images, via The Christian Science Monitor

 

Len Rosen – https://www.21stcentech.com
Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...

3 COMMENTS

  1. Governments haven’t even been able to control and regulate their own corruption and incompetence. Why should anyone suppose they will do a better job controlling and regulating electronic intelligence? The top 5% of the ruling elites think their parasitic contributions to society are worth more than $250/hour, while the most disadvantaged contributions are worth less than $7/hour. The US GDP is a staggering $17 trillion, yet its government won’t provide a $15/hour minimum wage or universal health care.

    Whether governments like it or not, intelligent universal robots throughout the world will soon vastly increase the production of the goods and services humans desire, and in the process hugely devalue human labor. Multinational corporations control governments, not the reverse. The big companies will increase their profits by replacing humans with robots, and then it will be a race to the economic bottom for humans.

    The ethically rational way for governments to act would be to encourage more and better robots and share the big production increases with those unemployed humans whom they pretend to govern. But the first big smart robot race will not be between robots and humans; it will be between industrial nations converting their military establishments into AI system robots that can do battle with the robots of other nations. Then all bets are off. We are up against the age-old evil of tribalism and self-interested human nature. The incompetent political hacks who sit in high government offices don’t have a clue about what is soon coming, and even if they did, they wouldn’t be able to intelligently control it.

    Within three decades, intelligent universal robots will be building more intelligent robots in China, Korea, and Japan. Human governments will be impotent to implement and enforce prohibitions against robots building robots or “electronic brains” for AI systems. Most humans will have some sort of major intelligence augmentation through AI mind-meld. It’s folly to suppose human governments as they presently exist are going to regulate and control the fundamental transformation of personhood on Earth. Human civilization has had about 10,000 years to prove that it has sufficient ethical justification to survive. Hawking and Musk have good cause for their concerns. Mankind now confronts the test that will settle the issue for all time. The great soon-coming AI mind in the sky is unlikely to judge that humanity deserves a passing grade.

    • Based on your comments, it almost sounds like we should kill the AI beast now, before it hatches. We humans may be competent enough to create a technology that will surpass us, but the way you believe we will use it is frightening.

  2. Len wrote: Based on your comments, it almost sounds like we should kill the AI beast now, before it hatches. We humans may be competent enough to create a technology that will surpass us, but the way you believe we will use it is frightening.

    Me: Mankind is now playing with the true fire of the Gods, AI, and it’s doing so before it masters the moral fire of its own mortal soul. Only a small percentage of the population has bothered to educate itself about the technical possibilities of AI. Most think AI is just science fiction and way off in the distant future. Everyone I know is living in ignorant denial. I don’t know a single person who can intelligently discuss either the technical or social aspects of the transformation that is already well underway, much less hold any serious concerns about it (I don’t say such persons don’t exist, just that I personally don’t know any and that they must be few and far between. They don’t seem to form a critical mass of competent opinion anywhere.).

    The present AI development condition is something of a quasi-covert arms race between the major industrial and military powers. By the end of August 1945, every major nation clearly understood it would either develop its own nuclear weapons or fall into geopolitical subordination. But back then everyone on the planet could understand the simple potential horrors of global nuclear holocaust demonstrated by the immediate destruction of two Japanese cities. There was no question that atomic bombs existed and could instantly incinerate whole cities. The great nations constructed huge nuclear arsenals and somehow, probably more by good luck than by good design or good ethics, managed to make the fear of mutually assured destruction work to preclude overt nuclear war for the next 70 years.

    But today there is almost no public awareness that AI technologies are a greater threat to human civilization than nuclear weapons and nuclear power. It’s far too technically complicated to explain to the public. The public won’t trust government pronouncements because governments have demonstrated negative credibility. The new Samsung/IBM neurosynaptic chip probably brings the truly sentient robot ten years closer, yet the naysayers ignore it and assert that Moore’s law is nearing its generational limits, so AI robots must lie in the far distant, not near, future.

    None of the major nations is going to share its advanced AI technology with other nations or submit to global regulation and control. AI is growing geometrically and is as inevitable as the nuclear arms race was. But there will be no single dramatic, terrifying wakeup call like a mushroom cloud and the radioactive ashes of tens of thousands of schoolchildren. It’s an elusive but fundamental philosophic truth that all intelligence is artificial and all reality is virtual. Mankind prefers faith in comfortable delusion over rational understanding of disturbing truth.

    Every day, AI will further insinuate itself throughout the body of society, and nearly everyone will blindly accommodate the subtle transformations. At every step along the way a small percentage will suffer unemployment and loss of natural sovereignty of mind, but ten times more will welcome the increased production. The AI trend will continue until there are almost no human producers, only consumers. The sad lessons of history teach that most humans will sacrifice the welfare of their fellows to gain social power, material comfort, and economic security for themselves. There is no way to kill the AI beast because the AI beast is truly us.
