As a young man, I was fascinated by the science fiction writings of Isaac Asimov. His Foundation trilogy, in my mind, stands alongside Frank Herbert’s epic novel Dune and J.R.R. Tolkien’s The Lord of the Rings.
Asimov devised three laws of robotics and made the conflicts within them a recurring feature of his novels and short stories. The Three Laws go beyond the physical concept of robotics to explore the relationships that should govern how artificial intelligence (AI) interacts with humanity.
The three laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm. This is the foundation of robotic ethics and emphasizes human safety as a primary directive. Robots and thus AI must prioritize humanity’s well-being above all else and take necessary actions to prevent harm or injury to us.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. This establishes the principle of obedience to human authority. Robots or AI are to follow human instructions as long as they do not violate the first law. In the case of a conflict between the first and second law, the former would take precedence.
- A robot must protect its own existence as long as this protection doesn’t conflict with the first and second laws. This gives a robot or AI the right of self-preservation as long as it doesn’t compromise human safety or disobey human orders.
Our world has recently become awash in AI. We have large language models trained on global datasets. We have neural networks loosely modelled on the human brain. We have machine learning algorithms that can separate patients with cancer from those without, sometimes more accurately than human clinicians.
AI makers may not be paying attention to Asimov’s three laws as they rush to release the latest and greatest technology to the public. Where you come from often determines what kind of AI you develop. Although AI creators try to instill human values in their programs, biases rooted in cultural differences lead to different outcomes, which makes Asimov’s three laws barely a starting point for rules governing AI.
Ina Fried writes the online newsletter Axios AI+. In it she notes: “There’s no such thing as an AI system without values.” AI creators, she states, don’t give birth to algorithms in a vacuum. They train, tune, and deploy AIs by choosing data and values they respect.
An AI’s worldview is coloured by its creators. It starts with the algorithms and continues with the selection of the data used for training. The final step “aligns” the AI to make it “safer”: the system is tested to see what answers it produces, and those answers are rated as “more or less desirable.”
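To make that last step concrete, here is a minimal, hypothetical Python sketch of rating-based alignment. Every name in it is invented for illustration; real pipelines (such as reinforcement learning from human feedback) are far more elaborate, but the core loop is the same: human raters score candidate answers, and the system learns to prefer the higher-rated ones.

```python
# A toy sketch of the "alignment" step: human raters score candidate
# answers, and the system simply favours the highest-rated one.
# All names here are hypothetical; this illustrates the principle,
# not any real training pipeline.
from dataclasses import dataclass

@dataclass
class RatedAnswer:
    text: str
    rating: float  # a human rater's score: higher = "more desirable"

def pick_preferred(candidates: list[RatedAnswer]) -> RatedAnswer:
    """Return the answer the raters judged most desirable."""
    return max(candidates, key=lambda a: a.rating)

candidates = [
    RatedAnswer("Answer reflecting the raters' value set", rating=0.9),
    RatedAnswer("Answer reflecting a different value set", rating=0.4),
]

# The "aligned" behaviour mirrors whichever values the raters happen to hold.
print(pick_preferred(candidates).text)
```

Nothing in that loop checks whose values the ratings encode, which is exactly the concern that follows.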
When I look at Asimov’s first law and note how the issue of abortion is dividing Americans, I wonder how an AI created by someone who is anti-abortion would differ from one developed by a person who is pro-choice. How would the algorithms differ? How would the training dataset differ? If human safety is the AI’s prime directive, then how would it deal with a woman seeking to abort a fetus?
A Revised Five Laws of Robotics
Asimov himself never expanded his three laws into five; his only addition was a “zeroth law” placing humanity’s welfare above any individual. The five revised laws below come from a group of British researchers convened in 2010 by the UK’s Engineering and Physical Sciences Research Council (EPSRC), and, like Asimov, they never defined what a “human” is. The revised five read as follows:
- Robots should not be designed solely or primarily to kill or harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
That lack of a definition for “human” is a serious oversight when you consider history. Hitler considered Jews to be sub-human, “Untermenschen.” A Nazi AI creator could imbue an AI with values that exclude Jews from the definition of human.
The same would be true of an AI created by someone racist or sexist whose definition of human excludes people of a different skin colour, sex, or sexual orientation.
That is why a common set of AI rules is needed. As new AIs come from people whose biases make Asimov’s original laws and their revisions untenable, we need a global rule set for this emerging technology. The AIs we are producing today could, in theory, violate human rights simply for lack of a common definition of “human.”
A British Bishop’s Rewrite of Asimov’s Laws
Back in 2018, I wrote about the work of Steven Croft, the Bishop of Oxford in the United Kingdom. Croft, a member of the British House of Lords (senior Church of England clergy hold seats there), proposed a rethinking of Asimov’s laws, creating 10 commandments for robots and AI to follow. I republish them here as a good reference point for AI creators and governments to consider:
- AI should be designed for all, and benefit humanity.
- AI should operate on principles of transparency and fairness, and be well signposted [that is, its use should be clearly indicated].
- AI should not be used to transgress the data rights and privacy of individuals, families, or communities.
- The application of AI should be to reduce inequality of wealth, health, and opportunity.
- AI should not be used for criminal intent, nor to subvert the values of our democracy, truth, or courtesy in public discourse.
- The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
- All citizens have the right to be adequately educated to flourish mentally, emotionally, and economically in a digital and artificially intelligent world.
- AI should never be developed or deployed separately from consideration of the ethical consequences of its applications.
- The autonomous power to hurt or destroy should never be vested in AI.
- Governments should ensure that the best research and application of AI are directed toward the most urgent problems facing humanity.