Robotics and Artificial Intelligence Update: Machine Ethics and the Laws of Robotics

The toddler-sized Nao robot is the subject of a relatively new field of research called machine ethics. Source: Aldebaran Robotics

Whether you believe that machines in this century will surpass human intelligence or merely compete with us on an even playing field, we are witnessing the dawn of thinking, self-aware robots that can make ethical decisions. Science fiction has depicted intelligent robots from HAL in “2001: A Space Odyssey” to C-3PO in “Star Wars.” HAL defines robotic intelligence run amok; C-3PO comes across as a sycophant. Which of these is in our future?

In his new book, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” David Gunkel, a professor at Northern Illinois University who holds a Ph.D. in philosophy, looks at the moral issues defining our relationship with intelligent machines. In an article appearing in NIU Today, Gunkel states, “If we admit the animal should have moral consideration, we need to think seriously about the machine. It is really the next step in terms of looking at the non-human other.” Gunkel bases his argument on the evolution of animal rights in the latter part of the 20th century and believes we will extend similar rights to artificially created intelligent machines in the 21st. He points out that some governments are already addressing the ethics of human and machine interactions. One of those is South Korea, with its Robot Ethics Charter, a document framed by a five-member task force of futurists and philosophers and designed to develop standards of behaviour for users and manufacturers of robots and thinking machines. For South Korea, defining a relationship with intelligent machines is important as the country ramps up mass production with the goal of putting a robot in every household by 2020.

The Laws of Robotics

Science fiction enthusiasts who have read Isaac Asimov’s many books and short stories have already been given some guidance on the rules by which robots may interact with us. Asimov’s three laws first appeared in a short story, “Runaround,” published in 1942. The laws he created stated:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by a human being except where such orders conflict with the first law.
  3. A robot must protect its own existence as long as such protection is not in conflict with the first or second law.

Later, Asimov extended his first law to cover humanity in general, stating that a robot may not injure humanity, or, through inaction, allow humanity to come to harm. What Asimov doesn’t address is the responsibility humans have to the robots they operate. Nor does he address the interaction among robots themselves and what code of ethics governs those relationships.
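Read as a specification, the three laws form a strict priority ordering: each law gives way to the ones above it. Below is a minimal sketch of that ordering in Python; the predicates such as harms_human are hypothetical placeholders for judgments no present-day robot can actually make, not a real control API.

```python
# A toy encoding of Asimov's three laws as a strict priority ordering.
# The boolean predicates are hypothetical placeholders; deciding
# whether an action "harms a human" is precisely the hard part.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    harms_human: bool           # would acting injure a human?
    inaction_harms_human: bool  # would *not* acting allow harm?
    ordered_by_human: bool      # did a human order this action?
    endangers_robot: bool       # would acting destroy the robot?

def permitted(a: ProposedAction) -> bool:
    # First law: never injure a human, by action...
    if a.harms_human:
        return False
    # ...or by inaction: the first law compels the act regardless
    # of orders or self-preservation.
    if a.inaction_harms_human:
        return True
    # Second law: obey human orders (already shown to be harmless).
    if a.ordered_by_human:
        return True
    # Third law: protect itself, subordinate to the laws above.
    return not a.endangers_robot
```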

The Role of Robots in Society

What roles do we see intelligent robots playing in our 21st-century lives? In a paper entitled “Looking forward to a ‘robotic society’ – Imaginations of future human-robot relationships,” University of Salzburg researchers identified four societal impacts of robotic technology. These included:

  1. Assisting in quality of life, health and security issues
  2. Impacting employment and the nature of work
  3. Redefining education, personal advancement and learning
  4. Impacting culture and social hierarchy

The researchers formulated questions around these impacts, conducted interviews with both experts and non-experts, and collated the results.

Non-Expert Thoughts About Robots

Non-experts raised concerns about robots taking away jobs, whether in industry or in the home, and about the redefined role of humans as mere overseers. They feared that their quality of life would decline if they found themselves without the purpose that jobs and work gave them. Non-experts saw a positive impact from robots on issues of safety and security, but expressed concern about what happens should a robot hurt a human in carrying out its tasks.

Expert Thoughts About Robots

Experts showed less fear about robots affecting jobs, seeing the role of intelligent machines as confined to labour-intensive, repetitive work, freeing workers to improve their education and take on more meaningful activities. Experts distinguished between purposeful and mindless tasks, stating that robots would do the latter while humans would do the former. And experts generally felt that robots would play no greater role in safety and security than any of the machines we use today.

Robot Behaviour and Machine Ethics

Revised Laws Governing Robotic Behaviour

Today we are beginning to create self-awareness in robots, but the machines are not yet sophisticated enough to “behave.” Programmers recognize that this is an important next step in the evolution of machine intelligence: creating software that results in a robot making ethical decisions. In the United Kingdom, two research bodies, one in engineering and physical sciences and the other in the arts and humanities, have combined their efforts to create ethical principles for robots. These are similar in some ways to Asimov’s laws, but instead of three, we have five:

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

Developing Machine Ethics

The convergence of engineering, the humanities and robotics is leading to the development of machine ethics. The first international conference on the subject took place in 2005. The ultimate goal of those studying this field is to create a machine that learns ethical principles and relates them to its duties and behaviour.

Nao, a toy-sized humanoid robot 58 centimeters (23 inches) tall, is a product of Aldebaran, a French robotics designer, and one of the first machines to learn the principles of ethical behaviour. Its teachers are at the College of Liberal Arts and Sciences, University of Connecticut, and the Department of Computer Sciences, University of Hartford. Nao is being taught behavioural principles using ethical dilemmas, learning duties and decision making around practical applications.

For example, when faced with a patient refusing to take medication, Nao can weigh the risk to the patient of a missed dose against the need to respect the patient’s autonomy. One of the choices Nao may make is to inform a human caregiver about its concern when a dose is missed. In doing this, Nao demonstrates that it not only understands the medical need, but also respects the individual’s decision, and finally acts out of concern for the person’s ultimate wellness and safety. In teaching a machine to do this, we are moving into a new realm of human-machine relationships.
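To make the idea concrete, here is a minimal sketch of how this kind of duty balancing could be expressed in code. The duty names, weights, and scores are assumptions invented for this illustration; they are not the actual system Nao’s teachers use.

```python
# A toy model of duty-based ethical decision making. Each candidate
# action is scored against competing duties; the weights encode how
# strongly each duty counts. All names and numbers here are
# illustrative assumptions, not the researchers' real system.

from dataclasses import dataclass

# Hypothetical weights: preventing harm matters somewhat more than
# respecting autonomy, but does not automatically override it.
DUTY_WEIGHTS = {
    "prevent_harm": 3,      # risk to the patient of a missed dose
    "respect_autonomy": 2,  # the patient's right to refuse
}

@dataclass
class Action:
    name: str
    # How strongly the action satisfies (+) or violates (-) each duty,
    # on an assumed scale of -2 to +2.
    duty_scores: dict

def weighted_satisfaction(action: Action) -> int:
    """Sum each duty's score scaled by that duty's weight."""
    return sum(DUTY_WEIGHTS[duty] * score
               for duty, score in action.duty_scores.items())

def choose(actions: list) -> Action:
    """Pick the action that best balances the competing duties."""
    return max(actions, key=weighted_satisfaction)

# The scenario from the article: the patient has refused a dose.
options = [
    Action("accept_refusal",   {"prevent_harm": -2, "respect_autonomy": 2}),
    Action("insist_on_dose",   {"prevent_harm": 1,  "respect_autonomy": -2}),
    Action("notify_caregiver", {"prevent_harm": 2,  "respect_autonomy": -1}),
]

print(choose(options).name)  # notify_caregiver, under these weights
```

Under these assumed weights, notifying a caregiver wins because it addresses the medical risk while only mildly overriding the patient’s autonomy, which mirrors the trade-off described above.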


Do you see a robot in your future?

What would you want a robot to do for you?

And when do you think you will acquire your first personal robot?

Interesting questions here in the 21st century.