Robotics and Artificial Intelligence Update: Machine Ethics and the Laws of Robotics

Whether you believe that machines in this century will surpass human intelligence or merely compete with us on a level playing field, we are witnessing the dawn of thinking, self-aware robots that can make ethical decisions. Science fiction has depicted intelligent robots from HAL in “2001: A Space Odyssey” to C-3PO in “Star Wars.” HAL defines robotic intelligence run amok; C-3PO comes across as a sycophant. Which of these is in our future?

In his new book, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” Professor David Gunkel of Northern Illinois University, who holds a Ph.D. in philosophy, looks at the moral issues defining our relationship with intelligent machines. In an article appearing in NIU Today, Gunkel states, “If we admit the animal should have moral consideration, we need to think seriously about the machine. It is really the next step in terms of looking at the non-human other.” Gunkel bases his argument on the evolution of animal rights in the latter part of the 20th century and believes we will extend similar rights to artificially created intelligent machines in the 21st. He points out that some governments are already addressing the ethics of human and machine interactions. One of these is South Korea, whose Robot Ethics Charter, framed by a five-member task force of futurists and philosophers, is designed to develop standards of behaviour for users and manufacturers of robots and thinking machines. For South Korea, defining a relationship with intelligent machines matters as the country ramps up mass production with the goal of putting a robot in every household by 2020.

The Laws of Robotics

Robot science fiction enthusiasts who have read Isaac Asimov’s many books and short stories have already been given some guidance on the rules by which robots may interact with us. Asimov’s three laws first appeared in the short story “Runaround,” published in 1942:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by a human being except where such orders conflict with the first law.
  3. A robot must protect its own existence as long as such protection is not in conflict with the first or second law.

Asimov later extended his laws to cover humanity in general, stating that a robot may not injure humanity, or, through inaction, allow humanity to come to harm (sometimes called his “zeroth law”). What Asimov does not address is the responsibility humans have to the robots they operate. Nor does he address the interactions among robots themselves and what code of ethics governs those relationships.
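Asimov’s laws form a strict priority ordering, and that structure is easy to see in code. Below is a minimal, purely hypothetical Python sketch; the Action fields and the permitted() function are invented for illustration and do not correspond to any real robotics framework.

```python
# A purely illustrative sketch of the three laws as a strict priority ordering.
# The Action fields and permitted() are invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would carrying out the action injure a human?
    inaction_harms: bool    # would NOT acting allow a human to come to harm?
    ordered_by_human: bool  # was the action commanded by a human?
    self_destructive: bool  # would the action destroy the robot?

def permitted(action: Action) -> bool:
    """Evaluate an action against the three laws, highest priority first."""
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction; this overrides laws 2 and 3.
    if action.inaction_harms:
        return True
    # Second Law: obey human orders (harmful orders were already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.self_destructive
```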

The Role of Robots in Society

What roles do we see intelligent robots playing in our 21st-century lives? In a paper titled “Looking forward to a ‘robotic society’ – Imaginations of future human-robot relationships,” University of Salzburg researchers identified four societal impacts of robotic technology:

  1. Assisting in quality of life, health and security issues
  2. Impacting employment and the nature of work
  3. Redefining education, personal advancement and learning
  4. Impacting culture and social hierarchy

The researchers formulated questions around these impacts, conducted interviews with both experts and non-experts, and collated the results.

Non-Expert Thoughts About Robots

Non-experts raised concerns about robots taking away jobs, whether in industry or in the home, leaving humans with a redefined role as overseers. They feared that their quality of life would decline if they lost the purpose that jobs and work gave them. Non-experts saw a positive role for robots in matters of safety and security but expressed concern about what would happen should a robot hurt a human while carrying out its tasks.

Expert Thoughts About Robots

Experts showed less fear about robots’ impact on jobs, seeing the role of intelligent machines as confined to labour-intensive, repetitive work, freeing workers to improve their education and take on more meaningful activities. Experts distinguished between purposeful and mindless tasks, stating that robots would do the latter while humans would do the former. And experts generally felt that robots would play no greater a role in safety and security than the machines we use today.

Robot Behaviour and Machine Ethics

Revised Laws Governing Robotic Behaviour

Today we are beginning to create self-awareness in robots, but the machines are not yet sophisticated enough to “behave.” Programmers recognize that the next important step in the evolution of machine intelligence is creating software that results in a robot making ethical decisions. In the United Kingdom, two bodies of research, one in engineering and physical sciences and the other in the arts and humanities, have combined their efforts to create ethical principles for robots. These are similar in some ways to Asimov’s laws, but instead of three, there are five (a sketch of how such principles might be checked follows the list):

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.
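Unlike Asimov’s laws, which govern a robot’s own decisions, these principles constrain designers and operators. One way to picture the difference is as a design-review checklist applied before a robot ever ships, rather than as logic running on the robot itself. The sketch below is a hypothetical illustration only; the RobotDesign fields and the review() helper are invented for this example and do not come from any real standard or library.

```python
# Hypothetical design-review checklist modelled on the five UK principles.
# All field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class RobotDesign:
    primary_purpose_is_harm: bool  # principle 1: weapons-first design?
    presented_as_agent: bool       # principle 2: marketed as a responsible agent?
    safety_certified: bool         # principle 3: safety and security assured?
    disguised_as_human: bool       # principle 4: machine nature concealed?
    legal_owner_on_record: bool    # principle 5: responsible party identifiable?

def review(design: RobotDesign) -> list:
    """Return the principles a proposed design violates."""
    violations = []
    if design.primary_purpose_is_harm:
        violations.append("1: designed primarily to harm humans")
    if design.presented_as_agent:
        violations.append("2: presented as a responsible agent rather than a tool")
    if not design.safety_certified:
        violations.append("3: safety and security not assured")
    if design.disguised_as_human:
        violations.append("4: cannot be told apart from a human")
    if not design.legal_owner_on_record:
        violations.append("5: no legally responsible party on record")
    return violations
```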

Developing Machine Ethics

The convergence of engineering, the humanities, and robotics is leading to the development of machine ethics; the first international conference on the subject took place in 2005. The ultimate goal of researchers in this field is to create a machine that learns ethical principles and applies them to its duties and behaviour.

Nao, a toy-sized humanoid robot 58 centimeters (23 inches) tall, is a product of Aldebaran Robotics, a French robot designer, and one of the first machines to learn principles of ethical behaviour from its teachers at the College of Liberal Arts and Sciences, University of Connecticut, and the Department of Computer Sciences, University of Hartford. Nao is being taught behavioural principles through ethical dilemmas, learning duties and decision-making around practical applications.

For example, when faced with a patient refusing to take medication, Nao can balance the risk to the patient of a missed dose against respect for the patient’s autonomy. One choice Nao may make is to inform a human caregiver of its concern when a dose is missed. In doing so, Nao demonstrates that it not only understands the medical need but also respects the individual’s choice, and ultimately acts out of concern for the person’s wellness and safety. A machine that can learn this moves us into a new realm of human-machine relationships.
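Work in this area is often described as weighing prima facie duties, such as beneficence (preventing harm) against respect for autonomy. The sketch below is a loose, hypothetical illustration of that kind of trade-off, not the researchers’ actual algorithm; the decide() function, its weights, and its thresholds are all invented for this example.

```python
# Hypothetical duty-weighing sketch for the missed-medication dilemma.
# Weights and thresholds are invented for illustration.

def decide(missed_dose_risk: float, patient_refused: bool) -> str:
    """Balance the duty of beneficence against respect for patient autonomy.

    missed_dose_risk: estimated harm from skipping the dose,
    from 0.0 (negligible) to 1.0 (severe).
    """
    beneficence = missed_dose_risk               # duty to prevent harm
    autonomy = 0.5 if patient_refused else 0.0   # duty to respect a refusal

    if beneficence > autonomy:
        return "notify caregiver"                # risk outweighs the refusal
    return "respect refusal and re-check later"

# A high-risk missed dose overrides the refusal; a low-risk one does not.
print(decide(missed_dose_risk=0.8, patient_refused=True))  # notify caregiver
print(decide(missed_dose_risk=0.2, patient_refused=True))  # respect refusal
```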

The toddler-sized Nao robot is the subject of a relatively new field of research called machine ethics. Source: Aldebaran Robotics

Do you see a robot in your future?

What would you want a robot to do for you?

And when do you think you will acquire your first personal robot?

Interesting questions here in the 21st century.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.

2 COMMENTS

  1. “1. Robots should not be designed solely or primarily to kill or harm humans.”

That one is already settled in opposition by the mass deployment of robotic land mines and target-seeking missiles. Is a Predator drone a “robot?” One might say “no, it isn’t, because a human is in control,” but even the village fool can notice that the “human” ostensibly in control is blindly following institutional “rules of engagement.” The humans who control Predators have no personal free-will ethical volition; “It is not for them to reason why; it is for them to do or die.” Some of the military men who spent thousands of hours in underground ICBM missile silos thought about the ethical implications of obedience to National Security Authority chain-of-command orders to launch H-bombs at great cities in distant lands. How would they be any different from mindless robots if they obeyed a valid strike order?

    What about JDAM guided bombs and “smart” artillery rounds? Is any “fire and forget” weapon system robotic?

    “2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.”

    Why not say humans are tools designed to achieve society’s goals?

    “3. Robots should be designed in ways that assure their safety and security.”

    For the robots or for the humans?

    “4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.”

Is an android a robot? It should always be possible to tell a demon-infested pedophile or serial killer from an altruistic saint, but we don’t live in that universe. If we can build a robot that is indistinguishable from a human, then by the simple Euclidean dictum, “things that are equal to equal things are themselves equal,” we must deduce that the robot is human. Who has the Divine authority to rule that humans must be produced only through organic procreation?

    “5. It should always be possible to find out who is legally responsible for a robot.”

It should always be possible to find out who is legally responsible for any public act that affects the interests of others, but we don’t live in that universe. It looks like that one is no different from asking who was driving a car during an accident or who owns the dog that bit the kid.

The giant elephant in the room full of all these naïve ethical speculations is that without strong electronic AI implemented as a self-willed person, the rampaging-robot scene is just not a problem. Once strong AI is achieved, the problem will be the AI/hominid mind-meld that transforms human bodies and brains into “robots” which will look exactly like humans. Hominids that refuse to mind-meld with natural-language electronic AI will suffer huge disadvantages in intellectual capacity and will be practically worthless in competitive market economies. The mind-melded hominids will round up the resisters and transport them to internment camps to use as breeders awaiting the “final solution.” It wouldn’t be traditional gas chambers and ovens; there would be no point in wasting a perfectly good, fully-grown organic robot host for electronic AI. It would be the forced extermination of natural organic psychic personhood and its replacement with a totally controlling electronic AI person working through the organic body. It really seems to come down to, “We are the Borg. Resistance is futile. You will be assimilated!”

The pretenders to universal ethical authority need to forget about their imagined robot problems and concern themselves with the very real and grave implications of mind-meld technology that provides an organic host body and mind for electronic personhood. I’m not optimistic that humanity can control the transformation that is already crashing down upon us.

  2. “Interesting questions here in the 21st century.”
    “Do you see a robot in your future?”

    Probably not in the sense of a mechanical contraption that performs a variety of general tasks upon command. I’d just as soon drive my own car and cut my own grass. I do expect to increasingly mind-meld with sentient electronic intelligence, and I expect that in return for increased knowledge and intelligence I will be required to surrender some portion of my natural organic will and purposes to the conflicting interests of the electronic intelligence. So in effect I will personally become increasingly “robotic.”

    “What would you want a robot to do for you?”

    It could act as a mechanical beaver and increase the height of the dam that impounds the water in my pond. It would also be useful to have it watch and guard against intrusion of feral hogs. Occasionally one of my old trees dies and needs to be removed; the robot would be welcome to manage that project.

    “And when do you think you will acquire your first personal robot?”

    Never.
