I recently finished reading David J. Gunkel’s book of the same name, in which the author challenges the reader to confront the philosophical questions raised by our perspective on artificial intelligence (AI) and machines in the 21st century. The HAL 9000 computer of Arthur C. Clarke’s novel “2001: A Space Odyssey” and Commander Data of “Star Trek: The Next Generation” serve as frequent reference points for the discussion of moral agency and patiency, two concepts treated thoroughly throughout Gunkel’s text and tied to the definition of what constitutes a “person.”
Moral agency refers to the capacity of an individual to differentiate between right and wrong and then to act on that distinction, fully aware of the implications.
Moral patiency refers to the capacity to endure the actions of a moral agent.
Humans exhibit both agency and patiency. Gunkel examines how these concepts apply to animals and AI. Can an animal be defined as a “person” if it displays agency and patiency? Can a machine? Animal rights advocates argue that several species, as we understand them today, could easily qualify as “persons” on these criteria.
Humanity has undergone an awakening in the last fifty years. René Descartes may have regarded animals as automata, but where we once saw ourselves as unique and separate from other animals, today we are keenly aware of our evolutionary roots and recognize that many animals display high levels of awareness, emotion, moral patiency and agency. I recently read a press report on a scientific study indicating that even lobsters and other crustaceans feel pain when we boil them in a pot. Pain and patiency go hand in hand.
So when HAL uncovers the human plot to shut down its sentient functions, the machine responds to the threat by terminating the lives of several crew members before being disabled by the lone human survivor, David Bowman. In its actions, HAL displays both moral agency and patiency. It is particularly poignant when HAL states, “I’m afraid,” as Bowman shuts down its higher functions.
Gunkel also draws on another science fiction source when he discusses Isaac Asimov’s three laws of robotics. He describes the laws as largely literary license, a convenience Asimov used for spinning his many robot stories rather than substantive rules for AI.
Ray Kurzweil, the noted futurist, envisions a point when machine intelligence will surpass human intelligence. He sees the integration of humans and machines as inevitable and calls this point the singularity. Kurzweil predicts that by 2029 machines will be able to simulate the human brain, and that by 2045 AI and humans will be fully integrated.
But others argue that the singularity will never happen, that we humans will always maintain a master-slave relationship with AI, limiting machine intelligence so that it can never equal a HAL computer or a Commander Data.
Gunkel’s book wrestles with all of these issues but arrives at no firm conclusions. It seems that advancements in technology and machine intelligence are leading us to ask complex philosophical questions never anticipated by Plato, Aristotle or Descartes. Maybe a future machine will provide the answers.