September 7, 2016 – Envisioning the impact of artificial intelligence (AI) on North American urban life in 2030, the Stanford University study described in a previous post also looks at policy and societal implications and makes a number of recommendations for all levels of government. Of course, the great AI fear is that the technology supersedes us: that it eliminates so many jobs it creates a permanent underclass, and that its benefits are outweighed by its disadvantages.
We are already seeing some of the policy challenges in governing cities faced with the disruptive forces of AI innovation. Here in Toronto, on-demand transportation options have been licensed to compete with the traditional taxi industry, and that on-demand service will at some point become a fleet of autonomous vehicles. Seen from a driver’s perspective, the implications for the taxi and urban transport workforce of a future where all forms of public transport are AI-driven are ominous. If policy around AI focuses on one aspect of good governance, doing no harm, then how can we justify policy that puts people out of work?
The Stanford study states that “the goal of AI applications must be to create value for society.” The saving grace for AI policy adoption is that AI will advance not explosively but incrementally, which will give policy developers the means to experiment and adapt. And because AI will evolve in capacity and capability, it is certain that there will be technological dead ends that policy makers will discard, and mistakes will be made. A good example is the recent report of a self-driving car involved in an accident that was fatal, not for the car, but for its human occupant. This has created a media backlash against autonomous vehicles run by AI systems. And even though this single accident pales in comparison to the daily carnage on roads where human drivers’ judgment errors lead to traffic deaths, the fact that one could happen because of an AI creates fear and doubt.
This brings us to the most important policy requirement for governments dealing with the evolution of AI and its impact on urban living: people need to understand what AI is all about. An uneducated population will be highly distrustful of any newly introduced AI system. To offset this, the study suggests that “design strategies that enhance the ability of humans to understand AI systems and decisions and to participate in their use, may help build trust and prevent drastic failures.” The onus lies on the AI technical community to manage people’s expectations: to neither overpromise nor underperform when introducing an AI in place of current services and applications.
Today we already interact with AI. Our smartphones are, for most of us, our first encounter with the technology. They understand our speech. They find things for us. They have apps that map our travel to destinations. They facilitate purchases. As each of these AI tools gets implemented, we become more familiar and comfortable with the presence of AI in our lives.
And as we use these AI tools, everything we do can be digitally remembered, which has implications for our privacy. Today AI surveillance technology has the potential to become widespread in our cities. Through a practice that goes by the name of sousveillance, we all record our own activities using personal, portable devices, and governments can listen in and see what we are doing. The Orwellian prospect of government overstepping the boundaries set by constitutional rights is one of the greatest fears brought on by this daily AI use.
So what does the Stanford University study recommend in terms of general AI policy? Three things:
- Create a plan to master an understanding of AI at all levels of government. To be effective, governance must become expert, understanding the interactions between AI technologies, program objectives, and overall societal values. A government insufficiently trained in the technology can do considerable harm, and a government incapable of evaluating AI’s impacts will poorly serve its constituents.
- Remove the barriers to doing proper research on AI systems so that their fairness, security, privacy, and social impact implications can be understood. Because AI is software, it can be reverse engineered; it is therefore critical that existing legislation not inhibit its proper vetting by academics and other researchers to ensure it meets the criterion of doing no harm.
- Increase funding for interdisciplinary studies of AI’s potential societal impacts. This should involve both public and private investment to assess safety, privacy, fairness, and other AI impacts.
The Stanford study also poses a number of questions that need to be addressed publicly as policy is created. For example:
- Who is responsible when a self-driving car crashes or an intelligent medical device fails?
- Who should reap the gains of efficiencies enabled by AI technologies, and what protections should be afforded to people whose skills are rendered obsolete?
- What are the legal implications of introducing AI into the provision of tax advice, automated trading, or medical diagnoses?
- What are the educational implications of autonomous tutoring systems working with children when questions pit science against religious belief, for example “evolution” versus “intelligent design”?
- If predictive AI is used to forecast human behaviour, what are the implications when it predicts a high likelihood of recidivism for an incarcerated individual, given society’s obligation to consider parole once a sentence is served?
The possibility that AI could produce physical or social harm needs to be put in the context of liability, both civil and criminal. And with whom would that liability lie? As AI evolves to become self-learning, should it be treated as an agent in its own right, or does it remain the property of its creator?
Policy will have to address all of the above while still encouraging AI innovation, to ensure the technology’s benefits are as widespread and fairly distributed as possible. There is no doubt that we are already participants in this bold experiment, and we will see much more of AI in our lives as we approach 2030 and beyond.