
Living With an Imperfectly Ethical AI


Ongoing developments in artificial intelligence (AI) hold immense promise for achieving social good. AI’s potential to help us overcome some of the world’s greatest challenges is already being explored across a range of sectors. From agriculture to astronomy, the breadth of AI applications seems limited only by our imagination.

 

Like any tool, however, AI could end up creating or perpetuating some problems while solving others, even when designed with the best intentions. The ethical risks associated with AI will broaden alongside the contexts in which it is applied. A perfect ethical framework for navigating this new landscape could prove elusive, but the scale of the challenge is matched by its importance. New technological contexts will dictate the need for new norms.

 

To address these risks (Figure 1), technologists must:

  • Understand that bias exists in data and tools
  • Increase awareness of this bias and improve efforts to mitigate it
  • Institutionalize ethical thinking in engineering processes

Figure 1: Technologists must understand that bias exists in data and tools, increase awareness of this bias and existing efforts to mitigate it, and institutionalize ethical thinking in their engineering process. (Source: Author)

 

Understanding Bias in Data and Tools

The ethics of AI has become a hot topic recently, in part because algorithmic decision-making and decision support systems are being integrated into public-administration domains, including public health, law enforcement, and criminal justice. Such applications raise the ethical stakes of employing AI technology, amplifying the potential real-world ramifications of AI tools that might yield biased or unfair results.

 

Biased AI tools could have profound and long-lasting impacts on individuals’ lives, affecting their criminal records, creditworthiness, and employment prospects. Bias can creep into an algorithm in many ways. One such bias—data bias—can arise from flawed data collection or reflect a broader systemic bias at play. For example, if individuals from minority populations are arrested at higher rates than their white counterparts for the same crime, algorithms trained on the resulting data will perpetuate those inequities. Such risks related to prejudicial categorization have already played out in the real world. Pretrial risk assessment algorithms used in criminal justice proceedings, for example, have repeatedly been found to discriminate against racial minorities.
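To make the idea of data bias concrete, here is a minimal, hypothetical sketch in Python; the records and group labels are invented for illustration. It compares how often a tool flags members of two groups as "high risk." A large gap between the two rates, sometimes called a demographic parity difference, is one simple signal that the underlying data or model may be encoding the kinds of inequities described above.

```python
# Hypothetical sketch: checking for disparate outcomes across two groups.
# The records below are invented for illustration; real audits use real data.
records = [
    {"group": "A", "flagged_high_risk": True},
    {"group": "A", "flagged_high_risk": True},
    {"group": "A", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": False},
    {"group": "B", "flagged_high_risk": True},
]

def flag_rate(rows, group):
    """Share of people in `group` that the tool flagged as high risk."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["flagged_high_risk"] for r in members) / len(members)

rate_a = flag_rate(records, "A")
rate_b = flag_rate(records, "B")
print(f"Group A flag rate: {rate_a:.2f}")
print(f"Group B flag rate: {rate_b:.2f}")

# A large gap warrants investigation into how the training data were
# collected and labeled, not just into the model itself.
print(f"Disparity: {abs(rate_a - rate_b):.2f}")
```

A check like this does not prove or disprove bias on its own, but it is the kind of simple, repeatable measurement a team can run before asking harder questions about where the data came from.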

 

Bias can also arise in the way computer scientists frame problems and select the attributes an algorithm considers. Algorithms for job recruitment efforts, for example, rely on many assumptions: Which attributes should be associated with a worthwhile candidate? Could those attributes carry gendered or racial connotations? This sort of bias led Amazon to scrap an AI recruitment tool in 2018 after realizing that the model discriminated heavily against women: The model “learned” to associate strong candidates with maleness because the company employed more men than women.
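The recruitment example also shows how bias can enter through attribute selection even when the protected attribute itself is excluded. The sketch below is hypothetical (the candidates and the "chess club captain" feature are invented): it checks whether a seemingly neutral résumé feature is strongly associated with gender in historical data, which would let a model learn gender indirectly through a proxy.

```python
# Hypothetical sketch: a seemingly neutral feature can act as a proxy
# for a protected attribute. Candidates and features are invented.
candidates = [
    {"gender": "M", "chess_club_captain": True,  "hired": True},
    {"gender": "M", "chess_club_captain": True,  "hired": True},
    {"gender": "M", "chess_club_captain": False, "hired": False},
    {"gender": "F", "chess_club_captain": False, "hired": False},
    {"gender": "F", "chess_club_captain": False, "hired": True},
    {"gender": "F", "chess_club_captain": True,  "hired": False},
]

def rate(rows, key):
    """Fraction of rows where `key` is true."""
    return sum(r[key] for r in rows) / len(rows)

men = [c for c in candidates if c["gender"] == "M"]
women = [c for c in candidates if c["gender"] == "F"]

# If the feature is far more common for one gender, a model trained without
# the gender column can still reconstruct gender from this proxy.
print("Feature rate, men:  ", rate(men, "chess_club_captain"))
print("Feature rate, women:", rate(women, "chess_club_captain"))

# Historical hiring rates show the pattern the model would learn to repeat.
print("Hire rate, men:  ", rate(men, "hired"))
print("Hire rate, women:", rate(women, "hired"))
```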

Increasing Awareness of Bias and Improving Solutions

The implications of such cases of bias have prompted development of a robust body of literature on the ethical considerations of AI. Many philosophers, including well-known experts such as Oxford philosophers Nick Bostrom and Luciano Floridi, have begun to devote themselves to developing frameworks around these issues. Such academic efforts have focused on the notions of “fairness, accountability, and transparency” in machine learning. Awareness of and engagement with this vast and growing body of literature is key to forging appropriate risk mitigation strategies.

 

Understanding and building awareness of issues pertaining to AI ethics requires interdisciplinary cooperation. Several initiatives have already been developed with this goal in mind, from think tank programming like that of the Future of Life Institute to industry efforts such as Google’s publication of Responsible AI Practices. The third annual Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT, formerly ACM FAT*) took place in January 2020 and brought together stakeholders from numerous fields to explore the ethics of computing systems. Such efforts are signs of progress, but it is incumbent on each individual to contribute to this interdisciplinary dialogue and learn from the work already done.

 

Just as important as initiatives dedicated to studying these issues, however, is the widespread adoption of new ethical frameworks throughout the technology industry. As with the practice of building “human-centered” AI, we should consider developing new means of institutionalizing ethical and socially responsible thinking into every step of the engineering process. Ethics need not be perceived as an esoteric science reserved for ordained philosophers but rather as a practice or skill that anyone can exercise and sharpen at every step of his or her work.

Institutionalizing Ethical Thinking in Engineering Processes

New modes of thinking about how to institutionalize responsible AI can borrow heavily from disciplines outside engineering and computer science. Much of the thinking in international security, for example, focuses on worst-case scenarios. Just as military forces create contingency plans to prepare for unforeseen challenges, so technologists can employ worst-case scenario thinking with respect to AI ethics: What are all the ways a given product could go awry? How can we mitigate those risks? One thought experiment could mirror deliberations about weapon design and deployment: What could go wrong if bad actors acquire this new capability?

 

Other ways to institutionalize ethical thinking within technological development can hit closer to home. The concept of iterative progress, for example, is perhaps nowhere better understood than in the technology industry. Agile software development entails concurrent development and testing; it emphasizes incremental delivery as well as continual team collaboration and learning. Just as innovations can be implemented iteratively to resolve technical kinks, agile deployment and institutionalized feedback can help keep ethical considerations in perspective.
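One concrete way to institutionalize that feedback, sketched below under assumed names and an arbitrary 0.10 threshold, is to treat an ethics check like any other automated test: each iteration of a model must pass a fairness assertion before it ships, just as it must pass its functional tests. The data, threshold, and test name here are placeholders, not a prescription.

```python
# Hypothetical sketch of a fairness "regression test" that runs alongside
# functional tests in a continuous-integration pipeline. The predictions,
# groups, and 0.10 threshold are placeholders chosen for illustration.
MAX_PARITY_GAP = 0.10

def selection_rate(predictions, groups, group):
    """Fraction of favorable outcomes (1s) among members of `group`."""
    picked = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picked) / len(picked)

def test_demographic_parity():
    # In a real pipeline these would be the candidate model's predictions
    # on a held-out audit dataset.
    predictions = [1, 0, 1, 0, 1, 0, 1, 0]          # 1 = favorable outcome
    groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

    gap = abs(selection_rate(predictions, groups, "A")
              - selection_rate(predictions, groups, "B"))

    # Fail the build if the gap exceeds the agreed threshold, forcing the
    # team to revisit the data or model before the next iteration ships.
    assert gap <= MAX_PARITY_GAP, f"Demographic parity gap {gap:.2f} too large"

if __name__ == "__main__":
    test_demographic_parity()
    print("Fairness check passed")
```

The specific metric matters less than the habit: once a fairness measurement lives in the same pipeline as unit tests, it gets revisited on every iteration rather than only when something goes publicly wrong.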

 

Finally and perhaps most important, we must all continue to ask big-picture questions about the limitations of technology. For every product or approach that involves AI—especially in a public sector context—one might ask whether AI is really best suited to perform that function. Human judgment is demonstrably imperfect and often falls victim to a confluence of cognitive biases, but we must remain vigilant so that we do not replace one set of flaws with another.

About the Author

Michelle Nedashkovskaya is a Master in Public Affairs (MPA) student at the Woodrow Wilson School at Princeton University. Previously, she served as an adviser to the US Director at the European Bank for Reconstruction and Development and as an adviser at the US Mission to the United Nations. Michelle holds a BA in international relations from Princeton University.