
AI’s Evolution Demands Strong Ethics, Safety


Some inventions are more important than others. Certain innovations have an outsized effect on society, while others, even when they are pervasive, are mere conveniences in our lives. Consider the different impacts of microwave ovens and light bulbs: Although most people these days have a microwave oven, our lives would not be drastically different if some bizarre solar activity somehow zapped all microwaves tomorrow. Most of us would still have a cooktop, an oven, a toaster, and maybe even a grill, fire pit, crockpot, pressure cooker, or air fryer standing by and ready to use.

Suddenly extinguishing all light bulbs, however, would be a different story, because convenient lighting has offered a huge boost to humanity's standard of living and economic well-being. Electric bulbs that illuminate vast areas at the flick of a switch replaced the messy lanterns and candles of eras past. They have made illuminated environments the standard rather than the exception for nearly 150 years. Newer advances, such as connected lighting systems and light fidelity (Li-Fi), are further integrating light with other building and communication systems.

How important will artificial intelligence (AI) be? Will its impact be more akin to a microwave or to the light bulb? Probably both. Like many other inventions, AI is part of a larger spectrum of development that is solving problems. Potentially, AI could one day be so integrated that it becomes a modern necessity. It also brings with it strong needs for ethics and safety to ensure that the technology serves the greater good and values human life.

AI: Doing Good Today . . . and Beyond

If solar flares managed to evaporate all of AI today, most of us would have other options for meeting our needs, such as using a stovetop instead of a microwave, or could revert to the processes and capabilities of our not-so-distant past. Business analytics, medical imaging, product recommendations, and music playlists would be limited by human capabilities coupled with whatever technology remained available. Here again, however, the absence of AI would be more akin to missing microwave ovens than missing light bulbs—at least for this fleeting moment.

At this juncture, several technologies, such as sensors, processing, and storage, have matured and converged to enable AI to solve tangible problems. Of the 160 cases the McKinsey Global Institute has tracked, only about a third have real-life uses, and many of those are still in the testing phase. That said, AI-driven solutions are emerging. For example, the AI for Good Global Summit aims to connect "AI innovators with those seeking solutions to the world's greatest challenges so as to identify practical applications of AI that can accelerate progress towards the United Nations' Sustainable Development Goals (UN's SDGs)." Its work cites progress in agriculture, drug development, and computer vision for satellite imagery:

  • Farmers can now integrate massive amounts of data from a range of sources, including sensors in the field, weather data, markets, and satellite imagery. AI-based time series analysis provides recommendations for increasing crop yields and maximizing efficient land use (Figure 1).
  • Pharmaceutical companies are also realizing benefits from AI modeling. Using digital models of molecules and their interactions, researchers can generate and search over huge, even exponential numbers of possible treatments. As human genome sequencing also progresses, better treatments can be customized for individual cases.
  • Organizations are using satellite imagery to detect wildfires (a harder problem than you might think) and carbon emissions and even to locate areas of extreme poverty in the world.

Figure 1: AI-based analysis provides recommendations for increasing crop yields and maximizing efficient land use (Source: William Potter/Shutterstock.com)

AI's potential to do good is quickly transitioning from solving tangible, immediate needs to becoming increasingly integrated into larger, more abstract solutions. The UN SDGs go beyond solving local and short-term problems to challenging the world to devise solutions to poverty, inequality, climate change, environmental degradation, and threats to peace and justice. Could there be a role for AI in achieving these goals? The McKinsey Global Institute indicates that there might be, and many of these roles relate to the UN's SDGs in particular. Some of the nearer-term applications are expected in education and health care. However, applying AI solutions in a significant way to difficult problems will require action across a spectrum of groups, from governments and organizations to private industry (Figure 2).

Figure 2: Prioritizing artificial intelligence for human benefit could help solve some of society's most pressing challenges, such as climate change, environmental degradation, and even the protection of democracies. (Source: By Wetzkaz Graphics/Shutterstock.com)

Ethics in Doing Good

As AI advances and solutions become more far reaching, ethics will become increasingly important for ensuring that technologies are used for good and uphold the value of human life. Most of the published guidelines cite the need for ethical AI as a way to extract the greatest benefit from the technology. One of the leading groups discussing the ethics of AI is AI4People, which works to influence governments and organizations. Its goal is to shape the social impact of new applications of AI and lay out the foundational principles, policies, and practices for building a "Good AI Society." The group has made 20 specific recommendations for achieving that end. One of its major concerns is that AI could, in fact, be underutilized if the public does not trust AI solutions and so rejects them. If that happens, the world will not derive the great benefit that could otherwise come from AI.

The European Commission released a white paper in February 2020 promoting the development and deployment of AI in ways that conform to European values. The white paper emphasizes the ethical implications at the same time it calls for scientific breakthroughs that improve lives while respecting human rights. The white paper also warns of the downsides if AI takes on a larger, more intrusive role in human lives. Regulations that already exist can cover some aspects of AI, but existing rules might have to be adapted or clarified with respect to AI products. The opacity of many AI models will make enforcing regulations more difficult. Clear regulations are needed to protect citizens and give businesses legal clarity.

The European Commission also emphasizes the importance of a unified effort to reach the scale needed to solve big problems. A fragmented approach, with member countries pulling in different directions and duplicating effort unnecessarily, risks creating AI solutions that never reach the necessary scale. The commission plans to involve multiple stakeholders and provide incentives for industry to build on Europe's current strengths in technology, including manufacturing automation and quantum computing. It also expects to coordinate among academic centers of excellence. Finally, the white paper mentions using AI to achieve the SDGs, viewing the technology as especially relevant to climate and environmental goals.

The United States has similar ambitions. Individual agencies within the federal government are making plans for AI and publishing white papers describing their expectations. The Secretary of Energy Advisory Board, for example, created a working group to examine and report on AI and the U.S. Department of Energy's (DOE's) role in supporting the development and promotion of AI technologies. The board has since released its report, which emphasizes the urgency of developing AI, framing it as a new space race with China, which is making its own large investments in AI.

Prioritizing AI for human benefit could help solve some of the most pressing societal challenges, such as climate change, environmental degradation, and even the protection of democracies. To achieve these ambitions, a strategy is needed to coordinate the many stakeholders. Many governments, intergovernmental agencies, organizations, and companies are publishing similar statements and AI recommendation papers.

Safety in Doing Good

Risks exist in these and future uses of AI, a potentially powerful tool that could also be misused and that carries a high likelihood of unintended consequences. In traditional engineering disciplines, such as structural engineering or aviation, designs include safeguards wherever possible. New products and systems are extensively tested, and solutions are subjected to stresses to better understand the limits of their capabilities. Armed with that knowledge, industry has had success deploying engineered solutions that provide benefit with minimal risk.

That engineering culture and mindset are so far largely absent from AI, even as we deploy it in real-life situations, fully expecting it to have a significant impact on people's lives. Many of the best practices in traditional engineering have been codified in regulations. As useful as microwave ovens are in our lives, if they spewed radiation throughout our homes, we would not be able to use them. The European Union (EU) white paper suggests that regulations are likely required, but similar documents from other entities are less clear about how to mitigate potential harms. The U.S. statement on AI specifically calls for minimizing regulation.

In the absence of leadership and guidance from government (with the exception of the EU), many other entities have stepped in to fill the gap. A whole body of ethical guidelines has been developed, mostly directed at developers and researchers. At this point, no real mechanism for enforcement exists, but the principles and recommendations identified can help elucidate the key aspects of the conversation when lawmakers and regulators get involved.

Conclusion

Considerable work remains to advance AI to the point where it can help solve some of the world’s greatest challenges. As we expand AI’s potential, we must be cautious of its dark side. The core task of AI is to automate what would otherwise be human decision-making. To do that, it requires vast amounts of data, but garnering that data risks intruding into our personal lives to understand us better. Balancing these risks against AI’s tremendous potential is tricky, but it is also likely to be the difference between the impact of a microwave and a light bulb.

About the Author

Kyle Dent is an AI researcher and manager interested in the intersection of people and technology. He writes about technology and society and serves as co-chair of the AI Ethics Committee at the Palo Alto Research Center (formerly Xerox PARC).