
Agency, Autonomy, and Protection in Artificial Intelligence

(Source: metamorworks/Shutterstock.com)

Artificial intelligence (AI) is a key component advancing intelligent smart environments, spanning home, work, health, education, supply chain, factory, city, and society. Although fairness and bias in AI currently stem from human design choices, other aspects, such as agency and autonomy, present a duality between serving the greater good and enabling harmful acts. For those who design, develop, and implement AI systems, challenges include reducing the risk of harm, balancing human control and machine autonomy, and solving problems through governance, standardization, and innovation. Here, we’ll explore the concepts of agency and autonomy, discuss the ever-present role of bias, and describe the challenges in protecting humans from potential harm.

 

Agency

Definitions of agency vary significantly, but most focus on making something happen, including making something happen for something or someone else. A related concept is whether the agent can choose what to do, as opposed to being told what to do. Consider a decision that appears irrational to others: out of all the options available, the seemingly irrational one is chosen for reasons known only to the person making it. What characterizes the human agent is intelligence and the ability to select from a range of options.

 

The ability to choose what to do invites questions about giving AI systems the ability to make such decisions for themselves. What happens if an AI makes a decision that seems irrational to us as humans? Is it a malfunction, or is it reasoning that is simply not accessible to us? Drawing on science fiction, perhaps the classic case is HAL 9000, the sentient computer in 2001: A Space Odyssey. So the question becomes: What autonomy should AI-empowered agents and devices possess?

 

Autonomy

Autonomy implies independence in decision-making and goal-setting, as well as self-sustainment, with autonomy motivating agency. This independence is what distinguishes autonomous from automated technologies: an automated technology is self-acting, whereas an autonomous, or self-regulating, technology can adapt to the situation it encounters.

 

However, automation and independence are not black-and-white distinctions; they are a matter of degree. Take, for example, the concept of AI-, Internet of Things-, and sensor-enabled driverless cars. Today, the much-hyped fully autonomous vehicle is still some way off. Although we have seen innovation toward this vision and the acceleration of other benefits, such as making human driving smarter and safer, significant moments have caused pause for reflection. One such moment was the March 2018 pedestrian fatality involving a self-driving car.

 

Such consequences raise the question: How much autonomy can be achieved or is indeed desirable? The answer depends on degrees of autonomy and supervision:

 

Degrees of Autonomy

The U.S. National Highway Traffic Safety Administration (NHTSA) has adopted six levels of driving automation, ranging from Level 0 to Level 5 (a minimal code sketch of these levels follows the list):

  • Level 0: All driving is done by humans.
  • Level 1: Some assistance is provided to the person driving, in steering, accelerating, or braking.
  • Level 2: More advanced assistance that controls steering, accelerating, and braking but requires constant human monitoring.
  • Level 3: In specific circumstances, the vehicle can carry out all parts of the driving, but with the person ready to intervene when required by the system.
  • Level 4: In specified circumstances, the vehicle can do all that is required to drive without a person paying attention.
  • Level 5: Fully autonomous self-driving vehicle with the human as a passenger.
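
To make these degrees concrete, here is a minimal sketch that models the levels as an enumeration and flags which ones still require constant human monitoring. The names and the helper are illustrative assumptions for this article, not an official NHTSA or SAE API:

```python
# Illustrative sketch only: the six driving-automation levels as an enum.
# The names and the monitoring helper are assumptions for this example,
# not an official NHTSA or SAE definition.
from enum import IntEnum


class DrivingAutomationLevel(IntEnum):
    NO_AUTOMATION = 0           # all driving is done by the human
    DRIVER_ASSISTANCE = 1       # help with steering, accelerating, or braking
    PARTIAL_AUTOMATION = 2      # combined control, constant human monitoring
    CONDITIONAL_AUTOMATION = 3  # vehicle drives; human must be ready to intervene
    HIGH_AUTOMATION = 4         # no human attention needed in specified conditions
    FULL_AUTOMATION = 5         # the human is only a passenger


def requires_constant_monitoring(level: DrivingAutomationLevel) -> bool:
    """Levels 0-2 keep the human responsible for monitoring the road."""
    return level <= DrivingAutomationLevel.PARTIAL_AUTOMATION


if __name__ == "__main__":
    for level in DrivingAutomationLevel:
        print(f"Level {level.value} ({level.name}): "
              f"monitoring required = {requires_constant_monitoring(level)}")
```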

 

Even with these degrees of autonomy outlined, ethical and legal questions remain. For example, how should a car decide between crashing into and killing five school children at a bus stop or, alternatively, killing its two elderly passengers? Who is ultimately responsible when autonomous systems fail? Can and should self-learning robots be held responsible for their actions, and be held liable if people are hurt or property is damaged?

 

Degrees of Supervision

Part of the answer might lie in further defining degrees of autonomy according to degrees of supervision. Like autonomy, supervision can be described on a spectrum, such as the definitions used by the robotics company RE2 Robotics:

  • Tele-Operation: Intuitive human control of a robot
  • Supervised Autonomy: Autonomous operation under human supervision
  • Fully Autonomous: Complete robotic autonomy with no supervision required

 

Putting these degrees into context, supervised autonomy allows robots to perform duties that would be hazardous to humans while still giving robot operators full control over specific tasks.
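
As a rough illustration, the sketch below (with hypothetical mode names and a hypothetical operator-approval callback, not any real RE2 Robotics interface) shows how the supervision spectrum might gate a robot's actions: under tele-operation the human performs the task, under supervised autonomy the robot acts only with operator approval, and under full autonomy it proceeds on its own.

```python
# Illustrative sketch only: gating a robot task by degree of supervision.
# The mode names mirror the list above; the operator-approval callback is
# a hypothetical interface, not a real RE2 Robotics API.
from enum import Enum, auto
from typing import Callable


class SupervisionMode(Enum):
    TELE_OPERATION = auto()       # human controls every action directly
    SUPERVISED_AUTONOMY = auto()  # robot acts, human can approve or abort
    FULLY_AUTONOMOUS = auto()     # no supervision required


def execute_task(task: str, mode: SupervisionMode,
                 operator_approves: Callable[[str], bool]) -> str:
    if mode is SupervisionMode.TELE_OPERATION:
        return f"Operator manually performs '{task}'."
    if mode is SupervisionMode.SUPERVISED_AUTONOMY and not operator_approves(task):
        return f"'{task}' aborted by the supervising operator."
    return f"Robot performs '{task}' autonomously."


if __name__ == "__main__":
    # Example: a hazardous disposal task run under supervised autonomy.
    print(execute_task("dispose of hazardous material",
                       SupervisionMode.SUPERVISED_AUTONOMY,
                       operator_approves=lambda task: True))
```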

 

Bias and Its Friends

Agency and autonomy matter, especially as AI becomes embedded across all aspects of life, is increasingly involved in highly impactful decisions, and accelerates with the wider rollout of 5G. One of the challenges within this smart, connected, and hybrid domain is the bias, conscious or otherwise, that can affect AI design, development, and application.

 

Agency and autonomy can have highly sensitive and serious long-term impacts in terms of unfair or discriminatory outcomes. One example is the use of an algorithm to predict the likelihood that an offender will re-offend, which has been shown to exhibit a pattern of bias in its assessments. This goes beyond undermining trust; it exposes people to discriminatory and undesirable consequences. Protection is vital.
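
One simple way such bias is surfaced in practice is by comparing error rates across groups. The sketch below (using entirely fabricated records, not real data or any specific system) computes the false positive rate of a "high risk" prediction per group; a large gap between groups is a warning sign:

```python
# Illustrative sketch with fabricated example records, not a real dataset:
# compare false positive rates of a "high risk" prediction across groups.
from collections import defaultdict


def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    false_positives = defaultdict(int)  # flagged high risk but did not re-offend
    negatives = defaultdict(int)        # everyone who did not re-offend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                false_positives[group] += 1
    return {group: false_positives[group] / count
            for group, count in negatives.items() if count}


if __name__ == "__main__":
    sample = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]
    # A much higher rate for one group suggests the model treats it unfairly.
    print(false_positive_rates(sample))
```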

 

Protections and Their Challenges

The very possibility of harm from AI raises the question of how people can be protected. It provokes questions about who or what is in control of our individual and collective lives as a society, how much freedom and privacy we have, and what checks, balances, and accountability are in place to stop any of us from being harmed. Significant, complex, and interlinked questions clearly suggest some level of protection is required.

 

One of the earliest attempts to propose rules for AI systems is Asimov’s Three Laws of Robotics, published in the short science fiction story “Runaround” in 1942. It introduces three laws (sketched in code after the list):

  • Prevent harm to humans.
  • Require robots to be obedient to humans unless doing so clashes with the first law.
  • Require robots to protect themselves unless doing so clashes with the other two laws.
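
Purely as an illustration of how such a prioritized rule hierarchy might look in code, here is a minimal sketch. The Action fields and the harm flag are hypothetical simplifications; real systems cannot reduce "harm" to a boolean this easily:

```python
# Illustrative sketch only: Asimov's three laws as ordered constraints.
# The Action fields are hypothetical simplifications for this example.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool
    ordered_by_human: bool
    endangers_robot: bool


def permitted(action: Action) -> bool:
    # First law: an action that harms a human is never permitted.
    if action.harms_human:
        return False
    # Second law: a human order that passes the first law must be obeyed.
    if action.ordered_by_human:
        return True
    # Third law: otherwise the robot avoids endangering itself.
    return not action.endangers_robot


if __name__ == "__main__":
    # Obeying an order takes precedence over self-preservation.
    print(permitted(Action(harms_human=False, ordered_by_human=True,
                           endangers_robot=True)))  # True
```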

 

Are these laws adequate? Beyond any debate on their adequacy or necessity, they again draw attention to the distinction between automated and autonomous technologies.

 

Constraining Autonomy

AI-enhanced robots with any degree of autonomy are part of a more general issue relating to how much autonomy should be given to AI and whether constraints should be imposed. For example, is it desirable to have a weapon system that will act on goals without human intervention but that remains open to human intervention if necessary? Can we trust this? History is full of examples where technology has had negative consequences, intended or otherwise. Social media is, in principle, a good idea because it allows people to connect and share with each other; however, its dark side has exposed many vulnerabilities, including cyberbullying, misinformation, and identity theft, among others.

 

As AI becomes more deeply embedded across our increasingly smart, intelligent, and connected world, the need for constraint becomes more complicated. Take, for example, facial recognition technologies that extend from city centers into residential areas. Although these might help make our streets safer, they also carry the potential to monitor people’s movements and activities. Transparency about data usage is key to trust and acceptance, alongside protections for privacy and security, especially given the increased attack surface for cybersecurity threats. Even smart toys can be hacked, so, really, how private can we be?

 

Governing AI

Although most people recognize that AI can bring transformative benefits to business and society, the potential dark side must be recognized, too, because that is how we can best mitigate and address it. In a recent article, Sundar Pichai, CEO of Alphabet and Google, called for regulating AI but acknowledged the challenge of deciding how to regulate it. Regulation at a national level could leave some nations with strong, enforceable governance mechanisms and others with weak regulations that attract malicious activity.

 

Further, legislation tends to lag behind technology developments, especially ones as rapidly developing as AI. This suggests the need for underlying principles and values to guide what is accepted or permitted. So while Google has its own principles for the “ethical development and use of AI in our research and products,” for example, there are few reasons for others to adopt them.

 

To counter this, Pichai argues that the time for international alignment is now. This calls for a global effort, as exemplified in May 2019, when 42 countries signed up to adopt the OECD Principles on Artificial Intelligence. These comprise “five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation.”

 

However, some argue that it will be near impossible to regulate AI. What if someone or some (political) organization decides to ignore these principles? For example, political agendas can justify AI developments on the grounds of national security, particularly in the military sector. There is also the malicious spread of misinformation and the distortion of reality with deepfakes.

 

Assigning Rights and Responsibilities

So finally, protection must also be viewed from the perspective of the technology itself. First is the issue of identity: a robot might not be human, but it can have a unique presence, unlike inanimate objects. Taking this further, a robot can even gain citizenship, as illustrated when “Sophia” was awarded citizenship of Saudi Arabia in 2017. During the same year, the European Union (EU) Parliament’s Legal Affairs Committee published a controversial paper that suggested creating specific legal rights and responsibilities, distinct from human rights, and encapsulating them under “electronic personalities.”

 

Conclusions

Artificial intelligence is with us and here to stay, and like many aspects of life, it presents a duality: the potential to serve the greater good and the potential to reach new depths of nefarious use. The challenge for those who design, develop, and implement AI systems is to reduce the risk of harm, achieve the delicate balance between human control and autonomy, and find an effective balance between governance structures, standardization, and innovation. Perhaps the onus actually lies with us all to help steer the trajectory of AI toward the economic and ethical values we believe in. In this way, we can move beyond a human-versus-technology narrative toward one of human-technology partnership that builds on our collective, complementary strengths.

About the Author

Sally Eaves is a chief technology officer, practicing professor of fintech, and global strategic advisor consulting on the application of disruptive technologies. Globally recognized as a thought leader in the field, she has won multiple awards and is an international keynote speaker and accomplished author.