Machines Making Moral Judgements

Armaan Khare-Arora
4 min read · Jan 27, 2022
(Credit: Brian J. Matis/Flickr)

Morality & Machines. Technology has been a driving force for rapid change since the Industrial Revolution, but over the past 20 years we have witnessed unprecedented surges of innovation. Seismic shifts in technology have produced a digital landscape that barely resembles that of the turn of the century. Every day there seems to be a new app, a new way to harness artificial intelligence and technology to improve quality of life in one way or another. A few months ago, researchers at the Allen Institute for AI unveiled new technology designed to make moral judgments. The application, Delphi, is just a prototype designed “to help AI systems be more ethically-informed and equity-aware,” but what if there were an app that genuinely answered all moral quandaries? At a glance, this may seem like an immensely beneficial asset: no longer would we find ourselves lost in the moral metropolis. When faced with a profound ethical question, we could type a query, and the answer would come forthwith. Next time we weigh the value of a tasty steak against the disvalue of animal suffering, we would know what to do. Never again would we be paralyzed by moral dilemmas, including the frequently discussed one of pushing a person onto trolley tracks to save five others from harm. We would just type in the query. Unfortunately, it is not that simple; this piece will explore the benefits and drawbacks of using AI to determine morality.

Where Machines Have an Advantage. Let’s face it. Morality has always been an issue for humans. History is littered with instances of genocide and senseless violence. Greed and power have often been the primary motivators of leaders. We are often biased and tribalistic, and we lack the cerebral capacity to weigh multiple scenarios in our heads. Luckily, a well-designed program seems to solve most, if not all, of those issues. First, let’s explore how Delphi and similar future applications work. It begins with the AI being fed a vast number of scenarios. People are then recruited to serve as arbiters of the AI’s answers. Each answer is put to three arbiters, with the majority conclusion used to decide right from wrong. The process is selective, with participants needing to show a particular moral aptitude along with a lack of clear and evident bias. Unlike previous models of “AI” based on logical rules or carefully constructed algorithms, the new generation of machine learning is based on statistical methods. The machine generates a variety of candidate outputs for a given input and then selects the one that is statistically most likely. In this context, the behavior loosely mimics that of a rational human being: analyzing the input in light of past experience and then choosing the output or decision that gives the maximum benefit (or minimal loss). Furthermore, like human beings, systems that rely on data generally improve as more samples are collected, much as a person gets better with experience. Many of these systems learn not only from getting a response right but also from getting it wrong. Like a robot learning to traverse a maze or a living room, every bump or wrong turn is remembered as a suboptimal decision, so the system improves over time as it gains more “experience.”
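To make that pipeline concrete, here is a minimal sketch in Python. It is not the actual Delphi code, and the scenarios, labels, and probabilities are invented for illustration; it only shows the two ideas described above: three arbiters’ labels are collapsed into a majority verdict for training, and at inference time the judgment the model scores as most probable is returned.

```python
from collections import Counter

# Toy illustration (not the real Delphi pipeline): three human arbiters
# label each scenario, and the majority label becomes the training target.
def majority_label(arbiter_labels):
    """Return the label chosen by most of the three arbiters."""
    label, _count = Counter(arbiter_labels).most_common(1)[0]
    return label

# Hypothetical scenarios with labels from three arbiters.
raw_data = [
    ("ignoring a phone call from a friend", ["it's rude", "it's rude", "it's okay"]),
    ("helping a stranger carry groceries", ["it's good", "it's good", "it's good"]),
]

training_set = [(text, majority_label(labels)) for text, labels in raw_data]

# A statistical model then scores each candidate judgment for a new scenario
# and returns the one it estimates to be most probable.
def most_likely_judgment(candidate_scores):
    """candidate_scores: dict mapping a candidate judgment to its model probability."""
    return max(candidate_scores, key=candidate_scores.get)

print(training_set)
print(most_likely_judgment({"it's okay": 0.23, "it's rude": 0.71, "it's good": 0.06}))
```

The point of the sketch is simply that nothing in this loop reasons about right and wrong; it aggregates human verdicts and picks the statistically favored answer.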

Where Humans Have an Advantage. A critical part of evaluating the morality of an action is determining an ethical framework by which to judge the act. One commonly used framework is utilitarianism, which asks which action would produce the greatest benefit for the greatest number of people. A remarkably flexible framework, it is often used successfully to present both sides of an argument. However, determining what “utility” to maximize is an inherently human decision. What the “good” is in any particular situation is entirely subjective and differs depending on the quandary presented. For example, when deciding how to distribute vaccines, how would a program determine the “good”? Would it simply maximize the number of people who receive a vaccine? Or would it identify those most in need of one and ensure they are prioritized? Furthermore, most ethicists, when asked to judge the ethicality of an act, will first request context. Context is incredibly important in many moral situations, and when queries are fed into an algorithm such as Delphi, this imperative situational awareness is foregone. Thus, while it is certainly possible for a machine to learn from humans and generally produce ethical answers to simple moral questions, when tasked with responding to a complex issue that requires nuanced contextual information, or where the opportunity costs may have multi-faceted political, economic, or social impacts, humans still seem to have the upper hand.
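To see why the choice of objective matters, consider a toy sketch with invented numbers. Two allocation plans hand out the same number of doses, so a program told to maximize doses distributed cannot tell them apart, while a program told to maximize severe cases averted strongly prefers one of them. Deciding which of those objectives counts as “the good” is the judgment a human has to supply.

```python
# Toy sketch with hypothetical numbers: the same allocation problem scored
# under two different notions of "utility" that a human must choose between.
groups = [
    # (name, population, risk of severe illness if unvaccinated)
    ("healthcare workers", 1_000, 0.20),
    ("general public",     9_000, 0.02),
]

def doses_distributed(allocation):
    """Utility A: maximize the sheer number of people vaccinated."""
    return sum(allocation.values())

def severe_cases_averted(allocation):
    """Utility B: weight doses by each group's risk of severe illness."""
    risk = {name: r for name, _pop, r in groups}
    return sum(doses * risk[name] for name, doses in allocation.items())

# Two ways of allocating the same 2,000 doses.
plan_broad    = {"healthcare workers": 200,   "general public": 1_800}
plan_targeted = {"healthcare workers": 1_000, "general public": 1_000}

for plan_name, plan in [("broad", plan_broad), ("targeted", plan_targeted)]:
    print(plan_name, doses_distributed(plan), round(severe_cases_averted(plan), 1))
# Utility A scores both plans identically; Utility B favors the targeted plan.
```

The code can rank options under either objective, but it cannot tell us which objective to use; that choice, like the contextual judgment ethicists insist on, remains a human one.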
