Moral dilemmas—balancing one right action against another—are a ubiquitous feature of 21st-century life. While they are inevitable, they are not unique to our present. The problem of meeting conflicting needs was as important in the lives of our ancestors as it is for us today.
Many psychologists and sociologists argue that natural selection has shaped the cognitive systems in the human brain to regulate social interactions. But how do we arrive at appropriate judgments, choices, and actions when faced with a moral dilemma—a situation that activates conflicting intuitions about right and wrong?
An influential view claims that certain dilemmas will always confound us because our minds cannot reach a solution by weighing conflicting moral values against each other. But new research from the University of California, Santa Barbara and the Universidad del Desarrollo in Santiago, Chile, shows that we humans have an unconscious cognitive system that does just that.
A team of researchers including Leda Cosmides of the University of California, Santa Barbara has found the first evidence of a system well-designed to make trade-offs between competing moral values. The team’s findings are published in Proceedings of the National Academy of Sciences.
As members of a cooperative, group-living species, humans regularly encounter situations in which fully satisfying all of their many responsibilities is literally impossible. A typical adult, for example, may have countless responsibilities: to children, elderly parents, a partner or spouse, friends, allies, and community members. “In many of these situations, partial performance of each duty — a trade-off — would promote fitness better than completely neglecting one duty to fully satisfy another,” said Cosmides, a professor of psychology and co-director of UC Santa Barbara’s Center for Evolutionary Psychology. “The ability to make intuitive judgments that strike a balance between conflicting moral obligations may thus have been favored by selection.”
According to Ricardo Guzmán, a professor of behavioral economics at the Center for Social Complexity Studies at the Universidad del Desarrollo and lead author of the paper, the function of this moral trade-off system is to weigh competing ethical considerations and calculate which of the available options for resolving a dilemma is the most morally “right” one. Guided by evolutionary considerations and by analyses of similar trade-offs from rational choice theory, the researchers developed and tested a model of how a system designed to perform this function should work.
According to the research team, which also includes María Teresa Barbato of the Universidad del Desarrollo and Daniel Sznycer of the Oklahoma Center for Evolutionary Analysis at Oklahoma State University, their new cognitive model makes unique predictions that had never before been tested, and which contradict the predictions of the influential dual-process model of moral judgment. According to that model, a sacrificial dilemma – one in which some people must be harmed in order to maximize the number of lives saved – creates an irreconcilable struggle between emotion and reason. Emotion issues an internal command – do no harm – that contradicts the conclusion reached by reason (that it is necessary to sacrifice some lives to save many more). Because these commands are “non-negotiable,” striking a balance between the competing moral values should be impossible.
But the researchers’ model predicts the opposite: they propose a system capable of making such trade-offs, and of doing so in an optimal way. “As in previous studies, we used a sacrificial moral dilemma,” explained Guzmán, “but unlike past studies, the menu of options for solving this dilemma included trade-off solutions. To test several key predictions, we adapted a rigorous method from rational choice theory to the study of moral judgment.” Rational choice theory is used in economics to model how people make trade-offs between scarce goods. It holds that people choose the best option available to them, given their preferences over those goods.
“Our method assesses whether a set of judgments conforms to the Generalized Axiom of Revealed Preference (GARP), a demanding standard of rationality,” Guzmán continued. “GARP-respecting choices allow for stronger inferences than other tests of moral judgment. If intuitive moral judgments respect GARP, the best explanation is that they were made by a cognitive system that works by constructing — and maximizing — a correctness function.”
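GARP comes from consumer theory: given the prices a chooser faced and the bundles they picked, one bundle is "revealed preferred" to another if the second was affordable when the first was chosen, and GARP forbids cycles in that preference. As a rough illustration only – the paper adapts this idea to moral judgments, and its exact procedure differs – here is a minimal sketch of the standard GARP consistency check on price/choice data:

```python
import numpy as np

def violates_garp(prices, choices):
    """Check a set of choices for violations of the Generalized Axiom
    of Revealed Preference (GARP).

    prices[i]  -- the price vector the chooser faced in situation i
    choices[i] -- the bundle the chooser picked in situation i
    Returns True if any revealed-preference cycle violates GARP.
    """
    prices = np.asarray(prices, dtype=float)
    choices = np.asarray(choices, dtype=float)
    n = len(prices)
    cost = prices @ choices.T          # cost[i, j] = p_i . x_j
    spent = np.diag(cost)              # spent[i]   = p_i . x_i
    # x_i is directly revealed preferred to x_j if x_j was affordable
    # when x_i was chosen: p_i . x_i >= p_i . x_j
    R = spent[:, None] >= cost
    # Transitive closure of the relation (Floyd-Warshall, boolean form).
    for k in range(n):
        R = R | (R[:, [k]] & R[[k], :])
    # Violation: x_i is revealed preferred to x_j, yet x_i was strictly
    # cheaper than x_j at the prices where x_j was chosen:
    # p_j . x_j > p_j . x_i
    strict = spent[:, None] > cost     # strict[j, i]: p_j.x_j > p_j.x_i
    return bool((R & strict.T).any())
```

Choosing (0, 2) at prices (1, 2) when (2, 0) was strictly cheaper, then (2, 0) at prices (2, 1) when (0, 2) was strictly cheaper, forms a cycle, so `violates_garp([[1, 2], [2, 1]], [[0, 2], [2, 0]])` returns True; swapping the two choices removes the cycle and returns False.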
The correctness function reflects a person’s personal preferences, Cosmides noted. “In other words, how your mind weighs competing moral goods,” she said.
Research data collected from more than 1,700 subjects showed that people are fully capable of making moral trade-offs while meeting a strict standard of rationality. Subjects were presented with a sacrificial dilemma – similar to those faced by US President Harry Truman and British Prime Minister Winston Churchill during World War II – and asked which solution seemed the most moral. If bombing cities would end the war sooner and ultimately result in fewer deaths overall – each civilian sacrificed sparing the lives of more soldiers (hence the term “sacrificial dilemma”) – does harming innocent bystanders to save more lives in total seem like the morally right thing to do? And if so, how many civilians to save how many more lives?
Each subject responded to 21 different scenarios that varied the cost of saving a life. For some of these scenarios, the vast majority of subjects judged trade-off solutions to be the most morally correct: they chose options that harm some – but not all – innocent bystanders in order to save more – but not the most – lives. These solutions are genuine trade-offs: they strike a balance between the duty to avoid causing lethal harm and the duty to save lives.
As predicted, the judgments people made tracked the varying cost of saving a life: the individual-level data showed that most subjects’ moral judgments were rational – they respected GARP. Yet these were intuitive judgments: deliberative reasoning cannot explain the GARP-respecting trade-offs.
“This,” Cosmides said, “is an empirical signature of a cognitive system that operates by constructing — and maximizing — a correctness function. People consistently chose the options they believed to be the most correct, given how they valued the lives of civilians and soldiers.”
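The idea of choosing by maximizing a correctness function can be made concrete with a deliberately toy sketch. Everything below – the option menu, the weights, and the linear form of the function – is invented for illustration and is not the paper’s actual model or stimuli; it only shows what "picking the option that maximizes a weighted balance of competing moral goods" means computationally:

```python
def most_correct_option(options, w_save=1.0, w_harm=3.0):
    """Return the option that maximizes a toy correctness function.

    options -- list of (civilians_harmed, soldiers_saved) tuples
    w_harm > w_save encodes a stronger duty not to harm than to rescue.
    """
    def correctness(opt):
        harmed, saved = opt
        # Linear trade-off between lives saved and innocents harmed
        # (a hypothetical form chosen purely for illustration).
        return w_save * saved - w_harm * harmed
    return max(options, key=correctness)

# A menu that includes trade-off solutions, not just the extremes of
# "harm no one" and "save the maximum number of lives at any cost".
menu = [(0, 0), (10, 50), (30, 120), (100, 200)]
best = most_correct_option(menu)   # -> (30, 120), an intermediate option
```

With these weights the intermediate option (30, 120) scores highest, mirroring the finding that subjects often judged partial-harm, partial-rescue options most correct; raising `w_harm` shifts the maximum toward (0, 0), i.e., toward "do no harm."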
Moral judgments that strike compromises are not failures to pursue the right course of action, she continued. Rather, they show that the mind balances multiple obligations, deftly handling an inescapable feature of the human condition.
Ricardo Andrés Guzmán et al., A moral trade-off system produces intuitive judgments that are rational, coherent, and strike a balance between conflicting moral values, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2214005119
Citation: Researchers demonstrate human cognitive system designed to enable moral compromise decisions (2022, October 10) Retrieved October 10, 2022, from https://phys.org/news/2022-10-human-cognitive-enable-moral-tradeoff.html
This document is subject to copyright. Except in good faith for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.