Every day, moral principles govern human decisions small and large, from how to treat one’s neighbor to how to design ethical technology. But how do we acquire this important sense of what is wrong and what is right?
Previous theorists proposed two main options, according to Shaun Nichols, professor in the Sage School of Philosophy, College of Arts and Sciences: “One theory is we’re born with a moral grammar that tells us right and wrong,” said Nichols, an expert on moral psychology. “The other theory is that morality is based on emotions, rather than rational thought.”
In his new book, “Rational Rules: Towards a Theory of Moral Learning,” Nichols argues that, contrary to previous theories, we can explain many of the features of moral systems and how humans form them in terms of rational learning from evidence. In other words, he said, a moral sense is neither hardwired nor totally emotional.
“My book argues that our judgments are driven by a system that is learned in rational ways,” Nichols said. “It’s acquired in a rational way, but also it’s not fixed into your brain – you could learn another system if you had different evidence.”
In research with collaborators in psychology and human development (including Tamar Kushnir, associate professor in child development in the College of Human Ecology), Nichols has been developing such an alternative explanation for the acquisition of moral systems. The inspiration comes from an unlikely source: statistical learning.
“Recent cognitive science has seen the ascendance of accounts which draw on statistical learning to explain how we end up with the representations we have,” Nichols wrote. “I’ve come to think that statistical learning provides a promising avenue for answering central questions about how we come to have the moral representations we do.”
Drawing on statistics-based work in cognitive sciences, Nichols proposes that people learn morality by paying attention to rational aspects of a situation.
The thought experiment that drew him into studying this question involves two dice: one with four sides and the other with 10. Imagine that a friend rolls one of the dice at random several times and comes up with the results: three, two, four, two and one.
“Do you think it’s the four-sided die or the 10-sided die?” Nichols said. “You think it’s the four-sided die, because it would be a suspicious coincidence if it were the 10-sided die and you only rolled one to four.”
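The dice inference can be made precise with Bayes' rule. The following is a minimal sketch of that calculation; the rolls and the 50/50 prior come from the thought experiment, while the code itself is standard probability, not anything from Nichols' book:

```python
# "Suspicious coincidence" reasoning for the two-dice example,
# computed with Bayes' rule.

rolls = [3, 2, 4, 2, 1]          # observed rolls, all of them <= 4
prior = {"d4": 0.5, "d10": 0.5}  # the die was chosen at random

# Likelihood of the data under each hypothesis: each roll of a fair
# n-sided die has probability 1/n, so five rolls have probability
# (1/n)^5. Both dice COULD have produced these rolls, but the
# four-sided die makes them far less of a coincidence.
likelihood = {
    "d4": (1 / 4) ** len(rolls),
    "d10": (1 / 10) ** len(rolls),
}

# Posterior via Bayes' rule: prior * likelihood, normalized.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # the four-sided die comes out around 99% likely
```

The smaller die wins because its hypothesis space is tighter: every outcome it allows was actually observed, whereas the 10-sided die would have had to avoid rolling five through 10 five times in a row.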
Simple principles like this can explain how children learn complicated rules of moral systems from the evidence they observe, Nichols said. For example, he and other theorists have puzzled over why children think it’s wrong for people to litter but not wrong for people to leave litter that’s already lying on the ground.
“It’s not like parents explicitly tell kids, ‘You shouldn’t litter yourself but you don’t need to pick up litter you see,’” Nichols said. “But it’s enough that parents show disapproval exclusively to acts of littering and not to people who leave litter on the ground. If the rule about littering also applied to people leaving litter on the ground, it would be a suspicious coincidence that this is never mentioned. So there is subtle evidence in the environment that children can use to make these inferences.”
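The littering inference follows the same size-principle logic as the dice. Here is a hedged sketch; the modeling choices are mine for illustration, not Nichols': I assume that under the broader rule ("don't litter, and pick up litter you see"), each observed episode of parental disapproval would be equally likely to target either kind of violation, while under the narrow rule ("don't litter") disapproval can only target acts of littering:

```python
# The "suspicious coincidence" argument for the littering rule.
# Hypothetical setup: a child has seen n_observations episodes of
# parental disapproval, every one aimed at an act of littering and
# none at leaving litter on the ground.

n_observations = 10

prior = {"narrow": 0.5, "broad": 0.5}
likelihood = {
    # The narrow rule predicts exactly this pattern.
    "narrow": 1.0 ** n_observations,
    # Under the broad rule (with the 50/50 assumption above), never
    # once seeing litter-leaving condemned is a coincidence with
    # probability (1/2)^n.
    "broad": 0.5 ** n_observations,
}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # the narrow rule dominates as observations accumulate
```

On these assumptions, ten one-sided observations already push the posterior on the narrow rule above 99 percent, which is one way to cash out the "subtle evidence in the environment" Nichols describes.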
The research for the book was funded by the Office of Naval Research, which wanted to understand how people make moral judgments in order to build autonomous vehicles that implement similar principles.
“It was part of a mission to figure out how to build a more moral machine,” Nichols said, adding that the vehicles in question perform noncombat tasks, such as defusing bombs. “They were concerned about soldiers trusting the robots they work with. They thought they wouldn’t trust them unless they thought the robots were behaving in ways they regard as ethical.”
Nichols thinks his research presents a hopeful picture of moral learning.
“The way we learn moral systems is rational, and it’s flexible,” he said. “If we think, as I do, that our moral rules should be inclusive – that everyone should be included in the moral community – we know how to teach kids that. You just give them different evidence.”