Moral questions have always been tricky, but studying them scientifically nowadays brings a host of new problems. An edition of the BBC2 science programme Horizon aired on the 7th of September took a look at what research into the psychology of morality is revealing. The episode kicked off with the Leverhulme Trust-funded work of computer scientists at UCL. Led by Prof Mel Slater, with Research Fellow Dr Xueni Pan, the team uses virtual reality to assess how people respond when faced with acting upon a moral dilemma.
One of the hardest aspects of studying psychology is getting good data: the more tightly you can control a study, the less realistic it becomes, and vice versa. However, in previous studies Prof Slater found that individuals respond to difficult situations in the virtual world in the same way as they would in the analogue world – showing changes in perspiration, heart rate and feelings of stress that mirror their real-life responses. For psychological research, VR thus offers a way out of the trade-off between control and realism: a perfectly predictable, yet immersive, environment.
Immersed in the CAVE
The Virtual Environments group within the Department of Computer Science use a special room called the CAVE – a “Cave Automatic Virtual Environment” – to draw their subjects into the scene they have created. Within the CAVE, projections onto three walls and the floor provide surrounding visuals, while a head-tracking system adjusts the environment to match the movements of the participant. Although the representations of people used by Prof Slater in this particular work are deliberately simple rather than photo-realistic, their behaviour is carefully scripted, often using gestures captured from real actors, so that they move and act believably.
With these unique capabilities, an Immersive Virtual Environment (IVE) such as the CAVE can confront participants with moral dilemmas that would be far too distressing to stage in real life. A classic question in the field is the ‘trolley problem’: a runaway, empty trolley is hurtling down a track and will run over and kill five people unless something intervenes. You have the option to flip a switch and divert it onto another track, where it will kill only one person who would otherwise have been unharmed. What do you do?
There is no right answer here, but most people say that they would flip the switch. Would they, though? To find out, Prof Slater and Dr Pan set up an analogous situation in virtual reality. They placed participants in the role of a lift operator in a gallery, taking visitors between levels. Once participants had settled into this role, a gunman began firing on a floor with five people on it, and the operator was faced with a decision: remove him from that floor, thereby endangering one person on another floor, or do nothing.
The majority of participants took the utilitarian action of saving five people at the cost of one. Interestingly, 89% of participants took this action when immersed in the more realistic environment of the CAVE, while only 67% did so when the scenario was presented on a standard, non-immersive desktop machine. Since the CAVE gives a more realistic experience and provokes a stronger emotional response, as measured by physiological cues, this hints that emotion may be a key factor in taking action in the face of a moral dilemma. Philosophers have been debating whether emotions should feature in moral decisions for years, and this study has provided some evidence that emotions may play an important role in people’s behaviour in such situations.
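For readers curious how two proportions like 89% and 67% are typically compared, here is a minimal sketch of a two-proportion z-test. Note that the group sizes are not reported here, so the figure of 18 participants per condition used below is purely a placeholder assumption for illustration.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    # Pooled proportion under the null hypothesis of no difference
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative only: 89% (CAVE) vs 67% (desktop); n = 18 per group is assumed.
z, p = two_proportion_z(0.89, 18, 0.67, 18)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With small hypothetical groups like these the difference would not reach conventional significance, which is a reminder that the raw percentages alone cannot tell us how robust the finding is.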
But there were still more revelations to come: after taking part in this experiment, participants were more likely than members of the general population to take a strictly utilitarian view of other moral dilemmas. They were asked to judge another trolley scenario like the one above, differing in that to save the five people you must actively push an innocent bystander – made heavy enough to stop the trolley by a large metal backpack or their own bulk – onto the tracks. Other studies have found that only 13% of people say they would do this – even though the outcomes are the same, with one person dying to save five – but after this practical experience, 33% of experimental subjects answered that they would push the man onto the tracks in order to save lives.
Dr Xueni Pan, co-author of the paper reporting this finding, said: ‘That’s when we realized that not only was this a powerful testing tool, but it was also suitable for training.’ There are many occasions when hard decisions need to be taken quickly. Sharpening people’s moral reflexes in practice scenarios could help police officers and emergency services staff make crucial decisions in split seconds – making the right call, and making it quickly enough.
This project is particularly interesting because it crosses many traditional subject boundaries, uniting computer science, philosophy and psychology. To support these other aspects of the work, the researchers consult with Marc Hauser, an evolutionary biologist and author of “Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong” (HarperCollins, 2007). Funding was received from the Leverhulme Trust, which specialises in funding projects that cross academic divisions, under the project title “The exploitation of immersive virtual reality for the study of moral judgements.”
Xueni Pan and Mel Slater (2011) “Confronting a Moral Dilemma in Virtual Reality: A Pilot Study in Human-Computer Interaction”, The 25th BCS Conference on Human-Computer Interaction (HCI), Newcastle upon Tyne, UK.
The exploitation of immersive virtual reality for the study of moral judgements – the group’s project webpage.
What if? A selection of moral dilemmas, from the BBC