Sunday, June 3, 2018: “Questions of Value” – Pentecost-2

St. Mark’s Adult Forum

3 June 2018

 

Questions of Value

 

‘The really fundamental questions of our lives are not questions of fact or finance but questions of value.’  Patrick Grim, Ph.D.

 

What Science Has to Say About Morality*

 

Questions of value, unlike questions of fact, are all too often relegated to philosophy and religion.  Questions of fact – the what, when, where and how – occupy science.  In fact, science would prefer to avoid value, meaning and purpose, as these only muddy the waters.  Bertrand Russell, the famed British logician and philosopher, believed that all knowledge is limited to science and that “science had nothing to say about values.”  If so, science has nothing to say about morality.

 

Morality extends beyond abstract theories of the good, right and just.  In practice, morality is more about norms of appropriate behavior.  It’s about doing the right thing – often the harder thing – once you’ve figured out what that is.  Morality, therefore, is clearly reflected in our behavior.  And behavior is certainly open to scientific inquiry.

 

Two basic methods are used to study behavior.  Ethology is the scientific study of animal behavior, usually with a focus on behavior under natural conditions, and viewing behavior as an evolutionarily adaptive trait.  Ethology is done in the field.  Behavior is also studied in the laboratory under controlled conditions.  Think of experiments with rats running a maze.  But the big breakthrough came with neuroimaging – techniques commonly referred to as ‘brain scans’. 

For many researchers, especially cognitive neuropsychologists and neurobiologists, functional magnetic resonance imaging (fMRI) has become the most popular imaging technology, allowing both functional and structural neuroimaging at the same time.  Functional imaging provides images of the brain as subjects complete tasks, such as solving math problems, reading, or responding to stimuli such as sounds or flashing lights.  The brain areas engaged by each task ‘light up’, giving researchers a 3-D view of the regions involved in each type of task.

Now, having laid this groundwork, consider posing moral dilemmas to subjects undergoing fMRI.  What might be learned?

Reasoning vs Intuition

 

Question: When we make a decision regarding morality, is it mostly the outcome of moral reasoning or of moral intuition?  Do we mostly ‘think’ or ‘feel’ our way to deciding what is right?

 

Moral reasoning – deliberating about the correctness of an ethical decision – activates the cognitive regions of the brain, specifically the prefrontal cortex (PFC), enabling perspective taking, Theory of Mind, and distinguishing between outcome and intent.  For the uninitiated, Theory of Mind involves not only visualizing a complex scene from the point of view of another person, but also inferring the ethical motivation behind that person’s act.  Moral reasoning takes effort.

 

The cognitive processes we bring to moral reasoning aren’t perfect; there are imbalances and asymmetries.  For example, doing harm seems worse than allowing it: for equivalent outcomes, we typically judge commission more harshly than omission, and we have to call on the PFC to judge them as equal.  Another cognitive skew: we’re better at detecting violations of social contracts that have malevolent rather than benevolent consequences.  For example, in one scenario a worker proposes to his boss: “If we do this, we will reap big profits, but harm the environment.”   In a second scenario: “If we do this, we will reap big profits, and benefit the environment.”  In both cases the boss answers: “I don’t care about the environment.  Just do it!”  In the first scenario, 85% of subjects stated that the boss harmed the environment in order to increase profits; in the second scenario only 23% said that the boss helped the environment in order to increase profits.

 

Many moral philosophers not only believe that moral judgment is anchored in reason, but also that it should be.  The problem with this conclusion is that people often haven’t a clue why they’ve made some judgment, yet fervently believe it’s correct.  That’s because moral decision making is often implicit, intuitive, and anchored in emotion.  It’s even been suggested that moral reasoning is what we use to convince everyone, including ourselves, that we’re making sense.  That is, we employ post-hoc rationalization to justify our ‘gut’ decisions and mask our innate biases.

 

When confronted with moral decisions, we don’t just activate the prefrontal cortex.  We also activate the emotional regions of the brain – the amygdala and others.  When activation is strong enough, we also engage the endocrine system and the sympathetic nervous system and feel arousal.  During arousal, our moral assessments are made faster, the antithesis of grinding cognitive reasoning.  The patterns of activation in these emotional regions predict moral decisions better than do patterns in the cognitive PFC.  And this matches behavior – people punish to the extent they feel angered by someone acting unethically.  Thus emotion and moral intuition are not some primordial artifacts of evolution that interfere with the human specialty of moral reasoning.  Instead, they anchor some of the few moral judgments that most humans agree upon.

 

Question: Consider the relative importance of reasoning and intuition: What circumstances bias toward emphasizing one over the other?  Can each produce a different outcome?

 

Consider the classic Trolley Problem.  A runaway trolley’s brakes have failed; it’s hurtling down the track and will hit and kill five people.  Is it morally correct to do something that saves the five but kills someone else in the process?

 

Subjects were neuroimaged while pondering two scenarios.  Scenario one: The trolley approaches; five people will die!  Would you pull a lever that diverts the trolley onto a different track, saving the five, but where it will hit and kill someone else?  Scenario two: Same circumstance.  Would you push a bystander onto the track to stop the trolley, saving the five?  Imaging revealed that when contemplating pulling the lever, PFC activity predominates – the detached, cerebral profile of moral reasoning.  When contemplating pushing a bystander to their death, the limbic system’s amygdala kicks in – the visceral profile of moral intuition.

Consistently, 60 to 70 percent of subjects opted for the utilitarian solution: pull the lever, killing one to save five.  Despite the same outcome, only 30 percent were willing to push the bystander with their own hands.  Intentionality seems to be the key.  By pulling the lever, five people are saved because the trolley is diverted to another track.  The killing of the individual is an unintentional, albeit unfortunate, side effect; the five would still have been saved if that individual hadn’t been standing on the track.  In the pushing scenario the five are saved only because a bystander is killed – his death is the means, not a side effect – and that intentionality feels intuitively wrong.  Thus a relatively minor variable determines whether people emphasize moral reasoning or intuition, engage different brain circuits in the process, and produce radically different decisions.

 

Context

 

Context influences morality in significant ways.  Consider proximity.  Time and distance make a difference.  For example, imagine you’re walking by a river in your hometown.  You see that a child has fallen in.  Most people feel morally obliged to jump in and save the child, even if the water destroys their $500 suit.  Now, suppose a friend in Somalia calls and tells you about a poor child there who will die without $500 worth of medical care.  Will you send money?  Typically not.  Another example: a hurricane devastates the U.S. gulf coast.  You hurry off a check to the Red Cross.  But when the Red Cross solicits a donation to address future disasters, you balk.  We engage in moral discounting – the here and now carries more weight than over there or next year.  The same holds for familiarity; we’re kinder and gentler with Us, not so much with Them.

 

Question: How has the Internet affected the ‘here and now’?

 

Moral context dependency can revolve around language, culture and religion.  Consider the Golden Rule.  For all the power of its simplicity, the Golden Rule does not account for the fact that people differ as to what they would or wouldn’t want done to them.  Is this a flaw in the Golden Rule’s universality?  If we focus solely on do’s and don’ts, the answer is yes.  But we can overcome this criticism by focusing on reciprocity – giving concern and legitimacy to the needs and desires of people in circumstances where we would want the same done to us.

 

The context dependency of morality is revealed in another crucial way.  It’s easy to condemn the remorseless killer, rapist or thief.  But far more of humanity’s worst behaviors are due to a different kind of person, namely Us.  Think how often we declare: “Of course it is wrong to do X…but this time is different.”  We activate different brain circuits when contemplating our own moral failings versus those of others.  Going easy on ourselves also reflects a key cognitive fact: we judge ourselves by our internal motives and all others by their external actions.  Thus, on a cognitive level there is no inconsistency or hypocrisy; our motives mitigate our actions. 

 

This might help explain why so many of us readily lie and cheat.  One straightforward study shows our propensity to do so.  Over 2,500 college students from twenty-three countries were asked to roll a fair die – in private – with different results yielding different monetary rewards (e.g. $1 for a one…$6 for a six).  Two key points: a nominal yet tempting reward, and an opportunity to cheat.  Now, anybody familiar with statistics knows: given chance and enough rolls, if everyone were honest, each number on the die should be reported about one sixth of the time – that is, reports should be evenly distributed.  If everyone always lied for maximum gain, every report would be the highest-paying number – six.  There was lots of lying.  Reports were skewed toward the higher rewards.  Given the opportunity to roll the die twice – with only the first roll counting – subjects more often than not reported the higher roll of the two.  You can practically hear the rationalizing.  “Dang, my first roll was a one, my second a four.  Rolls are random; I could have just as readily rolled a four as a one, so…let’s just say I rolled a four.  That’s not really cheating.  Besides, we’re talking about a measly few bucks.”
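
To see how this kind of cheating shows up in aggregate data, here is a minimal sketch, written in Python purely for illustration and not taken from the study itself.  It compares an honest reporter with one who quietly reports the higher of two rolls; the $1-per-pip payout is borrowed from the example above, and the strategy names and simulated counts are assumptions, not the study’s actual data.

import random
from collections import Counter

N = 100_000  # number of simulated rolls per strategy (illustrative only)

def honest_report():
    # Roll once and report it truthfully.
    return random.randint(1, 6)

def higher_of_two_report():
    # Roll twice and report the higher roll, even though only the first 'counts'.
    return max(random.randint(1, 6), random.randint(1, 6))

for name, strategy in [("honest", honest_report), ("higher of two", higher_of_two_report)]:
    reports = [strategy() for _ in range(N)]
    counts = Counter(reports)
    average_payout = sum(reports) / N  # $1 per pip, as in the example above
    print(f"{name}: average payout ${average_payout:.2f}")
    for face in range(1, 7):
        print(f"  reported {face}: {counts[face] / N:.1%}")

Under honest reporting each face is reported about one sixth of the time and the average payout is $3.50.  Under the ‘higher of two’ strategy a six is reported about 31% of the time (11 chances in 36) and the average payout climbs to roughly $4.47.  No individual roller can be caught, since every single report is plausible, but the skew in the aggregate distribution gives the lying away.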

 

Moral Fitness

 

Question: What about the subjects who didn’t lie or cheat?  Is resisting temptation at every turn an outcome of ‘will’ or an act of ‘grace’?

 

In those subjects who were always honest, both the cognitive and emotional regions of the brain were relatively ‘quiet’ when the chance to cheat arose.  There was no conflict.  There was no working hard to do the right thing.  Ask them why.  “I don’t know…I just don’t cheat.”

 

This is not to suggest that such honesty can only be the outcome of implicit automaticity.  We can – and do – think and struggle and employ cognitive control to produce similar results.  But when under the gun, with repeated opportunities to cheat in rapid succession, the cognitive load becomes too great to sustain; automaticity is required instead.  Where might it come from?

 

I offer an analogy.  Physical exercise benefits both our body and mind.  Over time we become physically and mentally fit.  Likewise, cultivating virtue benefits our soul, helping us become morally fit.  The goal is to be able to reliably distinguish between right and wrong, good and evil, just and unjust – and act accordingly.  We must be prepared to act morally.  Hence one’s perceptions, intuitions, opinions, and reasoning all must be continually put to the test before one’s conclusions can be trusted.  Moral fitness doesn’t come easy; it takes discipline.  The day will come when you’ll be able to say: “I don’t cheat; that’s not who I am.”  The ‘will’ may yield to ‘grace’.

 

In conclusion, it turns out that science has a lot to say about morality, and about values in general – far more than could be put up for consideration here.  Science suggests the roots of human morality are older than our cultural institutions, older than our laws, older than religion.  Our behavior, for good or ill, is informed and modulated by our biology, itself modulated by context.  I leave you with two final thoughts.  First: You don’t have to choose between science and compassion.  Second: It’s complicated.

 

 

 

 

 

 

* Content was borrowed and adapted by Wayne Harper for our use from: Behave: The Biology of Humans at Our Best and Worst, Robert M. Sapolsky, Penguin Press, 2017.