Sunday, June 26, 2016: “Questions of Value”

 

‘The really fundamental questions of our lives are not questions of fact or finance but questions of value.’  Patrick Grim, Ph.D.

 

 

What Ought We To Do?

 

This question – the driving force behind ethics – has been with us for millennia.  It’s tied to the bigger question of how we should live our lives.  Understanding right from wrong is essential if our goal is to live a flourishing life.  But before we attempt to answer the what, we need to address the why.

 

Question: Why do we have any moral obligations at all? 

 

Is the answer to be found in religious injunctions, in ‘philosophy’, in social/cultural norms, in jurisprudence, or in our evolutionary origins as a social species?  Probably; where else might one look?  But opinions differ.

 

  • Moral relativists claim we have no universal moral obligations.  They believe ‘I have my moral beliefs and convictions’ and ‘you have your moral beliefs and convictions.’  Nobody has the right to impose their morality upon anybody else.  Obligations are seen as impositions.
  • Moral absolutists believe differently.  They claim moral obligations do exist and apply to everybody.  Some are fairly obvious: we have a moral obligation to care for and nurture our offspring; to tell the truth; to keep promises; to respect life and property; and to do no harm.  Christians have the moral obligation to love one another.

 

Question: Can you think of other moral obligations?

 

So, why do we have moral obligations?  The short answer: to help us survive and thrive as individuals and as a society.  The long answer considers agency, free will, duty, justice, virtue, conscience, pleasure, and pain; it is best left to the experts, who, by the way, also disagree.

 

Question: What would life be like if we had no moral obligations?

 

Would a moral relativist find it acceptable to be lied to?  To be used by others as a means to their ends?  To live in a world where ‘anything goes’?

 

Let’s move on to the what: the challenge of figuring out how to act morally.  We begin with moral intuitions, which lead the way toward moral principles and, beyond those, toward moral/ethical theories.  There are four main ethical theories.  Each has much to offer; no single theory dominates.

 

  • Utilitarianism.  An action is moral/ethical if it results in good consequences.
  • Deontology.  An action is moral/ethical if the motive behind it is good.
  • Divine Command.  An action is moral/ethical if it is in harmony with God’s commands.
  • Virtue Ethics.  An action is moral/ethical if performed by a person possessing all the virtues.

 

Question: Is ethics basically a matter of setting and following rules, or something else?

 

Indeed, many people tend to think of ethics as a list of dos and don’ts, much in the style of the Ten Commandments.  And recall Kant’s Categorical Imperative, thought (at least by Kant) to be an infallible, overarching rule for ethical action.  But can a workable set of rules be developed to help us avoid moral confusion and resolve moral dilemmas?

 

It depends.  In our mundane day-to-day activities it’s possible to get by with a basic set of moral ‘conventions’.  Some folks might rely on ‘WWJD’ (What Would Jesus Do) or the ‘Golden Rule’.  One self-help book exhorts us to do no harm, make things better, respect others, be fair, and be loving.  All these ‘rules’ are helpful, but far from infallible.

 

Indeed, the easy problems have already been solved.  It’s the difficult, complex moral problems that resist rules, leaving one to ask ‘what ought I to do?’

 

In any situation, there are a number of demands on you: your prima facie obligations.  Your obligation proper is what you should do, all things considered.  The metaphor for decision in such cases is ‘weighing’.  Is there a rule for how to ‘weigh’ competing prima facie obligations?  Some cases are clear.  Should I lie to save a life?  You have a prima facie obligation both to tell the truth and to save a life.  Lying is always prima facie wrong.  But in this case your obligation to save a life ‘outweighs’ your obligation to tell the truth.

 

To complicate matters, an action will often have more than one effect.  In certain situations an action can have two effects, one good and one bad.  Thomas Aquinas addressed this issue in his ‘Summa Theologiae’, written in the 13th century.  In it, Aquinas put forth his ‘Principle of Double Effect’.  He argued that under certain, very specific conditions, it may be permissible to perform a good act that has a bad consequence, even one that we would ordinarily be obligated to avoid.  His conditions are fourfold:

 

  • The act itself must be morally good or at least indifferent.
  • The agent may not positively will the bad effect but may permit it.
  • The good effect must be produced directly from the action, not by the bad effect.
  • The good effect must be sufficiently desirable to compensate for the allowing of the bad effect.

The Principle of Double Effect (along with Kant’s Categorical Imperative and the ‘Utilitarian Calculus’) can be daunting and of little use to most of us mere mortals.

 

Philosopher Theodore Schick has come up with a more practical approach to moral reasoning he calls ‘inference to the best action’.  When we are presented with ethical conundrums, we should weigh our options in light of how well they meet the following criteria:

 

  • Justice: The extent to which an action treats others justly.
  • Mercy: The extent to which an action alleviates unnecessary suffering.
  • Beneficence: The extent to which an action offers the most benefit to the most people.
  • Autonomy: The extent to which an action respects individual rights.
  • Virtue: The extent to which an action reflects what a moral exemplar would do.

 

Question: Think of a real case which forced you to balance competing moral obligations.  What guided your deliberations?  What ‘rules’, if any, did you employ?

 

Ethical rules are nearly impossible to formulate, though many have tried.  Our knowledge of our own language is a similar case in which rules are extremely difficult to formulate.  A child picks up knowledge of the language by the age of three or four, yet hundreds of linguists have been working for decades to formulate that knowledge in terms of rules and are still a long way from success.  We learn ethics as we master language: by practice rather than by learning rules.

 

Our efforts to attain moral certainty typically end in frustration.  Part of the reason is that whatever ‘rule’ we come up with requires some form of justification.  That is what all the theories attempt to provide.  But justifying our moral claims can be problematic.

 

Moral Fortitude

 

Our moral ‘instincts’ inform our judgments and actions (and those we recommend to others).  Moral rules may be helpful, but both our intuitions and rules require some form of justification – a reasoned defense.  We are rarely called upon to rigorously defend our moral beliefs or claims, but at crucial times we cannot avoid the challenge.

 

Consider two moral claims: one that something is morally right, and another that something else is morally wrong.  Both require justification.  But do they demand the same rigor or level of justification?  In today’s culture it appears the burden of proof is greater for justifying a claim of moral wrong than a claim of moral right.  Indeed, ‘unjustified’ moral wrongs are typically considered permissible.  Consider the actions of ‘payday lenders’.

 

Question: Are moral wrongs and rights held to a different standard?

 

One line of research in this area assumes actions that are unfair or cause harm are by default morally wrong.  But what if there is no unfairness and no harm?  In that case researchers employ a strategy all too familiar in children: they ask a series of ‘whys’.  And they continue to ask ‘why’ up to the point respondents become dumbfounded.  Using this tactic, many moral claims seem to lack sufficient justification.  For example, researchers might construct a hypothetical scenario in which two adult siblings, out of sheer folly, agree to have sex.  Then they pose the question: Is incest morally wrong?  You answer ‘yes’.  But they don’t stop there; they ask you to defend your answer: ‘why’ do you believe it’s wrong?  You respond, ‘Because it’s shameful’.  Well, they reply, both parties are rational, discreet, and love each other; why is it shameful?  And so it goes, until respondents finally surrender.  Supposedly, the goal is to work back to first principles all can agree on.  But that rarely happens.

 

Question:  What if, when challenged, you can’t justify some moral claim?  What then?

 

Do we retract our claim and ‘flip’?  What we thought was morally wrong is now somehow morally right!  Do we agree to disagree and move on?  Do we err on the side of ‘freedom’?  Succumb to moral relativism?  Keep searching for an answer?

 

Perhaps we shouldn’t be looking for certainty, but instead simply trying to figure out how to act morally.  That is, trying to ascertain which moral claims are true without knowing what makes them true.  If all else fails, Jean-Paul Sartre recommends self-justification: no need to seek out moral authority; just decide for yourself.

 

Good advice?  Perhaps, but it demands we possess moral fortitude: the ability to discern, trust, and stand behind one’s moral convictions.  In the end, you may have no choice but to conclude that this or that is morally wrong (or right) because ‘I say so!’

 

***********

 

 

 

* The bulk of the materials offered were borrowed and adapted for our use by Wayne Harper from two primary sources: ‘Questions of Value’, taught by Patrick Grim, and ‘The Big Questions of Philosophy’, taught by David Kyle Johnson.  Both are produced by The Teaching Company.