The Great Debate: Morality (Part 2)

In Part 1 of this two-part discussion on morality, I examined the claim that God is necessary for objective morality to exist. For all of the reasons I discussed, I believe a god to be a poor explanation for the existence of objective moral values.  So what does this mean for objective morality?  Does it even exist?  Is morality nothing more than an illusion?  Is it simply the subjective opinion of a given culture or society?  The answer is no.  In this, Part 2 of my discussion on morality, I intend to establish that objective morality can be said to exist without any appeal to a god or gods.  I certainly have no delusion of solving an age-old debate in one paltry blog article, but I believe I can at least present a brief explanation of my position and address a few common objections that I face.

Now, let me be clear: when I state that objective moral values exist, I am not talking about absolute morality.  I do not aim to make the case that there exist “transcendent” or “universal” moral truths, or that morality would exist independent of any sentient creatures to appreciate it.  Instead, when I speak of objective moral values, I am referring to values that are based on evidence, reason, observation, and logic.  This is the scientific use of the word objective, and it says only that morality is a factual discussion, rather than a discussion of pure opinion.

So before I go any further, it seems important that I offer up a definition of “morality”.  After all, it is the entire focus of both this article and the last.  There are many basic definitions out there, but the one I find to be most explicative, while still being concise, is the following:  “Morality is the differentiation of intentions, decisions, and actions between those that are good (or right) and those that are bad (or wrong).”  So when we speak about morality, we are talking about what is right and what is wrong.  We are also speaking about behavior, decisions, and actions, which necessarily implies that morality relates to the experience of conscious creatures.  After all, all evidence indicates that only conscious creatures are capable of making decisions, or having intentions.

Definitionally speaking, then, morality is a discipline in which we determine which actions are “good” and which are “bad”, with respect to how they impact other conscious creatures.  The most common accusation against morality being “objective” is that we would have no way of stating what “good” or “bad” is.  Good for whom?  Bad for what?  These complaints aren’t nearly as compelling, or problematic, as their proponents seem to think.  After all, we are dealing with terms here that are well defined and universally understood.  In the context of conscious experience, good is the term used to describe that which is favorable, enjoyable, pleasurable, or prosperous.  Meanwhile, bad is deployed to describe that which is undesirable, unpleasant, miserable, or detrimental.  So how does morality come into play?  How does understanding what good and bad mean give us any insight into which behaviors fall into which category?

The answers to these questions, contrary to conventional thought, can be found in science.  After all, when we speak about the experience of conscious creatures, we are discussing a subject about which we have a great deal of knowledge.  We have information, and data, on what constitutes experience.  We are also able to reflect on our actions and evaluate the consequences of our decisions.  In doing so, we are gathering data that is factual in nature.  We are uncovering patterns that are observable, testable, and verifiable.  From this, we can arrive at conclusions, regarding various behaviors, that are based on prior experience, observable evidence, and recognizable impact.  It is then possible to use reason, logic, and an evaluation of the evidence to present solutions to moral dilemmas.  Solutions that will be evidentially based, and therefore OBJECTIVE.  In his book, The Moral Landscape, Sam Harris does a brilliant job of going into greater detail about how science can be used to determine human values.  Essentially, Dr. Harris argues that all experiences of conscious creatures will fall somewhere among the “peaks” and “valleys” of a “moral landscape”: peaks being the heights of flourishing and well-being, while valleys are the less optimal, more miserable areas of the landscape.  And since, he argues, there will be a great deal of factual information regarding where a given decision falls on this landscape, morality becomes a fact-based realm of inquiry.

[Image: the moral landscape, c/o whywereason.wordpress.com/tag/the-moral-landscape/]

Once we are speaking about a realm of inquiry that is factual in nature, we are speaking about a discipline that will have objectively “right” and “wrong” answers.  When the evidence is gathered and reasoning is deployed, we can arrive at conclusions that are sound and supported, rather than answers that are simply preferable, or baldly asserted.  This, of course, will not lead to universal answers to complicated ethical dilemmas.  What it does allow us to do, however, is evaluate our decisions regarding morality from a factual standpoint.  In doing so, we are capable of promoting certain behaviors, or decisions, as objectively “correct”, and denouncing others as demonstrably “wrong”.  This has profound implications for the discussion on morality.  It means that we are not forced to excuse morally depraved actions as simply being a “difference of opinion”.  Rather, we can deploy reason, argument, rationale, and evidence in support of the conclusion that a given action is objectively wrong, regardless of opinion.

This is, in the interest of keeping this article of suitable blog-length, a very brief overview of how morality can be viewed as an objective approach to decision-making.  I will spend the remainder of this article addressing some of the more common objections I have encountered whenever I have defended this position.  After all, few people really fail to understand what is meant by morality; they disagree as to whether or not it is truly “objective”.

I Disagree!

Ok, so you lay out your case for a given action being wrong, you present evidence and reasoning, and I disagree with you.  After all, I have different priorities, and I simply do not recognize YOUR priorities as being correct.  How can you possibly say that morality is objective, if there is disagreement about what is or is not moral?

Disagreement is NOT a threat to the objective truth of a given claim.  After all, evolution is regarded as objectively true by modern biology.  This remains the case DESPITE the fact that a large faction of the world’s population disagrees with it.  If a biologist were to present the evidence for common descent, and provide reasoning, and I were to still disagree, it would not make his claim any less objectively true.  It would, instead, render me objectively wrong on the matter.

The same can be said of moral disagreements.  In any situation where there are opposing views, the question becomes: what does the evidence say?  Whose position is reasonably defensible, and more consistent with the facts?  In cases where there is an answer that is clearly in line with the evidence, those who disagree are factually incorrect.  In situations where there is no clear answer, the key will STILL be to make a decision that is as reasonable, and as in line with the evidence, as possible.  Even in situations where the absolute best answer is unclear, there would still be a method by which to eliminate objectively BAD answers, and narrow the choices down.  And in such situations, further reasoning and examination could eventually provide better understanding, and more clear-cut correct answers.  This is how scientific discovery works.  Disagreement, ultimately, does nothing to threaten the objective truth of a claim, so long as the claim is adequately defended by evidence and reason.

Wishy-washy

Won’t this model of morality be situationally dependent?  How can it be objective, if this is the case?  Couldn’t you even potentially get multiple “right” answers to the same question?  This doesn’t seem very clear-cut.

In a word, yes.  While morality, and moral truths, can be said to be objective, they are not absolute.  Nor are we going to have one-size-fits-all blanket assertions.  This is a subject matter that is far too complex, and nuanced, for such simplistic thinking.  Does this threaten its objectivity, however? No.

Let’s look at an example of an “objective” truth with which we are all likely familiar:  the freezing point of water.  Nothing subjective about this.  We don’t decide at what temperature water freezes based on opinion or conjecture; we arrive at the temperature based on objective evidence and observation.  So it won’t be situationally dependent, right?  We can state that the freezing point of water is 32 degrees Fahrenheit, and 0 degrees Celsius.

Not so fast.  Are we dealing with fresh water, salt water, or pure water?  What is the barometric pressure, or altitude of the area?  What is the mass of the body of water in question?  Is the water stagnant, or in motion?  Answers to EACH of these questions will have an impact on the freezing point of water.  Does this make the freezing point any less “objectively” true?  Of course not.  We know at what temperature water freezes, and we can predict how this freezing point will be impacted by any of the above “situationally dependent” factors.  As is often the case with objective truths, in order to give an accurate answer, we need as much data as possible.  This does not make the conclusion we arrive at any less objective, so long as it remains based on evidence and demonstrable facts.
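
To make that concrete, here is a minimal sketch of my own (not from any of the linked sources) showing how one of those “situationally dependent” factors can be predicted rather than merely opined about.  It uses the standard textbook freezing-point-depression formula, ΔT = i · Kf · m, with idealized numbers; real seawater actually freezes closer to −1.9 °C, because the idealized assumptions only approximate the messy details.

```python
# Rough, idealized estimate of how dissolved salt shifts the freezing point of water.
# Formula (colligative properties): delta_T = i * Kf * m
#   i  = van 't Hoff factor (~2 for NaCl, which splits into two ions)
#   Kf = cryoscopic constant of water, 1.86 C per (mol solute / kg water)
#   m  = molality, moles of solute per kg of water

KF_WATER = 1.86          # C * kg / mol
MOLAR_MASS_NACL = 58.44  # g / mol

def freezing_point_c(grams_salt_per_kg_water, vant_hoff_factor=2.0):
    """Estimated freezing point (Celsius) of water with dissolved table salt."""
    molality = grams_salt_per_kg_water / MOLAR_MASS_NACL
    depression = vant_hoff_factor * KF_WATER * molality
    return 0.0 - depression

print(freezing_point_c(0))    # pure water:        0.0 C
print(freezing_point_c(35))   # roughly seawater:  about -2.2 C (idealized)
```

The point isn’t the chemistry; it’s that the answer changes with the situation while remaining entirely evidence-based.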

And what of the problem of multiple answers to a given question?  This is hardly a problem at all.  Even in mathematics, a recognized objective realm, we can often have equations or problems that have more than one solution; x² = 4, for instance, is solved by both 2 and −2.  This does not make any of the answers “subjective”, but rather means that there are numerous objectively correct answers to the problem.

Other times, however, we can define a “good” or “correct” answer by what it is not.  That is to say, we can eliminate “wrong” or “incorrect” answers, even if there are numerous right answers, or the correct answer is unknown.  In his speech linked below, Sam Harris discusses the example of food.  He asks “what is food?”, and points out that you can get a wide variety of answers without collapsing the objective science of nutrition.  But I wish to examine a different version of Sam’s example, by asking a question that is almost completely subjective.  Even in these instances, where the “correct” answers will be highly subjective, we are still able to eliminate incorrect answers as “objectively” wrong.

If I were to ask you “what is the greatest imaginable food?”, we now have a question that calls for a relatively subjective answer.  You may decide to answer “ice cream”, because it is delicious; you may answer “broccoli”, because it is nutritious; or you may even decide to answer “steak”, because you have good taste!  Any of these could be viewed as acceptable answers to my question, and you could provide reasoning and argument to support your decision.  If, however, you answered “tire iron”, I would have to insist that you have misunderstood my question.  Your answer is “objectively” incorrect.  There isn’t any evidence, or reasoning, to support that this is a valid response to my question, unless we begin changing the definitions of every word that was used in my inquiry.

The question of morality is much like this.  Our answers to many moral questions may be situationally dependent, and we may require as much specific knowledge of the situation as possible in order to give a truly accurate answer.  We may have moral dilemmas to which there are multiple “correct” answers.  We may even have situations in moral discussion where the correct answer isn’t clear, or is highly contested.  Even in such situations, however, we are still capable of eliminating, or condemning, obviously wrong answers to the dilemma: answers that have no evidential support, and do not reasonably present a solution to the given problem.  None of this, at the end of the day, threatens the objectivity of morality.

David Hume, I presume?

In his work A Treatise of Human Nature, David Hume raises the famous complaint that one cannot derive an “ought” from an “is”.  His point, essentially, is that science is capable of telling us what is, but incapable of telling us what we ought to do.  After all, how DOES one get an “ought” from an “is”?

Now, there have been many responses to this question in the history of meta-ethics, but I will not bore you with every last one of them.  Instead, I find Sam Harris’ response to this non-problem to be rather succinct.  In responding to the complaint that we cannot get an “ought” from an “is”, he is quick to point out that we cannot ascertain any “is” without first having “oughts”!  Scientific understanding could not have happened if we did not first hold certain values: we ought to value evidence, we ought to seek an understanding of the world around us, we ought to embrace logic and reason.  With these oughts in tow, many “is” truths emerge.

So clearly we can derive an “is” from an “ought”, but what about the reverse?  Does our knowledge about what “is” inform what “ought” to be?  Of COURSE it does.  Each of us does this every day.

My shift at work IS from 9am to 5pm
My livelihood IS dependent on showing up for work
It IS 8:30 am
It IS a 20 minute drive to work
I OUGHT to get my ass out of bed
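
Just for fun, here is that same little chain as a few lines of code (entirely my own illustration, using the numbers from the list above): feed in the “is” facts, and the “ought” falls out.

```python
from datetime import datetime, timedelta

# The "is" facts
now = datetime.now().replace(hour=8, minute=30, second=0, microsecond=0)  # it IS 8:30 am
shift_start = now.replace(hour=9, minute=0)          # my shift IS from 9am to 5pm
commute = timedelta(minutes=20)                      # it IS a 20 minute drive to work
livelihood_at_risk = True                            # my livelihood IS dependent on showing up

# The "ought" that follows from them
if livelihood_at_risk and now + commute <= shift_start:
    print("I OUGHT to get my ass out of bed")
```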

We make every decision about what we ought to do based upon what is.  How could it possibly be any other way?  Even if you ask a theist why certain oughts ARE oughts, they will respond with reasoning like “there IS a god, there IS a commandment from that god to do the following”.  In order to make any decision about what “ought” to be, we must necessarily know what “is”.  One of the most common complaints, when a person is incapable of making a decision, is “I don’t have all of the facts” or “I need more information”.

The basic truth is, without a rational understanding of the world around us, matters of ought become entirely meaningless.  How would one know whether or not they “ought” to jump out of a window, without knowing about gravity?  How could one even begin to form a value system, without first understanding the nature of reality?  The suggestion that the two are unrelated doesn’t make the least bit of sense.  If we dismiss the “is” as nothing more than a description of what exists, we ignore the undeniable fact that all of our decisions are based upon these truths.  Oughts inform what “is”, and what “is” informs our oughts.  Hume’s “dilemma” is nothing more than wordplay.  These two elements are intrinsically linked, and they always have been.

It’s just an opinion

Philosophically speaking, there is almost nothing that ISN’T an opinion.  This objection misses several crucial points, chief among them being that there is a very big difference between an opinion and an informed opinion.  Facts are, in this sense, nothing more than opinions that are based on sound reasoning, evidence, and observation.

So when we speak about factual matters, we needn’t constantly qualify that “all facts are based on subjective observation and inquiry”.  We recognize, in intellectually honest circles, that facts are solid knowledge, based on evidence and data.  They are not, however, dogmatically asserted to be absolute truths.  When we say that it is a fact that we live on a planet that orbits the sun, we mean to say that ALL available evidence suggests this to be the case.  Could we be wrong?  Technically, yes, but every piece of evidence we have demonstrates that we are not.  It is therefore a fact, and few people have a hard time understanding this.  If the same methodology can be applied to matters of morality, then we are dealing with a factual realm of inquiry.

That’s NOT “objective”

Speaking of “opinions”, this brings me to the next objection.  I have spoken to many people, some of whom even agree with my model of morality, who object to the use of the term “objective”.  In doing so, many point to a philosophical definition of objective which holds that, to be objective, a given thing must be entirely free of ANY bias, or opinion.  The complaint here is that, if anything is dependent on a mind to understand it, it must necessarily be subjective, and therefore not objective.

There are multiple problems with this objection.  For starters, and most importantly, the above definition is NOT the only, or even the most commonly used, definition of objective.  It may be one deployment of the term, but it is not what is meant when the term is used scientifically.  The scientific definition of objective is “based on evidence, observation, and facts, not tainted by personal feelings or bias”.  In the very definition, observations and evidence are offered as the basis for objectivity.

Science often speaks of “objective” evidence, or “objective” reality.  When doing so, scientists are not speaking of absolutes, independent of any possible interpretation.  Indeed, science does not deal in any such dogma.  Instead, they are recognizing that, philosophically speaking, EVERY human understanding, statement, piece of knowledge, truth, or fact will be “subjective” in the absolute sense, just as every “fact”, as stated above, could technically be called an opinion.  Because of this, subjective is used to describe opinions or truths that are reliant upon personal opinion, or subject to extreme bias.  Meanwhile, objective facts or truths are those that are based on evidence, and are demonstrable to anyone who wishes to “see the work”.  The Theory of Gravity, the Theory of Evolution, and Germ Theory are all objective truths in the scientific community.  This is to say that they are explanations based on demonstrable facts, observable phenomena, and tangible evidence.

The second problem with this objection is that it can be leveled at literally ANYTHING we hold as true.  As mentioned, all of our knowledge is, on some philosophical level, subjective in this sense.  If it is a complaint that can be leveled at anything, it is hardly compelling when leveled at any one given thing.  It is a line of radical skepticism that leads us dangerously close to the next objection that I sometimes encounter.

Postmodernism

Postmodernism is a school of thought that questions whether we can ever really “know” anything; knowledge is essentially illusory.  From the Wikipedia article on the subject: “postmodernist approaches are critical of the possibility of objective knowledge of the real world”, and postmodernism “postulates that many, if not all, apparent realities are only social constructs and are therefore subject to change”.

The first question I have for someone who is essentially claiming objective knowledge to be impossible is “how do you KNOW that?”  This position, aside from being bizarre, is rather self-defeating.  You can use postmodernist approaches to attack the underpinnings of all of the world’s most basic knowledge if you like, but it does not leave you anywhere to go epistemologically.  It is also possible to question the ideas of postmodernism WITH postmodernism.  This method leaves one chasing their own tail in a hurry.

We needn’t have absolute certainty for knowledge to be relevant.  As this post is going to be long enough without delving into evidentialism, or various other philosophical epistemologies, I will simply sum up my position with regard to postmodernism.  Either we have REASONS for believing something to be true, or we do not.  Either we have evidence to support our understandings, or we do not.  Either we can investigate the world around us in a logical, consistent, and rational manner, or we cannot.  And if you state that we cannot, would you not have to provide REASONING for your assertion?  Would you not need to deploy the very process you condemn, in order to establish your idea?  In total, I feel that postmodernism is summed up best by Russell Glasser, a host of a cable access show in Austin, TX, called The Atheist Experience.  He hits the nail right on the head when he remarks, “post modernists are annoying”.

In Summary

So is it really this simple?  Is science really all that is necessary to unlock moral truths?  Well, yes and no.  Yes, we can deploy an objective, scientific approach to morality that will help us arrive at objective moral truths.  No, this will not be remotely simple.  As with anything worth knowing, it requires a great deal of thought, hard work, and desire to get it right.  Morality is a question that has been with our species since the beginning of history, and it is one that will, in all likelihood, remain with us until the end of time.  I apologize to anyone who thought that they would read this and find easy, ready-made answers to all of life’s difficult ethical dilemmas.  Unfortunately, no such things exist.  Any attempt at simplifying morality is in vain.  Every explain-all solution, such as the god hypothesis, comes up woefully short.  The basic fact remains that decisions on morality, ethics, and well-being are often complex, nuanced, and impossible to reduce to a list of universal dos and don’ts.

This is hardly cause to fret, however.  As is hopefully clear, we are not lost in a sea of moral relativism, or incapable of finding answers to moral questions, however complex they may be.  Morality is, at its core, an objective discipline.  Through science, reason, observation, and experience, we can work our way to moral understandings, and truths.  We may never have all of the answers.  There may always be more to be discovered.  We must always remain vigilant in challenging the truths we already hold.  In doing so, however, we can take comfort in the fact that we needn’t do so blindly, or in the context of a culturally relative attitude.  Instead, we can investigate, and reason towards, answers to moral dilemmas that are true, correct, and OBJECTIVE.

Morality, in the end, is far more than just an illusion.  Rather, it is the science of understanding, and improving, the experience of life.

SOURCES/ Related links

Sam Harris’ speech at TAG conference

Hume’s is/ought
http://en.wikipedia.org/wiki/Is%E2%80%93ought_problem
Scientific “objectivity”
http://en.wikipedia.org/wiki/Objectivity_%28science%29
Definition of “objective”
http://dictionary.reference.com/browse/objective
Post modernism
http://en.wikipedia.org/wiki/Postmodernism
Evidentialism

Morality definition
http://en.wikipedia.org/wiki/Morality

Recommended reading:
Sam Harris, The Moral Landscape


It’s JUST a Theory…

As I am not a scientist by any means, I will generally leave the specifics of scientific discussion to people who are far more qualified.  I must, however, briefly address an idea I encounter often.  How many people reading this have ever been in a discussion about evolution, or the big bang, and heard a person dismissively say “it’s just a theory”?  This is often said with a certain level of smugness, as if theories aren’t to be taken seriously.  How many people reading this have actually, themselves, said something to this effect?  Since this comes up so often in debates with people who are skeptical of science, I figured I would address this point.  It is, after all, a misunderstanding of science so basic that even a layman can see the problem.

I Guess it’s a Theory…

The confusion, I suspect, arises from the colloquial use of the term theory.  When a person claims to have a “theory” about something, it is often actually a guess.  It is conjecture, and it could be wildly incorrect.  A random individual, offering up their “theory” about a given thing, is under no obligation to have thought about their idea for more than a single moment.  This, it seems to me, is why the word theory is not taken seriously by some.

So how, if at all, is it different when science uses the word theory?  Simply put, when it is used in the context of scientific explanations, the meaning of the term theory is VERY different from the one used above.  The colloquial use described in the above paragraph is far closer to the word “hypothesis”.  When scientists gather enough observations and set out to explain a particular phenomenon, they begin by forming a hypothesis that they believe explains what is being observed.  This hypothesis is a guess, even if it is often an educated guess.

And so it Begins

With a hypothesis in mind, the process is now only just beginning.  What follows is a very rigorous series of tests and experiments, designed to examine the hypothesis for accuracy.  These tests are designed to challenge the hypothesis, and to ensure that it accounts for all available data.  If, at any time, the results of an experiment call the hypothesis into question, the hypothesis is either tweaked or outright discarded.  If, however, the hypothesis appears to be confirmed by every test, the results are forwarded to the next step in the process – peer review.

We have now made observations, formed a hypothesis, and tested the idea over and over again with great success.  One would think we have a pretty solid explanation for the original phenomenon, but the real fun is only about to begin.  We must now subject our hypothesis, and our experimental results, to the process of peer review.  This basically means that well-trained experts in our given field of study are going to take our idea and do everything in their power to dismantle it.  The hypothesis will be gone over with a fine-tooth comb, and our experiments will be placed under a microscope.  Every known method of “falsifying” our hypothesis will be attempted.  If, at any time during this process, our hypothesis is exposed to be in error, it is dismissed and we go back to square one.  If, however, the idea survives the firing squad, the hypothesis will be published in peer-reviewed scientific journals.

Does Your Theory Pass the Test?

Why is all of this important?  What does this have to do with the word theory?  A scientific theory has passed every ONE of the above measures, and more.  It has been examined, scrutinized, and taken to task, only to emerge even more confirmed each time.  The title “theory” is EARNED in science.  It is the HIGHEST title that a scientific explanation can achieve.  A scientific theory is based on evidence and observation.  Its explanations reliably account for all the information we have, and its predictions are confirmed each time they are put to the test.

It is often thought that a scientific explanation is only called a theory when it isn’t “proven”.  Again, this is simply false.  As stated, once an explanation earns the title theory, that’s it.  There is no higher title for it to achieve, and it will never become anything else.  Some seem to operate under the misconception that a theory is a less-proven idea, that a scientific law is a fact, and that once enough evidence is gathered, a theory will become a law.  This is also not the case.  Scientific laws are simply different from theories altogether.  A theory will always be a theory, and a law will always be a law.  They are not different levels of certainty.  BOTH are considered to be theoretical, and BOTH are considered to be “facts”.

It may come as a surprise to some that many ideas commonly accepted as facts are ACTUALLY scientific theories: gravity, motion, the behavior of atoms, the earth rotating on its axis, the planets orbiting the sun, and so on.  All of these “facts” are, in actuality, theories.  Does this make THEM any less true?  No.  It seems to only become an issue when evolution, a theory that is equally supported (if not more so) by evidence, is discussed.

The Million Dollar Question

The only remaining question is: if theories are established facts, then why do we call them theories?  Why not call them facts?  The answer goes to the very core of the scientific process.  Science almost never deals in absolutes.  Any explanation we have can only POSSIBLY be the best explanation available, given what we know.  Science is a discipline of skepticism and intellectual honesty.  It recognizes that our knowledge and understanding can and WILL always improve.  It recognizes that we can only hope to offer explanations that account for all CURRENT data and information.  New data could always be discovered that will force us to revisit our explanations and either refine them, or even discard them.  This is not a failing of science, but rather its GREATEST strength.  The discipline of science is ever mindful of our own limitations, and endlessly seeking the next great discovery.

Which of Your Theories Could Use Refinement?

When the topic is evolution, or even the big bang, it is so often said that these are just “theories”.  Nothing could be further from the truth.  Yes, they are theories, but there is nothing “just” about them.  They stand as the best explanations we have, or COULD have, given the evidence available to us.  Their predictions have been observed, their claims have been confirmed, their evidence has been examined.  Every attempt at falsification has proven to be in vain.  We know these things to be true with the same degree of certainty that we can know any scientific understanding to be true.  That is to say, they are as settled as anything can be, unless new evidence comes along to completely refine our understanding.

To make a long story short, it isn’t JUST a theory……

 
