Philosophy: Mind and Machine, Chapter 4
Chapter 4
Theoretical Issues In Religion
What We Will Discover
* We learn the traditional arguments for the existence of God.
* We examine the critical questions about God's existence and the consequences of belief.
* We examine the nature of the human being and questions about the soul.
4.1
Questions About God
The philosophy of religion, and associated fields such as theology, explore issues of faith in the specific context of religious belief. Here we will begin with classical arguments that seek to prove the existence of God and then look at a classic problem for God's existence, "the problem of evil." We will then turn to some of the topics that the philosophy of religion investigates: the soul, free will, and even what it means to be a person. We will also look at the relationship between the religious belief of an individual and the issues this raises for a diverse society.
The Ontological Proof
The ontological proof states that the greatest possible being you can think of must exist, and that being is God.
Concentrate as hard as you can, and try to think of the greatest thing you possibly can conceive. The being you are thinking of is perfect: You also recognize that you cannot think of a being greater than this being. This is the greatest being that you can possibly conceive of. This being must exist.
The argument just given has come to be known as the ontological proof of the existence of God. "Ontological" comes from the Greek word for "being," and the ontological argument depends, fundamentally, on the nature of God. Traditionally, this argument is attributed to St. Anselm, although it is perhaps better known through the works of Descartes, and the title itself was given to this argument by Immanuel Kant.
The idea behind the ontological argument is actually pretty simple. If we assume, which seems plausible, that the greatest possible being one can think of would deserve to be called "God," then the being we refer to using the name "God" would have to exist. For if the being we are now calling "God" did not exist, it would be easy to think of a greater being; it would be that same being, only one that actually existed. Since you recognize that the being you are thinking of is the greatest possible conceivable being, that being must include existence as part of its nature. Therefore, God exists.
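One way to make the structure of this reasoning explicit is to lay it out as a reductio, in the premise-and-conclusion style used elsewhere in this chapter. (The numbering and wording here are a compressed reconstruction, not Anselm's own text.)
1. God is, by definition, the being greater than which none can be conceived.
2. A being that exists in reality is greater than an otherwise identical being that exists only in thought.
3. Suppose God exists only in thought.
4. Then a greater being can be conceived, namely one that also exists in reality, contradicting premise 1.
therefore
God does not exist only in thought; God exists in reality.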
St. Anselm's Ontological Proof of God's Existence
Robert Adams discusses the traditional proof for the existence of God that, since the 18th century, has been known as the "ontological proof of the existence of God."
Question: Adams claims that the ontological proof has little contemporary support. Why do you think that is? Some philosophers and theologians continue to endorse this argument; why do you think some might still find this argument appealing?
To make this a bit clearer, imagine someone offers to give you $100. You are happy to accept, but then discover that the $100 is just an idea, not an actual $100 you could spend. It seems pretty obvious that the real, or actual, or existing $100 is superior to the $100 that is simply thought about. In the same way, the real, or actual, or existing God is superior to the God that is simply thought about. Since the God we are thinking about is the greatest possible being we could conceive of, then it must exist.
This argument seems a bit "fishy" to many, but it has also been defended by very sophisticated philosophers and theologians. Before turning to some of the criticisms of the argument, we can look at one other way of getting its basic idea across. If a person thinks of a triangle, the thing thought about must have three sides. One can't, in other words, think of a triangle correctly without thinking of an object, a polygon, with three sides. If it doesn't have three sides, it isn't a triangle, and if it is a triangle, it does have three sides. In this sense, it is said to be an essential property of a triangle to possess three sides. In the same way, it is said to be an essential property of God to possess existence; that is, if one thinks of God, one thinks of a being that exists, and exists necessarily. On this view, then, to say one is thinking of a nonexistent God is to contradict oneself, just as one would by saying one was thinking of a triangle that didn't have three sides!
The Cosmological, or First Cause Proof
Which came first, the chicken or the egg? The cosmological proof states that God must exist because the chicken (or the egg) requires a first cause—something that brought it into existence.
If something exists, something else had to bring it into existence. For instance, you exist because your parents brought you into existence. Everything that exists, then, does so because something else caused it to exist. Clearly enough, something we call the universe exists, regardless of how we might choose to characterize what it is we refer to as the universe. If the universe exists, something had to bring it into existence: a first cause. Furthermore, if the universe exists, and something therefore had to cause the universe to exist, then that first cause must exist. The only thing that could qualify as this first cause is God. Therefore, God exists.
This argument, which has many different versions, is known generally as the cosmological argument or the first cause argument for the existence of God. It has a very long history in Western philosophy (it can be found in Plato and Aristotle) as well as in the work of the Islamic philosopher al-Ghazali, and a version can also be found in Indian philosophy. Two basic and related ideas drive the cosmological argument. First, something cannot come from nothing. The traditional version of this, ex nihilo nihil fit, is an ancient philosophical idea that simply states that anything that exists had to come from something. Second, every effect has a cause; there are no uncaused events, but rather anything that takes place was caused by a previous event.
The most famous version of this argument is probably that of St. Thomas Aquinas, who presents it at length in his Summa Theologica. Aquinas's argument is complex and very sophisticated and has generated a great deal of debate, both from those defending Aquinas and those criticizing him. But the basic idea of Aquinas's argument is fairly straightforward. He calls something that does not have to exist—you, me, trees, rocks, objects in motion, perhaps even the cosmos or universe itself—a "contingent being." Aquinas insists that all such contingent beings require a cause for their existence, and that other contingent beings aren't sufficient to provide this cause. Therefore, the only thing that could provide this cause would be a non-contingent being, a being that must exist: that is, a necessary being. Obviously, that necessary being is God, so if contingent beings exist, God must exist.
This is pretty abstract, but we might make it a bit more concrete by considering causal chains. You may need to meet someone in an office building: You then go to the building, go into the elevator, push the button for the appropriate floor, walk off, go to the office in question, and meet the person. That would be a short, simple causal chain, leading from needing to meet someone to doing so; all the intermediate steps are caused by previous actions. While each step includes causes and effects—wanting to go to the office causes you to take the elevator—here we might see the desire to meet someone as causing all the various earlier steps we take to reach this final goal. If we think of the universe as a whole, at any given time it is the result of a very, very long and complicated causal chain, but at every step of the way there was a previous "link" in that chain. If we proceed into the past—link by link—we eventually end up with the first link of the causal chain. This link, because it is the first link, doesn't have a cause. So the originating cause, the first cause, must exist in order to cause the universe and to set it into motion. Obviously enough, the only thing that could be this first cause is God.
The argument, when stated, may still sound pretty abstract, but the idea underlying it appeals to many people. While we will see some of the traditional objections to the cosmological argument later, the argument often seems natural to many when put a bit more briefly and informally. If the universe didn't always exist, then it had to come into existence. It therefore required a cause, and the only thing that could function as that cause would be God. In short, if the universe exists, God exists.
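Put into the same premise-and-conclusion style, a compressed reconstruction of the first cause argument (a summary of the underlying logic, rather than Aquinas's own wording) might run:
1. Every contingent being, anything that might not have existed, requires a cause of its existence.
2. A chain of contingent causes cannot extend backward without end; it must begin somewhere.
3. Whatever begins the chain cannot itself be contingent; it must be a necessary, uncaused being.
therefore
A necessary first cause exists, and only God could be that first cause.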
Cosmological Proof for God's Existence by St. Thomas Aquinas
Three well-known thinkers consider Aquinas's version of the cosmological, or first-cause, argument.
Question: Some think that this argument involves a contradiction, because it seems to state that everything has a cause but that God does not have a cause. How might a defender of this argument resolve this contradiction?
The Argument from Design
Consider the complexity of a computer's circuit board. Someone or something had to make that. The argument from design suggests that our bodies and our world are too complex to come together on their own.
Someone hands you a paper bag containing the parts of a functioning television set; you are guaranteed that all the parts are there, that it was taken apart very carefully, and that if they were put together correctly, all you would have to do is plug it in and it would work. The problem is that you must put all of these parts together by shaking (gently!) the bag, until they are assembled in the proper way. How long do you think you would have to shake this bag to obtain a working television?
This, essentially, is the idea behind one of the oldest arguments for the existence of God. The traditional version speaks in terms of a watch. If you were out walking and found a working watch, you could be pretty confident that someone had made the watch; after all, the chances of all the various parts of a watch coming together accidentally, and it then working, seem to be infinitesimal! A watch, then, requires a mind, intending to construct a watch out of its various parts, and being able to design and build the watch. Assuming a television is at least as complicated as a watch, a functioning television must also be designed; it seems virtually impossible that its various parts could accidentally be assembled into a television that works.
Compare, then, the complexity of the natural world and the remarkable amount of sophistication its engineering would require. At the very smallest level, cells and bacteria reveal a large number of different parts that must work together for, in this case, the cell or the bacterium to do what it does. From a much more general perspective, the earth must be close enough to the sun to utilize its energy; if it were too far or too close, life—and certainly humans—could not survive. Similarly, the ecosystem is delicately arranged so that all of its millions, or billions, of components work together. In between the microscopic and cosmic scales, we might consider such things as the human eye or the human brain: Clearly such things could not have arisen by accident. The eye and the brain are many times more complicated than a watch or a television. If a watch and a television must be designed, then surely the natural world, which reveals enormously more sophisticated complexity, must be designed. Only God could have the power and ability to provide this design; therefore, God exists.
Without such a designer, there seems to be no explanation for all of the amazing diversity and complexity we find, and we can find it wherever we look. It seems impossible to imagine that all of this could have arisen by sheer accident; that would be like thinking that, in a finite (even if very long) time, shaking the bag of television parts would produce a working television set. Since such complexity appears unable to arise randomly, or by accident, the only explanation left is that someone, or something, was responsible for designing and creating it, and only God could possess the required intelligence and power to do so.
William Paley and the Design Proof
Marilyn Adams discusses Paley's account of the Argument from Design, and how it seeks to establish that God's existence is necessary for that design.
Question: What kind of challenges from evolutionary biology does the Argument from Design confront? How might evolutionary biology, for instance, explain the intricate complexity (or apparent design) of the human eye?
Problems with Traditional Arguments
One objection to the ontological argument is that it doesn't apply to other things. Just because you can think of the best sandwich ever doesn't mean it must exist.
Each of the traditional arguments—the ontological argument, the cosmological argument, and the argument from design—has been criticized on the basis of a wide range of objections. Here we can look at just a few of these, focusing on what, over time, has generally been regarded as the most potent criticism of each argument. It should be emphasized that one who rejects a specific argument for the existence of God does not necessarily believe that God does not exist, but, rather, that the specific argument in question does not establish such a conclusion. So it is good to keep in mind that one may accept what an argument concludes, even though one may not accept the way an argument tries to establish that conclusion.
The ontological argument is generally thought to be the most abstract of the traditional arguments, and the response to it is also fairly abstract. Interestingly enough, the earliest objection to it was made by a Benedictine monk, Gaunilo of Marmoutiers. Gaunilo in the 11th century criticized the ontological argument on the basis of the inference from thinking of something "greater than which cannot be conceived" to the claim that such a thing actually exists (Gaunilo, 1992, "Reply on Behalf of the Fool"). Anselm argued that the greatest being, greater than which cannot be conceived, must exist. Gaunilo wondered why this kind of argument didn't work for other things. Using an island as his example, his argument runs as follows:
I can think of an island, perfect in all respects, and no greater island can be conceived.
therefore
This island must exist (Gaunilo, 1992).
Extending Gaunilo's objection, then, it seems that we would have an ontological argument for anything—not just God, but anything "greater than which no other can be conceived."
There were, of course, responses to this objection, but in the 18th century, a very general criticism was put forth by the philosopher Immanuel Kant. Kant claimed that "exist" doesn't function as an ordinary predicate, or as the kind of term we would normally use to characterize something (Kant, 1996). For instance, one might describe a sandwich with various predicates: "has bread," "is delicious," "requires mayonnaise." To say that a sandwich exists, on the other hand, is a different kind of claim; one way of seeing Kant's point is that we must assume (or, more technically, presuppose) that the sandwich exists in order to go on to describe it with the ordinary predicates we might use. The British philosopher G.E. Moore gave contrasting examples that show pretty clearly why "exist" doesn't operate in the same way as other kinds of descriptive terms; that is, why existence cannot be treated as a predicate (Moore, 1936). Imagine Cindy goes to the zoo for the first time and sees her first lion. You ask Cindy what she learned about lions. If Cindy responds, "I learned that lions in the zoo growl," that seems to be the kind of thing one might not be too surprised to hear. But if Cindy were to respond, "I learned that lions in the zoo exist," this might sound odd. The oddness we see in this example is designed to bring out the difference between "lions growl" and "lions exist," showing why "exist" doesn't function as an ordinary predicate. If that is the case, Kant argues, then we can't accept the ontological argument, which employs "exist" as a standard kind of predicate.
A picture of the Milky Way galaxy. One objection to the causal argument: If God created the universe, who or what created God?
The objections to the cosmological argument vary, but they generally focus on the idea that it seems either to assume what it seeks to prove or, at best, provides an account of a first cause that is quite different from the conception many people would have of God. We can look at these in turn.
1. If everything has a cause, then this means that there is no thing that fails to be caused. The cosmological argument seems to indicate that there is a first cause, but it is not clear how there can be such a thing, according to the critics of this argument. Is it a contradiction to say the following two things?
Everything has a cause.
Something (the first cause) does not have a cause.
Thus, critics of this argument indicate that it simply asserts that the causal chain must stop somewhere with the first cause, or God, and that this is to assume what the argument seeks to prove. Either the first cause is uncaused (which contradicts the claim "everything has a cause") or the first cause is caused (which contradicts its nature as the first cause). Those who defend the cosmological argument by saying that only God is a necessary being seem to be assuming, as Kant indicated, the validity of the ontological argument, thus leading to the problems we have already seen that this argument encounters.
2. The French philosopher Blaise Pascal usefully distinguished between two conceptions of God, which he called the "God of the Philosophers and Scholars" and the "God of Abraham, Isaac, and Jacob." The God of the Philosophers and Scholars, then, would be a very abstract, impersonal God; perhaps responsible for causing the universe, but not necessarily one with any interaction with that universe, including human beings. The God of Abraham, Isaac, and Jacob is the God familiar from the Hebrew Bible: one who interacts extensively with humans, hears their prayers and, in the Christian Bible, sent his only Son to cleanse humanity of its sins (Pascal, 1999).
English naturalist Charles Darwin offered a third option to the chance vs. design debate: natural selection. The complexity that is said to be proof of God could instead be the result of millions of years of evolutionary change.
Another standard objection to the cosmological argument is that it can, at best, establish the existence of a first cause, an originating source of the universe, which would seem to be Pascal's "God of the Philosophers and Scholars." It is difficult, then, to show how this conception of God necessarily leads to the spiritually much richer conception of the God found in the Hebrew and Christian Bible, or Allah as described in the Qur'an. The three great monotheistic religions—Judaism, Christianity, and Islam—characterize the Supreme Being using a variety of terms: all-knowing (omniscient), possessing only good intent (omnibenevolent), only doing good things (omnibeneficent), and all-powerful (omnipotent). Those who raise this objection to the cosmological argument see an enormous gap between a necessary first cause of the universe, which may be called God, and a Supreme Being that is all-good, all-knowing, all-powerful, and interacting on a regular basis with human beings, including rendering a final judgment on their lives.
The argument from design points to the remarkable features of the natural world and argues that since such features—as in the tremendously complex structure of the human eye—could not have arisen by accident, they must have been designed, and only God could serve as such a designer. The most potent objection to this argument came with the challenge to the idea that there were only two alternatives here: chance or design. This challenge is best known from Charles Darwin's 1859 On the Origin of Species by Means of Natural Selection. As we saw earlier, Darwin proposes a third option, that species develop through natural selection, or "descent with modification." Even though he was not aware of the genetic mechanism for such modifications, Darwin's ideas would then provide a third possible explanation, indicating that things such as the human eye could be the result of evolutionary change over many millions of years. If, then, the resulting complexity of the world could be explained solely through natural processes, then that complexity would not require a designer, and thus the argument from design would not provide support for its conclusion. Again, this does not mean that God does not exist; it would simply mean that this specific argument doesn't establish its conclusion.
Some respond to this criticism by arguing that the complexity involved could not be the result of undesigned evolutionary change. To stick with the example of the human eye, it seems that all of its parts are needed to work together—light striking the retina and sending information through the optic nerve to the brain where it is "translated" into a visual report—and this could not happen on an evolutionary account. As this objection is sometimes stated, "What good is half an eye, even if an organism could have half an eye?"
This giant kelpfish is shaped like kelp blades and is also kelp-colored, allowing it to blend into its kelp forest habitat. Evolutionary biologists would argue that the fish in this environment that looked more like kelp were more likely to survive and pass along their genes to their offspring in the process known as natural selection.
Evolutionary biologists have, in turn, responded to this by arguing that, in this case, an organism with "half an eye" might be considerably better off than one that had no visual apparatus at all. Thus, an animal that can make out, if only barely, that some dark object was headed toward it—perhaps a predator—might be able to avoid it, and thus be better off than its wholly sightless competitors. Having this slight advantage, then, might well allow it to survive and reproduce, thus selecting for "half an eye" (Dawkins, 1995, p. 77). In this way, more copies of the DNA that produce this modest visual advantage will themselves survive and reproduce. This, and other similar advantages, over millions of generations, could well result in what appears to be a miraculously engineered organ, the eye. But, on the natural account here, it is neither the result of a designer, nor is it the result of a sheer accident; rather, it is a feature selected for because it gave those who had it a competitive advantage.
Arguments continue to rage over the issues here, some of which we have seen, dealing with Creationism, Intelligent Design, and evolutionary biology. In this context, however, the point is simply that the argument from design insists that complexity must arise either from being designed or on the basis of an entirely random and accidental process. Evolutionary biology proposes another view, suggesting a third account of how complexity could arise, without requiring a designer but certainly not from a random process. For, as we saw earlier, advantageous mutations are rewarded in nature, while mutations that are not advantageous (the majority of such mutations) are punished severely. That structure of reward and punishment, then, guides descent with modification and is quite distinct from a random or accidental process. If complexity can be accounted for without a designer, this serves as a profound challenge to the argument from design.
The Problem of Evil
As noted previously, the traditional conception of the Supreme Being in the three great monotheistic religions of Judaism, Christianity, and Islam is that God must be, at least, omniscient (all-knowing), omnipotent (all-powerful), and all-good (omnibenevolent and omnibeneficent). One might interpret "God" as a being with other properties (for instance, timeless), but we can assume that no being qualifies to be called "God" who lacks these three properties.
Skulls in a mass grave, discovered after the Rwandan civil war between the Hutus and the Tutsis. Horrific things happen in the world, which some people see as a challenge to the existence of a benevolent, all-powerful God.
At the same time, horrific things happen in the world; things that many would describe as evil. The Shoah, or Holocaust, during World War II, is a standard example of evil; sadly, the list of things most people would describe as evil is a very long one. In addition to the kinds of events that are caused by human beings, there are other events, such as hurricanes, earthquakes, tsunamis, floods, tornadoes, fires, and other natural disasters, that have taken the lives of millions. This has raised for some a challenge to the traditional conception of God, as can be seen in the title of Melvin Tinker's popular book Why Do Bad Things Happen to Good People?
One of the most famous versions of this debate can be seen in the philosophy of Gottfried Leibniz and the response to Leibniz in Voltaire's witty, but scathing, Candide. Leibniz argued that God, as omnipotent, could have created any possible world, out of an infinite set of possibilities. But God chose to create this world; thus, God freely chose to create the world in which we find ourselves. Given that God, as omniscient, knew what the best world would be to create, and as omnipotent was able to create it, God's choice must have been correct; otherwise, God could have chosen to create a world that was not the best. But, as all-good, God would only choose to create the best world; as Leibniz concluded, God chose to create this world and thus this is the best of all possible worlds. In 1755, a massive earthquake struck Lisbon, Portugal, virtually destroying the entire city. While this challenged the faith of many, Voltaire reacted by writing Candide, a satire of Leibniz's claim, mocking the idea that a world in which a random earthquake could suddenly annihilate an entire city would be part of the best of all possible worlds. In short, Voltaire asks why a world exactly like that of 1755, but without the Lisbon earthquake, would not be a better world than the world with the earthquake.
Voltaire's challenge has come to be known as the problem of evil. Perhaps the best-known version, in a formulation often attributed to J.L. Mackie, is quite straightforward. Mackie simply claims that these three claims cannot all possibly be true:
1. God is all-powerful (omnipotent).
2. God is all-good (omnibenevolent and omnibeneficent).
3. Evil exists.
Mackie's claim, then, is a logical claim: In saying these three cannot all be true, he is saying the three sentences form an inconsistent set (Mackie, 1955).
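It is worth noting that the inconsistency emerges only when we add what Mackie called "quasi-logical rules" connecting the three claims. One way of making the derivation explicit (the numbering and phrasing here are a reconstruction, not Mackie's exact wording) runs:
1. If God is all-powerful, then God can prevent any evil.
2. If God is all-good, then God prevents any evil God can prevent (a version of Mackie's rule that a good thing eliminates evil as far as it can).
3. From 1 and 2: if an all-powerful, all-good God exists, then no evil occurs.
therefore
If evil occurs, a God who is both all-powerful and all-good does not exist.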
A stained-glass window of Adam and Eve at the Priory Church of St. Mary in Deerhurst, Gloucestershire. The traditional biblical story tells of "original sin," in which Adam and Eve disobey God and eat fruit from the Tree of Knowledge of Good and Evil. Some would argue that God gave humans the freedom to act, and evil happens because of bad people.
One way of looking at the argument here is to identify something one might consider evil; for instance, consider the 2004 Indonesian tsunami, which killed, injured, or displaced over a million and a half people. Many of those killed were infants and children, and for the sake of the argument, we can describe their deaths as an evil existing in the world. Mackie argues that if this is evil, then either God was unable to prevent it, and thus is not all-powerful, or God was able to prevent it but did not do so, and thus is not all-good. Thus, the three sentences cannot all be true: if the deaths of innocent children are in fact evil, then that evil cannot coexist with a God that is both all-powerful and all-good.
To remove the inconsistency, one must change, or eliminate, at least one of these three sentences. But that would seem to be difficult; it would certainly alter, in a fundamental way, the traditional conception of the Supreme Being to deny that it is all-powerful or that it is all-good. Thus, a traditional response to this version of the argument is to challenge sentence 3, and examine what is, and what isn't, evil.
On one hand, it seems to many theologians and philosophers that many examples of evil do exist in the world, but these are the result of human beings acting freely. Thus, genocide, the intentional mass murder of a large group of people, certainly qualifies as evil, but, as intentional, it is a choice made by a human being (or group of human beings). Here we see that evil does exist, but that evil is a result of God creating human beings as free; unfortunately, as we know all too well, human beings have used that freedom to do horrible things. God, then, is not responsible for the evils brought into the world by human beings; human beings were created with the freedom and the capacity to do good or evil.
On the other hand, we may wish to reserve the notion of "evil" to describe only those things done intentionally. Thus, a natural disaster may be tragic, cause much death and destruction, and make many people's lives miserable. But such a thing was not intentional; after all, no tornado decides what town to strike. If evil requires intent, then, the misery created by a natural disaster such as the tsunami discussed earlier cannot be accurately described as evil. Consequently, the evil that does exist is that caused by human beings who possess freedom. On this view, the three sentences can all be true: an all-good and all-powerful God created human beings who are free to do good and to do evil. That they are created as free, and able to do evil, is hardly an objection to the existence of God. Indeed, were human beings not free, it might be even more difficult to defend a conception of God who has created mere automata, or robots, who are unable to act intentionally at all.
Fideism
Some people get tired of what they see as pointless arguments and instead choose to believe in God as a matter of faith. This is called fideism.
As is probably obvious by now, the debates over religion, and specifically over the existence of God, appear to be endless. For every argument defending a specific way of establishing the existence of God, a powerful criticism seems to arise; a response is made to that criticism, a counter-response is made to that response, and the debate threatens to become interminable. Given the methods of philosophers and theologians, and given the significance of the topic—the existence of a Supreme Being—it seems unlikely, and perhaps not even desirable, for such arguments to come to an end. Instead, such explorations of both traditional and contemporary debates over the existence of God, and what might be implied by various responses within such debate, can tell us much about our own beliefs and the beliefs of others.
Some people, however, tire of such debates, convinced either that endless argument is ultimately pointless, or that the very nature of the topic excludes the possibility of evidence, reason, and argument. Often such people embrace a view known as fideism, the doctrine that religious beliefs, and specifically the belief in the existence of God, cannot be (and often should not be) established on the basis of reason, but solely on the basis of faith. The name of this doctrine comes directly from the Latin for "faith," fides, as in the English word "fidelity"; hence, fideism is a commitment to a belief or set of beliefs grounded exclusively in faith.
This is an attractive option for many. It dispenses with the complex and difficult arguments, only a few of which we have seen, that require discussions of such things as ontology, causal chains, essential features, and possible worlds. It permits a profound sense of commitment to the object of one's belief: If one believes, solely on the basis of faith, in a particular conception of a loving God, there can seem to be no objection, such as that raised by the distance one might find between such a loving God and an abstract and impersonal first cause of the universe. A belief based solely on faith can also preserve the deep and abiding mystery many find fundamental to the relationship between themselves and their God. Finally, the fideist finds solace in the fact that such truths—possibly the most important truths—cannot be revealed by reason, but are the kinds of beliefs that are only available through faith.
For Kierkegaard, the biblical story of Abraham and Isaac was the epitome of faith: When asked to sacrifice his only son, Abraham doesn't question God. In the end, God rewards his faith and stops him from killing Isaac.
A number of influential philosophers have been associated with fideism, and have been seen as advocating it in various ways. Among these philosophers, the best known are probably Blaise Pascal, Søren Kierkegaard, William James, and Ludwig Wittgenstein. We can look briefly at Kierkegaard's approach, although each of these philosophers has a distinctive understanding of fideism, and each is worth looking at in more detail.
Kierkegaard, a Danish philosopher, saw faith as requiring making a choice, or decision, while failing to have sufficient evidence, or even any kind of evidence, to which one can appeal to justify that choice. In this sense, Kierkegaard argues that faith requires a "qualitative leap." This does not mean that, for Kierkegaard, faith requires rejecting reason; rather, it suggests that faith involves a relationship with an object—in this case God—for which reason is really irrelevant. Faith, at bottom, is fundamentally incomprehensible, and the attempt to make it comprehensible through reason not only attempts to do something that is impossible but also fails to correctly understand what faith involves. For Kierkegaard, the exemplar of this kind of faith is Abraham, who was asked to sacrifice his beloved son Isaac. Abraham prepares to perform this heart-wrenching deed, seeming not to pause even to ask if such a thing should be done. Rather, as Kierkegaard puts it, Abraham "suspends" the ethical—the issues of right and wrong, which can be fruitfully addressed by reason—for faith. He makes this "leap of faith" and is prepared to do God's bidding, solely on the basis of his profound commitment to God. God, ultimately, stops Abraham, who is told to sacrifice a ram in Isaac's place. Abraham, therefore, demonstrated his fundamental and absolute commitment to faith, in a way that does not appeal to reason but that may even be seen as conflicting with it. This provides a useful example of fideist thinking (Kierkegaard, 2006).
One objection to fideism is that it gives up or discards reason, which can be intellectually unsatisfying.
A couple of problems have, however, been raised for fideism, one within the framework of doctrinal religion and one a more general question about belief, evidence, and reason. First, the Roman Catholic Church, following in particular the views of Thomas Aquinas, regards fideism as a heresy—a violation of Church doctrine—and, consequently, has repeatedly rejected it. On the view of the Roman Catholic Church, faith and reason are complementary, and both are necessary for a full and adequate understanding of one's spiritual commitment. Thus, Pope Gregory XVI, in condemning fideism in 1834, wrote that "the use of reason precedes faith and, with the help of revelation and grace, leads to it" (1834). Therefore, one who regards oneself in accordance with Roman Catholicism cannot embrace fideism. It should be noted, however, that there is debate among Roman Catholics about what precisely constitutes fideism, and that there is also substantial disagreement among Protestant Christians over the relationship between faith and reason.
The more general concern that some have seen with fideism is that, by abandoning reason on such an important question, one's relationship with God is both intellectually and spiritually unsatisfying. On this view, one who believes something solely on the basis of faith has abdicated an essential part of intellectual inquiry, the search for good reasons for holding one's beliefs. From this perspective, one who holds beliefs without invoking reason provides insufficient support for those beliefs. Furthermore, if a conflict arises between two people who adopt fideism, it is unclear how such a dispute could ever be resolved, or even adequately understood. After all, if I believe, solely on the basis of faith, in a particular conception of God, someone who rejects that belief begins from a position at least as strong as mine. Having rejected the idea that arguments can be provided to defend my view, clearly I can't require my opponent to provide arguments for his objections to it. While these disputes may not matter a great deal when carried out on a purely theoretical basis, it is clear from history that these disputes have led to considerable political turmoil. By abandoning any way of resolving such disputes—indeed, actively advocating that abandonment—we may risk losing an important tool for resolving such disagreements, or for understanding our opponents in them. In this case, then, the debates may have results that do not remain purely theoretical.
Concept Review 4.1 Arguments Over the Existence of God
Ontological Argument: God contains all perfections, including existence; therefore, God necessarily exists.
Cosmological Argument: All things are the result of earlier causes; that causal sequence had to begin, and only God could have begun it.
Argument from Design: Things in nature demonstrate too much complexity and design to have arisen by accident; God must exist in order to have created and designed that complexity.
The Problem of Evil: Genuine evil exists, but it cannot coexist with an all-loving, all-powerful, and all-knowing God.
Methodological Naturalism: All complexity and sophisticated indications of design can be explained by natural processes, such as descent with modification over large amounts of time.
Fideism: Crucial beliefs must be held solely on the basis of faith, and faith alone.
4.2
Questions about the Soul
Human beings have, seemingly forever, wondered about what, if anything, makes human beings unique. A traditional candidate is the soul, and here we will look at some of the issues that arise, particularly within the context of religion, about the human soul. This will require that we also look at questions of free will, determinism, and whether human beings really are, among all the animals on earth, unique.
The Notion of the Soul
Certain features of the human being—the species Homo sapiens—have traditionally been regarded as making those human beings, among all the species on earth, unique. A wide variety of characteristics have been suggested: the ability to use language, to use tools, to make plans, to use reason, and even to tell jokes, among many others. Yet some researchers, particularly primatologists who study chimpanzees and bonobos—species closely related to humans—have suggested that these abilities don't necessarily provide ways of distinguishing humans as unique; rather than marking a difference in kind, these features may merely be much more fully developed in the human being and thus suggest a difference in degree between us and our biological relatives.
You might be familiar with the legend of Faust, who sells his soul to the devil for otherwise unobtainable knowledge. (The idea of selling your soul has also crept into contemporary culture: The Simpsons episode titled "Bart Sells His Soul," for instance.) But what, exactly, is a "soul"?
At the same time, there seems to be something about human beings that reveals them to be unique, and this notion is reinforced in the three great monotheistic traditions of Judaism, Christianity, and Islam. This idea is noted, as well, in many other religious traditions. A traditional term for this unique characteristic of human beings is "the soul." But already, with this term, we see how deeply engrained this idea is in our language. The standard word translated as "soul" in Plato is psyche, which is, of course, the source of the English word "psychology." Aristotle's work on the soul is standardly referred to by its Latin title, De Anima: Here "anima" is the source of our words "animate" and "inanimate"; that is, an "inanimate object" (such as a chair or a rock) is one that lacks a soul! The French and Spanish use a word similar to the Latin (l'âme/alma) but also use a word (l'esprit/espíritu) that is very similar to the English "spirit." The Germans, on the other hand, have a word that is sometimes translated as "soul" (Seele), but they also have Geist, which is closely related to the English word "ghost." This may sound like a lot of words that aren't in English, but it reveals a couple of very important things about our topic. First, human beings have been talking about the soul for a long time; philosophers in the West were talking about it long before Plato. Second, we see that another term creeps into the language we use to talk about the soul, a word that may be more appropriately translated as "mind." We may (or may not) want to take the terms "soul" and "mind" as referring to two different things, but we will want to be careful in trying to determine what the difference is. For instance, does something with a mind have a soul? Or does something with a soul necessarily have a mind?
We will look a bit later at some of the specific issues involved in attributing to someone a mind; indeed, a whole branch of philosophy, called naturally enough the philosophy of mind, is devoted to studying those issues. But in the context of religion, and specifically the three great monotheistic religions on which we will focus, the human soul plays an enormously important role. Consequently, a great deal of attention has been devoted to it, by philosophers, theologians, and many others. After all, in these religious traditions, what happens to one after his or her physical death will involve, fundamentally, the soul. Does one's soul find paradise, or heaven? Or is one's soul doomed to perdition, eternally punished in hell? Or, for that matter, might one find one's soul somewhere in between heaven and hell, as in the Roman Catholic doctrine of purgatory? Obviously enough, to understand these religious views, it is important to understand what the soul is.
René Descartes suggested that human beings are made up of two completely distinct substances: the mind (or the soul) and the body. This is known as dualism.
Many different views have been proposed over the millennia; however, we will focus on one particularly influential view, called "dualism." Often attributed to René Descartes, dualism is the view that the human being is made up of two radically distinct substances, the mind (or the soul) and the body. The soul, on this view, is completely distinct from the physical body. If we were to talk about it in terms of the "mind," that would be a completely different thing than the brain, which is a physical substance. All human beings, then, possess both kinds of substances, which, while we are alive on earth, interact; in that way, human beings have both a soul and a body. The advantage of the two substances here being radically distinct is that they are independent of each other (independence being a traditional way, from Aristotle, to identify something as a substance). The soul, then, does not depend upon, or require, the body to exist. In this way, when we die and our physical body decomposes, our soul—because it is independent of that physical body—can persist. In this way, Descartes and other dualists argue that the soul is not subject to the kinds of things our physical body might experience after death. Descartes thus seeks to show, with this picture, that human beings have an immortal soul. Naturally, just as our physical bodies are unique to each of us, each of us has an immortal soul that is also unique. This view of dualism has proved to be both influential and very popular; one way we can see how it has been adopted, if unconsciously, is by remembering that an old expression for death was to "give up the ghost."
Some have scoffed at dualism's separation of the mind and body. Gilbert Ryle, for instance, ridiculed the picture of an immaterial mind lodged within a physical body as a "ghost in the machine."
While Descartes and the other dualists seem to have been very successful at separating the human soul (or mind) from the human body, this separation in turn generated a difficulty that has consistently plagued those defending dualism. Perhaps the shortest and best-known objection is that of Gilbert Ryle, who saw Descartes's view as offering a picture of an incorporeal soul "floating" or as somehow present in a purely material, or corporeal body, a picture he famously derided as the "ghost in the machine" (Ryle, 1949). But even those more sympathetic to Descartes's view have had difficulty explaining how two radically distinct substances are able to interact. On the one hand, there is the human body, which is made of flesh and blood, takes up space, has a certain mass and height, and can be described in purely material terms. On the other hand, there is the human soul, which is wholly immaterial, wholly non-corporeal, not something we can locate in space, and entirely distinct from the human body. As we saw, it was this fact of being so entirely distinct that made it possible for the soul to survive the body's physical death. But how do two radically distinct substances have an effect on each other? In other words, how can a purely "mental" substance, such as the soul, have anything to do with something that is purely physical? As a simple example, my mind may intend to do something, such as raise my hand; my body then carries out this intention, by my hand (part of the body) being raised. Explaining how exactly that works—while recognizing that it does seem to work!—has been a tremendously difficult challenge for philosophers, one so famous it is now known simply as the mind–body problem.
Free Will
If the soul is a traditional component that makes a human being a person, then one of the more specific ways people demonstrate that they have a soul, or mind, is to choose to do something. In the example we just saw, one might choose to raise one's hand. A person may intend to go to the movies. I may consider whether I should change the oil in my car this afternoon or go to the baseball game instead. Someone may see someone and feel the desire to talk with that person. "Choose," "intend," "consider," and "desire" are just a few of the words we use to indicate the idea that human beings have a will. Furthermore, these things also indicate that human beings have free will; that they can, one way or another, freely choose to do one thing or another, or choose not to do one thing or another.
Free will dictates that humans can choose what they want to do—and can also be held accountable for those choices.
If human beings have free will, then they can be held responsible for their actions, and rewarded or punished accordingly. We generally don't hold a toaster "responsible" for burning the toast because we don't think the toaster made a conscious decision to do so. Often we don't hold various kinds of people fully responsible for their actions in the way we do most people, whether because they are very young, insane, or lacking full mental capacity. But beyond a certain age, most people are expected to know right from wrong and are treated accordingly. If they were not free to do the right thing (or not do the right thing), it would make no more sense to hold them responsible than it would to punish the toaster for having burned the toast.
A long-standing worry about free will involves its relationship with the idea that if everything has a cause, then our choices are really the result of earlier causes. Some regard this as leading to the idea that freedom is really an illusion. We will look a bit more closely at this issue shortly. Here, we can see that within the context of religion—again focusing on the three great monotheisms of Judaism, Christianity, and Islam—human beings must be regarded as free. Only if they are free can they be held responsible for their actions, and only if they are responsible for their actions can they legitimately be rewarded for being good and punished for being bad. Because in the religious context we may often be talking about eternal rewards and eternal punishments, these very ideas are grounded in the attribution of freedom to the human being. Whether or not we can satisfactorily prove we are free, it is clear that the monotheistic religions all treat human beings as responsible for their actions, as deserving their rewards and punishments, and as free.
Another traditional concern that philosophers have raised involves the relationship between human reason and the will. As a simple example: I may have a desire to eat seven banana splits this afternoon. At the same time, I know that doing so would not be a particularly healthful choice. The desire indicates that my will is involved, but my reason is also involved because I claim to know that I shouldn't act on this desire. All too frequently, we are familiar with acting on desires, in contrast to what our reason and knowledge tell us we should do.
Philosophers have wrestled with the idea that we often make choices that don't make sense—like eating a whole plateful of donuts.
Different philosophical responses have been put forth to explain the relationship between reason and the will. Plato often seemed convinced that no one ever actually does something that is wrong and does so knowingly; rather, a person who really knows that something is wrong would not do it. For Plato, then, we act immorally because we are ignorant; we don't possess sufficient wisdom to realize that our acts are harmful. Many are skeptical of Plato's account; after all, it seems that many people do many things that harm themselves or harm others, while seeming to have full knowledge that these actions are harmful. Although philosophers have argued about how much, or how little, of Plato's view Aristotle adopted, one standard reading of Aristotle sees him as recognizing that there is a battle between reason and the will, in that emotions and feelings—what are sometimes called "the passions"—can overwhelm the intellect and reason. When our emotions win such a battle, and we decide to do something our reason tells us we should not, we are said to demonstrate "weakness of the will." In such cases, our desires prevent reason from operating as it should.
As is often the case, the views of Plato and Aristotle dominate the history of the philosophical discussion here, and these have been enormously influential in determining how religious traditions treat the human will. Many philosophers, psychologists, and others have struggled to explain why human beings seem to possess weakness of the will. Descartes argues that humans choose to act wrongly because the will is virtually infinite and cannot always be constrained by the bounds of finite human reason. One can find 20th-century philosophers, such as R.M. Hare, who argue that weakness of the will is impossible, while others, such as Donald Davidson (1917–2003), argue that it may be unavoidable. Still others don't even see weakness of the will as a problem at all. Arthur Schopenhauer regarded the universe itself as an expression of will, and the human being's desire to survive and flourish as an almost instinctual drive. Influenced by Schopenhauer, but ultimately rejecting his approach, Friedrich Nietzsche saw human life as an expression of a will to power; those few human beings who could achieve it were driven to dominate their surroundings (including nature and other human beings). Both, in contrast to much of the philosophical tradition, saw reason as interfering, in one way or another, with the need for the will to express itself.
These and many other views have been proposed to explain what the human will is, how it can be free, or regarded as free (if it is free), and what the relationship is between human reason and the human will. It should be clear, however, that on virtually all traditional religious accounts, human beings are by their very nature free, and thus are to be held responsible for their actions unless a compelling reason can be given that shows why they should not be held responsible. Thus, a free will, and ultimately the soul as fundamentally free, will be an essential aspect of those we regard as persons.
Free Will and Determinism
Causal determinism challenges the idea of free will, suggesting that our present decisions are caused by antecedent events.
On what might be called the "standard" view of the human being, people have souls (or minds), have the ability to use reason, and possess free will. As we have seen, these features allow us to hold people responsible for their actions. Of course, in a religious context, human beings may be held responsible for their actions in a more permanent, or eternal, sense.
But there is a long-standing difficulty with attributing a genuine sense of freedom to human beings, a difficulty presented by causal determinism. In short, if all our actions are caused by the various things that precede them—what are known as antecedent events—then in what sense are we really free? This very traditional concern can be seen in premise-conclusion form as follows:
1. At any point in time, a person choosing to do something could have chosen otherwise, and this choice is an act.
2. All acts are events.
3. All events are caused.
4. If an event is caused, it is determined by those causes.
5. An act is an event that is causally determined.
therefore
A person's actions are causally determined.
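The inference can be traced as a chain of conditionals. Writing A for "is an act," E for "is an event," C for "is caused," and D for "is determined" (the notation is a sketch added here, not part of the original argument), premises 2 through 5 give:
A → E (all acts are events)
E → C (all events are caused)
C → D (caused events are determined)
therefore
A → D (every act is causally determined)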
Free Will: Why We Do What We Do
Eminent philosopher Daniel Dennett considers various views of free will and determinism, and whether determinism really threatens our notions of human freedom.
Question: Why do you think most people find the problem of free will and determinism so compelling to consider, and so important a problem to try and solve?
Here the conclusion, while informally stated, contradicts the first premise, indicating that the first premise could not be true. Many different versions of this argument are available, but the basic point should be clear: If our choices are causally determined by antecedent events, then we aren't really making those choices. If we aren't really making those choices, we aren't really free. Of course, if we aren't really free, then we cannot be held responsible for our actions, and cannot be punished for them either.
This argument worries people for a couple of pretty obvious reasons. First of all, we certainly seem to think of ourselves as free; are we all in the grip of some illusion? Are we all convinced that we really are free, when we are not? Second, don't we have to regard ourselves, and others, as free, in order for each of us to be held responsible for our actions? Imagine Henry steals something; we can only consider that behavior wrong if we think he had a choice to do otherwise. But if Henry's theft was caused by earlier antecedent events, over which he had no control, can we really say he was doing something wrong? Wouldn't it be more similar to Henry having an illness that was caused by antecedent causes, not by "choice"? We usually don't think someone is morally blameworthy for having gotten sick, do we?
Compatibilism and Incompatibilism
Naturally, a variety of responses have been made showing that freedom and determinism do not necessarily conflict. One standard response is called compatibilism, expressing the idea that freedom is compatible with determinism, or that the two do not necessarily conflict. One version of compatibilism is that of David Hume, who argues that free will doesn't require absolute freedom but simply a sense of being able to do otherwise when confronted by a choice. If, when so confronted, it is possible for you to choose otherwise, that seems to be a sufficient sense of freedom to deflect any significant threat of determinism. Others, such as Kant, seem to think that when we choose to do something, our reasoning seems to be spontaneous, and thus is not wholly subject to antecedent causes; in this sense, one might say our decision looks to be "self-caused." This is another way, some have argued, that we can have a sufficiently robust sense of freedom so that we don't have to worry about determinism, and thus the two are compatible. Some might insist that this is too weak a response, and that this only provides us a psychological view: We can regard ourselves as free, but we might be wrong! Hence, another approach has been offered, incompatibilism.
Both compatibilists and incompatibilists champion free will in spite of determinism. Compatibilists say the sense of having a choice is enough to deflect the threat of determinism. Incompatibilists say that free will and determinism cannot coexist.
Incompatibilists argue that genuine freedom cannot be compatible with the kind of causal determinism we saw in the preceding argument. Rather than argue that we can somehow accept a world that is completely determined by causal events, the incompatibilist argues that freedom and determinism cannot be reconciled. Thus, to maintain that we are genuinely free, the incompatibilist must show that our choices are not, in fact, causally determined. It is important to see that both compatibilists and incompatibilists want to defend a rich sense of human freedom; they differ on how seriously they regard the threat of determinism. The distinctions here can be subtle, and the arguments can get very complex and difficult. But the difference might be made a bit clearer by looking at the response Kant makes. One might interpret Kant as a compatibilist: Since we aren't in any important way aware that our reasons result from earlier causes, we can at least think of ourselves as free. But one might interpret Kant as an incompatibilist: Our reasons actually are not the result of earlier causes but are genuinely independent of those causes, and as such spontaneous. This independence, then, grounds a richer sense of freedom, distinct from determinism, and in this way we really are free (rather than just thinking we are). Even though the debate over whether Kant is better regarded as a compatibilist or an incompatibilist rages among Kant scholars, we can at least see such a debate as indicating the complexity of the issues involved.
Soft and Hard Determinism
The hard determinist believes our hands are tied when it comes to free will, and that our choices are determined by whatever happened before.
Some versions of compatibilism are known as soft determinism. This kind of determinism is said to be "soft" because the determinism involved leaves open the possibility of freedom. But, as you may have guessed, this implies a different, stronger version of determinism, hard determinism. The hard determinist simply asserts that any event is the result of the antecedent conditions that lead to it. Just as a rock thrown up in the air will fall back down to the ground because gravity causes it to, any decision I might seem to make is really the result of causes that precede this decision. Indeed, many of these causes may have occurred long before I was even born. Perhaps I go to a music store and purchase a Bach recording. It seems that I might have been able to buy something by Beethoven, but what I have forgotten is that my parents spanked me as a child while Beethoven was playing. They had also both come to love Bach (and played his music often) because they had met when their parents (my grandparents) had taken them to a Bach concert. My grandparents, it turned out, loved Bach because, during the Great Depression, they each won a contest, the prize for which was $50 and a Bach recording. One can continue along these lines, but the hard determinist might well say that the reason I bought the Bach recording instead of Beethoven was that my grandparents had entered a contest during the Great Depression. This may sound a bit silly after a while; however, the basic point may be more troubling: If all events are determined by causal antecedents, then why aren't my own judgments determined by causal antecedents? And if they are, in what sense am I really free?
One, somewhat pragmatic, response to the hard determinist seems to reveal the view to be incoherent, or at least so implausible that it would be very difficult to maintain. After all, if we were to accept hard determinism, what would that exactly mean? On the view itself, did we have any choice other than to accept it? For that matter, doesn't the hard determinist have to be a hard determinist? Were he really able not to be a hard determinist, then it would seem that the view itself is false, in that we freely choose whether to adopt the theory or not. Hence, it is worth asking whether anyone really is a hard determinist, and what such a claim would amount to, assuming such a view could even make sense.
In any case, our conception of the human being seems to require that we treat others as being responsible for their actions, which in turn seems to assume that we regard them as free. These results hold for ourselves, as well: We think of ourselves as free, at least to the extent that we believe we can choose to do other than we do, and that we should be held responsible for the choices we make. It should be clear that, on the larger religious perspective of the person, the human soul (or mind), and the freedom of the human will, these features are crucial in evaluating our actions, others' actions, and how all of us appear to each other and to God.
Naturalism and the Soul
The religious traditions of the West, including the three great monotheisms of Judaism, Christianity, and Islam, insist that human beings, uniquely, possess a soul. This essential feature of what it means to be a human being, or a person, is fundamental to grounding a sense of freedom, as well as a sense of responsibility. At the same time, for various reasons, some philosophers and scientists have argued that there is no independent substance that persists after a person dies; in short, there isn't really some object or thing we can identify as the soul. Rather, for some it is a useful term to employ to describe certain mental states with which we are all familiar, whereas others dismiss the very idea as an illusion that cannot be supported by evidence or argument.
This returns us to an earlier discussion, namely that of methodological naturalism. As discussed there, in terms of the philosophy of science, this methodological approach rejects any introduction of supernatural explanations; thus, the methodological naturalist will not accept an explanation of something that requires fairies or angels or God. Rather, explanations must stick to standard causal explanations, based on solid, public evidence. Amanda might leave a tooth under her pillow one night and discover that it had been replaced by some money while she was asleep. The methodological naturalist will have to seek an explanation that does not involve the tooth fairy, but rather more ordinary—and perhaps less interesting—explanations involving Amanda's parents having replaced the tooth with some money. As we also saw, however, methodological naturalism is, as the name indicates, a method; there is no necessary implication that such supernatural causes do, or do not, exist. Rather, this method avoids employing them in providing its explanations.
A naturalist looks for evidence and avoids employing supernatural explanations for concepts like the soul.
When such a naturalist looks at the issue, then, all that can be permitted in an explanation of what is referred to as the "soul" is, for human beings, their bodies, which include the brain, and their activity (thinking, judging, speaking, acting, and the like). We earlier encountered Gilbert Ryle's derisive term for what Descartes referred to as the soul, or the mind, "the ghost in the machine." If we take Ryle's language seriously here, then looking for a separate, distinct substance (namely, the soul) that can exist independently from the human body will be about as successful as looking for ghosts. While some may insist that there are ghosts, or even that they have seen ghosts, to accept such a claim the naturalist will require evidence that can be reproduced, examined, and made publicly available. Notoriously, the evidence for ghosts seems, currently, to be lacking in precisely these areas; the methodological naturalist may well say the same about our evidence for a Cartesian conception of the soul.
Then what is the naturalist's explanation for the soul, or the mind? Many naturalists are happy to recognize that there are activities that are fundamentally mental, and we use specific language (sometimes called mentalese, particularly by those who seek to avoid it) to describe such activities as fear, desire, envy, and attraction, among many others that have this aspect to them. How does the naturalist understand such common expressions of mental states, not to mention even trickier notions such as consciousness and self-consciousness? When I am aware of myself thinking about my own ability to think, I wonder whether the naturalist can account for that kind of activity.
Certainly a number of attempts have been offered. One way has been to describe behavior that seems essentially mental as an emergent property. This is a property that, obviously enough, emerges from a collection of more basic things. The easiest example, perhaps, is music: By the mere striking of one thing against another, such as one's fingers against some guitar strings, we vibrate the air in such a way that we produce a set of waves that, when they strike a person's ear, generate a complex set of sounds we call music. Thus, music emerges from a set of more basic things, namely those things used to create the sound. The music, in this case, isn't identical to those things, but it is dependent upon them. In the same way, it is suggested that such a thing as hope, which we might express verbally (as in "I hope to catch a fish today"), emerges as a general term for the mental state that is created by the enormously complex physical causes in the brain. Here, again, hope is not identical to those causes, but it is dependent upon them.
In naturalism, states that are considered "mental" are explained as being dependent on the neurological system. Feelings like hope and desire are emergent properties: They emerge from a collection of physical reactions.
Some philosophers have used the term "folk psychology" to describe the views that have traditionally dominated discussions of the mind (and, consequently, the soul). Often, this term is used critically, to indicate that such things as belief, desire, or want may be used in a naïve or unsophisticated way, but that there is a naturalist explanation that can replace these descriptions, in terms of one's body: specifically, one's brain. So imagine Ted is sitting on the couch and realizes he is thirsty; he then gets up, goes to the kitchen, and gets a glass of water. We might explain what Ted did in folk psychological terms, and say he discovered that he was thirsty and believed that water would eliminate his thirst; since he then wanted a drink of water, he went to the kitchen to get one. Those who reject such folk psychology might describe this in purely physical terms: A certain sensation in his mouth and throat sent impulses to Ted's brain, registering a state called thirst, which he remembered could be eliminated by drinking water. Many who adopt this kind of strategy rely on the idea that the more we discover about the brain, the more we can use those discoveries to explain our behavior, without requiring mentalese or folk psychology to provide the ultimate explanation. On many such views, we will continue to use terms such as "want" and "believe" because they make things a lot easier to explain, not because such terms refer to some state that can be found in a human being independent of that person's brain and body.
There is a great deal of work being done studying the brain and the complex causal relationship between it and what we actually do or say. One interdisciplinary field doing this kind of work, to which philosophy contributes, is cognitive science. Some regard cognitive science as holding the key to explaining all human behavior in terms of the physical body of a human being; others, including those very sympathetic to methodological naturalism, are more skeptical. Thus, Donald Davidson, a well-known 20th-century American philosopher, proposed a theory he called "anomalous monism." An anomaly is a puzzle, often a puzzle that may be regarded as difficult, or even impossible, to solve. Monism is the view that the world is made up of a single kind of substance. Davidson combines these two in his account of the things we do that we generally describe in terms of the mental. Thus, an event we might describe in terms of the mental—"I want to go to the movies"—will ultimately be contained in a universe that consists of just physical substances, which is why it is a version of monism. But to try to state the causal laws between the physical substance and the mental desire involved is, for Davidson, impossible to do, which is why it is anomalous. I can spend all day discussing my brain and my general physical state, and I can recognize that I have a desire to go to the movies; what I can never do is show in any law-like or predictable way that my physical state will result in my desire. In this way, Davidson seems to be able to maintain a commitment to naturalism—in the sense that the world only consists of material objects and the natural causes among them—while making room for the ability to use mental terms, such as "believe" and "intend" and "want."
Conclusions from a Debate
We have covered a lot of material here. We have seen several traditional arguments for the existence of God, many of which still flourish. We have also seen some of the standard objections to those arguments, including the argument from evil, and we have asked whether one should even engage in such arguments or instead believe what one believes solely on the basis of faith.
The view of the human person as unique and responsible involves a commitment to human freedom and may require a commitment to something that has been called a "soul" or a "mind" (or both). We looked at some of the standard reasons for how that view can be established and maintained, but we have also seen why some question what one is actually committed to when adopting a conception of the human soul. We have also looked at the long-standing problem of trying to reconcile human freedom with the pervasive causal structure that seems to hold the world together. These issues are, simultaneously, both some of the most difficult and the most important topics taken up by philosophers. Obviously enough, our discussion can introduce only a small part of how these topics have been treated, and can give some hints about what some of the available options are. But can we also draw any conclusions from this discussion, recognizing that, given the issues involved—such as the nature of God, and of the human being—these arguments will continue?
A baby boy having his brain tested. Without aiming to reduce the psychological to the physical, perhaps we should acknowledge that we still have much to learn about the natural world and our own nervous system.
One preliminary conclusion may be that the more we discover about the brain, the central nervous system, and the human body in general, the more we may discover that some things we associate with a "soul" may be explained through science, specifically cognitive science. We have come a long way since natural science supported a clear causal connection between odd behavior and the phases of the moon, which gave us our term "lunacy." We no longer use the four "humors" of blood, phlegm, yellow bile, and black bile to diagnose why people act the way they do, although terms such as "phlegmatic" and "bilious" remain to remind us of an earlier stage of medicine. But we do continue to make remarkable discoveries about the brain, and many questions remain to be explored about the relationship between such things as emotions and attitudes and their potential physiological sources. This, however, is not necessarily to reduce the psychological to the physical or the mental to the material; rather, our preliminary conclusion may simply be that any such conclusions are, themselves, preliminary, and that much remains to be determined.
A second conclusion we may draw is that it is of particular importance for many to understand the relationship between faith and reason. While the debates and controversies that rage among philosophers, and among theologians, may become tedious to many after a while, understanding some of these arguments and criticisms has its benefits. After all, if one cannot defend one's beliefs, why should one insist on holding these beliefs? Certainly one's conclusions and beliefs may go beyond one's ability to support them in response to the various arguments that have been mustered against them. But seeing why, and how, these beliefs may be said to transcend the evidence and arguments involved is an important realization. As is often the case in philosophy, it may be the journey, not the destination, that teaches the most valuable lessons.
Finally, as we will see in the following section, understanding our own beliefs, and the reasons we have for holding those beliefs, teaches us more than just something about ourselves. It also leads to a greater understanding of others: their beliefs, their approach to religion and spirituality, and the reasons they themselves possess for their beliefs. Understanding another person better does not, of course, require that we agree with that other person. But we may make enormous strides in seeing both what we share, and where we disagree, when we make the effort to engage in an inquiry into what others believe, particularly when these beliefs—such as the belief in God and in the human soul—play such an important part in so many people's lives and in their understanding of themselves and their world.
4.3
Religion in the Public Sphere
We have looked at some of the theoretical issues debated in the philosophy of religion, such as the existence of God and the nature of the person. We will now turn to some more practical questions that examine the role of religion in society, the problems that arise when religious views conflict, and how these problems can be addressed, if not resolved.
Religious Differences
Even within specific branches of religion there are divisions. These young women are Shiite Muslims and are observing the annual day of mourning for Fatima, the daughter of the Prophet Muhammad. One difference between Shiites and Sunnis is that Shiites believe their religious leaders should be descendants of Muhammad. Fatima is considered the "purest of women" by Shiite Muslims.
There are hundreds of different religions in the United States and thousands of different denominations of, and approaches to, those religions. By far the religion with the most adherents in the United States is Christianity; there are also significant numbers of Jews and Muslims, as well as Hindus, Buddhists, and many others. Surprisingly, the most recent polling data indicates that after Christianity, the largest group identifies itself as having "no religion," which is taken to include those with no commitment to theism. Atheists, agnostics, deists, and others fall into this category (The Pew Forum on Religion and Public Life, 2010).
This can make for a great deal of conflict. Obviously enough, a theist will disagree with an atheist over such fundamental questions as the existence of God. But within specific branches of religion, there are important doctrinal differences, sometimes leading to a schism, or a division among adherents of the general denomination. Thus, there are substantial disputes among Baptists, Episcopalians, Roman Catholics, Eastern Orthodox Christians, Mormons, and Lutherans; and these are all denominations of Christianity. There are also substantial differences among Orthodox, Conservative, and Reform Jews, and between Shiite and Sunni interpretations of Islam. Obviously enough, Jews and Christians disagree with each other over fundamental religious truths, as do Christians and Muslims, and Muslims and Jews. Thus there are a vast number of differences among those who follow the three great monotheisms, and, as noted, between theists and non-theists.
This is not surprising, of course; history is full of examples of such disputes generating social, political, and military conflicts. Religious differences continue to generate a wide range of hostilities, although sometimes it is suggested that religion is used to disguise what are, in fact, political disputes. One might be tempted to be rather pessimistic here. In spite of the fact that the monotheistic traditions of Judaism, Christianity, and Islam are all explicitly devoted to peace, a disturbing amount of violence has been committed in the names of each tradition.
One of the challenges of a diverse society is to recognize that diversity and find a solution to the kinds of conflicts it seems to generate. On the one hand, a person's religion is often essential to that person's self-conception, and to eliminate the religious component of that conception is to eliminate the way that person fundamentally thinks of himself or herself. On the other hand, what if someone's religious self–conception, by accepting the doctrines of that religion as absolutely true, seems to require that person to reject the doctrines of those who follow another faith? Is there a way of resolving that kind of tension, if the principles of one religion entail that another religion's principles are false?
In a diverse society, religious conflict and acts of intolerance are bound to occur. Here, a Jewish cemetery has been vandalized with a swastika. It is important to understand the sources of such disagreements in an effort to reach some resolution.
Furthermore, in a diverse society, as we have seen, innumerable conflicts inevitably arise when a large number of people live within the same society but possess contrasting views on religion. In such a society, there may be a temptation to succumb to the wishes of the majority, or steps may need to be taken to ensure that such a temptation is avoided. These conflicts have riddled American history, from denying Roman Catholics the right to vote, to violence committed against Mormons, to having unwritten admissions quotas for Jews at prestigious universities. At the same time, some have argued that their rights to practice their religion are restricted by prohibiting school prayer, or by prohibiting the placement of religious displays on public property. These debates will, of course, continue. But it is important to see the extent and the causes of religious difference, both among those who are religious as well as between those who are and those who are not. Understanding the source of the various disagreements involved will be extremely helpful in determining what, if any, resolution to those disagreements is available.
Religious Tolerance
Some regard religion to be a private matter, but what happens when religion is publicly displayed? This 29-foot concrete cross, part of a war veteran memorial in San Diego, California, generated controversy when an atheist sued for its removal, saying it violated the "separation of church and state" because it was on public land.
It may be tempting to be very open–minded here and suggest that a person's religious viewpoints are private, are not the business of anyone else, and thus should be easily tolerated by others. In such cases, we seem to be following the Golden Rule: Respect my religious views just as you would like yours to be respected. Yet what if my religious views include public demonstrations of my faith, such as having a prayer at a public high school graduation or teaching Creationism as an alternative to evolutionary biology or insisting that the Ten Commandments be placed in courthouses? Such desires might conflict with those of other faith traditions and, of course, those with no faith tradition. One person might be offended by a company whose employees greet customers in December with "Merry Christmas," while another person might be equally offended by a company whose employees are prohibited from greeting customers in that fashion. Can these conflicts and tensions be resolved, or at least minimized?
The temptation, again, is to be open–minded and to adopt a position of tolerance, where all views are accepted. This view is quite similar to one we saw earlier: relativism. One person's beliefs, or those of a given culture, aren't "true" or "false"; rather, they are true for that person or for that culture. This seems to solve the problem and places a high value on tolerance.
But as we saw earlier, such relativism may itself be incoherent. For instance, what if my religious views require shunning those of other faiths? Or, worse, what if my religious views require forced conversions of others, and thus my religious views could be said to insist on not tolerating others' views? Is the relativist then asked to be tolerant of such intolerance, which, of course, would include intolerance of relativism itself? It is not clear that relativism can sustain a view that seems to embrace its own denial.
That is more of a logical point, however, than a religious one. Perhaps more pressing for the relativist's approach is the case in which a religious doctrine involves practices, or rituals, that the larger society finds objectionable. For instance, the Church of Jesus Christ of Latter-day Saints, or Mormonism, originally practiced polygamy, in which men would have more than one wife. This practice was condemned and discontinued by the official Church in 1890; so-called "Mormon fundamentalists," however, continued to practice polygamy, and it may still be practiced in some communities. Presumably, a relativist would insist that the larger community might have overstepped its boundaries in characterizing polygamy as "wrong." Rather, it may have been "wrong" for the larger community, or for the majority of those in the larger community, but it may not have been wrong for those who believed it was an important part of their religious commitment. This is merely one of innumerable conflicts that may arise within a society of diverse religious beliefs.
A member of Malaysia's Ikhwan Polygamy Club poses with his wives. Polygamy is legal in predominantly Muslim Malaysia, and the club says its aim is to help single mothers and older women find husbands. Islam allows a man to have up to four spouses at the same time. Relativism encourages us to be open–minded and tolerant, but what do you do when your own religion doesn't allow for relativism?
This leads to a more general issue for a diverse society that has a large majority who adhere to a basic set of beliefs. For instance, in the United States, some 75% of Americans identify themselves as Christians. It is argued with some frequency that the United States is a Christian nation, having been founded largely by Christians, while including the phrase "In God We Trust" on its currency and the phrase "One Nation, Under God" in its Pledge of Allegiance. The Declaration of Independence refers to the "Creator" and "Divine Providence," Congress begins its sessions with a prayer, both the Senate and the House of Representatives have an official office of chaplain, and the Supreme Court begins each session with the announcement "God save the United States and this honorable court." Should the desires and wishes of such a large majority really be suppressed? Is it a restriction on their right to practice religion if a minority prevents Christians from practicing their religion as they see fit?
Others argue, in contrast, that the United States is not a Christian nation. On this view, it is not legitimate to infer that the United States is a Christian nation from various references to God and to a Creator; these can be terms used by deists, and deism is quite distinct from Christianity and its notion of a personal God. Some point to the fact that the Declaration of Independence is not a legally binding document; the U.S. Constitution, however, is legally binding and makes two religious references: prohibiting a religious test for political candidates and, in the First Amendment, restricting state endorsement of or interference with religion. Others refer to a somewhat obscure treaty, the Treaty of Tripoli, ratified by the U.S. Senate and signed by President John Adams in 1797. Given that the Constitution states that such treaties have the force of law in the United States, there has been some emphasis placed on this treaty's statement that "the Government of the United States of America is not, in any sense, founded on the Christian religion." Finally, some will point out that "In God We Trust" was added only to some coins in 1864, due to increased religious feeling generated by the Civil War. The phrase was not added to all currency until 1956; similarly, the phrase "One Nation, Under God" was officially added to the Pledge of Allegiance in 1954. Critics have suggested that such additions were made during the Cold War, which was fought against the officially atheistic Soviet Union, and had more to do with politics than religion. Certainly, things done in the 1950s say little about the founding of the nation in 1776.
Some people say the United States is a Christian nation, pointing to references to God on U.S. money and in historical documents. Others, however, argue otherwise. For instance, "In God We Trust" was added to some coins only in 1864.
We can thus see that religious differences generate deep passions and frequent debate. The First Amendment of the Constitution seems to have sought to promote tolerance by keeping government out of the religion business altogether. Thus, many very religious people support the idea that the government should neither establish nor prevent religious practice: not because they dismiss religion but because they regard it as far too important for the government to be involved. Some fear, for instance, that the ritual repetition of vague prayers, written in order not to offend those of any specific faith, removes the powerful spiritual message such prayers are meant to convey. On this view, then, religious values are so important that the government should not be allowed near them, and their promotion should take place outside of the public sphere, including schools.
In any case, we can see that these issues can be difficult to resolve and require considerable care and sensitivity in addressing them. In any society, there may be the risk of the majority acting "tyrannically," imposing its views on those who do not share them. At the same time, the minority may be tempted to prevent a legitimate expression of religious faith, a result that will seem unfair and will deny certain rights to the majority because it is the majority. Trying to resolve these tensions, particularly in a society that is becoming increasingly diverse, both in terms of religious minorities and in terms of those of no religion, will continue to be a difficult balancing act, and will continue to require that careful attention be paid to all those whose rights are involved.
Religious Tolerance and Pluralism
Here several experts discuss the growing pluralism of religion in America, and the challenges and opportunities that pluralism offers.
Question: Some regard religious pluralism as a problem, while others regard it as a benefit to society. What reasons can you suggest for each of these reactions?
Marx's Critique of Religion
For many people, their religion is fundamental to who they are. It informs their moral worldview, helping make clear what is right and wrong; gives them a sense of community; is an important part of how they raise their children; provides solace about the present and about the future; and guides their lives in fundamental ways. It offers them an indispensable foundation, and thus is called upon for the most significant events in their lives. For Christians, for instance, between one's baptism and one's funeral, one will have one's marriage performed in church, and, of course, one will insist that one's own children also have important events marked by these same religious traditions. In short, many people regard religion as an important part of their lives; they cannot conceive of their lives without its guidance and support. As such, it is indispensable to the way these people conceive of themselves.
Karl Marx saw religion as an ideology and was rather cynical about the role religion played in society.
Karl Marx saw it differently. Marx, of course, is enormously controversial, and his harsh comments about religion are seen by many as betraying a failure to understand the role religion plays in a person's life, as well as being simply offensive. At the same time, along with those of Nietzsche (whom we've discussed) and Sigmund Freud, Marx's critique of religion has been extremely influential. This influence can be seen not just in those countries that were inspired toward revolution, such as the former Soviet Union and the People's Republic of China, but also in the work of many Western philosophers who, while not necessarily agreeing with Marx's politics, may share some of his cynicism about how religion has been used within various societies. Consequently, it is useful to be familiar with the outlines of Marx's critique. At the same time, it should be noted that Marx's philosophy is not just controversial, but very complex, and thus some things will be a bit oversimplified here.
For Marx, religion functions as an ideology. "Ideology" is a term used to characterize a set of ideas that structures a society and provides that society with ways of evaluating what is good, what is right, and what its goals should be. Marx argues that in any society the ideology that dominates that society will be the ideology of the ruling class. Thus, in a capitalist society, the ideology that dominates that society will be those ideas that support capitalism: private ownership of property, the ability to buy another person's labor on a free market, the ability to sell one's own labor on that market, and the ability to accumulate wealth to produce goods and services. In addition to the basic economic elements of this ideology, there will also be more abstract, conceptual elements, such as moral values and even what constitutes good art. Religion plays an important role in the construction of these abstract conceptual elements of society, and in a capitalist society it plays, on Marx's view, a distinct and important role.
A shoe factory. Marx saw the working class as exploited, with the worker performing unfulfilling work and the owner benefitting from the surplus value.
Marx sees economic profit as produced by hiring a person to do a certain amount of work, during which time that person produces something of value. Because the value that person produces is greater than what he or she is paid to produce it, the difference creates something to sell—at a profit—and is what Marx calls "surplus value." The capitalist who fails to produce surplus value, or who doesn't generate profit, will soon go out of business. Marx also looks at this from the perspective of the worker producing that profit and regards this surplus value as wealth created for the person who owns the worker's labor. From this perspective, that profit is seen as the worker making profit for the owner, and thus the worker is, in Marx's language, exploited. In this way, Marx sees capitalism as producing a system that turns workers into commodities, who create wealth for those who own their labor power. On this view, the worker has no particular interest in what he or she produces—in Marx's terms the worker is alienated from what he or she produces—and thus does not feel fulfilled by that work. Overall, then, Marx sees the worker as put in the condition of being another part of the capitalist mechanism to create wealth, and in this condition the worker is alienated from his or her work, and is exploited to the extent that the worker's labor produces profit for someone else, the owner.
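A toy calculation may help make "surplus value" concrete (the figures here are invented purely for illustration): suppose a worker is paid $100 for a day's labor and produces goods the owner sells for $150. Then

```latex
\text{surplus value} \;=\; \text{value produced} \;-\; \text{wages paid}
\;=\; \$150 - \$100 \;=\; \$50 .
```

The $50 accrues to the owner of the labor rather than to the worker who created it, which is the sense in which, for Marx, the worker is exploited.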
Marx sees religion as being used to blind the working class to their true condition, promising them eternal reward in the afterlife, and thus making them more content.
What role, then, does religion play in all of this? Simply put, Marx sees religion as being used within capitalism to make the condition of the worker more tolerable. Religion provides solace and comfort to those who would be otherwise unhappy in being exploited and in not feeling sufficiently "connected" with their jobs. For Marx, religion is used to hide from the worker the worker's real condition. Thus, he advocates replacing what he regards as the illusory happiness, based on religion and another world (such as the promise of eternal paradise), with the real, actual, genuine happiness to be achieved in this world.
An example from an earlier historical era may make some of this a bit clearer. Consider those who were slaves in the United States in the early part of the 1800s. Their lives, of course, were not their own: They were literally owned by their masters. This is different from capitalism, under which a person can buy another person's labor; the ability to work is offered on the free market to the person who offers the worker the best deal. A master might mistreat a slave, and if the slave didn't perform, he or she could be threatened with violence or be sold to another master who might treat the slave worse. But it is in the master's economic interest to get as much work out of the slave as possible; the more the slave produces, the more wealth is created for the master. The master will thus take care of the slave in ways that are necessary to get the most out of that slave. In addition, then, to feeding, clothing, and housing the slave, the master will offer other ways of keeping the slave relatively content. Thus, many slave owners insisted that their slaves hear about religion. For, after all, if one's life on earth is short, and if after one's death eternal paradise awaits as the reward for those who deserve it, a slave may be much more willing to put up with the misery that inevitably accompanies being a slave. A religion that teaches obedience, shows how to endure adversity, and teaches that this world is a world of sin may make it easier to put up with this world's genuine, but ultimately fleeting, miseries. As Psalm 23 of the Hebrew Bible puts it, this world may be the "valley of the shadow of death," but the Lord shall shepherd me through it, comfort me, and, finally, allow me to dwell in His house forever.
In short, then, Marx sees the promise of an eternal reward used to deflect and conceal the genuine horrors of one's life, whether as a slave or as a factory worker in Victorian England of the 1880s. As long as one accepts this world as simply one to be "passed through" on the way to the pure, eternal reward of paradise, the alienation and exploitation of a given economic system can be made to be more tolerable and less important. For Marx, ultimately, this means that religion can be successfully used as part of an ideology to prevent, or at least lessen the risk of, people rising up to change the economic and ideological system that causes their unhappiness in this world.
Religion and Politics
We may have been told, at some point in our lives, not to bring up two things when having a conversation with someone we don't know very well: religion and politics. Both, of course, are very sensitive issues and can generate strong reactions; political disagreements can be very passionate, as can religious disagreements. Thus the advice: avoid these topics in "polite company." In spite of this—or perhaps because of it—philosophers seem to be drawn naturally to both topics. Whether we try to avoid them or not, the two seem to be inextricably intertwined.
There seems to be an expectation that U.S. politicians be religious, even Christian. President Barack Obama frequently ends his speeches to the American public with "God Bless America." During the 2008 campaign, rumors spread that Obama was actually Muslim, which detractors seemed to hope would make him less appealing.
As we saw earlier, the citizens of the United States are, generally, a religious people, and particularly in comparison to relatively similar countries (those of Europe, and Japan), Americans are very religious. Most political candidates are not just willing to make sure potential voters know their religious orientation; they insist upon it. They can often be seen in the media attending church and frequently end speeches with "God Bless the United States." Many politicians participate in the National Prayer Breakfast, and in 1952 Congress designated the first Thursday in May as a National Day of Prayer.
At the same time, the Constitution of the United States mentions religion exactly twice. Article VI states that "no religious test shall ever be required as a qualification to any office or public trust under the United States." As we have seen, the First Amendment prohibits both the establishment of a specific religion and governmental interference in its free exercise. An indirect mention of religion is also included in the presidential oath, where one is required to "solemnly swear" (presumably to God) to uphold the office; even here, however, one is allowed to "affirm" this obligation, allowing those who wish not to swear to take this oath.
Although there is no legal prohibition preventing those from faiths other than the three great monotheisms from seeking office, there is a question of whether an informal or unofficial barrier exists. For instance, one might wonder how successful a candidate for public office would be if he or she publicly acknowledged being an atheist (not believing in God) or an agnostic (unsure whether there is a God or not). Indeed, those from other religious backgrounds, such as Islam, have encountered some resistance, as did Keith Ellison, a Muslim who ran for (and was elected to) the U.S. Congress. While the Constitution, then, prohibits any sort of religious test for public office, it is less clear whether or not one functions informally. Interestingly enough, even though they are superseded by the U.S. Constitution, seven state constitutions still contain their original religious tests, such as Mississippi's, which reads, "No person who denies the existence of a Supreme Being shall hold any office in this state."
Under the principle of charity, you would assume the other person is rational and would try to understand their beliefs rather than rejecting them outright.
What, then, is the role of religion within politics? Many Americans would insist that a politician who lacks the grounding provided by his or her faith is similar to a boat without a rudder, thus wandering aimlessly. Politicians, perhaps more than most, need the guidance provided by religious doctrine, and thus it seems more than reasonable to conclude that one lacking religion may not be fit to offer the moral leadership such political posts demand. At the same time, this would seem to impose an unofficial religious test for political office, the kind specifically prohibited by the U.S. Constitution. It may also imply that it is assumed that non–theists cannot lead fully moral lives, an assumption that might well be debated.
It is unlikely that these tensions and challenges will be resolved anytime soon. Indeed, as the United States becomes a country that continues to increase in its cultural, ethnic, and religious diversity, they will no doubt become more pressing. Those living in such a diverse society need to arrive at some way of defusing such tensions; we are all too familiar with the devastating results when they are not.
One suggestion that has been made involves invoking the principle of charity. In this case, "charity" does not involve giving things to those in need of help; rather, "charity" is that offered to one's opponent in an argument. The principle of charity states, more or less, that one seeks to interpret what one's opponent, or conversational partner, says in ways that make understanding easier to achieve. Thus, we assume that the other person is rational and is using language in a more or less accepted way, and that the other person's views are largely true. The other person's views may not be true, but we assume they are until we have reason to suspect they are not. Importantly, we don't simply reject those beliefs we don't share; we first try to understand what those beliefs are, and why someone might hold them. Naturally, we expect the other person to regard us in the same way. This is, then, sort of a "Golden Rule" of conversation. After all, we are presumably trying to understand each other; thus, we will focus on what allows us to do so, rather than emphasizing contradictions and other difficulties.
Invoking the principle of charity hardly leads to the unlikely result that we all agree with each other. Rather, it allows us to focus on opposing viewpoints in a way that highlights agreement and understanding. This, ideally, will allow us to see where our opponent is "coming from"; although we may still disagree, we may have a much better grasp on our opponent's view. This, in turn, allows us to focus on the genuine disagreements, and their sources, in more productive ways, all set within the context of seeking mutual understanding and increasing (again, ideally) mutual respect.
The principle of charity offers the possibility of fruitful debate that may not be achieved by either the kind of dogmatism that simply rejects beliefs we don't share or the easy kind of relativism that says everyone's views are "just fine." After all, we really don't think some of the views that others hold are all that fine. Both dogmatism and relativism also seem a bit disrespectful. Dogmatism rejects other views without even trying to understand them, whereas relativism insists that all views are equally good. Don't we want to be able to criticize and object to others' views, just as others may wish to criticize and object to ours? In an increasingly diverse and interconnected world, it is probably clear why the principle of charity may be a useful methodological tool to promote understanding. Given the profound importance and sensitivity of religious differences, it may be in the context of religious disagreement where the principle of charity may play its most valuable role.
The New Atheism
The history of the United States has been punctuated, at various times, by dramatic resurgences of emphasis on the importance of religion. These are sometimes known as Awakenings, and historians often refer to three specific examples of these. The first Great Awakening occurred in the 1730s and is often thought to have helped spread some of the ideas that led to the American Revolution. The second occurred in the early 1800s; many have seen this second Great Awakening as inspiring some of the views, including abolitionism, that led to the Civil War. A third Great Awakening is thought to have taken place around the years 1880–1910 and has been seen as leading, among other things, to an emphasis on the Social Gospel and the importance of alleviating the suffering of the poor, Prohibition and the passage of the 18th Amendment to the Constitution (banning the sale of alcohol), and the development of new theological views, such as Christian Science and the Jehovah's Witnesses.
Religion has had its historical critics. Thomas Paine, whose pamphlets—such as Common Sense—were crucial to circulating the ideas of the American Revolution, was reviled for being an atheist.
Throughout this history, there have also been those who resisted or challenged the prevailing views on religion. Thomas Paine and Thomas Jefferson, who played essential roles in the founding of the United States, were sharply criticized for their views on religion. Opponents of Jefferson characterized him as an atheist and warned that, if he were elected president, he would confiscate Bibles. Thomas Paine, who wrote pamphlets that were crucial to circulating the ideas of the American Revolution, was widely criticized as an atheist and generally shunned; only six people attended his funeral. The generally forgotten Robert Ingersoll, in the late 19th century, crossed the country giving long speeches (up to three hours long) to enormous crowds, promoting agnosticism (and some say atheism), as well as insisting on the importance of science, reason, and the fair treatment of women and African Americans. Ingersoll is sometimes said to have been the best-known American of his day, surpassing such figures as Mark Twain, although today few even recognize the name.
Some historians have argued that a fourth Great Awakening occurred in the 1960s and 1970s, with a renewed emphasis on faith, increased political power, and significant growth in the number of those who identify themselves as "born again" Christians, evangelicals, or fundamentalists. Although there is no general agreement, as there was in identifying the earlier Awakenings, those who see a fourth Great Awakening have emphasized its importance in renewing public debate over such contentious issues as abortion, gay rights, and prayer in school, and have suggested that it provided the context for electing Ronald Reagan president.
Revivals of religious feeling are often accompanied by a resurgence in vocal opponents. Richard Dawkins is among the more contemporary critics; he is the author of The God Delusion.
As with earlier revivals of religious feeling, there also occurred a renewal among those who opposed religion. In the early part of the 21st century, a number of books sharply critical of religion, specifically of the foundations of the world's three great monotheisms, were published. Particularly prominent among these texts were Richard Dawkins's The God Delusion, Christopher Hitchens's God Is Not Great, and Sam Harris's The End of Faith and Letter to a Christian Nation. In addition to advocating an end to theistic faith, these authors emphasize the importance of scientific explanation and naturalist methodologies, insist on using generally accepted rules for developing arguments, and demand empirical evidence to support the claims they advocate. Many were surprised at the popularity of these books, which appeared on best-seller lists, were widely reviewed in the mainstream media, and were promoted by their authors with remarkable frequency on television.
As we saw in similar eras, then, along with a revival of religious feeling arose a renewal among its opponents, often referred to today as the new atheism. Just as evangelical and fundamentalist Christianity has seen significant growth, the number of people who self-identify as having "no religion" has also seen significant growth. Thus, while some have expressed surprise at how much attention the new atheists received, that attention indicates some degree of acceptance of their views, and it may also reflect a long pattern in American history: A resurgence of religion brings with it a resurgence of those critical of religion. Here again, we may see some justification for invoking the principle of charity, in order that each side can seek to understand the other, respect the integrity of the views involved, and focus on those things they may actually share. This requires neither dogmatic rejection of opposing views nor acceptance of those views with which one sharply disagrees. It does seem to require some degree of tolerance to promote the kind of mutual understanding that may allow the exploration of the differences involved, without the expectation of eliminating those differences.
Faith and Reason
Earlier, when looking at the doctrine of fideism, we discussed this approach to religious belief that many find attractive. Believing something solely on the basis of faith eliminates, in certain ways, the need for evidence and argument. If I believe something on the basis of faith alone, then reason and argument do not really play a role in justifying my belief; I am safe and secure in my belief, and thus immune to those who wish to challenge it.
Beliefs based on faith are important and valuable, but it is also important to recognize the value of dialogue and not shut out others to remain safe and secure in your beliefs.
What makes fideism attractive may also be a central weakness. As we saw earlier, if I believe something solely on the basis of faith, then anyone who wishes to reject that belief is on as firm ground as I am. Because arguments don't really play a crucial role here, all arguments are equally strong, or are simply beside the point. This seems to eliminate the possibility of dialogue and inquiry. This may seem to be an ideal solution, but by eliminating conflict, it brings with it certain negative results. Do we show more respect to one who disagrees with our religious views by adopting a position that prevents meaningful dialogue, or by actually engaging in that dialogue? Furthermore, many philosophers have urged that one's beliefs should be subjected to critical scrutiny. After all, the other things we believe seem to require justification, and our beliefs can actually become deeper and stronger by submitting them to criticism. For some philosophers, then, a belief that cannot be defended may not be worth defending. Even if we are not entirely successful at justifying our religious beliefs, it seems that we understand them more fully if we make the attempt.
At the same time, saying everyone's beliefs are okay might seem a little flip and disrespectful.
This should not be interpreted as saying that beliefs based on faith are somehow inadequate. Rather, it is to say that one's beliefs in anything, particularly something as important as one's religious beliefs, deserve to be examined. It may turn out, of course, that we are unable to provide sufficient justification and discover that there are certain beliefs that simply cannot be defended on the basis of reason and evidence. Ironically, that discovery comes only after the attempt has been made, and the intellectual obligation to explore these beliefs has been met. Thus, even those who adopt the fideist position seem to be committed to the requirement that fideism itself results from the careful and critical examination of religious belief.
Perhaps in the context of religion, even more than in other areas of philosophy, the frustrations are substantial, and the temptation is great simply to throw up one's hands and abandon all attempts to use reason. But this seems to assume that we confront two possibilities here: Either there is a single correct answer, or all answers are equally correct. The advantage of philosophy here is to see that there may be more options than these two. A single correct answer seems a bit much to hope for, and even if there is, human beings seem doomed not to be able to discover what it is. But the mutual exploration of that answer brings with it enormous benefits for understanding not just one's own views but those of others. To treat all answers as equally correct is to treat serious, profound considerations of spiritual truth without the respect they deserve, and to abandon the productive kinds of conversations that can be held among theists, and between those who embrace and those who reject theism. Perhaps as an alternative to these approaches, one might recognize that the debate itself is of value, even though it may continue forever. After all, given that religion deals with some of the most enduring and difficult questions that have provoked and challenged human beings ever since there were human beings, it seems peculiar to expect debate over such issues to come to an end. The value philosophy offers in this context is an increased understanding of our own views and the views of others—particularly those with whom we disagree—and the courteous, tolerant, yet critical conversations that can be enormously productive in promoting mutual respect for the dignity all human beings deserve to have respected.
4.4
Persons and Souls
Having examined some traditional issues in religion, and various ways of looking at the human person, we now turn to some specific problems that revolve around the notion of personhood. What, exactly, is a person? Can we tell the difference between a person and a very sophisticated computer? What should be done about a person who is in a persistent vegetative state? What is the relationship between intelligence and personhood? These are difficult philosophical issues, and our examination of them will raise more questions than are answered. But given the difficulty and importance of the issues here, that should not be very surprising.
The Turing Test
One morning, you turn on your computer to check your e–mail. You see a note from someone named "Susie"; it doesn't seem to be an advertisement, or spam, so you open it up and read it.
It turns out Susie is a long–forgotten classmate from fourth grade, just trying to regain contact with some of her childhood friends. She mentions a few of the other students from your past, the name of your school and your teacher, and adds a few more details that you're surprised to discover you still remember. For the life of you, you can't remember Susie, but you write her back, exchanging memories, and updating what each of you has been doing since those long–ago school days.
Computers can be programmed to impersonate humans, a deception that can be used to commit fraud. The trick is being able to tell whether the person on the other side is genuine.
Susie, in describing her current situation, tells you that she has suffered some bad luck: A divorce, the loss of her job, and an illness have depleted her of all her savings. She tells you she hates to ask, but wonders if you might loan her $500.
You decide, before sending her the money, to check Susie out a little more. You dig up old pictures, reach out to a couple of friends you're still in touch with from those days, and come to find out there was no such person as "Susie." It slowly dawns on you that "Susie" is running a scam, trying to find a sympathetic person she can convince to send her $500.
Might it turn out that "Susie" isn't even a person? Might she be a well-designed computer program, written by someone who put in various details about you to convince you that Susie is a real person? What kinds of things might you ask Susie to make sure she is who she says she is? What kinds of questions might you ask to make sure she is a human being at all?
In 1950, a famous mathematician named Alan Turing posed this very question. If we couldn't see who was answering questions, could we tell by the responses whether they came from a human being or a computer? Are there things about the responses made by people that reveal them as distinctively human? What are the implications if there aren't? In 1950, when Turing proposed what became known as the Turing test, there was, of course, no Internet. But what Turing outlined was very similar to the story about Susie.
In what is now known as the Turing test, Alan Turing asked whether a machine could answer questions in a way indistinguishable from a human being. What questions would you ask?
Imagine you are in front of a computer. You are told that you have five minutes to type questions and send them to "Alice" and "Bob," who will then answer them. You can't see Alice or Bob, but you are told one of them is a human being and one of them is a computer. You can ask anything you wish; at the end of the five minutes, you will be asked to tell, just from the answers you receive, whether the human being is Alice or the human being is Bob. Turing's claim was that if you couldn't tell who was the human being, then a computer could successfully "think" like a human being. The only difference then would be the external appearance, not the experience of communicating. Can you think of some questions that might help you figure out who is who (or who is what)?
There are some restrictions: The computer is allowed to lie (so, for instance, if you were to ask, "Are you a computer?," the computer could say "no"). Questions involving the kinds of mathematical calculations computers can do much more easily than human beings (multiplying two very large numbers together, for instance) aren't allowed.
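To make the protocol concrete, here is a minimal sketch of the imitation game as a short Python program. It is my own illustration, not Turing's: a fixed number of questions stands in for the five-minute limit, and a toy scripted responder (with hypothetical canned replies) stands in for a serious chatbot.

```python
import random

# A toy imitation game: the interrogator questions "Alice" and "Bob,"
# one of whom is a scripted program, then guesses which one is human.

CANNED_REPLIES = [
    "I feel pretty good today; you?",
    "I once was deeply in love, but my heart was broken.",
    "Ha! Let me think about that one.",
]

def machine_reply(question):
    # The machine is allowed to lie: it never admits to being a computer.
    if "computer" in question.lower():
        return "No, of course I'm not a computer."
    return random.choice(CANNED_REPLIES)

def human_reply(question):
    # The hidden human types his or her own answer.
    return input("(hidden human, answer the question) > ")

def imitation_game(num_questions=3):
    # Randomly assign the human and the machine to the two labels;
    # the interrogator sees only "Alice" and "Bob."
    players = {"Alice": machine_reply, "Bob": human_reply}
    if random.random() < 0.5:
        players = {"Alice": human_reply, "Bob": machine_reply}

    for _ in range(num_questions):
        question = input("Ask Alice and Bob a question > ")
        for name, respond in players.items():
            print(f"{name}: {respond(question)}")

    guess = input("Who is the human, Alice or Bob? > ").strip().title()
    actual = next(n for n, f in players.items() if f is human_reply)
    print("Correct!" if guess == actual else f"Wrong -- the human was {actual}.")

if __name__ == "__main__":
    imitation_game()
```

The sketch only fixes the rules of the game; everything philosophically interesting depends on how convincing machine_reply can be made.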
It seems, at first, that it would be easy to figure out which answers came from the computer. Perhaps you would ask, "How do you feel?" or "Have you ever been in love? What was it like?"—questions that bring with them certain emotional and psychological experiences that are foreign to computers. But, clearly enough, a computer (which, remember, can lie!) could easily be programmed in such a way as to respond to such questions by saying, "I feel pretty good today; you?," or "I once was deeply in love but my heart was broken. I hope I never feel that bad again." Similarly, a computer can be programmed to talk about a mythical family, a mythical past, and whatever else a clever computer programmer might add to help the computer "pass" the Turing test.
A better kind of question might deal with those kinds of things that humans themselves aren't always very good at explaining. For instance, jokes. Imagine telling the computer this: "A horse walks into a bar. The bartender says, 'Why the long face?'" and then asking, "Why is that funny?" The computer may not have much success in explaining this; on the other hand, a human being might respond, "That isn't funny at all!" But what most human beings see, and even a very well-programmed computer might not, is that the term "long" means one thing when describing a sad expression and another when describing the shape of a horse's face. We should keep in mind that human beings are notoriously bad at explaining jokes; but while they may not give a "good" explanation of why something is humorous, their response may sound distinctively human.
Another possibility is metaphor. Perhaps your question is this: "Shakespeare says in Romeo and Juliet that 'Juliet is the sun.' What do you think that means?" Perhaps Alice answers: "Shakespeare means that Juliet is brilliant and dazzling, and is as necessary for Romeo's life as sunlight is for the life of a plant." Bob, on the other hand, responds: "Shakespeare means Juliet is an enormous ball of gas, fueled through the fusion of hydrogen atoms, 93 million miles away from earth." It is pretty clear from those answers that Alice "gets" the metaphor while Bob does not. But, again, human beings aren't always that successful at articulating metaphors and other literary techniques authors like Shakespeare use.
Computers are getting closer and closer to passing the Turing test, and if Alan Turing was right, then the only thing we can point to that helps us determine whether someone is a real human being or a well–designed computer is how it looks. In fact, one interpretation is that a computer that passes the Turing test is no different than a human being. That can make things confusing.
Hollywood, of course, has used this idea in movies, perhaps the best known of which is The Terminator. In this film, a machine (the Terminator) is sent back from the future to kill the mother of its enemy (John Connor, leader of the human resistance). It sounds complex, but the result, if successful, is that the Terminator will eliminate John Connor by killing his mother (so he is never born). Two interesting points are made in the film: (1) the computers only decided to eliminate human beings when the computer system became self–aware or "achieved self–consciousness," and (2) the Terminator is very human–like; we are told it sweats and has bad breath and dandruff!
If we assume the machine passes the Turing test, and we put the machine in a human–looking body (perhaps one more human looking than Arnold Schwarzenegger!), would there be a way of telling the difference between the machine and the human being? Would there be a difference between the machine and the human being?
The Chinese Room
The Turing test presents an interesting challenge for those who wish to distinguish human beings and machines on the basis of what they can do. If a sufficiently well-programmed computer can give responses that are indistinguishable from those a human being can give, then is the difference merely determined by the outward appearance of the two? If we recognize human beings as human beings on the basis of their behavior (which includes their verbal behavior), then if a machine behaves just like a human being, does any difference remain? One interpretation of artificial intelligence holds that there is no difference between a mind and a suitably programmed machine. This interpretation is often referred to as "Strong Artificial Intelligence," or Strong AI.
The philosopher John Searle (b. 1932), in 1980, sought to show there was something fundamentally wrong with Strong AI, and published a famous, influential, and controversial paper that presented what is now known as the Chinese Room argument. Searle argued that the Chinese Room thought experiment demonstrated that Strong AI was false, and that there was, therefore, a sharp distinction to be drawn between what the human mind can do and what a computer can do (Searle, 1980).
John Searle's Chinese Room argument suggests that Strong Artificial Intelligence is false because, like a computer, a person could learn how to construct Chinese characters and sentences by recognizing patterns—but not knowing what the sentences actually mean.
Imagine that Angela has been locked in a room (remember, it's just a thought experiment!). Through a slot in the door she is provided paper, pencils, and an instruction manual, in English, for putting in order the various cards she will be presented. The symbols on the cards—which are Chinese characters—are described, in English, in terms of characteristics Angela is familiar with: their lines, shapes, complexity, and so on. Angela is then asked to put the cards in an ordered sequence, as indicated by the instruction manual. Angela, who knows absolutely no Chinese, has no idea what the cards say or what the symbols mean. But after many failed attempts, she starts to get better at putting the cards in their proper order, and, in time, those who see the cards emerge on the other side recognize the result as fluent, clear Chinese. In short, Angela is just putting the symbols in order as instructed by the manual; but those who view the results, and who can read Chinese, would regard Angela as producing meaningful sentences in Chinese.
Searle makes a number of points about this situation. First, it is generally acknowledged that Angela has no understanding of Chinese and doesn't gain any along the way. Second, she has produced grammatical Chinese that might be indistinguishable from that produced by another human being who did understand Chinese. Finally, on the basis of the relationship between input and output, a computer might duplicate what Angela did and have no more understanding of the results than she did. Consequently, on the basis of this thought experiment, Searle insists that understanding involves something other than merely producing meaningful information, something that can be done by a mind but not by a computer. If this understanding marks a difference between the human mind and a computer, then Strong AI is false.
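The structure of the thought experiment can be made vivid in code. Below is a deliberately crude sketch of my own, not Searle's: a "room" that maps input symbols to output symbols by pure lookup. The rule book here is hypothetical and tiny, but the philosophical point would survive any enlargement of it: the program relates symbols only to other symbols, never to meanings.

```python
# The Chinese Room as pure symbol manipulation: responses are produced
# by matching input strings against rules, with no representation of
# meaning anywhere in the program.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols):
    # Look the string up; the "room" never interprets it.
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent Chinese out -- zero understanding inside
```

To a Chinese speaker outside, the replies look competent; inside, there is only the lookup.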
A child rearranges words to make sentences in a U.K. primary school. Searle suggested that a computer could create meaningful sentences but wouldn't be able to parse out any deeper meaning like a human could.
A somewhat more precise way of putting the point is to draw a distinction between syntax and semantics. Syntax, roughly speaking, provides the rules for a language; one might compare it, then, to grammar. Semantics, also roughly speaking, provides the meaning of the language. Angela really only knows the rules for putting Chinese symbols in their syntactical, or grammatical order; but she doesn't understand what they say, so she can't provide semantics, or their meaning: She cannot give an interpretation of the symbols to tell us what they say (although Chinese speakers can). As Searle famously remarks, semantics requires more than syntax; since a computer deals only with the kinds of instructions syntax provides, we can't attribute mental states or intentions or a mind to a computer.
While the language here may sound a bit technical, the difference is one we can see pretty clearly. If I write a string of words such as "for had The Johnsons Pat dinner," we would have trouble figuring out what it meant, because it is ungrammatical; it is not syntactically well-formed. One who possesses a basic understanding of the syntax, or grammar, of English may know the rule "subject verb object," and thus know that one way to rearrange the words produces "The Johnsons had Pat for dinner." A computer may know the grammatical rules here and may produce a sentence that is meaningful and can be understood by a native English speaker. But will the computer, on the basis of knowing just these grammatical or syntactical rules, know how to determine whether Pat has been invited for dinner, or whether Pat is to be the main course at dinner? That second level, Searle argues, cannot be achieved without an ability to interpret, which requires a mind, and this, he argues, a computer cannot have and cannot be programmed to have. Obviously enough, this is a distinction that could be of some importance: if only to Pat, in this case (Searle, 1980).
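A small sketch can illustrate the gap Searle has in mind. The checker below (my own toy example, not Searle's) can certify that a string of words is in subject-verb-object order; but since both readings of "The Johnsons had Pat for dinner" share that form, no amount of such checking settles which is meant.

```python
# Syntax without semantics: verify subject-verb-object order only.

def is_svo(sentence, subjects, verbs):
    words = sentence.lower().rstrip(".").split()
    # Accept any split into a known subject, a known verb, and a remainder.
    for i in range(1, len(words) - 1):
        if " ".join(words[:i]) in subjects and words[i] in verbs:
            return True
    return False

subjects = {"the johnsons"}
verbs = {"had"}

# Grammatical order detected -- but "Pat as guest" and "Pat as main
# course" are identical at this level, so the check cannot choose.
print(is_svo("The Johnsons had Pat for dinner.", subjects, verbs))  # True
print(is_svo("for had The Johnsons Pat dinner.", subjects, verbs))  # False
```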
Chinese Room Argument
Here John Searle presents his celebrated "Chinese Room Argument," suggesting that computers, in principle, cannot model human ways of thinking and reasoning.
Question: Do you think that someday we will not be able to distinguish the kind of thought a person shows from the kind of thought a machine shows? What sort of changes would that make to our conception of human beings, particularly the idea that human beings are unique?
There have, of course, been many critical responses to Searle's argument, and Searle has given almost as many counterarguments against those criticisms. The debate has continued to rage, and the discussion has gotten very technical and very complex. Indeed, one computer scientist has claimed that the field of cognitive science, in general, consisted of "the ongoing research program of showing Searle's Chinese Room Argument to be false." But for our purposes, we can see that the relationship between the mind and the brain is a complicated one, and not made any easier when we include in our consideration of this relationship the soul. Many have the intuition that human beings, uniquely, possess a soul, and, of course, in the religious context it will be that soul that will ultimately be judged. For philosophers, cognitive scientists, computer scientists, psychologists, biologists, and others, such an intuition may well be correct. But having an intuition, by itself, is not sufficient; it must be supported by an argument, by reasons, and by whatever we wish to count as evidence. In any case, Searle offers a provocative response to the Turing test, providing a way of seeing that even if a computer can act like a human being, there may still be something about the human being—we may wish to call this the "mind" or the "soul," or both—that makes it unique.
The Schiavo Case
While we have looked at some of the theoretical issues that emerge when considering whether the brain can be seen as identical to or distinct from the human mind, or the human soul, these issues do not always remain theoretical. In a number of cases, these questions become intensely personal and profoundly tragic. A specific situation in which the most heartbreaking choices must be made, on the basis of how we answer the question "what is a person?," arises in medical ethics and can be examined through the case of Terri Schiavo.
Terri Schiavo, shown here with her mother, Mary Schindler. Schiavo was diagnosed as being in a persistent vegetative state. The case ultimately reached the Supreme Court after her husband, Michael Schiavo, petitioned to have her feeding tube removed and her parents objected.
In 1990, Terri Schiavo collapsed in her home in St. Petersburg, Florida, not breathing and without a pulse. She was taken to the hospital by paramedics, where she was put on a ventilator. Physicians, including her family doctor, eventually diagnosed her as being in a persistent vegetative state (PVS); after a year, such a state is generally considered permanent. After various failed attempts at therapy, her husband Michael petitioned in 1998 to have her feeding tube removed; Robert and Mary Schindler, her parents, objected and, due to ethical and religious objections, sued to allow feeding and life support to be continued. At that point, a long series of legal battles began between Michael Schiavo and Terri's parents. Various courts upheld Michael Schiavo's position, and these decisions were then appealed. In addition, the Florida legislature, the Florida Supreme Court, the president, the Congress, and the Supreme Court of the United States became involved in various ways. Ultimately, the Supreme Court denied petitions on behalf of the Schindlers, the original court order was upheld, and Terri's feeding tube was disconnected; she died in March of 2005. The autopsy determined that the brain damage that Terri had originally suffered was extensive, affected virtually all the parts of her brain, and was irreversible.
Such cases are clearly tragic and involve the most painful decisions a person will confront. Earlier cases had helped clarify what was involved in making these decisions. Karen Ann Quinlan was diagnosed in 1975 as being in a PVS; her parents asked the hospital to discontinue the extraordinary measures keeping her alive, but the hospital refused. The New Jersey Supreme Court eventually ruled in favor of the Quinlans; Karen Ann was taken off the respirator, but she was able to breathe on her own and lived in a PVS another nine years, dying of pneumonia in 1985. In 1983, Nancy Cruzan was found unconscious and not breathing after an automobile accident and was resuscitated. Five years later, after she was diagnosed as being in a PVS, her parents sued to have her feeding tube removed; the case eventually reached the Supreme Court, which denied the suit. However, a Missouri circuit court later found that new evidence indicated that Cruzan's wishes would have been not to remain indefinitely in a PVS; her feeding tube was removed in 1990, and she died 12 days later.
What made this case particularly tragic and ugly was that family members were on opposite sides. Here, protesters pray in front of the hospice where Schiavo was being cared for.
What distinguished the Schiavo case from those of Quinlan and Cruzan was the dispute between family members. While Quinlan's parents and Cruzan's parents sought closure by ending extraordinary measures of life support, Schiavo's parents were in sharp disagreement with Schiavo's husband, and each party had many supporters, making an already painful situation that much worse by being played out in the media. But the ethical question that arises in medicine forces us to confront a number of questions: What is a person? Is a human being who has no discernible brain activity still alive? What does that life consist of? Is it right, or wrong, after repeated attempts fail to alter the condition of a patient in a PVS, to allow that patient to die? How does allowing a patient to die differ from euthanasia, or taking positive steps to terminate a patient's life? If a patient lacks a living will, and otherwise left no indication of what should be done, who should make this decision? Does the state have a compelling interest in determining the outcome, or should the decision be left to the family? If the latter, who should make the decision if family members themselves disagree? Clearly, there are difficult moral and spiritual challenges involved in responding to these questions, and much disagreement over what the correct answers are.
Religious traditions, particularly the three great monotheisms we have focused on, give some guidance. Humans, as created by God, are unique and possess a soul; many interpret religious teachings to indicate that only God should determine when a person dies. The Supreme Court has reflected the various views that fuel this debate. In the Cruzan case, Chief Justice Rehnquist insisted that a balance be struck between the "compelling interest" of the state to protect its citizens and the constitutional protection of privacy and individual rights, which may include the right to die. Justice William Brennan, a Roman Catholic, insisted that the balance should be weighed in favor of the individual; that individual should be allowed to reject extraordinary means of life support. In contrast, Justice Antonin Scalia, also a Roman Catholic, did not think it was a constitutional question and argued that it should be left to the states, indicating that choosing to be allowed to die and actively terminating one's life were indistinguishable as suicide, an issue to which the Constitution does not speak. In other contexts, Scalia has suggested that the balance should be struck differently than Brennan would strike it, and that there is no right to die (Cruzan v. Director, Missouri Department of Health).
There are, then, both moral and legal questions here, and as we can see, an appeal to religious traditions may not solve the problem but add still further perspectives. Ultimately, an analysis of cases such as Terri Schiavo's may not resolve these issues, but it can make clearer the criteria to which we must appeal in determining what a human life is and at what point it may not qualify as such a life. Understanding these criteria more fully, which may include such notions as "personhood," "intent," "mind," and "soul," will then help us to make more informed decisions about some of the most profound, difficult, and tragic questions we will ever face.
Intelligence and Personhood
We have surveyed a number of issues surrounding the notion of what it is to be a human being, or a person, including some very difficult moral problems. Even though most of us don't have any particular problem recognizing each other, and ourselves, as human beings—we fit most any set of criteria one might choose—it is the hard cases where our intuitions may be challenged. It is also the hard cases that require us to think a bit more fully about those criteria. These "hard cases" often come at the very beginning of life, and at its end. For instance, is a blastocyst—a five-day-old fertilized egg—a human being or a potential human being? Is a person in a permanent vegetative state a human being or a former human being? How we answer these questions can tell us a great deal about what we regard as the criteria that must be satisfied for something to qualify as human. As is probably obvious, both religious and ethical decisions may be affected substantially by how we determine these criteria.
When is a person considered a person? Where do humans fit into the Great Chain of Being?
An ancient tradition, going back to classical and medieval philosophy, identified what was known as a Great Chain of Being, viewing all things in the world as constituting a single, continuous chain. At its top, of course, was God, below which came angels, humans, other animals, plants, and minerals; from God to sand there was a continuous chain, in order, of all things. Except for God—as perfect, God is at that chain's highest point—each of these divisions was further subdivided and ranked in terms of relative perfection; for instance, lions were higher on this scale than fish, while gold was higher on the same scale than dirt. Although some of this may sound more than a bit old-fashioned, it played an important role in Christian thought and continues to influence the ways people think about their status. Humans fall between the angels and other animals; as we have seen, various views have been put forth about why humans deserve this place in the Great Chain of Being or why they don't. In a sense, the fundamental question is whether humans are unique and distinct from other animals, or whether we are simply one member of the animal kingdom, possessing no specific or essential features that make us somehow "better" than the others. Obviously enough, one of the things that has traditionally been pointed to, as setting human beings apart, is the existence of the human soul. Others might point not just to the existence of the human mind but to its remarkable development and sophistication, allowing human beings to remember, imagine, and create unlike any other beings. Whether one wishes to characterize this feature as a soul, or a mind, or to regard the two as more or less the same, a plausible conjecture is that reason and intelligence distinguish human beings from the rest of the animal kingdom, whether or not one subscribes to the more elaborate conception contained within the Great Chain of Being.
Within many religious traditions, human beings are created by God, with a soul marking them out as distinct and as unique. While on earth, such souls are contained within the body; when that body dies, the soul is released and judged on the basis of its behavior. Even though there are many theological disputes involved, in terms of resurrection, purgatory, reincarnation, karma, hell, paradise, and many others—all of which bring with them their own complexities—most of these debates assume that there is something about humans that identifies them as special in the eyes of God. Determining what that something is, of course, is rather difficult. But often it is suggested that reason provides a guide: a being who has reason, potentially has reason, or has had reason qualifies as human. But even here we run into difficulties.
On the one hand, there are human beings who, for one reason or another, seem to lack the full capacity for human reason. For instance, a young child may not be aware of certain consequences of his or her actions and is not held responsible in ways an adult might be. Clearly enough, we may not want to attribute reason to a fertilized egg, or fetus; we may not want to do so for a person who is in a permanent vegetative state or has suffered such trauma that mental activity is extremely limited. We will see below how some of these issues are treated, both in ethics and in the law; it will be important to keep in mind that the way these issues are resolved is often done with an implicit or explicit assumption of a human soul.
Are humans really "better" than other animals? Bonobos, for example, are a species of primates that are biologically very close to human beings and seem to exhibit reason.
At the same time, however, animals other than human beings seem to indicate at least some degree of thought, or reason. For instance, according to some who study them, chimpanzees and bonobos, species of primates biologically very close to human beings, seem to display behavior that reflects reasoning. They have been observed to punish other members of the group for not sharing food; they have been observed to communicate, plan, use rudimentary tools, and engage in other kinds of behavior that may indicate sophisticated cognitive development. Cetaceans—such as dolphins and whales—have also had intelligence attributed to them on the basis of their communication and problem–solving skills; some have argued that dolphins are aware of themselves, a very sophisticated cognitive achievement.
Many of these issues are worth further exploration, but one way we can tell how society regards various organisms is to look at how they are treated. Pigs are quite intelligent, yet they are raised for food in the United States, while there seems to be considerably less consumption of horses or dogs. Some cultures eat chimpanzees, some eat whales, and still others reject the eating of any animals. These preferences may be learned, inherited cultural choices, but they do indicate that the ability to reason is not the determining factor. It should also be kept in mind that historically, even those we now generally recognize as human were not always treated as such. Women and children were, and in some cases still are, treated as mere property, to be bought and sold as other commodities; clearly enough, slavery is predicated upon the idea that some "people" don't fully qualify as human and can thus be owned as property. In any case, whether we follow various religious traditions and attribute to human beings a soul or regard humans as a particularly advanced member of the animal kingdom, how we treat other humans and how we treat non-humans tells us a great deal about what is and what is not crucial in establishing personhood.
Rights and Responsibilities
In Samuel Butler's novel Erewhon, people are thrown in jail for being depressed or sick. The idea was to challenge certain ways of thinking in Butler's own society and what it held people responsible for.
In his 1872 novel Erewhon, Samuel Butler described a mythical society that satirized his own society's conceptions of what was illegal and what was not. In the novel, illness is treated as a crime, and even those who are simply depressed are put in prison. On this view, if you get sick or depressed, you are responsible; it's your own fault. In contrast, those who commit what we regard as crimes, such as stealing or murdering, do so as the result of their upbringing and their society; one doesn't really commit crimes intentionally, and thus such people are not put in prison, but taken to the hospital.
By turning our traditional conceptions of responsibility upside-down, Butler sought to expose certain ways of thinking in his own society, Victorian England. Some of the questions he raises continue to be discussed today: Is alcoholism a disease, or a choice? Do people choose to be homosexual (or heterosexual), or is their sexual orientation the result of their genes? What things, in other words, are people held responsible for, and what things are they not held responsible for? We earlier saw a discussion of determinism; clearly enough, a strong determinist has trouble holding anyone responsible for anything. As we saw, on strong determinism, no one is really "free" to make choices, and it is only when they have that freedom that we hold people responsible. If someone shoplifts as a result of antecedent causal influences completely beyond that person's control, we would no more be able to hold that person responsible than we would "blame" someone for being tall, or for having brown eyes.
Most people dismiss such strong determinism, and given that we do wish to hold people responsible for their choices, it is not really an option. But the problem of determining what we are responsible for and what we aren't responsible for remains, and it seems clear that what we mean by being a "person" includes as an essential component some attribution of responsibility.
When do we begin to hold individuals accountable or regard them as responsible beings? In the Jewish culture, this begins at the age of 13. Here, a girl is in the temple during her bat mitzvah.
Historically, most societies have recognized a certain point in a person's life at which they are granted certain rights, and with those rights come certain responsibilities. One is accepted as a full member of society when he or she is allowed the freedom to make choices and required to take responsibility for those choices. Often, this is known as the age of accountability, or the age of reason, and although this age is frequently set at around one's 13th year, it is clear that some 13-year-olds are quite a bit more mature than others. As can be seen in religious ceremonies such as confirmation and bar mitzvahs and bat mitzvahs, society identifies a point in one's life at which he or she is granted membership into it. Similarly, many Native American groups traditionally marked this point by sending a child on a Vision Quest, in which the child will find his or her direction in life.
When we look to religion, and morals, as well as biology, to determine what we mean by the term "person," certain things are clear: We regard a person as at least someone who has sufficient mental capacity and moral judgment to make decisions and to take responsibility for those decisions. Additional, or related, criteria may, naturally, be added: Perhaps a person is one who knows right from wrong, understands that actions have consequences, has a mind, can form beliefs, or has a soul. Some philosophers prefer the term "rational agent" and characterize a person in terms of "agency."
A baby probably wouldn't be considered an agent, by Immanuel Kant's standards. This baby is exhibiting some self–awareness, but she definitely can't use the pronoun "I" or really demonstrate that she is free.
One influential view is that of Immanuel Kant: Someone qualifies as an agent if and only if he or she can meet certain minimal standards. An agent must be able to refer to itself using the first–person pronoun "I," and thus indicate a minimal degree of self–awareness. This may also require that we recognize not only that we can use the pronoun "I" but that all others we treat as agents can do so as well. An agent must be able to recognize that it is in a world that is, in important ways, independent of the agent; that the world isn't necessarily the way the agent thinks it is. Thus, someone who thought he could fly, and jumped off a building—no matter how strong this belief—may find out otherwise. An agent must be able to regard himself as free. As we saw in our earlier discussion of compatibilism, the agent may not know it is free and may not be able to demonstrate that it is free. If, however, the agent considers himself free in relevant ways, and thus regards others as free in those same relevant ways, that is sufficient for agents to possess a rich enough sense of freedom to be granted both rights and responsibilities.
As we have seen, there are human beings whom, for one reason or another, we are unwilling to treat as agents. They may have sufficiently diminished mental capacity that they cannot be held fully responsible for their actions. Thus, the law recognizes that people who are incapable of telling right from wrong, or of understanding the consequences of their acts, may not be treated as rational agents. (It should probably be noted that such "insanity defenses" are quite a bit more common on television and in movies than they are in real life.) Young children are held to a different standard than adults; we are less willing to judge them as harshly, or we are more tolerant of their misdeeds, because we regard them as not fully comprehending the consequences of their acts. As we have also seen, fertilized eggs, for instance, and those in a permanent vegetative state, raise other questions about what we mean when we call someone a "person." Clearly enough, religious views, ethical views, and the relevant information provided by the biological sciences can help us become clearer about who qualifies, and who may not qualify, as a rational agent. Just as clearly, and particularly with the continued development of medical technology, these questions will continue to be examined and debated.
Some Enduring Questions
The questions may seem endless—even daunting, at times—but philosophers see this as an opportunity for discovery and learning.
Many regard philosophy as a continuous, unending debate over questions that have no answers, or at least as producing no answers that will generate general agreement. This can be frustrating; it would, of course, be nice to get a clear, definitive answer to our philosophical questions, once and for all. It would be quite a relief, after all, if we might be as confident in our answer to the question "what is a human being?" as we are in our answer to the question "what do we get when we divide 20 by 4?" Unfortunately, our questions, and thus their answers, don't easily lend themselves to such generally accepted results.
Rather than becoming frustrated at such a situation, philosophers regard this as an opportunity. For what is of more interest to people than people? We may not ever satisfy everyone with our response, but the issues that arise when discussing the human mind, the human soul, and what such terms mean and may, in turn, imply, possess a certain grip on our imagination that surpasses most others.
If a person's life is subject to being judged at its conclusion, and rewarded or punished for how that life was led, then it is pretty clear that we want to understand what makes up that person, and for what things that person can be held responsible. Such accounts as philosophers have offered of the human soul and the human mind may be difficult and abstract, but the topic under consideration, as we have seen, is itself difficult and abstract.
When we examine the question of whether what human beings can do can be reproduced by a machine, we discover not only what we think about what does, or does not, make human beings unique but also the assumptions we make in carrying out such an examination. We find, further, that our answers to this kind of question reveal important implications about our understanding of human freedom, human creativity, and human behavior. Similarly, simply by asking what distinguishes human beings from other animals, particularly those animals relatively close to us in terms of their biological makeup, our assumptions about human beings become clearer, and we can then examine those assumptions to see if they have merit or need to be revised.
A Body Worlds exhibit. It's unlikely that humans will find anything more fascinating than humans themselves.
It seems unlikely that anything holds more fascination for human beings than human beings. It also seems unlikely that our philosophical exploration of what it means to be human will ever cease. Of course, philosophers are not the only ones who examine this topic; it is an endless source for literature, natural science, art, religion, social science, and, in general, those things with which human beings concern themselves. Philosophers offer a rigorous and systematic approach to our understanding of the human being, but they also offer a different perspective by being willing to ask questions that may appear outlandish, if not ridiculous. What are the limits of agency? If there is a soul, where exactly is it found? Can chimpanzees be disappointed? What if I do not have an immortal soul: Does that make ordinary morality worthless, or even more valuable? Can machines learn, and be creative? Will we someday be able to argue with dolphins? If I discover that a piece of music was "composed" by a computer, does that make it less interesting or less pleasurable? Why is it often regarded as cruel to kick a dog but not to test cosmetics on one? Does God have to be self-caused, or did God always exist?
These are difficult questions, in many cases, and require us to examine our assumptions, both about human beings and about human beings' place in both the natural and the non–natural world. For philosophers, as noted, this is then an opportunity to engage in a conversation with others to explore these questions and their implications. Even though it can be frustrating to discover that there may be no final answers—and that the answers we do arrive at simply generate new, and equally difficult, questions—it may be that it is the conversation itself, and the critical exploration of those things of ultimate significance and meaning, that provide the richest results.
Chapter 4
What We Have Learned
* Philosophers have offered various arguments for the existence of God, including the ontological and cosmological (first cause) arguments, as well as the argument from design.
* The problem of evil, and questions about the supernatural, have challenged various commitments to the belief in God.
* The Turing test and the Chinese Room raise questions about what it means to be human; end–of–life issues force us to confront very difficult questions that can become clearer by philosophical analysis.
Some Final Questions
1. Which argument for the existence of God do you find most persuasive? Why is it persuasive? What problems arise for this argument, and how might you try to resolve them?
2. In the "Euthyphro," Socrates asks if something is right because God says it is right, or if God endorses something because it is right. Which do you think is the case? What are the moral consequences of saying that whatever God says is right is right?
3. Imagine having a conversation over the Internet with someone, over several months. What kinds of things would indicate to you that you were talking to a real person, or a person like you? What kinds of things do you think a computer might say, in a conversation, that a human being would not say?
Web Links
For a discussion of the relationship between science and religion, with many additional links promoting explorations of religion, politics, science, and morality, see: http://www.templeton-cambridge.org/fellows/great_issues_section.php?issue=1
For a critical discussion of the new atheism, with sources for various perspectives on the debate between theism and atheism, see: http://www.guardian.co.uk/commentisfree/belief/2010/sep/21/beyond-new-atheism
For a provocative account of the question of the human soul and the role it plays in medical ethics, see: http://www.cbhd.org/content/brain-mind-and-person-why-we-need-affirm-human-nature-medical-ethics
This site will take you to a very simplified version of the Turing test, where you can ask your own questions. However, in this case, the computer will not be trying to fool you! http://testing.turinghub.com/