Sunday, February 5, 2017

Philosophy: Mind & Machine Chapter 3

Chapter 3: The Theory of Knowledge

What We Will Discover

* Philosophy can help us become more aware of the difference between claiming to know something and showing that we know something.
* Skepticism poses a threat to many of our knowledge claims, and we have an obligation to meet that skeptical challenge.
* Our knowledge claims have important implications for a wide range of questions, including those about science and religion.

3.1 How Does One Know Something?

Epistemology—the theory of knowledge—looks at the kinds of things we want to say we "know." Here we will begin from the perspective of a commonsense view and then slowly introduce some of the perspectives and terminology philosophers have developed to address the problem of knowledge.

Common Sense and Knowledge

Imagine you run into an old friend of yours at the grocery store, and she greets you by saying, "Whaddya know?" After some small talk, you drive home and decide to take her question seriously: What do you know? You sit down and start making a list. After a couple of hours, you've filled up page after page of things you know. Consider what such a list might look like:

I know . . .
How to change the oil in my car
7 + 5 = 12
Little Rock is the capital of Arkansas
Susan
If an object changes motion, some force has acted upon it.
The shortest distance between two points is a straight line.
A fool and his money are soon parted.
The sun rises in the east.
If Chicago is north of Dallas, and Dallas is north of Houston, Chicago is north of Houston.

[Caption: Observational reports aren't always completely reliable. Our eyes may see water in the distance on a hot afternoon, but through experience we understand this is merely a mirage caused by a heat haze effect.]

You quickly realize that this list will be impossible to complete; just listing the simple mathematical truths of arithmetic will, by itself, result in an infinite number of entries. Listing the various facts about the world you know, including the laws of science, and all of the various other things—people you know, things you know how to do—makes it clear such a list will be endless. (Which is now another thing you know.) Rather than listing individual claims, then, perhaps certain differences among the kinds of knowledge claims can be noted. In addition to mathematical claims, there are scientific claims, claims about certain skills you possess, geographical claims, and even an old proverb. Or perhaps you choose to organize your knowledge claims in terms of how you come to know them, whether through observation or otherwise.

[Caption: Skills that follow a specific process, such as changing a tire, are applied through procedural knowledge.]

Philosophers call this kind of inquiry epistemology. Epistemology, or the study of knowledge, investigates what we know, how we know it, and what kind of confidence we can have in our knowledge claims. This is one of the areas of philosophy in which things that seem obvious at first become more and more complicated the closer we look at them. But we can start by looking at what many people might say about a specific knowledge claim, and contrast their answers with some of the results that philosophers have offered. We will, in other words, begin with what one might call a naïve or commonsense view. By the time we conclude our look at epistemology, we will see that this view has a lot going for it, but can also lead us astray. As always, we will discover that our answers will lead to new questions.
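As an aside, one entry on the list above (the Chicago, Dallas, and Houston claim) is an instance of a general logical pattern: the transitivity of "north of." For readers who like seeing such patterns written out, here is a purely illustrative sketch in the Lean theorem prover; the names City and NorthOf are assumptions introduced only for this example, not anything the chapter defines:

```lean
-- Illustrative only: "north of" treated as an abstract relation.
-- `City` and `NorthOf` are placeholder names assumed for this sketch.
variable (City : Type) (NorthOf : City → City → Prop)

-- Given that NorthOf is transitive, the list's claim about Chicago,
-- Dallas, and Houston follows as a special case.
theorem chicago_north_of_houston
    (trans : ∀ a b c : City, NorthOf a b → NorthOf b c → NorthOf a c)
    (chicago dallas houston : City)
    (h1 : NorthOf chicago dallas)
    (h2 : NorthOf dallas houston) :
    NorthOf chicago houston :=
  trans chicago dallas houston h1 h2
```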
Tommy is a 10-year-old boy; he is asked by our philosopher, "How do you know the sky is blue?" Tommy's reaction is one philosophers often receive: He gives the philosopher a look indicating that the question is pretty stupid. He then humors the philosopher and responds, "Because it looks blue." But Tommy knows that the sky sometimes is not blue; he also knows that sometimes his eyesight has fooled him. Even though Tommy may represent the commonsense view, he also realizes that observational reports aren't always completely reliable, as he knows from having seen what appears to be water at the end of the highway on a hot day, which isn't really water. While we, just as Tommy does, rely on our senses—the traditional seeing, hearing, tasting, smelling, and touching—for a great deal of the information we have about the world, we also recognize that things may not always be as they seem.

Tommy's confident belief is expressed in terms of what we will call propositional knowledge. So when he says, "I know that the sky is blue," he asserts a proposition, or a sentence, that indicates a fact of some sort. Such claims don't have to be based on the senses, of course; a simple mathematical claim, such as "triangles have three sides," or the kind of truth philosophers call analytic, such as "all bachelors are unmarried males," are also examples of propositional knowledge. We may come to know these truths in different ways, but the way they are asserted—using indicative or declarative sentences—shows why they are considered examples of propositional knowledge.

Perhaps the easiest way of seeing this is to contrast propositional knowledge with other kinds of knowledge claims. One might say, "I know how to make scrambled eggs" or "I know how to fix motorcycles": These might be called examples of procedural knowledge. Propositional knowledge is sometimes referred to as "knowing that," whereas procedural knowledge is referred to as "knowing how." We might contrast both of these with what is called knowledge by acquaintance. When we assert such things as "I know London well" or "I know Mary," we are saying we know London or Mary in a way different from knowing that something is the case or knowing how to do something.

These are the kinds of things epistemologists investigate. The epistemologist wants to find out if we can discover when these kinds of reports are reliable, when (and how) we can use them to discover the truth about things, and if we need to add to, or supplement, these sensory reports to give an improved account of our knowledge. This will be tricky enough to do for propositional knowledge, so that will be our focus here.

Sophisticated Empiricism

[Caption: Our sense of vision presents information to our brain so that we can understand it. When light strikes the human eye, the waves are transmitted to the brain through the optic nerve.]

Our representative of the commonsense view, Tommy, decides to go home and ask his mother why she thinks that—or how she knows that—the sky is blue. Tommy's mother is a physician and has a considerably more complex account, based on her understanding of the human eye, the human brain, and how humans visually perceive things. She tells Tommy that it is complicated, but that light is a wave of energy, and that what we see depends on how the energy of that light is absorbed by the atmosphere and what part of that light human beings can see.
Light itself is made up of a range of colors, from red to violet; the atmosphere "scatters" the blue light more than the other colors, and that's what we see on a sunny day. This light wave strikes the retina of the human eye and is then transmitted to the brain through the optic nerve. The brain then takes this information and allows us to decode it in such a way that we see.

[Caption: Epistemologists must not only explain why a theory works; they must justify that explanation as well.]

Tommy's not sure he understands this explanation—in fact, he's pretty sure he doesn't understand it—but he has a more general question. This is a very complicated and technical-sounding theory about how vision works, provided to us by science; but why should we believe what scientists tell us? After all, scientists themselves disagree with each other, so we can't simply accept the argument that if a scientist makes a claim, that claim is true. Furthermore, scientific claims in the past have turned out to be incorrect. Even Aristotle, one of the greatest scientists of all time, was certain that the earth was at the center of the universe. Later, of course, it was discovered that what we call the solar system is, in fact, a small part of an enormous galaxy, tucked away in a corner of the cosmos containing an untold number of other galaxies.

[Caption: Empiricism is the theory that our knowledge arises from the experience that sensory information gives us. To check if a piano is in tune, for example, you strike a key and listen for the result.]

The lesson we should draw from this is not to discount the discoveries of science, of course. Science has improved our lives in a vast number of ways. Smallpox, for instance, is estimated to have killed between 300 million and 500 million people in just the 20th century, but it has now been eliminated as a threat. Rather, we should see that even though common sense may bring with it certain problems, we don't get rid of those problems simply by constructing a more elaborate theory. We need to show why a theory is correct, or at least better than other competing theories. That means, for the epistemologist specifically, we need not only to give a good, persuasive explanation of how we know things, but we need to be able to defend that explanation by arguing for it, or justifying it.

Here we can see what many regard as the fundamental way philosophy operates: A theory is put forth, others criticize it, and those criticisms are then discussed. If the criticism is good, the theory is revised, or even discarded; often another theory is put in its place. The history of these battles is one way of seeing the history of philosophy; as we will discover in this chapter, looking at specific battles over how we know things—and even whether we know things—can allow us to see this history through a very specific set of questions.

It should be fairly clear that what both the commonsense view of Tommy and the more sophisticated view of his mother have in common is that both regard our senses as being of fundamental importance in understanding the world and understanding how we come to know that world. This view, which is and has been widely accepted, is known as empiricism. Although empiricism itself has many different versions, we will regard it simply as the theory that our knowledge fundamentally, and ultimately, arises from our senses and the experiences that sensory information gives us.
If we want to know if the window has been closed, we go look at it; if we want to know if the milk has gone bad, we smell it; if we want to know if a piano is in tune, we listen to it; if we want to know the texture of a piece of cloth, we feel it; if we want to know if the pie is cherry or rhubarb, we taste it. Obviously enough, we use our senses in various combinations, and doing so often gives us more confidence in making claims based on our perceptions and observations. But, as we will see, empiricism has itself been challenged. For instance, those who are regarded as adopting the approach known as rationalism have argued that certain things, among them our knowledge of God and of sophisticated mathematics, simply cannot be explained as originally arising from the senses. Furthermore, epistemologists have always been confronted by skepticism, the idea that we may not, or even cannot, really ever know anything at all.

Alternatives to Empiricism

[Caption: Ever the empiricist, David Hume (1711–1776) argued that nearly all of our knowledge comes from the senses.]

Perhaps the greatest empiricist was the Scottish philosopher David Hume (1711–1776). Hume argued that, with the exception of some simple parts of mathematics and logic, all our knowledge comes from the senses. Thus, all non-mathematical claims to know something are fundamentally grounded in our sense perceptions, and the degree of confidence we have in those claims is relative to the amount, and quality, of evidence we possess. Yet, as Hume himself recognizes, because these kinds of claims are based only on past experience, they can never be established with certainty. To take an extreme example, Hume, in order to be consistent, has to claim that you can only have a strong expectation that if you drop a bowling ball on your foot, you will feel pain; it isn't necessary that you will. Even the laws of nature, such as gravity, or the sun rising in the morning, are simply the kinds of things we have the greatest confidence in. If you think it is implausible that you only have a strong expectation that a normal, uncooked egg thrown at a brick wall will break—that it doesn't have to—you are in good company. Both the rationalists and the later German philosopher Immanuel Kant thought genuine knowledge required something more than being likely. We will see both of their approaches in what follows.

Descartes and Rationalists

[Caption: What about things that we cannot perceive with our senses? Some philosophers argue that we are born with "innate ideas" about some concepts, such as our ideas about God and the infinite.]

The rationalists did not believe that all our knowledge comes from the senses. They argued, in different ways, that there were certain kinds of truths that had to arise from some other source. It seems obvious enough that if we want to know whether clapping our hands together will make a sound, we just clap our hands together and listen. But, to consider a couple of examples from the French mathematician and philosopher René Descartes (1596–1650), what about our knowledge of God? Or our knowledge of infinity? How can we "perceive" the cube root of 27? Can we use our senses in these cases? One might argue that we see the effects of God, or can use such things as railroad tracks disappearing into the distance to represent the infinite, but that wouldn't really qualify as perceiving God or perceiving the infinite. Very few philosophers, including most of the empiricists, thought the senses provided a way to find the cube root of 27.
Descartes, and other rationalists, came to somewhat similar conclusions but followed different methods: We are born with certain ideas, such as our ideas of God and the infinite. These are said to be innate ideas. After a complex argument, Descartes realizes that he has various ideas that did not come from the senses, but about which he can be absolutely certain: that he exists, that he has a mind and a body, and that God exists. God, as an all-good being, will not deceive Descartes; therefore, there is a set of truths, which Descartes calls "clear and distinct," the absolute certainty of which God will guarantee. These, for Descartes, tend to be truths of mathematics and mathematical physics, but his more general view is simply this: Empiricism cannot account for the certainty and necessity of certain kinds of knowledge. Since these knowledge claims are, on Descartes's view, absolutely certain, this reveals that empiricism is inadequate to account for that certainty.

Immanuel Kant

[Caption: Immanuel Kant argued that certain claims can be known to be true, not just likely.]

The history of modern philosophy is often regarded as a debate between the rationalists (Descartes, Spinoza, and Leibniz) and the empiricists (Locke, Berkeley, and Hume). This period is often said to culminate in the philosophy of Immanuel Kant (1724–1804). Rationalism emphasizes the mind over the senses; empiricism emphasizes the senses over the mind. Kant argued that both were correct in some ways, but deeply wrong in other ways. Kant's Critical Philosophy, then, adopts what he finds to be correct in both traditions, while discarding what he regarded as the dogmatic excesses of each. Kant's arguments are notoriously complex and difficult, and continue to generate controversy today; however, we can summarize his view in a way that shows his contribution to epistemology. We will look at this in a bit more detail later.

Kant's view is that the human mind brings with it certain fundamental concepts and forms that make it possible for us to receive various sensory perceptions, such as seeing or hearing something. The forms he identifies are space and time: We will be able to make judgments, or knowledge claims, about all objects "in" time, and all objects in what we have been calling the external world will be in space. Furthermore, we use certain concepts—chief among them substance and causality—to make judgments about these perceptions. Kant treats mathematical claims in a slightly different way, but our ordinary empirical judgments thus have to be made with a mind equipped with certain concepts that make the content we gain through our senses understandable.

[Caption: Kant also argued that we understand certain objects to have a causal relationship. For example, one could deduce that a bat striking a pitch traveling 90 mph would result in a change of direction for the ball.]

One way of seeing this is with an example. You go to a baseball game and see the pitcher throw the ball, the batter hit it, and the ball travel 400 feet for a home run. There are uncountable sense perceptions involved here of the things we see, hear, and otherwise perceive. But we see them in a temporal order: The ball is thrown before the batter hits the ball, and the ball leaves the park after it is hit. We see them in space, where the pitcher is a certain distance from the batter, and the ball is, if only for an instant, right next to the bat; then the ball travels through space to leave the stadium.
The pitcher and batter, as well as all of the other objects that make up the event (not just the ball and bat but the stadium itself, the spectators, and so on), are enduring objects that persist in time and interact with each other. As such, these objects are substances; and they interact causally, in that we say the pitcher caused the ball to move toward the batter, and the batter caused the ball to leave the stadium. For Kant, any individual empirical report could be true or false; we could, for some reason, have misperceived parts of the experience. But what has to be strictly necessary and absolutely universal is that we bring with us space, time, and a small set of universal and necessary concepts. Without these, all we would have, according to Kant, would be a string of individual sensory reports, without any way of interpreting them as knowledge claims.

As we can see, both the rationalists and Kant reject the kind of empiricism that can't account for some degree of necessity in our knowledge claims. While mathematical claims are always a bit tricky to deal with, both the rationalists and Kant make strong arguments that we need more certainty than the empiricists can provide, and both, in different ways, try to provide that certainty. But, as we will now see, there are those who think the empiricists offer far too much certainty, and that we can never be certain of anything. These are philosophers known as skeptics.

Skepticism

[Caption: Being skeptical is part of human nature; skepticism forces us to justify our knowledge claims.]

Although "skeptic" originally meant someone who simply looked at things, it later became a term to describe a person who doubted various kinds of claims. Skepticism plays an important role in epistemology by forcing one to justify one's knowledge claims, admit one cannot justify them, or completely give up one's confidence in some or all such claims. Skepticism has a long and interesting history, but here we will focus on the part of that history that has had the most influence on contemporary epistemology.

Skepticism originally developed from Plato's school, the Academy, as a reaction to what some of its proponents saw as the acceptance of claims on the basis of inadequate reasoning or evidence. After Plato's death, his students took over, and Plato's own views were discussed, criticized, and developed. In this way, a school developed that, because it was associated with the Academy of Plato, became known as Academic skepticism. Academic skeptics argued that it was impossible to know anything with certainty, and through various authors such as Cicero became influential in insisting that knowledge claims were always uncertain.

Naturally, there were philosophers who disagreed with the Academic skeptics. Many, of course, wished to defend the certainty and truth of knowledge claims; others, however, thought the Academic skeptics still claimed to be certain of too much. These philosophers, known as Pyrrhonic skeptics, argued that those who assert that they know nothing are, in fact, claiming to know with certainty that they know nothing. Through the development of various sophisticated arguments, these radical Pyrrhonic skeptics tried to show that any claim could be refuted. It is important to see that the Pyrrhonic skeptics never asserted any claims themselves; they always argued against claims made by others.
Perhaps even more important was the goal of the Pyrrhonists: to quit deciding whether a claim was true or not. Rather than trying to discover if some knowledge claim was in fact true, or false, they insisted that one should simply give up any such attempts. If this goal of "giving up"—what they called "bracketing" the hope of knowing anything with certainty—could be achieved, one would discover a feeling of peace and tranquility.

Philosophers have, of course, always been a bit skeptical; their first instinct, when told something, is either to doubt it or to ask why it should be believed. As formal "schools," or approaches to philosophy, both Academic and Pyrrhonic skepticism were very influential. But in 1562, a translation (into Latin) of a central text of skepticism, the Outlines of Pyrrhonism of Sextus Empiricus, appeared, radically changing the history of epistemology, particularly through the work of René Descartes, about whom we've already heard a bit.

Descartes lived in an era when much—maybe everything—that had seemed certain had been called into doubt. As mentioned previously, the astronomical truths of Aristotle had been disputed by Copernicus, Galileo, and Descartes himself, who insisted that the earth revolved around the sun. The religious truths of the Roman Catholic Church, which had dominated Europe for 1,400 years, had been challenged by Martin Luther and the Reformation, leading to the development of Protestantism. Now, with the doctrines of radical skepticism available to Descartes, it was no longer clear what, if anything, could be relied upon. Descartes found this situation intolerable, and thus set out to defeat radical skepticism and put both scientific truths and religious truths on a firm foundation.

[Sidebar: Skepticism. Stephen Toulmin discusses classical and modern skepticism, pointing to some important differences between their approaches to our knowledge claims. Question: What are the limits to skepticism, if any? If doubt is a good thing, explain at what point some doubt becomes too much doubt.]

In trying to defeat skepticism, Descartes himself constructed a powerful version of skepticism. He began by noting that "if a source of information could mislead or deceive us even once, it cannot be trusted" (Descartes, 1984). We know, from mirages and optical illusions, among other things, that our senses have deceived us. We also know that many things we think are real have also appeared to us in dreams; what if everything we see around us is really a dream? If we adopt Descartes's strategy, we might be wrong, and therefore we shouldn't trust these kinds of claims as reliable. These were traditional skeptical arguments, but, as Descartes observes, even if our senses deceive us, and even if we are dreaming, we still know that 2 + 2 = 4 and that triangles have three sides.

Descartes, in constructing his own radical skepticism (in order, again, to defeat it), then makes an original contribution to skepticism by considering the possibility of an evil genius. This is a being that has all the powers traditionally associated with God, but is evil—so evil that it has convinced us that even simple mathematical claims are true when they are not. With the introduction of the evil genius, Descartes seems to have constructed a skepticism so powerful that it calls into question anything we have ever been certain of: that we have bodies, that there are other people around us, that we're awake when we think we are, and even that 2 + 2 = 4.
Most important for epistemology is that Descartes transforms the discussion into one of doubt about what we call the external world: the world of objects that are outside of our mind, including the ordinary objects, such as tables and chairs, about which we make our most confident knowledge claims. Ever since Descartes's Meditations on First Philosophy (1641), in which he presents this argument, epistemologists have tended to focus on two specific issues: Can we be certain of our claims about the external world? If we can, how can we demonstrate this certainty?

Responses to Skepticism

[Caption: In some respects, philosophers and children are quite similar in their quests for knowledge. Who else constantly feels the urge to ask, "Why?"]

Most of us, of course, when confronting someone who denies that we have hands, or who suggests that we might be dreaming, would simply think the person is crazy. But this is where philosophy sometimes diverges in its approach from the way we ordinarily go about our lives. We might wave our hand in front of the person, or simply respond, "Of course I'm not dreaming." But philosophers are rarely content with mere assertions: They seek reasons, and arguments, that justify a conclusion. It can often be frustrating: Talking to a philosopher can, at times, resemble talking to the curious five-year-old who insists on always asking, "Why?" But the epistemologist has to respond to the skeptic if he or she wishes to establish a claim, rather than merely assert it.

Consider John and Mary. John is an epistemologist; Mary is a skeptic. Clearly enough, John wants to demonstrate that his knowledge claims are to some extent true, or at least reliable. Mary wishes to deny this, and John and Mary sit down and argue it out. There seem to be three possible results from which John has to choose, and these three reveal the three standard responses epistemologists have given to the skeptic.

First, Mary can simply win the argument. In this case, John has to give up his original goal of establishing true or reliable knowledge claims and become a skeptic himself. At best, John can go home licking his wounds, hope to live to fight another day, and find better arguments with which to defeat Mary. However, such decisive victories seem to be rare in philosophy.

John's second option is to defeat Mary and show that skepticism is, for one reason or another, incorrect or indefensible. This, naturally, has been the most popular approach among epistemologists. Different philosophers have developed different techniques they believe refute the skeptic, but we can look at one of the most famous arguments in the history of Western philosophy to see the kind of lengths to which one may have to go to do so. As we saw, Descartes developed his own version of radical skepticism, but did so in order to defeat it. His strategy was to show that if even the most powerful version of skepticism—that which Descartes himself provided—could be refuted, then skepticism would no longer pose a threat. So let's consider how he went about showing the skeptic was wrong.

[Caption: Descartes's famous statement "I think, therefore I am" refutes the radical skeptic's belief that nothing can be true: You can know you exist because you think.]

The skeptic, particularly the radical skeptic, claims that no one can know anything for certain. For Descartes, then, that means if the skeptic is correct, every claim must be doubted. But if I'm doubting, then aren't I doing some kind of thinking? Don't I have to exist to do such thinking?
In other words, even if I were somehow to doubt the claim "I am thinking," I would still have to be thinking to doubt it. Thus, the skeptic has to be wrong: The skeptic, in asserting some claim, has to think it. Famously, Descartes says that anyone who thinks has to exist; that is, "I think, therefore I am." By showing the skeptic that the statement "I don't think" (because really the skeptic is saying, "I think that I don't think") can never be true, Descartes establishes one claim that is absolutely true, and which cannot be coherently denied, even by the most radical skeptic.

Whew. That seems to be a lot of work, just to show that something we already know to be true really is true. But it is important to remember that the stakes are high here: If the skeptic cannot be defeated, then all knowledge claims may come into question. So even though Descartes works very hard just to establish this one simple truth, he seems to defeat the skeptic's general view that nothing could be true. There have been other ways put forth to defeat the skeptic, often following Descartes's strategy: finding a claim that even the most extreme skeptic could not deny and then using that claim as a foundation on which to build a more general epistemological theory.

But John has one more option: Rather than being defeated by Mary's skeptical arguments, or trying Descartes's approach and defeating those arguments, John can have a more complicated response. This response is similar to that presented by David Hume and has become increasingly popular among philosophers. It also has the advantage of seeming to sound like common sense. Hume's advice to John, to put it a bit informally, is to relax. Hume would tell John that there are two different kinds of skepticism, extreme and moderate. Descartes is clearly dealing with the extreme skeptic, who requires that even the simplest and most obvious truths be demonstrated beyond any doubt, or she will conclude that nothing at all can be true or even reliably known.

[Caption: Unlike Descartes, Hume's brand of skepticism doesn't require one to question everything. While he agreed we can't be totally certain of anything, he'd argue that farmers can rest easy knowing that the sun will rise again tomorrow.]

Hume suggests that we are on much safer ground if we recognize that some skepticism is called for, but realize that although we may not be certain about things, we are not uncertain about everything. To use a famous example, we have a great deal of confidence that the sun will rise tomorrow. Do we know it in a way that will satisfy the radical skeptic? Probably not. Does that mean we should be in an utter panic about what will happen tomorrow morning? Of course not. In many circumstances, our evidence and past experience give us plenty of reasons to be very confident about what will and won't happen, and Hume suggests that these are the kinds of things we would rarely stop to consider (unless we were talking to a philosopher). At the same time, a little skepticism is frequently called for, for often our actions are based on inadequate or incomplete evidence. This, then, might be the third response to Mary that John could adopt: moderate skepticism, proportionate to the evidence and experience that is available.

Imagine you have a good friend. You discover she has lied to you once in the past; should you conclude that you can never, ever trust any single thing she says in the future? That might be an extreme response.
But a moderate response might be reasonable, and you may hesitate to accept without question what she says in the future. Of course, if you discover that she has lied more often than you originally realized, your reluctance to believe her will increase. Thus, moderate skepticism recognizes that many things we believe to be true are, for all practical purposes, certain or at least reliable. But the moderate skeptic also recognizes that some things we believe to be true may not be, and we should adjust our degree of confidence in those beliefs accordingly.

The epistemologist, as we see, operates between two extremes. On the one hand, one can reject any and all knowledge claims. The closer we head to that extreme, the closer we come to its endpoint of absolute skepticism, where nothing is true: a position Descartes and others have suggested is impossible to maintain. But at the other extreme is what one might call the endpoint of "absolute gullibility," where one believes that everything is true. Clearly that is also incoherent: One who believed that everything was true would have to believe, among many other things, that triangles have three sides, that triangles don't have three sides, and that triangles are both three-sided and not three-sided. In addition to the fairly obvious logical problems that arise for the absolutely gullible (if, for instance, you believe everything is true, you also have to believe it is true that not everything is true), the recognition that there are some things we want to reject as false shows this position to be impossible to maintain. Between absolute skepticism and absolute gullibility is a wide range of epistemological theories. We will now turn to some of these theories, which will help fill in some of the ideas outlined in the preceding discussion, and begin to fill in some of the gaps in order to give a more complete picture of what epistemology hopes to provide in its account of human knowledge, and how it seeks to provide that account.

3.2 Theories of Knowledge

We've now looked at some of the general goals of epistemology and seen some of the general ideas of how to approach those goals, while also recognizing that the one important thing any epistemologist must do is provide a response to skepticism. We can now go a bit deeper into the specifics of the theories philosophers have put forth.

Correspondence Theories of Truth and Knowledge

In our previous discussions, we had various people represent perspectives on questions of knowledge; for instance, we saw Mary the skeptic and Tommy the naïve empiricist. To make it a little easier to keep track of the person claiming to know something (or the skeptic, who claims not to know), and whatever it is that person claims to know, we will introduce a couple of abbreviations: "S" for the subject, and "p" for the proposition that subject claims to know. Thus, rather than saying, "Ann knows that grass is green," we will substitute "S" for "Ann" and "p" for "grass is green." The result will be "S knows that p," and we can then substitute any knowing subject for S and any proposition for p. This is such a standard way of treating knowers and what they know that it is sometimes called the "S knows that p" epistemology.

[Caption: "If" a large bus is yellow and has the name of a school district on it, "then" it is a school bus. This is an example of a correspondence theory of truth.]
For most of the history of philosophy, the obvious candidate for an adequate account of how we know something began with the subject and its knowledge claims, and then proceeded to explain that these claims were justified, or reliable, or true, if what the statement said matched up, or corresponded, with the world in an appropriate way. If Bill sees a cat sitting on the sofa, and then says, "The cat is sitting on the sofa," Bill qualifies as knowing this if, in fact, the cat is sitting on the sofa. That is, if Bill's claim corresponds to the facts of the world, we can say the claim is true, and that he knows it; for this reason, this is known as a correspondence theory of truth, which gives us a related theory of knowledge. Using our new terminology, we can say, "'S knows that p' is true if and only if what S claims about p corresponds to p." Written this way, it may look more difficult than it really is; it is a good idea to keep in mind we are still talking about cats and sofas. The basic idea here is simply this: Our claims are true, and we can come to know them, if our claims match up (correspond) with the world correctly. It seems likely that this is the theory most people adopt without thinking about it, but, as we will see, philosophers have challenged it for various reasons.

[Caption: While this cat may be pondering its own philosophical questions, our belief that the cat is on the couch is justified because (a) we believe the cat is on the couch, and (b) the cat is, indeed, on the couch.]

We have a statement of knowledge, then, and a correspondence account of its truth. But how do we determine that, in fact, the statement and the world match up in the appropriate way? Here, unsurprisingly, philosophers disagree with each other. Some insist that we have access to certain foundational perceptions, quite similar to those we saw the empiricist put forth: I see the cat, or I have a visual stimulus of a cat, or "I am appeared to by a cat-like object." As the last example indicates, the more philosophers talk about this issue, the less clear it becomes what exactly we have on our hands when we make a perceptual claim. Some even talk about "raw feels," "immediate phenomenological states," and "intuitive perceptions"; however, in spite of this forbidding terminology, the general goal is the same. We need to account for how we can have a perception (visual or otherwise) of the external world that gives us reliable information. If we get too confident about our abilities here, we always have the skeptic to challenge that confidence.

This issue, about the relationship between the thinking or knowing subject and what it wants to say in terms of its knowledge, turns out to be trickier than one might suspect at first glance. But we can already see how the correspondence theory of truth allows us to provide a very basic definition of knowledge, a definition that can be found in Plato and that has dominated most of the history of epistemology. We can conclude this brief outline, then, with this traditional definition of knowledge, assuming we have some sort of grasp of what the correspondence theory of truth says. We can begin with our original example, and then, by formalizing it, make it more general. "Bill knows that the cat is on the sofa" is true (he does know it) if the following three conditions are all satisfied, or met:

Bill knows that the cat is on the sofa if and only if
1. Bill believes the cat is on the sofa;
2. The cat is on the sofa;
3. Bill's belief is justified.
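For readers who find symbolic statements helpful, the shape of this three-part definition can be written out as a minimal sketch in the Lean theorem prover. The names Subject, Believes, and Justified are unanalyzed placeholders assumed purely for illustration; the chapter itself, of course, does not define them:

```lean
-- A minimal sketch of the classical "justified true belief" analysis.
-- `Subject`, `Believes`, and `Justified` are placeholders assumed for
-- illustration; nothing here settles how belief or justification work.
variable (Subject : Type)
variable (Believes Justified : Subject → Prop → Prop)

-- "S knows that p": S believes p, p is true, and S's belief is justified.
def Knows (S : Subject) (p : Prop) : Prop :=
  Believes S p ∧ p ∧ Justified S p
```

Written this way, it is easy to see that dropping any one conjunct changes the analysis; the Gettier cases mentioned below press on whether even all three together are enough.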
These are, then, the three traditional components of the definition of knowledge: Knowledge is justified true belief. Using our new terminology, we can summarize this conception of knowledge, what one might call the "classical" conception of knowledge, as follows:

S knows that p if and only if
1. S believes p;
2. p is true;
3. S's belief that p is justified.

Yet again, it seems as if philosophers work awfully hard to establish something that pretty much everyone already knows. But, as we will see, various philosophers have argued, in a number of different ways, that this account is inadequate. Some, such as Edmund Gettier, have provided very influential arguments showing that something might well be a justified true belief but not be something we would claim to know (Gettier, 1963). But for our purposes, we can see that we now have what appears to be a plausible theory of knowledge, based upon a plausible theory of truth, one that seems to capture our basic, commonsense notions of what makes a proposition true, and thus what then allows us to know what that proposition asserts about the world. We can now turn our attention to the problems these theories seem to have, prompting epistemologists to offer an alternative to the idea of correspondence.

Coherence Theories of Knowledge

[Caption: Arguing "I believe abortion is wrong because the Bible says it's wrong" or "I believe we need school reform because X studies show that we need school reform" rests beliefs on other beliefs, a pattern that can lead to infinite regress. However true the statements may be, this pattern is prevalent in political campaigning.]

Our friend Bill tells us that he knows the cat is on the sofa. Perhaps we are in a skeptical mood, so we ask him how he knows this. Bill tells us that he sees the cat on the sofa; we, in turn, ask him how he can trust his eyesight to be reliable; hasn't he been fooled before, by mirages or optical illusions? Or perhaps he once thought he recognized someone from a distance, but upon moving closer, it turned out to be someone he didn't in fact know? Bill gets a bit frustrated at these picky objections but tells us that when he is close enough, and the light is good, and he is sufficiently alert and not sick, he can trust his eyesight. Naturally, we bring up the standard skeptical objections: How close is close enough? How do we know the light is good? What does it mean to be sufficiently alert? How can you be certain you aren't sick? It isn't going to satisfy us if Bill says he knows something if he satisfies the conditions for knowing something; that just assumes what he is trying to justify (what philosophers call begging the question). Bill may run out of patience and say his version is good enough for him, but if he wants to continue to explore the epistemological issues here, he has to respond to our questions.

Very quickly, he discovers what has been called the problem of infinite regress. If my beliefs (or knowledge claims) rest on other beliefs—as when Bill's perception of the cat relies on his belief that he can generally trust what his eyes tell him—then those beliefs in turn need to be justified. Of course, those beliefs themselves rest on still more beliefs, and we seem to end up with two options. Either I have a set of unjustified beliefs (which the skeptic will be happy to point out are unjustified), or I have one or more beliefs that I accept as simply true and undeniable.
(In our discussion of Descartes, we saw that his claim that when he thinks, he knows he exists had to be accepted as true and undeniable. But that seems quite different, and conceptually far away, from our being able to accept the reports of our senses.) Should we say that what the senses tell us is automatically true? This seems to be a problem, because we know that sometimes what our senses tell us is not true. Or should we say that the reports our senses give us are unjustified? This is uncomfortable for some epistemologists. We want our knowledge to be justified, but if our knowledge rests on perceptions that are either unjustified or potentially misleading and unreliable, well, this is a problem. We can succumb to the temptation to just tell the epistemologist to shut up. But if we want to offer a solution that will satisfy the philosopher, that won't work. In fact, philosophers have addressed infinite regress by offering an alternative to the correspondence theory of truth that seems to lead us into that regress: a coherentist approach to the problem of knowledge.

[Caption: The coherence theory of knowledge implies that our beliefs form a complex, interconnected structure. With beliefs stemming from other beliefs, one's set of opinions and theories has a web-like relationship.]

The fundamental idea of a coherence theory of knowledge is that no single belief, or set of beliefs, is privileged or foundational. Rather, our beliefs, or knowledge claims, form a complex, interconnected structure—sometimes compared to a spider's web. Any one belief (call it B) will rest on a large set of other beliefs, and each of those will rest on other beliefs (which could include, in part, B itself). The crucial feature of this approach is that we don't have a structure that rests on some set of truths that are certain—true—that can thereby support all of our other claims. Rather, all of our beliefs "hang together" as a whole.

We can return to Bill to see how this might work: He tells us—yet again—that he knows that the cat is on the sofa. Now when we ask him to justify this claim, he begins to mention all of his other beliefs: the general reliability of his senses and other beliefs he has based on those senses that turned out to be true; he might even point to the fact that he heard our question correctly as indicative of a reliable sensory report, and, of course, his belief that we are listening to him is still another belief that seems to fit together nicely with all of the others. No single belief Bill mentions plays a foundational role, but all of his beliefs cohere in such a way that he is able to rely on his knowledge claims. It is also worth noting that there are beliefs—maybe we suggest to Bill that pigs can fly—that will not cohere with all of his other beliefs sufficiently for him to accept the claim. The claim doesn't come automatically labeled as false, of course; rather, it is a matter of how it fits with the rest of a set of beliefs. Cats on sofas cohere pretty easily with our other beliefs; flying pigs do not.

The obvious advantage of coherentism is that, because it doesn't rely on foundational beliefs at all, it doesn't have to worry about foundational beliefs being unjustified or potentially false. As we have seen, that seems to be a serious problem for the correspondence theory. However, the coherence theory, sadly, seems to confront a difficulty that may be as serious, if not worse.
Solipsism

A well-known doctrine among philosophers is called solipsism: the idea that, for an individual, the only thing that exists in the world is the mind of that individual. On this view, the only thing you have access to is your own mind and its contents. This would mean everything—your body, everyone else, the stars and planets, what you had for lunch, everything—is simply a projection of your own mind. Not too many philosophers take solipsism seriously, for what may be obvious reasons. But let's call a given theory T; if solipsism is consistent with, or worse, supported by, T, that isn't an argument for solipsism: It's a pretty good sign something is wrong with T.

Critics of the coherence theory of knowledge point out that solipsism is a completely coherent, logically consistent viewpoint. They then want to find out if the coherence theorist really wants to say that what the solipsist says should count as knowledge. Again, for what should be obvious reasons, that is a problem for the coherentist: Either a thoroughly wacky view such as solipsism counts as knowledge, or some other criterion has to be added to coherence to support its claims of justifying knowledge. Presumably, we don't want a "wacky epistemology," but if we add something to coherentism, it isn't clear that we any longer have what would qualify as a coherence theory of knowledge. That is, in order to defend the coherence theory of knowledge, the coherentist has to abandon it.

If this isn't bad enough news for the coherence theorist, many have argued that the problem just mentioned leads to a more general worry. If we rely solely on the coherence of our beliefs to justify our knowledge claims, then how do we move from those beliefs to what those beliefs are about? That is, what is the relationship between our knowledge claims and the "real world" that we, presumably, want to know about? It isn't entirely clear from the coherentist position how we get to that aspect of the world that we seem to confront, in the form of the objects that make up the external world. In short, if this objection is legitimate, the coherentist seems to sacrifice the real part of the real world, and loses what philosophers sometimes call the "robustness" of our knowledge that, in the long run, the theory should be able to account for or explain.

So far we've seen that the correspondence theorist seems to have one kind of problem: Certain beliefs have to be foundational that are, themselves, either unjustified or unreliable. But the coherence theorist has a different kind of problem: A completely coherent epistemology may be completely bizarre (such as solipsism), or may fail to account for the objective aspect of the real world that is imposed upon us, whether we like it or not. These seem to be substantial problems; as always, we can just say we know what we know, or tell the epistemologist to quit bugging us. But there are other philosophical options still available, so before we give up, we can look at one of these options.

Kant's Theory of Knowledge

The rationalists, such as Descartes, emphasized that our source of true and certain knowledge is reason; that is, what we can say we know is based on what we can gain solely through our mind. The senses, for the rationalists, often interfere with our ability to discover the truth.
The empiricists, on the other hand, regarded the senses as the source of our knowledge, and thus thought many of our ideas were really just general claims based on the information we derive from such sensory organs as eyes and ears. The confrontation between classical rationalism and classical empiricism is part of the history of philosophy called "modern" philosophy, and is often thought to end with the philosophy of Immanuel Kant, who sought to show that both of these traditions had some important things right, but also had some important things quite wrong.

Kant's Categories

Kant argued that we do have many general concepts that simply cannot be found through the senses, for the simple reason that we need these general concepts in order to understand what the senses tell us. But those general concepts—what he called "categories" (Kant, 1996)—can't be the whole story, because so much information about the world comes to us through our interaction with it through the senses. On Kant's view, the human being makes judgments about experience: Those judgments have to employ certain concepts that make our sensory information comprehensible, but the content of those judgments comes from our senses. Although it sounds complicated—and it can get that way—Kant regards himself as simply showing that when we say things about the world around us, we bring with us a mind equipped with certain concepts that allow us to understand the information the world gives us.

One way of thinking about this is to imagine a friend videotaping a swim meet and showing it to you. Your friend has a peculiar sense of humor, however, and decides to show it to you backwards. How long does it take for you to realize this? Rather than seeing the swimmers dive off the starting blocks into the pool and begin swimming, you see the swimmers moving backwards through the water, then emerging into the air and landing, somehow, on their feet on top of the starting blocks. While this isn't an example Kant himself uses, it is a way of showing what he had in mind when he tells us we bring to our experience certain concepts that make that experience possible.

[Caption: An "event" is a situation where we expect a sequence of things to unfold in a particular order. At a swim meet, for example, we expect the swimmers to line up, the gun to sound, and the race to commence.]

Watching something like the swim meet, we are dealing with what we can call an event, a complicated sequence of things that have an order: For instance, the official fires the starter's pistol (A), the swimmers dive into the pool (B), and then they begin swimming down the pool (C). A–B–C is the natural sequence, while C–B–A is the sequence your friend showed you when he ran the tape backwards. Our perception of the entire event is made up of an uncountable number of individual perceptions, all that we take in through our eyes as well as whatever other sense organs are involved. But we organize all that information with concepts that allow us to make sense of it. What is crucial for Kant is the idea that we couldn't have gotten those organizing concepts, the categories, from the information our senses give us, because they are what allow us to make sense of that information to begin with.

One of the categories Kant emphasizes is that of causality: We bring with us to our sense experiences the concept of cause and effect. One thing we know about causality is that a cause precedes its effect: I hit my thumb with a hammer (cause) and quickly feel a sensation of pain (effect).
If Kant is right, we already have a notion of causality that allows us to put that event in its appropriate sequence of (1) hitting thumb and (2) pain. Looking at our experience to see what we need for it to make sense, we find these kinds of categories, such as substance (that something endures through time), causality, and the causal interaction among substances (say, between a thumb and a hammer). This doesn't mean we can't make mistakes about specific, individual causal claims. Suppose you have a headache: You take some aspirin, drink a lot of water, and take a nap; soon the headache disappears. Perhaps the aspirin caused the headache to go away, perhaps it was the water, perhaps the nap, perhaps the combination. But we can be pretty confident of the sequence here, and that the things you did to get rid of the headache didn't cause the headache. Simply to make sense of this sequence, then, we assume (or, technically, presuppose) causality. Kant's response, then, to the empiricist (such as Hume) is to argue that we need categories, such as causality, to make sense of the information the world gives us. Without these concepts, that information wouldn't make any sense. So we need categories before experience to make sense of experience, and for that reason they precede experience and can't be gotten from it.

Grammar and Logic

While Kant's theory is challenging, it isn't very hard to see what he has in mind if we use one of Kant's own ways of describing what is going on here. Long before you studied grammar and the rules of language, you spoke, more or less, grammatical English. That is, long before you knew the difference between a noun, a verb, and a direct object, you knew that "Bob threw the ball" was correct, and "Threw ball Bob the" was incorrect. Eventually, you learned that one follows the rules of grammar; the other does not.

[Caption: Through a "logic of experience," detectives can spot an irregularity at a crime scene and often use it to help paint a mental picture of what occurred there.]

Kant argues that just as language has rules for speaking correctly (grammar), we have rules for thinking correctly (logic). We know, for instance, that "I am six feet tall" and "I am not six feet tall" can't both, at the same time, be true. We knew that well before we ever heard of something called "logic," but if we study logic, as we did grammar, we find out that this is a rule of logic called the principle of non-contradiction (stated formally in the short sketch at the end of this section). So just as we generally speak grammatically, we generally think logically, even if we don't know what the rules are that we are following. We also know when these rules are broken. Does this mean we always think logically? Sadly, it does not; just as we don't always speak grammatically, we don't always think logically. But it is a good idea to know these rules if we wish to improve our abilities to think "correctly" and to speak "correctly."

Kant insists that just as we have rules for thought, we have rules for knowledge; these rules for knowledge, then, function as a logic of experience, and things like causality then become rules we employ in experience. We may not always get it right, but the rules that a logic of experience provides help us understand what a genuine experience looks like. (This, then, would be a way of explaining why we knew, immediately, that the videotaped swim meet our friend showed us was running backwards.)
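The principle of non-contradiction mentioned above is simple enough to state and prove formally. Here is a purely illustrative sketch in the Lean theorem prover, using the chapter's own example:

```lean
-- The principle of non-contradiction: no proposition can hold
-- together with its own negation.
theorem non_contradiction (P : Prop) : ¬ (P ∧ ¬ P) :=
  fun h => h.2 h.1

-- The chapter's example: "I am six feet tall" and "I am not six
-- feet tall" cannot both be true at the same time.
example (SixFeetTall : Prop) : ¬ (SixFeetTall ∧ ¬ SixFeetTall) :=
  non_contradiction SixFeetTall
```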
As Kant and many other philosophers urge, the better we understand these rules of experience, the better we understand the knowledge claims that experience gives us. On Kant's analogy, then: To speak better, learn the rules of grammar; to think better, learn the rules of logic; to know better, learn the necessary rules that make knowledge possible.

Robustness and the "Myth of the Given"

So far, we've seen three of the best-known attempts to explain the ability of human beings to know things. Generally, we do assume that we know things, although the skeptic is always around to remind us that maybe our confidence shouldn't be too high. The correspondence theorist insists that our knowledge claims are true, or at least very reliable, if our claims match up, or correspond, to the way the world actually is. The coherence theorist, in contrast, suggests that our various beliefs all must fit together, or cohere, correctly. Kant offers an alternative that combines a correspondence theory of truth (that is, our claims are true if they correspond to the world) and a rule-oriented theory of knowledge (that is, what we call knowledge must not break any of the rules that give us the ability to make those knowledge claims in the first place). We have also seen some of the problems these theories confront, and epistemologists continue to debate the various advantages and difficulties of each.

As always, we can simply say we know what we know and be done with it, but for philosophers, that isn't enough, for what is probably a pretty obvious reason: If two people disagree, but are content with their knowledge claims, we seem to have no way of resolving the dispute. Many people are content with that result, but philosophers want to continue to see if there is a more satisfactory way of resolving the dispute. Epistemologists and, as we will see, scientists insist that another condition must be considered when considering human knowledge: that our knowledge claims make sense and, in a fundamental way, make a difference. Sometimes philosophers use the term "robust" to describe theories that present results that are widely known and widely accepted, help solve problems that remain, and inform our further research.

To see this, we can return to our example of solipsism, the idea that the entire universe is simply a projection of an individual's mind. As noted, most philosophers regard this theory as entirely consistent: Everything can be explained within it, and there are no obvious logical problems, so one can go along one's merry way as a solipsist without fear of ever being shown to be wrong. We might think a theory that offered such a result would be a great theory, but most of us, and most philosophers, regard solipsism as simply crazy. A more formal way of saying this is that it isn't robust: It is neither widely known nor widely accepted, it solves problems in a completely unsatisfactory way (or really doesn't solve problems at all), and it hardly helps us do further research. The most obvious way to demonstrate the failure of solipsism to be robust is to imagine a solipsist trying to defend the view: Who is the solipsist talking to? Why is such a defense necessary? There is no world outside of the solipsist's mind, so it isn't really clear what role any sort of communication with others would even play.
So we can see that a consistent and coherent theory—here, solipsism—isn't enough to establish its view, whether because we want to say it lacks robustness, or, more informally, that it simply doesn't make enough sense to be of any real interest to anyone. There are those who would say they can simply look at the world and know there are certain truths in life. Philosophers, of course, would require additional explanation for such a claim. There is the temptation, on first reading some philosophers, to grow impatient with their very picky objections to things that we may already think we have a pretty good handle on. As we have seen, some things we think we have pretty well figured out turn out to be more difficult than originally thought. As philosophers, we have the obligation to see if we can explain things to address the difficulties that arise, if only to show, as Wittgenstein tried, that many of the problems that arise don't really need to. A quick way of expressing one's impatience is simply to tell the epistemologist that we are able to look at the world, make our knowledge claims, and thus read the truths of the world by looking at it. Naturally, the philosopher is suspicious of this ability and thus generates yet one more problem for our naïve, commonsense view of human knowledge. If we can simply read the truths of the world off of it by looking at it, then presumably there will be widespread agreement on what those truths are. I see an apple and say it is red; you see an apple and say it is red. We both, that is, have the same sense perceptions and make the same judgment ("the apple is red"): We agree. Sounds simple, but by now we probably realize that things that sound too simple may be too simple. Let's change the example to a slightly more complicated experience. You and I are watching the Super Bowl together: Our favorite teams—we are passionate, die–hard fans—are playing each other. The score is tied; with 30 seconds to go, the quarterback of my team throws the ball into the end zone to his receiver, who fails to catch what would be the winning touchdown. It is entirely obvious to me that the receiver was pushed, and there should be a penalty called; it is entirely obvious to you that the defender made a magnificent, and completely legal, play to prevent the catch. While most of us agree that an apple is red, is it necessarily a given? Of course it is red, but what shade of red? What characteristics would you use to describe it as red? These questions fall under the "myth of the given." We both have virtually identical perceptions of this event, yet come to radically distinct conclusions. If we can read the truths off of the world, how can there be this disagreement? More generally, isn't our interaction with the external world complicated? So aren't our judgments about that world more like the complex events of the football game, rather than identifying a single, isolated object such as an apple and judging that it is red? Philosophers have come up with various ways of describing this aspect of human knowledge; the most famous, due to the formulation of Wilfrid Sellars (1912–1989), is called the myth of the given (Sellars, 1956). The given is the idea that the world is simply out there, ready for us to look at it, judge, and thus determine what the "facts" are about the world. Sellars's argument is that this, sadly, isn't what happens; rather, we perceive the world with a complicated apparatus of concepts, assumptions, prejudices, theories, biases, languages, and emphases.
A common way of expressing the myth of the given is to say that all perception is "theory–laden"; that is, there is no such thing as an immediate perception of the external world, and all such perceptions are embedded in a much broader context. Even the apple example may not be as easy as it appears: Perhaps you are an accomplished painter, and thus have a much more complex understanding of color terms. You may then describe the color of the apple as a quite specific shade of red, while I only know the one general term "red." The point isn't that one of us is wrong and the other right; in fact, in a certain sense, we are both right. But we perceive the apple, the football game, and, in general, the various features of the world not directly, but using our minds to interpret the information we get from the world, and thus we often come to widely different conclusions based on that information. Descartes: "Clear and Distinct" We've seen a combination of competing theories—correspondence, coherentism, and Kant's—and the challenge of radical skepticism, as well as the requirement of robustness and the recognition that we may well not be able to look at the world directly and determine its truths. Assuming we're not quite ready to give up and simply ignore epistemology, two questions seem to emerge: What exactly is it that would satisfy the epistemologist? Can that satisfaction ever be accomplished? In other words, is there some standard we have to meet to say that we know something? Is there any possibility of finding that standard? One traditional answer goes back to Descartes. For Descartes, we can claim to know some claim "p" if and only if it meets his standard of being "clear and distinct": So if I know that the book is on the table, or that rectangles have four sides, or that Trenton is the capital of New Jersey, I must know these things clearly and distinctly. But what does that mean? Unfortunately, Descartes isn't very clear himself about what is involved here. The basic idea seems to be that an idea of a thing is clear and distinct if and only if it represents all the essential features of that thing. As is often the case with Descartes, an example from mathematics may help. Let's say I have an idea of a triangle; that idea is then clear and distinct if and only if it represents all the essential features of a triangle. Essential features are those things that a thing must have to be that thing, so a traditional triangle's essential features are having three sides, three angles, and interior angles adding to 180 degrees. An object (a polygon) that has three sides, three angles, and interior angles adding to 180 degrees is a triangle, and something can't be a triangle unless it has these properties. So these are essential properties of a triangle, and my idea of a triangle is clear and distinct if it represents all the relevant essential properties. We might also think of an example from chemistry, where the essential feature of a molecule of water is that it has two atoms of hydrogen and one of oxygen (or H2O). So if I have an idea of water that says it is H2O, that would be clear and distinct. It becomes considerably less obvious what the essential properties would be for more ordinary objects to have corresponding clear and distinct ideas. What, for instance, are the essential features of a chair? Must it have four legs? Must one be able to sit on it? Could we imagine a chair that might have some of these properties and lack others; would our idea of a "chair" then not be clear and distinct?
What about even more complicated terms, such as "love"? Are there features of being in love that must be present? If one or more of these essential features isn't present, are we not in love? What would the list of essential features of "love" include and exclude? The other rationalists used slightly different language, but generally followed Descartes in thinking that we have these ideas of reason that are certain and true. Descartes claimed we know these ideas are true because they are guaranteed by God; God would not allow human beings to have clear and distinct ideas that were not true, for that would be a deception. God, as all–good on Descartes's conception (and most conceptions of God), would not deceive us, so we can rely on God's guarantee. But we can see that Descartes has an extremely high standard for what we can take to be certain (or clear and distinct): a perfectly clear idea of it, containing all the required features (and leaving out all those that are not required), and the guarantee of God that these clear and distinct ideas are certain and true. Just in case we are still a bit skeptical, Descartes insists that we know these certain truths by the "light of nature," the ability of human beings to immediately perceive the truth of ideas through a natural ability of the human mind. Here is one response, then, to the issue of what the epistemologist requires for something to be known: absolute, certain truth (guaranteed by God or not) that is immediately and intuitively obvious to everyone who takes the time to discover it. For many, even those sympathetic to Descartes's project, there are too many problems here for his solution to be acceptable. An explanation of our knowledge that relies on God's guarantee seems to mix up theological and epistemological questions that should be kept separate. Even if one accepts that guarantee, what it means for ideas, outside of mathematics and physics (and mathematical physics), to be clear and distinct remains unclear. To then be told that we know these things through an unexplained "light of nature" does little to help, and it seems as if this kind of response to the demands of the epistemologist is simply insufficient. But, as we will see, there is another response that takes a completely different approach from that of Descartes, challenging not his answer to the epistemologist but rather our conception of what we need in order to provide that answer. Too High a Standard? Perhaps somewhere between Descartes's exacting standard for knowledge—absolute and certain truth—and the temptation just to abandon any hope of meeting the epistemologist's demands, there is room for an answer that will satisfy at least some epistemologists, and provide a plausible response to the skeptic. This seems to have been the approach taken by a number of philosophers who regarded Descartes's standards for knowledge as being too high; so high that they could never be met, at least for most of our most ordinary knowledge claims. Do we want, then, to abandon our knowledge claims and give in to skepticism, or should we perhaps rethink what our standards for knowledge should be? David Hume and Immanuel Kant disagreed on many important philosophical issues, but both, in somewhat different ways, respected the Scottish philosophical tradition known as commonsense philosophy.
This may not look precisely like what we would call common sense, but its fundamental idea is that many of our knowledge claims are pretty reliable. Indeed, most of the time, what we claim to know is quite sufficient for us to get around in the world, communicate with one another, plan for the future, and do all the other things we would hope our knowledge would help us do. The question then becomes whether this is good enough; if we fail to meet Descartes's standards, do we lose all confidence in what we claim to know? Or is it more plausible to think that most of us (quite possibly including Descartes), in going about our everyday lives, have sufficient confidence in our beliefs, while, importantly, recognizing that we may have to change some of those beliefs from time to time? While philosophers might disagree on how we come to believe things, they do agree that we have the capacity to change our beliefs in light of new evidence. If the latter approach is legitimate, the issues between Hume and Kant return. Hume insists that all our knowledge claims (except those of mathematics and logic) are based on our experience, and that no such claims can ever be necessary. They may be so likely or probable that we would never think to doubt them, but there is always the possibility that they will turn out false. Kant agrees with Hume that specific knowledge claims—whether the cat is on the sofa or the baseball broke the window—can turn out to be false, and thus are always open to revision. But Kant disagrees with Hume, arguing that these specific knowledge claims have to assume, or presuppose, certain general concepts that are necessary in order to be able to make the specific claims themselves. We may be mistaken, that is, that some specific cause led to some specific effect; but we cannot be mistaken that understanding our world requires us to assume, or presuppose, the notion of cause and effect. Otherwise, for Kant, we would simply have unorganized sets of sense perceptions, without the ability to turn them into actual knowledge claims, or claims about experience. In this way, Kant believes he has an answer to at least some versions of skepticism that Hume does not. If our understanding of the world, and our ability to experience it, requires that certain concepts (such as causality) are absolutely universal and strictly necessary, then we have some restrictions on what we can and cannot experience. That limits the skeptic, in Kant's eyes, to claiming that individual empirical claims about knowledge might be false; this is something both Kant and Hume are willing to accept. Thus, the lowering of the standard from Descartes's to that of Kant's (or Hume's) allows us to respond to the skeptic while satisfying the idea that we can explain a large part of our knowledge as reliable and useful, along with the recognition that we may have to change some of our beliefs in the light of new evidence. One last point, however, should be made here about a specific kind of knowledge, namely mathematics. With rare exceptions, philosophers have insisted that claims such as "2 + 3 = 5" and "all hexagons have six sides" are different kinds of knowledge claims than ordinary empirical claims about cats and hammers and sofas and thumbs. Typically, empiricists adopt the idea that mathematical truths are true by definition or can be proved to be true without relying on any empirical information; they are just different kinds of truths.
Rationalists have tended to take mathematics as an example of the kind of certainty all our knowledge claims should have, but, as we have seen, this is a standard that for many is simply too high. Even though questions of mathematical knowledge continue to engage philosophers, and are full of interesting epistemological issues, they are sufficiently complex and difficult that we haven't discussed them much here. It is, however, a good idea to keep mathematical truths in mind when trying to determine what kinds of claims can really qualify as knowledge. 3.3 The Philosophy of Science Science provides us with many of the kinds of knowledge claims that we regard as the most certain and the most reliable. Philosophers have raised difficulties for what justifies our knowledge claims within science, and even question what qualifies as science. Epistemology has its own interesting issues and questions, but it also has many significant implications for other fields. In this chapter, we will look at some topics in the philosophy of science, in terms of what philosophers have said about some of the basic philosophical concepts at the center of scientific inquiry, as well as some of the more modern developments that arise when philosophers examine science and its methods. Causality If gasoline is flammable and a lit match will ignite gasoline, will a match dropped in a large gas tank cause a fire? Not necessarily, because the sheer amount of liquid gasoline will likely extinguish the match. This is one example of how causal relationships often involve different variables. Central to the sciences, both the natural sciences (such as physics and astronomy) and the social sciences (such as psychology and economics), is the notion of causation. Consider some of these basic causal claims:
* Dropping a lit match on rags soaked in gasoline will cause a fire.
* Too many houses for sale in a neighborhood will cause the prices to drop.
* Eating bananas will cause me to get enough potassium.
* Excessive sunspots will cause my radio reception to get worse.
In general, then, as we can see, we make causal claims all the time; not just in science, of course, but constantly in our everyday lives. (If I'm hungry, eating will cause me not to be hungry, or at least to be less hungry.) This won't come as much of a surprise, but it can be a challenge to determine what, specifically, cause–and–effect relationships look like, how they can be established, what confidence we can have in predicting similar causal relationships in the future, and what kinds of mistakes can be made in thinking about causes. If we want to talk with some degree of rigor about causal relations, it will help again to use some abbreviations. Let C be the cause and E be the effect; so C → E could be read as C causes E. To take one of the preceding examples: Let C be excessive sunspots and E be my radio reception gets worse. Then C → E can be read as excessive sunspots cause my radio reception to get worse. There are difficulties here, though, that begin to surface when we start looking a bit more closely at events, either because the event itself can be complex, or because the causal relationship isn't entirely clear. For instance, one might have a close correlation between C and E and yet not be willing to say C → E (or that C causes E). There is, for example, a strong correlation between roosters crowing and the sunrise; but we probably don't want to say that roosters crowing causes the sun to rise.
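The rooster example can be made vivid with a toy simulation. In the Python sketch below (our own illustration, with invented probabilities), crowing and sunrise are both triggered by a hidden common factor, dawn, rather than by each other; the two end up almost perfectly correlated even though neither causes the other.

```python
import random

# Toy model: rooster crowing (C) and sunrise (E) are both triggered by a
# common factor -- dawn -- and not by each other. The 0.98 probability is
# invented purely for illustration.

days = 10_000
crow_and_rise = 0

for _ in range(days):
    dawn = True                                        # dawn arrives every day
    rooster_crows = dawn and random.random() < 0.98    # the rooster almost always crows at dawn
    sun_rises = dawn                                   # the sun rises whether or not the rooster crows
    if rooster_crows and sun_rises:
        crow_and_rise += 1

print(f"Days with crowing and sunrise together: {crow_and_rise}/{days}")
# The correlation is nearly perfect, yet removing the rooster would change
# nothing about the sunrise: correlation alone does not establish C -> E.
```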
So we need a correlation between C and E, we need to identify which is the cause and which is the effect, and we need to be able to distinguish between what seem to be accidental correlations and genuine, law–like correlations. If we can do all this, we may be confident that we have successfully identified a causal relationship between C and E; but doing all of this isn't always easy. It may start to seem that philosophers take obvious things, that everyone already understands, and then transform these same things into very confusing and very difficult topics. Sometimes that happens; but what philosophers also want to do is determine whether or not we really do understand these things that we think we do. Critically examining our beliefs can be frustrating, but it can also be very rewarding to be able to justify—not just assert—that we really do understand some of the things we claim to. It can be even more frustrating, but in its own way also very rewarding, to realize that certain things we always took to be obvious and true are neither. As we begin to look at causality more closely, it will start to become clear that causality plays a central role in understanding science; many experiments and investigations seek to determine causal relationships. Does smoking cause heart disease? Does dieting cause weight loss? Do credit cards cause bankruptcy? These and many other causal claims can involve a number of factors; scientists often try to isolate one of these factors and test it to see if it produces the effect in question. After we look at some of the theory that philosophers have developed to examine these questions about causality, and after we introduce some of the relevant terminology, we will turn to a specific controversy in science, and try to apply these results. Hume's Problem David Hume, whom we have already met, noticed something of interest in examining our expectations of the future. We expect the future, in many ways, to be like the past (although obviously not in all ways, which would make for a very boring future). What gives us our confidence that certain things that we relied on in the past will be reliable in the future? We can't say the future will be like the past because in the past the future has always been like the past: For one thing, that isn't true (things happen differently than we expect them to). Also, this argument looks as if it is saying our claim (the future will resemble the past) is true because our claim is true (in the past, the future has resembled the past). So Hume wanted to discover where we got our confident expectations about the future. To understand his argument, we need to introduce the notion of an inductive argument. An inductive argument has reasons, or premises, for a conclusion; but no matter how many premises one may have, or how strong one's reasons for accepting a certain conclusion, the premises can be true and the conclusion false. A "good" inductive argument is said to be strong, and inductive arguments are evaluated on a continuous scale, from very strong all the way to very weak. An example will probably make this clear.
I read Shakespeare's play "Hamlet," and it was difficult.
I read Shakespeare's play "Othello," and it was difficult.
I read Shakespeare's play "Romeo and Juliet," and it was difficult.
Therefore: The next Shakespeare play I read will be difficult.
Perhaps you've read more than just these three plays, and they were also difficult; that would make this inductive argument stronger.
Or perhaps you read one of Shakespeare's plays, and it wasn't quite as difficult as the others; that would make this inductive argument a bit weaker. But the possibility remains that you may have read a great number of Shakespeare's plays and found them all difficult; then one day you discover one that you find quite easy. Thus, the premises would all be true, the argument would be relatively strong (because those premises make the conclusion very likely), and yet the conclusion is false. What does this have to do with causality, according to Hume? Hume argues that all our understanding of causal relationships comes in the form of such inductive arguments. Using C, E, and C → E, his argument looks like this:
C1 was followed by E1.
C2 was followed by E2.
C3 was followed by E3.
. . .
Cn was followed by En.
Therefore: Cn+1 → En+1
That is, we have seen one thing (C) followed by another thing (E) one, two, three—any number (n)—times, so we develop the expectation that if we see C again (Cn+1) it will be followed by E again (En+1). The more times C is followed by E, the stronger our expectation is that C will be followed by E. Indeed, our expectation is so strong that we regard C as the cause of E. But, as Hume insists, this is an inductive argument. So no matter how strong the support for the conclusion (that C → E, or C causes E), it could be false. Many of our most strongly held beliefs are of this nature, and it may sound odd to describe these beliefs as "habits of the mind," as Hume does in An Enquiry Concerning Human Understanding (Hume, 1910, § IV). We often draw strong conclusions from very few premises: For instance, if you ate at a local fast food restaurant and got food poisoning, you might still go back again. But if you ate there twice and got food poisoning both times, you might hesitate about going back a third time. But here you are relying on the claim that this particular restaurant caused your food poisoning, and you've drawn this conclusion on the basis of only two examples. For Hume, and probably for most of us, the most strongly held beliefs are those we consider the "laws of nature." These are the kinds of things that tell us, for instance, that we can't walk on water, that if we get too close to an open flame it will hurt, and that if we drop a heavy object it will fall. If Hume is right, these are also simply the result of very, very strong inductive arguments, but, as inductive arguments, their conclusions don't have to follow. That is, the laws of nature are only the kinds of things we have the best support for; they aren't necessarily true. Hume recognizes that we rarely, if ever, doubt them; however, for some, saying the law of gravity is a "habit of the mind," or simply something we expect but can't know is necessary, isn't good enough. Don't the laws of nature have to hold everywhere and for all time? This is a problem that Hume identifies—sometimes known as the problem of induction—and that philosophers (and scientists) have continued to discuss. It may seem pretty easy to determine which inductive arguments give us reliable information, and which do not. For instance, it is unlikely that you have dropped a bowling ball on your foot 350 times, yet you are probably quite confident that it would hurt to do so (and, therefore, it is something to avoid). We might even want to say that if C is followed by E 350 times, without exception, we can rely on the claim that C → E. But we can bring out Hume's problem with a famous story widely attributed to Bertrand Russell, a well–known 20th century British philosopher.
Here, we will see C happen more than 350 times, and each and every time it is followed by E. Yet the conclusion that C will always be followed by E, or that C causes E, can be, as we will see, a very hazardous one to draw. Starting on December 1, a turkey wakes up every morning, and hears the farmer slam the farmhouse door; and then, after a few minutes, the farmer comes in and feeds the turkey. After a few days, the sequence "door slam–turkey fed" is pretty well established. After months, it seems undeniable; for 100 days, it has happened. Then 200 days. Then 300 days. Then 350 days. By now, the turkey has no doubt that when the door slams, it will soon be followed by his breakfast. Then, around the end of November, it's Thanksgiving morning. Here we have a very strong inductive argument, but the conclusion is obviously false, as the turkey learns when he gets his head cut off in order to play his central role as that afternoon's dinner. Responses to Hume As mentioned previously, many would like to be able to say that the law of gravity is true, or certain, or necessary. That is, given the mass of an object, and how it is affected by another large object (such as a piano being affected by the earth), we can predict with certainty what will happen. For example, a piano that is pushed off the roof of a 10–story building will come plummeting down. Hume (1910) seems to reject this, saying that even the laws of nature do not establish necessary connections between a cause and effect. We may have such strong and confident expectations that we would never question them, but to move from a confident expectation to a claim of necessity is, for Hume, unjustified. Dogmatism is asserting something rather than demonstrating it. Various forms of scripture, for example, contain dogmatic explanations for religious origins. A number of responses to Hume are available. We could agree with him, we could simply say he's wrong, or we could try to construct an argument showing that he is mistaken. These, and other responses, have been put forth since Hume's era, but we should at least take note here that just asserting that he is wrong is not regarded as a good philosophical response; one must try to demonstrate that he is wrong. Philosophers tend to dismiss those responses that merely assert claims, without providing arguments for them: This is a position known as dogmatism, and while dogmatism may be appropriate in some contexts, it is rarely acceptable within philosophy. However, it is certainly true that when one really believes something, but has difficulty saying why, dogmatism can be very tempting. The two most prominent responses to Hume are (1) to agree with him, but to carefully state what that agreement means, and (2) to show that he assumes something about causality that undermines his conclusion and reveals a stronger conception of causality than he is willing to admit. We can look at this first response before turning to the somewhat more complicated second one. Traditionally—that is, before Hume—most philosophers and scientists regarded the relationship between cause and effect as necessary. If a cause was correctly identified, it had to be followed by a specific effect. Aristotle, for instance, gives a very complex account of causality (which we can sidestep here) that involves a number of different kinds of causes; but even in the era in which Hume wrote, few doubted that causality involved a necessary connection, which is what Hume, famously, denies.
But let's say Hume is right and accept the idea that our understanding of cause and effect doesn't establish a necessary connection between them. Have we lost something important? Are we wholly incapacitated, unable to figure out how to go about doing things for the rest of our life? In other words, if Hume is right, does this lead to some sort of intellectual paralysis? Hume certainly did not think so, and it is important to see just what he denies (and, therefore, what he accepts). We have all sorts of beliefs, some well–established, some extremely well–established, and some not terribly well–established. We can rank these in terms of confidence, and that confidence might be expressed as a bet. For instance, very few people would be willing to bet that the sun will not rise tomorrow or that if one drops a heavy object it will not fall to the ground. These are the kinds of beliefs that we have utter confidence in and never even think about questioning. At a certain point, conclusions of inductive arguments are so strongly supported that we act as if they are necessary; but to claim that they are, in fact, necessary is what Hume resists. On this way of looking at Hume's results, then, all we have lost—if he is correct—is the philosopher's concern that our very strong confidence in causal relations isn't strong enough. But Hume might himself wonder what difference it makes, if we act as if there is no reason to doubt that some cause will lead to some effect. We are confident that a baseball thrown at an ordinary window will probably cause it to break; if we want to insist that it has to break the window, we may not be satisfied. But, again, if Hume is right, we may then simply have to accept the fact that we will not be satisfied. Many of us may not regard this as much of a loss. As was noted earlier, many see the history of philosophy, since 1800, as a battle between those who are more sympathetic to Hume and those who are more sympathetic to Kant. Kant claims that it was his reading of Hume that revealed to him how to address what he saw as Hume's skeptical results. Kant insists that the concept, or category, of causality is necessary, while recognizing (with Hume) that any specific causal claim may not be necessary. Kant constructs a sophisticated and complex argument, but we can give its outlines to at least see how he provides an alternative to Hume. Kant begins by distinguishing a "state of affairs" from an "event." Consider, for instance, a house: We see the house, and can look first (A) at the door, then (B) at the windows, and then (C) at the roof; but we might choose to look at its various parts in a different order. (We can, that is, follow the sequence A–B–C or C–A–B or B–C–A.) This is because "house" refers to a state of affairs, which is different from an event. An event seems to bring with it a sequence of things in time, and those things have to have some kind of order. Consider, again, the ball being thrown at an ordinary glass window. We see (1) the ball thrown, then (2) the ball strike the window, and then (3) the glass shatter. That sequence of 1–2–3 characterizes this as an event; one cannot simply choose to see 3 before 1, as one might choose to look at the roof of the house before looking at its windows. Kant's argument is subtle, but the basic idea should be clear. Hume himself distinguishes between states of affairs and events and would accept the difference sketched out here.
He would not say that seeing the roof "caused" us to then see the windows, and he would recognize that this is different from experiencing the event of the ball breaking the window. Kant points out that we distinguish states of affairs from events by seeing that events have this kind of order, or sequence, in time that states of affairs do not. We can identify something as an event if we put its various parts into the kind of temporal sequence we saw earlier (as 1 then 2 then 3). One way of putting these parts into this sequence is to see one of the parts as having to be earlier than another part (that 1 has to be before 2 in our window–breaking story). We might, Kant admits, get the actual specifics of the sequence wrong, but we do have to have some kind of sequence. So if Kant is right, Hume can only give his account by assuming an account of events that requires that the parts of events be put into this kind of order. So Hume is sneaking in an account of events that has a necessary sequence (1–2–3) in order to give his account of causality. But that would mean that Hume is sneaking in the idea of necessity when he tries to deny that causality has any sort of necessity involved with it. Did this car window shatter and then get struck by a crowbar? Not likely, which is why necessary sequences (crowbar strikes window, window shatters) are crucial to our understanding of events. The argument, again, is complex, but if Kant is correct here, this provides a very powerful response to Hume. To summarize it briefly, the idea is that Hume has to use a conception of events for him to say that we develop strong and confident expectations about how these events will occur, or how we can reliably predict that if one thing happens, something else will follow. Hume would be happy to admit that if a baseball were to strike an ordinary window, the window would probably break. But to understand that, he has to be able to identify this as an event, with a necessary sequence (ball hits window, then window breaks, and not the other way around) built into our understanding of it as an event. So if Hume is correct in saying that our understanding of events doesn't allow us to predict that one specific thing will necessarily cause another specific result, Kant agrees. But to say even that much, Hume is assuming a notion of causality that incorporates, or has built into it, a sense of necessity; and that's Kant's point. Our understanding of the world of events brings with it the notion that events have a causal sequence necessarily built into them. Without bringing with us that concept of causality, we wouldn't understand events in the first place. Since Hume recognizes that we understand events, Kant argues that Hume should recognize that our understanding of events brings with it a notion of a necessary causal order to those events. While the argument is challenging, it has, of course, generated much discussion and debate among philosophers. But for our purposes here, we can at least see that there are responses to Hume that go beyond simply saying "Hume is right" or "Hume is wrong." But to go beyond just indicating agreement or disagreement can require some hard thinking. Confirmation Theory Although determining what, if anything, justifies causality is a bit tricky, it is clear we would be at a loss without using some kind of conception of causality, and doing so with a great deal of confidence.
We avoid, for instance, jogging on interstate highways because we are quite confident that we will not do well if we are struck by a car or truck traveling at 65 mph. We probably don't need a philosopher to tell us that. But assuming we have an idea of causality that works, we still need to see how we might apply it, and other ideas, in actually doing science. Perhaps we want to investigate the causal relationship between sugar intake and diabetes, or whether being good at video games makes one a better pilot. How do we go about doing so? Philosophers and scientists will talk in this context of putting forth hypotheses. This then sets up what you probably know as the scientific method: The scientist puts forth an idea (or hypothesis), designs an experiment to test it, tests it, determines what the test results say about the hypothesis, and then uses those results to generate a new hypothesis. The part of this method we want to look at here is the test itself: How can we know a particular result shows that our hypothesis is correct (or incorrect)? In the language of the philosophy of science, this is a question about confirmation, and its study is known as confirmation theory. While contemporary discussions of confirmation theory can be very rigorous, with a substantial amount of statistics and other kinds of mathematics, we will approach the basics here much more informally. Let's assume you want to investigate birds, specifically ravens. This is a famous example from the philosopher of science Carl Hempel (1905–1997) (Hempel, 1945). You've noticed that a lot of the ravens you've observed in the past have been black; you now formulate your hypothesis: "All ravens are black." We won't define ravens as black, for that would eliminate this as the kind of claim one would need to test. To test your hypothesis, you go to an area known for its large population of ravens, take out your binoculars and notebook, and start making observations. One, then 10, then 100, then 500, then 2,500 ravens are observed; each and every one of them is black. The question is, then, at what point do we have enough observations to assert that our claim, all ravens are black, is true? Have we seen enough? What if the 2,501st raven is white? What if 50,000 ravens are black, but the 50,001st raven is white? Can we ever be certain? If we cannot be certain, at what point are we certain enough? We could change our claim, of course, and just make our hypothesis: "Most ravens are black." But scientists want to test the strongest claims possible, and, in any case, we would have to settle what "most" means in this context. So that may not solve the problem entirely. As is probably clear, this is again a question about inductive arguments; how strong does the support of a claim have to be before we accept it as true? We saw earlier the turkey conclude, on the basis of a very strong inductive argument, that it would be fed, when in fact it became food. Imagine, for instance, a football team that had an enviable winning streak of, say, 100 consecutive victories. We probably still wouldn't say they will never lose again: if that were guaranteed, no one would be willing to play them. As these examples indicate, it is not just the number of confirming instances (such as the number of black ravens we observe), but the relevance of those instances and the kind of information they provide.
Thus, one might suggest that football teams change over time, so a team that won 100 consecutive games might, when it plays its 101st game, be a different team. These are all factors the scientist must include in considering if a hypothesis is confirmed, and how strongly it is confirmed. But there is a famous paradox that arises with our raven example, a paradox that has suggested to some that a different approach should be adopted in testing hypotheses. Our original hypothesis was H1: "All ravens are black." Logically, this is said to be equivalent to the claim H2: "All non–black things are non–ravens"; that is, if H1 is true, H2 is true, and vice versa (and if H1 is false, H2 is false, and vice versa). This raises a puzzle, however, for if they are logically equivalent, then any observation that confirms one will confirm the other. We confirmed H1 by observing ravens and seeing that they were black; but it is much easier—too easy, it turns out—to confirm H2. White pieces of paper, blue toothbrushes, yellow bananas, and green shoes are all non–black things, and they are all non–ravens. But would we really want to say that a pair of green shoes confirms our hypothesis that all ravens are black? Notice how quickly our observations can number in the thousands, or hundreds of thousands: Look around you and see how many non–black things there are that are not ravens. Each confirms H2; H2 is equivalent to H1; so each confirms H1. If we were convinced of the truth that all ravens are black after 2,500 observations of black ravens, how many more thousands of non–black things can we appeal to in order to confirm our hypothesis? The problem, of course, is that it seems very odd to say that green shoes confirm anything about black ravens, but it isn't entirely clear why, or what precisely the oddity involved here is. Considerations such as these led a well–known philosopher, Karl Popper (1902–1994), to recommend a different approach to confirmation theory. Popper suggested we continue to put forth our strongest hypotheses, but that we do our best to show that they are false, or to disconfirm them. Popper described his procedure as putting forth conjectures—bold claims that went well beyond the evidence—and then seeking to refute those conjectures. A conjecture (or hypothesis) that can survive this process of actively trying to find counterexamples, or to falsify the conjecture, becomes stronger and stronger the longer it withstands this critical attack (Popper, 1963). Let's go back to our original example, then. First, we change the hypothesis to one that is a bit more conjectural, given our current evidence: H3: "All ravens in northern California are black." We then travel to northern California to test H3, not by finding confirming instances but by actively looking for white (or at least non–black) ravens. This focuses our investigation by allowing us to concentrate on finding counterexamples; the longer we look for these counterexamples and fail to find one, the stronger H3 becomes. This doesn't mean it is true, of course, but on Popper's view the conjecture becomes increasingly plausible as tests designed to refute it fail.
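Both the equivalence behind the paradox and Popper's alternative can be seen in miniature. The sketch below builds a toy "universe" of labeled objects (entirely invented for illustration) and checks that H1 and H2 always come out true or false together; it then tests the conjecture Popper-style, by hunting for counterexamples rather than tallying confirmations.

```python
# Toy universe for Hempel's paradox and Popper-style testing.
# The objects are invented; the logical point is general.

universe = [
    {"kind": "raven",  "color": "black"},
    {"kind": "raven",  "color": "black"},
    {"kind": "shoe",   "color": "green"},
    {"kind": "banana", "color": "yellow"},
    {"kind": "paper",  "color": "white"},
]

# H1: all ravens are black.
h1 = all(o["color"] == "black" for o in universe if o["kind"] == "raven")
# H2: all non-black things are non-ravens (the contrapositive of H1).
h2 = all(o["kind"] != "raven" for o in universe if o["color"] != "black")
print(h1, h2)   # True True -- adding a white raven would flip both to False together

# Popper-style test: rather than counting green shoes as confirmations,
# actively search for a counterexample to the conjecture.
counterexamples = [o for o in universe
                   if o["kind"] == "raven" and o["color"] != "black"]
if counterexamples:
    print("Conjecture refuted by:", counterexamples)
else:
    print("Conjecture survives this attempted refutation (not thereby proven true).")
```

Note that the green shoe and the banana satisfy H2 vacuously, which is exactly why counting them as evidence about ravens feels so odd.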
As we should be able to tell at this point, some of the simplest–sounding ideas can lead to some puzzling results, whether it is how we know one thing causes another, or how we determine if a specific observation really makes a given belief more likely. Here we have just presented some of the outlines of the responses philosophers of science have offered in thinking about how one does science. Shortly, we will turn to an extended, but specific, example of an issue within science to see where these various theoretical ideas can start to make a big difference. But before doing that, we will first look at a famous way of looking at the history of science, its development, and what that development tells us about science. Kuhn's Scientific Revolutions Galileo's view that the sun, and not the earth, was the center of the universe landed him under house arrest from 1633 until the end of his life. One of the most famous episodes in the history of science was the trial of Galileo in 1633. Galileo, having made a number of observations using a relatively new invention, the telescope, concluded that the earth and the other known planets revolved around the sun. In this way, he affirmed the view stated earlier by Copernicus, known as the heliocentric theory: that the sun, not the earth, is at the center of the universe. The heliocentric theory was opposed to the geocentric view that the earth was the center of the universe; the geocentric view was associated with Aristotle, Ptolemy, and the official doctrine of the Roman Catholic Church. Thus, to deny the geocentric view was to contradict official Church doctrine. For doing so, Galileo was arrested, put on trial, and convicted. He was forced to deny the heliocentric theory and was placed under house arrest for the remainder of his life. Several hundred years later, however, the Catholic Church recognized that Galileo had been right all along. We can use the debate over the heliocentric and geocentric theories to demonstrate a well–known and influential view of the history of science put forth by the philosopher and historian of science Thomas Kuhn (1922–1996). The traditional account of the history of science had been to regard it as a smooth, continuous development, allowing scientists to give an account that described some aspect of nature more and more accurately. Thus, Aristotle had a view of meteors, the objects we sometimes see enter the atmosphere and burn up as "shooting stars." Aristotle's view was replaced by that of Isaac Newton and his theory of gravitational attraction; modern physics has continued to revise (often extensively) Newton's theory in terms of quantum mechanics and other discoveries that followed Newton's era. In this way, then, the modern view is said to be "better" than the earlier views, and thus science progresses, coming closer and closer to the truth. Similarly, Galileo's view is an improvement upon the earlier geocentric view. Kuhn, essentially, says that this is, for the most part, nonsense. Rather, Kuhn argues that the history of science should be looked at in a completely different way. On his view, a specific science generally operates with what he called a "paradigm," a generally accepted view of how things work, according to the science of the day. Thus, the geocentric view would have been the paradigm for most scientists working in astronomy after Aristotle (until Copernicus); Kuhn calls working within this paradigm "normal science."
It is important to see that, for the most part, the paradigm functions as a set of unquestioned assumptions; one simply wouldn't think of challenging it, and the general scientific community would tend to regard one who did challenge the dominant paradigm either as not doing science or as being perhaps a bit mad. But, as we've seen, new evidence (such as the observations Galileo made with his telescope), and difficulties in explaining certain observations, can lead some scientists to challenge the paradigm that informs the (normal) science of the day. If sufficiently strong arguments are put forth, along with solid scientific evidence, a new situation arises, where one paradigm (that of normal science) confronts a new paradigm (the proposed replacement for that paradigm). The battle between these two views, or paradigms, is called by Kuhn "revolutionary science." Often, the reigning or dominant paradigm wins; the challenge is seen as either not as good, or lacking evidence or argumentative support. But sometimes, as in the case of the heliocentric view, the challenging paradigm starts to become the accepted one. If it becomes the paradigm most scientists adopt, and they begin using it without question, the period of revolutionary science is over, and science returns to the situation we saw as normal science, but with a new paradigm informing that scientific worldview. Kuhn sees the history of science as reflecting this sequence of battles between paradigms, arising sporadically and unpredictably, while most of that history is informed by the paradigms of the normal science within which most scientists work most of the time. Kuhn's view, most famously presented in his 1962 book The Structure of Scientific Revolutions, has been enormously influential, interestingly enough, in disciplines other than those of natural science, such as the social sciences and literary theory. Philosophers have also been intrigued by Kuhn's thesis, particularly in one of its implications. For Kuhn, two paradigms—say, those which inform the geocentric and the heliocentric view—are said to be incommensurable (more or less, impossible to compare). That is, the worldview expressed in terms of one paradigm is completely and fundamentally distinct from the worldview of an opposing paradigm. So when Aristotle referred to the earth, and Galileo referred to the earth, they were really talking about the earth in radically distinct ways, so radical that one might say they were talking about two different planets. For Kuhn, all our explanations come within our theories, so all those explanations we think are correct are relative to the dominant paradigm that guides our thinking. Because our theories are dependent upon the language we use to express them, if we are using a language in one paradigm that is incommensurable with the language of another paradigm, it seems to some that those languages describe two distinct worlds. There is no "common language" to discuss both of them because the languages involved are relative to the very paradigms fighting it out with each other. To put it bluntly, if Aristotle and Einstein, for instance, sat down at lunch to discuss physics, the world Aristotle would describe would be a completely distinct world from the one Einstein would describe. This view, sometimes called anti–realism, seems to be a bit far–fetched at first, but as is often the case, it is easier to say that it is wrong than to show that it is wrong.
In any case, philosophers, scientists, and many others have debated some of these implications since the publication of Kuhn's book. Kuhn, for his part, eventually came to reject some of the radical relativism that some saw implied in his account. That is, if a claim is true only within the framework of a paradigm and the language of that paradigm, and two paradigms are incommensurable, then one can't really say that a claim is simply true. One must say a claim is true relative to a given paradigm, and Kuhn himself rejects the idea that one can assert that one paradigm is "true": Paradigms are successful if they work and are scientifically productive. Indeed, the very word "true" on this account may be relative to a paradigm. Although Kuhn did not fully embrace some of these metaphysical implications of his view, as is often the case in the history of ideas, the parent of an idea has little control over what others may do with it. Science and Philosophy If we return to our commonsense view of things, it seems that we might well say that we are pretty clear about a number of things: We know when something causes some other thing, we know when we have sufficient reason to believe some claim is true (or false), and we know that science has continually gotten better and better at describing the world. Philosophers then seem to come along and confuse this whole picture. We may think gravity causes the tides to come in and go out; the philosopher wants to know if this isn't just a habit, or expectation, we have because we've seen it so often before. We know that turtles are slow; but the philosopher wants to know how we rule out the possibility of really speedy turtles. We know that Aristotle and Ptolemy were wrong, and that Galileo was right, and that the earth revolves around the sun; the philosopher wants to know how we are so certain that the dominant paradigm might not shift, leaving us to look as foolish about this as we think earlier thinkers did. While scientists run tests to support their theories, philosophers still question whether their findings amount to absolute truths. Of course, the philosopher is (probably) not saying that we can't use the idea of causality with confidence, that we can't rely on our observations to give us good solid information about the world, or that we don't have considerably more sophisticated accounts of the world from the science of our day than earlier eras did. The philosopher, as usual, represents that annoying little voice asking, "Why?" Can we explain these notions of causality, evidence, and scientific "progress"? Can we develop defensible theories that fit the evidence and provide ways to continue to do productive research? Can we respond to the skeptics, and the cynics, and justify our methods of science, while demonstrating that other kinds of claims do not qualify as scientific? Those are the challenges philosophers pose to scientists, and while a working scientist may be able to ignore them while in the laboratory or in the field, most scientists regard responding to these challenges as an obligation the scientist must, ultimately, satisfy. We can look at this from a different direction by considering the daily horoscope. Many newspapers and web sites carry columns on astrology; most of us in the West are familiar enough with astrology to know how to answer if someone asks, "What's your sign?" One day you open up the newspaper to read your horoscope. Would we regard its prediction as "scientific"? If so, how much confidence do we have in the prediction?
If not, can we say why we find such predictions to be more like entertainment than science? Philosophers have offered, over the years, criteria that a claim has to meet to qualify as scientific. Typically, for a claim to qualify as scientific it has to be consistent (a claim must at least be able to be true), be falsifiable and testable (we can determine what would be involved for the claim not to be true, and how we could actually test it), and require as few extra details or ad hoc (specific details relative to the claim) additions as possible (the simplest explanation, all other things being equal, is the preferred explanation). Imagine you are a Sagittarius (born between November 22 and December 21), and your horoscope says something like, "You will take a trip over water." Although this seems plausible, it is virtually impossible for it not to be true. It is almost certainly the case that at some time, during the remainder of your life, you will indeed take a trip over water. Nor is it clear how we could test such a claim to show it is not true. Astrologers are aware of this, of course, and thus tend to make claims that sound plausible, seem informative, but yet can't really be tested or ever shown to be false. An astrologer who makes predictions such as "you will inherit a million dollars this week" or "you will be married within the next two months" is very likely to be making predictions that could turn out to be false. This is bad for business. It is a much better strategy to make predictions that can't turn out to be false: "You will meet someone interesting," "You should listen to well–intentioned advice from loved ones," or "Financial issues could come up between you and a friend." (These are all actual examples, by the way.) These are interesting and perhaps provocative claims, but because it is difficult to determine how any of them could be shown to be false, they aren't scientific claims. As some have suggested, and for these reasons, astrology is related to science much like professional wrestling is related to competitive sports. Thus, we see that philosophers do, in fact, have some contributions to make to the practice of science, and to the evaluation of whether a claim is scientific or not. There is no guarantee that a scientific claim that is consistent, is testable, is falsifiable, and doesn't require additional ad hoc details will be a "good" scientific claim. But we can be pretty confident that a claim that fails to meet these criteria won't be of much use as a part of doing genuine science. 3.4 Controversies in Science We've seen a fair bit of the theory, as well as some of the jargon, that philosophers have developed in looking at science and the procedures and results science has generated. We will now look at a particular issue within the history of philosophy of science, the battle between those who advocate a necessarily supernatural component to their explanations of life (Creationism and Intelligent Design), and those who do not so advocate (evolutionary theorists). This is, of course, an extremely controversial set of issues, but rather than trying to settle the debate here, the basic arguments for each will be presented before looking at some more general responses to the controversy as a whole. Creationism It is often said that the duck–billed platypus is "proof that God has a sense of humor"; however, some argue that the species is proof of Creationism.
There are a number of accounts of the origins of life on earth, and how it came to have the remarkable diversity it has. One need only consider some of the odder representatives of the natural world—the duck–billed platypus, the Venus flytrap, the blobfish—to see that some sort of explanation might be needed. In the Western tradition, specifically that informed by Judaism and Christianity, such an explanation has been rooted in the Bible. Here we will look at one specific interpretation of the Bible that has been very influential historically, called Young Earth Creationism (YEC). The idea here is to present, rather than evaluate, its arguments, specifically in supporting it as an alternative to evolutionary theory. Young Earth Creationism takes as its central text the book of Genesis from the Hebrew Bible, and seeks to show that geological, biological, and other kinds of observations are consistent with Genesis and confirm the claims made there. While the YEC arguments can be complex, they draw these conclusions:
1. The earth and the rest of the universe were created by God, sometime in the past 10,000 years.
2. The fundamental event on earth was the flood involving Noah (the Noachian flood), and this event, in a matter of years, determined the fundamental characteristics of the earth. This is often referred to as catastrophism.
3. The animals that survived the Noachian flood gave rise to the animals on earth; some creationists accept that there is microevolution, or small changes among animals, while others do not. All assert that the origin of life and of the major groups of animals and plants arose through a specific and original act of God, namely, Creation.
Corresponding to these conclusions in support of YEC are objections to other views, particularly evolutionary theory.
1. The fossil record is incomplete and contains a number of gaps between forms; if evolutionary theory is correct, these "intermediate forms" should be present.
2. Evolutionary theory cannot explain the origins of life or how complex organic life (such as plants and animals) arose from relatively simple inorganic molecules.
3. The Second Law of Thermodynamics states that the disorder of a system (measured as its entropy) will increase over time; yet evolutionary theory seems to violate this law, by claiming that living systems have shown an increase in order and organization.
YEC takes as its basic text the book of Genesis and regards it, and the Word of God, as undeniably and literally true. YEC argues that not only is it consistent with the evidence from the geological and biological world, but that this evidence, in fact, supports YEC. At the same time, that evidence either conflicts with evolutionary theory (as interpreted by creationists) or demonstrates that evolutionary theory cannot account for some things, such as the origin of life, that YEC can address satisfactorily. As such, YEC presents itself as a thoroughly scientific doctrine that can be used to interpret the natural world, revealing that world as the result of a single, unique act of Creation. There are, of course, other interpretations of Creationism than that of YEC. For instance, Old Earth Creationism (OEC) accepts the basic idea that God created the universe but did so much earlier than YEC proposes. There are, in fact, different versions of OEC, such as Gap Creationism, which argues that there was a "gap" between the formation of the universe (including earth) and the creation of human life.
Day-Age Creationism reads the claim in Genesis, that God created the world in six days, as involving a notion of "day" that is not a standard 24-hour day, but days that might each be hundreds of thousands, or even millions, of years long. There are still other interpretations within the general view known as Creationism, but all share the basic idea that the origin of the universe, the origin of life on earth, and the diversity of that life all require a special and supernatural act of Creation from God. Intelligent Design Clarence Darrow and William Jennings Bryan at the so-called Scopes Monkey Trial. The idea that the universe, and more specifically life on earth, indicates a sophisticated complexity that could not have arisen by accident is a very old one, which can be found as far back as Plato. It can also be found in the writings of St. Thomas Aquinas (1225–1274), but its most famous version is probably that given by William Paley (1743–1805) in 1802. Paley argues, more or less, the following: If we were hiking in the woods and ran across a stone in our path, we would have little doubt that the stone could have gotten there in very ordinary ways; perhaps the stone rolled down the hill and stopped in the path (Paley, 2009). But if we found a watch in our path, we would be quite sure that, due to its complexity, it had to be intentionally designed. No watch could have just accidentally arisen; it needed a watchmaker to engineer or design it. If one then considers the natural world—for instance the way bats successfully navigate by sound (known as echolocation), or the remarkable and undeniable complexity of something such as the human eye (let alone the human brain)—these things are considerably more sophisticated than a mere watch. Thus, if a watch can't arise naturally and accidentally, but needs a designer, then the world itself, which shows infinitely more complexity than a watch, must need an infinitely greater designer than a mere watchmaker: namely, God. This argument traditionally has been called the argument from design, but it has taken on new life in recent years, defended by those who wish to deny that evolution can provide a satisfactory explanation of the natural world, a world that shows such a high level of design that it could not have occurred accidentally. This is the view known as Intelligent Design (ID). The proponents of ID generally focus, as might be expected, on the complex systems found in nature, arguing that they could only arise by intentional design. Two distinct kinds of complexity are identified: "irreducible complexity" and "specified complexity." We can look at these in that order. Irreducible Complexity Irreducible complexity simply identifies organisms, or parts of organisms, that have intricate parts that must work together as a whole in order to function. The biochemist Michael Behe (b. 1952) is most closely associated with the argument from irreducible complexity; Behe proposes the analogy of a mousetrap to make the point. A mousetrap has various parts (the base, the catch, the spring, the hammer, and the hold-down bar), all of which must work together for it to function. If any one of these is removed, the mousetrap is useless. 
Behe presents detailed and informed examples from the natural world—particularly at the level of biology and biochemistry—that reveal the same kind of irreducible complexity, arguing that such features as blood clotting, or an organism's immune response when fighting disease, simply couldn't have arisen without being put together the right way at the beginning. All the parts have to be there at the beginning; if they are not, the system cannot function. Thus, Behe argues that this kind of complex design requires some sort of designer, although he is generally unwilling to identify this proposed designer as God or a specific supernatural, intelligent being. Specified Complexity Specified complexity is a second feature that, according to ID, indicates the need for a designer. Something exhibits specified complexity when (a) it conforms to a pattern that can be briefly described (this makes it specified) and (b) it is very unlikely, in terms of probability, that this pattern occurs by chance (this makes it complex). Thus, if a sequence or pattern is seen that seems unlikely—very unlikely—to have emerged accidentally, the need for a designer is, again, indicated. William Dembski (b. 1960), who is often credited with this idea, gives a clear example of what he means by "specified complexity" by using the letters of the alphabet. A single letter of the alphabet, in Dembski's use of the term, is specified, but not complex, whereas a whole string of random letters is complex, but not specified. Thus, C, as specified but not complex, conforms to an independent pattern and is easily described, whereas CBCNJFXDXDV, as complex but not specified, doesn't conform to an independent pattern but would require complicated instructions to generate that specific sequence of letters. Dembski then argues that a poem—say a sonnet by Shakespeare—is both specified and complex, and thus indicates a designer (or author). Since thousands of monkeys clattering away on computers could not produce a Shakespearean sonnet (or are, at least, extremely unlikely to), if there are similarly specified complexities in nature, an author (or Creator) of that specified complexity is needed. Dembski points out that such things as complex molecules, and DNA—the very building blocks of life—exhibit this kind of specified complexity, and thus require intentional design, and a designer, to produce them. Dembski, unlike Behe, seems much more willing to identify this designer as God. There are other aspects to the arguments for ID, such as the fine-tuning argument. This view points out that various constants in the mathematical equations that physics uses to describe the world, including the forces that hold together the atom and the gravitational attraction between objects, are very precise numbers. Were these numbers changed, just slightly, the world as we know it would be so different that we might well not be here to know it. Those who advocate this argument suggest that the fine-tuning of these mathematical features of the world requires a fine-tuner, or God. Many of the ID arguments, including those of Behe and Dembski, also introduce very sophisticated mathematical and scientific techniques: All are used to argue that evolutionary biology cannot explain either irreducible or specified complexity and, if the fine-tuning argument is correct, that it also cannot explain how we are here in the first place. 
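To get a feel for the sizes of the probabilities Dembski has in mind, here is a minimal back-of-the-envelope sketch in Python. The 26-letter alphabet, the assumption of uniformly random typing, and the 600-letter length for a sonnet are simplifying assumptions introduced for illustration; they are not Dembski's own figures.

    import math

    # Chance of randomly typing one specific letter sequence, assuming a
    # 26-letter alphabet and uniformly random, independent keystrokes.
    ALPHABET_SIZE = 26
    target = "CBCNJFXDXDV"  # the 11-letter string from the text

    p_exact = (1 / ALPHABET_SIZE) ** len(target)
    print(f"P(this exact 11-letter string) = {p_exact:.2e}")  # ~2.7e-16

    # For a sonnet of roughly 600 letters, work in log space: the
    # probability (around 10^-849) underflows an ordinary float to zero.
    sonnet_letters = 600  # assumed letter count for a Shakespearean sonnet
    log10_p = -sonnet_letters * math.log10(ALPHABET_SIZE)
    print(f"P(one specific {sonnet_letters}-letter sonnet) = 10^{log10_p:.0f}")

The single-draw improbability is real enough; what the evolutionary response later in this section disputes is the model itself, since natural selection is cumulative rather than a single random draw.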
While there is a good bit of agreement between YEC and ID, in that both argue that an adequate explanation of the universe and its contents demands introducing some external feature, such as God, a creator, or a designer, there is a good bit of disagreement as well. Many, if not most, of those who advocate ID do not accept the YEC claim that the earth is 10,000 years old (or less); many do not accept the YEC view that the Bible is literally true and must be taken to be the fundamental source for, and check on, what science can discover. Young Earth Creationists, for their part, object to ID on the grounds that its conception of a designer is too abstract, and not sufficiently close to the YEC conception of a personal God. Yet one thing, above all else, unites YEC and ID, and that is their opposition to evolutionary theory. Evolutionary Theory Evolutionary theory, which Darwin presents in On the Origin of Species, provides an explanation for how some species, like the Galápagos giant tortoise, have evolved traits that support their incredible longevity. Some live to be 150 years old! In 1859, Charles Darwin (1809–1882) published On the Origin of Species. Darwin's book is almost without a doubt discussed much more than it is read. Here we will briefly describe Darwin's basic view, explain how it has been developed by evolutionary biologists in the 150 years since he published it, and give at least a brief response to the objections to it that have been made by YEC and ID. The fundamental ideas of evolutionary theory are, in fact, fairly simple. Organisms compete in a specific environment, or niche, for food, shelter, other resources, and mates. Those who are the most successful at obtaining these will pass on their genes to a larger number of offspring than will their competitors. Let A and B be competitors: Perhaps A is just a bit stronger than B. A's offspring (a) will get A's genes, and because A is more successful at gaining access to resources than B, there will be more a's than there will be b's. Thus, A's genes, in all the little a's, will become much more numerous than B's genes, in all the little b's. Biologists say that given this environment and competitive scenario, A's genes were selected for relative to B's. It is important to notice that this is not to say A is better than B, or that being A is progress relative to being B; it is only that A was more successful than B, in that specific competitive niche, in getting more copies of its genes reproduced than B. Using that, and adding two other features, gives a pretty good idea of what evolutionary theory states. These two features are (1) billions, or even trillions, or more, of reproductive events—each plant and animal producing offspring—and (2) a very, very long time. Geologists date the earth as being approximately 4.5 billion years old—a very long time. Some microbial remains have been discovered that are 3.5 billion years old, meaning that these competitive struggles have been going on for at least that long. With a lot of time, and a lot of competition for resources—and a lot of reproduction—various organisms have arisen. Some have been very successful for quite a while and then gone extinct, such as the dinosaurs; others have been very successful and continue to be, such as beetles. (Scientists estimate there are between 5 million and 8 million species of beetles, an enormous number of which have not even been named.) 
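Returning for a moment to the competitors A and B: how quickly even a small reproductive edge compounds can be shown with a few lines of arithmetic. Here is a minimal sketch in Python; the 5 percent figure is an invented number for illustration, not a biological measurement.

    # If A's genes are copied 5% more often per generation than B's,
    # the ratio of a's to b's grows geometrically, like compound interest.
    ADVANTAGE = 1.05  # assumed per-generation reproductive edge for A

    for generation in (50, 100, 200):
        ratio = ADVANTAGE ** generation
        print(f"after {generation} generations, a:b ratio = {ratio:,.0f} to 1")
    # Prints roughly 11 to 1, 132 to 1, and 17,293 to 1.

The particular numbers matter less than the shape of the process: a tiny, constant advantage, iterated over enough generations, yields an overwhelming difference in gene frequency.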
There is also a species that is relatively recent and currently seems to be quite successful at gaining access to resources relative to its competitors. This species seems to have branched off from an ancestor it shared with another evolutionary branch, the great apes, approximately 6.5 million years ago. Gorillas, chimpanzees, orangutans, and bonobos developed along one branch; along the other branch developed a different genus, leading to various species. A relatively recent species along this branch had, among other things, a large brain, an ability to walk upright, and the power to use language in such a way that it proved, within its competitive environmental niche, very successful. This species is, of course, what we call human beings, or Homo sapiens. Evolutionary biologists can use DNA analysis and other techniques to date when this common ancestor gave rise to two distinct branches; a good indicator of that common ancestor is the fact that human beings and chimpanzees share about 96 percent of the same DNA. One can compare that to the genetic difference between mice and rats, which is 10 times greater than that between human beings and chimpanzees. Design and Natural Selection Philosopher Daniel Dennett discusses the views put forth by Charles Darwin and their relationship to what it means for something to be "designed." Question: What kinds of things can Darwin's view not explain? How do human beings demonstrate their uniqueness, even if they are the products of the mechanisms described by evolutionary theory? While one often hears the phrase "survival of the fittest" as a slogan to describe Darwin's basic view, most biologists avoid using it (in part because it wasn't Darwin's own phrase) and instead speak of "natural selection" or "descent with modification." Two mechanisms produce the changes in an organism along a path of children, grandchildren, and so on: the recombination of genes (thus, a child will receive part of its genetic makeup from its mother, and part from its father) and mutations that occur randomly. Over time, the genetic makeup of a group will thus change, if only slightly. But if one of those changes gives a competitive advantage, it will be selected for: That is, that advantage will become more common, relative to competitors, and thus more offspring will be produced, also having that advantage. A slightly oversimplified example gives the basic idea: Imagine a group of rabbits and a fox. The fox wants to eat the rabbits; the rabbits want to avoid being eaten by the fox. Within this population of rabbits, perhaps due to a particular combination of parents or a mutation, a rabbit develops the ability to run just a bit faster than the other rabbits. The fast rabbit is, then, less likely to be caught by the fox and will live on to have more children than the relatively slower rabbits. It's good to keep in mind that, to avoid being caught, the rabbit that survives only needs to be faster than another rabbit. The combination that gave rise to this slight increase in speed will, then, be passed on to that rabbit's children, and that competitive advantage will become more common: It will be selected for. Since Darwin originally published his revolutionary work, two important additions have made evolutionary theory considerably more sophisticated, leading to rich and productive fields of research in biology, genetics, medicine, and even economics and anthropology. 
These additions—a rigorous explanation of the mechanism of the gene and the application of mathematics—gave rise to what is now called the neo-Darwinian synthesis. Darwin was unaware of the actual mechanism through which information was passed genetically from parent to child, a discovery for which Gregor Mendel (1822–1884) gets credit. Understanding this mechanism, and using mathematical models and techniques to describe it, transformed biology into the science we know today. To return to the terminology of Thomas Kuhn: The neo-Darwinian description of descent with modification functions as the basic paradigm for contemporary biology and its sub-disciplines. We can now see how the evolutionary theorist can respond to the criticisms of both YEC and ID. Fossil Record Many philosophers deny that an organism requiring several working parts to function is sufficient evidence that a "designer" was involved. A tornado requires several working factors to occur, yet people would hardly argue that a tornado is designed. Even though these debates continue, becoming both more scientifically sophisticated and often more vicious, the evolutionary biologist's response can at least be outlined here. The biologist admits that the fossil record is incomplete; this is unsurprising, given the age of fossils and the conditions that must be met for them to survive long enough for us to discover them. The biologist will also point out how rich the fossil record in fact is, with a large number of fossils that preserve precisely the developmental sequence evolutionary theory predicts. But the real issue here is whether this is a fair objection to begin with. We can use the example of the horse, and let an ancient form of the horse be Fossil One and the modern horse be Fossil Fifty. Between these are many possible fossils, which would show evolutionary changes. But between any two fossils there will always be the possibility of a form in between them. Thus, Fossil Ten might be a transitional fossil, and Fossil Twelve a distinct transitional fossil; but Fossil Eleven is "missing." The evolutionary theorist can point to a long and surprisingly complete set of fossils, but unless a perfectly continuous set of fossils can be presented, there will always be "gaps" in the fossil record. This isn't surprising, but the biologist points to the strength of the evidence and how each new fossil discovery has fit into this pattern. The real question is whether it is fair to object to gaps but then, when those gaps are filled, change the objection to a new set of "gaps." Origin of Life Most biologists agree that they do not have an account of the origin of life for a simple, and perhaps compelling, reason: That isn't a question that necessarily arises within biology. Evolutionary biologists, to put it simply, deal with life once it began, not with how it began. There are certainly biologists, and many others, who speculate and theorize about the origins of life. How life began is a fascinating question and has been explored by philosophers, theologians, and scientists at least since Aristotle. There is even a term for such study, abiogenesis. But these investigations are still quite speculative compared to other fields within biology, and most evolutionary biologists recognize that fact. In short, it seems about as fair to complain that evolutionary theory can't explain the origins of life as it is to complain that Michael Jordan wasn't a good painter. 
Law of Thermodynamics Creationists invoke the Second Law of Thermodynamics, noting that it states that things always go from order to disorder, while evolutionary theory seems to claim the opposite, that order increases. But this law refers to what is known as a closed system, in which no new energy is being introduced. Earth, on the other hand, is an open system, because the sun constantly introduces new energy. Thus, the Second Law doesn't really apply to the earth, and order can increase within an open system. In fact, we see it all the time; it is, for instance, what occurs when a disorganized system of water gives rise to the very ordered and structured patterns we call "snowflakes." Complexity Needs a Designer The ID complaint, that complexity—irreducible or specified—requires a designer, has generated a good bit of discussion, some of it quite technical, but the basic evolutionary response is to point out that all sorts of complex systems, such as weather patterns, arise from natural sources. A tornado, for instance, only occurs if all of its contributing factors are present together. But we probably don't think the tornado was therefore designed. The more general point is that evolutionary change is not, as it is sometimes presented, random. Rather, there are severe selection pressures on organisms. An advantage is selected for or against, and in nature, being selected against often means death and the end of reproduction. While some genetic changes may be random, whether they are successful or not will be determined by the environment: rewarded by success at gaining resources and reproducing, punished by starvation, less reproduction, and death. Harshly, but effectively, that procedure rewards structures that are more successful, often meaning better "designed." But it is only when looking at the finished product that a naturally occurring process gives the appearance of having been designed. Science and Religion as Separate Spheres While the debate between evolutionary theorists and those who oppose them continues to rage, others have stepped into the fray to suggest that those engaged in this dispute fail to recognize that they are talking at cross-purposes. That is, science has its legitimate area, and religion has its legitimate area. Science should talk about observable facts, confirmable theories, empirical evidence, and so on; within its area of expertise are things like atoms, amino acids, bacteria, and stars. Religion should talk about issues of morality, faith, God, and so on; within its area of expertise are things like the soul, the afterlife, angels, and turning the other cheek. On this view, one should no more use scientific methods to investigate issues of religion than one should use religious concepts to provide scientific claims. In short, we ought no more to ask what role faith plays in the structure of a molecule than we would ask how much the human soul weighs. The evolutionary biologist Stephen Jay Gould (1941–2002) is perhaps best known for popularizing this view, which he described in terms of magisteria, insisting that one should not interfere with the other, a view he called NOMA, for Nonoverlapping Magisteria (Gould, 1997). Science is one magisterium; religion is another. For Gould, if they are careful not to overstep their own boundaries, those who operate in the two magisteria can have respectful and productive conversations with each other, and each has much to learn from the other. 
Gould insists that scientists must recognize that many of life's most important questions cannot be answered by science alone, but he, along with many other scientists, also wants it to be recognized that religion should not attempt to address legitimate scientific claims with techniques that fall outside the magisterium of science. A quick way of putting the distinction is that the magisterium of science should stick to claims that involve "is," while the magisterium of religion should stick to claims that involve "ought." Some writers claim science aims to answer the "what?" question, whereas religion seeks to answer "why?" More recently, Gould's idea has been endorsed by Karen Armstrong (b. 1944), who claims that while God is fundamentally unknowable, what is crucial to religious faith is how people act, rather than what they know or believe. She characterizes Gould's magisteria in terms of factual knowledge (provided by science) and the more general meaning that factual knowledge can help offer (Armstrong, 2009). Thus, science might tell us that a beloved friend died from a specific cause, but it is the other magisterium—of faith, or belief, or religion—that tells us how we can understand that death and gain some meaning from it. Science, that is, can tell us the what, but religion or spirituality often provides the profound and meaningful responses to "why?" Clearly this kind of compromise between science and religion will be rejected by those who advocate YEC, and by almost all of those who advocate ID. YEC insists that nature itself was created intentionally by a personal God, whose activities are described specifically in Genesis and elsewhere in the Bible. On that view, both the origins of the natural world and how we can understand it must be informed by the Bible. To neglect that information is a recipe for guaranteed failure, for both science and religion. In a similar way, because ID requires a designer, the magisterium of science has to include that designer in its scientific explanations, and thus there can be no plausible scientific account that does not overlap, extensively, with the magisterium of faith and religion. Both YEC and ID, that is, insist that to separate religion and science does irreparable harm to our understanding of both, because religion informs our scientific understanding just as our science complements our religious understanding. But objections have also been raised against Gould's conception of nonoverlapping magisteria by scientists. These tend to fall into two distinct categories, but both suggest that the claim that one can cleanly identify which questions belong only to one or the other magisterium isn't very plausible. The first objection is that science has a great deal to say about certain important issues that are central to religion. One might, for instance, consider miracles. From the perspective of natural science, people cannot walk on water, nor can they bring the dead back to life. Clearly enough, if one adopts that perspective, then the idea of the ministry of Jesus looks considerably different than it does from a more traditional religious perspective. Consequently, when one magisterium has something of relevance for the other to consider (or refute), the two can no longer be regarded as "nonoverlapping." 
The second objection raises the same point, but from a different direction; namely, YEC and ID refuse to allow science to do its work without having to respond to their criticisms, criticisms which the scientist may well regard as fundamentally religious in nature. A scientist may wish to explore evolution, but if that exploration suggests results that conflict with the idea of special creation as described in Genesis, it will conflict with YEC. If a scientist wishes to explore how a bacterium moves around and suggests that its means of moving is the result of natural selection, this can conflict with one of Michael Behe's favorite examples of irreducible complexity, and thus conflict with ID. So, on the one hand, the magisterium of faith is confronted by the magisterium of science, while, on the other hand, the magisterium of science is forced to confront the magisterium of faith. In both cases, it seems that the two magisteria are intricately involved with each other, which is quite a bit different than Gould's (and Armstrong's) claim—or desire—that each is independent of the other, and each can successfully allow the other to focus exclusively on its own legitimate area. How to Disagree It's important to explain disagreements rather than simply proclaim that the other point of view is wrong. Otherwise, arguments can turn ugly pretty quickly. As mentioned earlier, philosophers sometimes resemble children in their insistence on asking "why?" and seeming never to be satisfied with an answer, but always responding to every answer with another "why?" There is the temptation to tell the philosopher to shut up, or to ignore any objections and remain satisfied with what one thinks, knows, and believes. But philosophy requires us to at least consider objections to our beliefs, because we all know that we sometimes make mistakes and that catching those mistakes can be extremely helpful. Whether we decide, in the long run, that we are willing to scrutinize and criticize our own beliefs—as well as those of others—may be an indication of our taste for philosophical inquiry. But even those who run screaming from the room when the word "philosophy" is mentioned will have to admit that looking at our beliefs critically is both useful and often the only way we learn things: even if we only learn that we might, on occasion, be mistaken. This means that philosophy is full of disagreements and full of arguments. In some ways the philosopher should emulate the eternally inquisitive child, always being curious and always wanting to know more, but it is probably not a good idea to follow the standard five-year-old's model of argument, which looks something like the following:
A: My brother can beat up your brother.
B: No he can't.
A: Yes he can.
B: No he can't.
A: Yes he can, infinity.
This really isn't a good model for a philosopher to follow (not to mention probably not a very important philosophical view to be arguing about). Are there better approaches to disagreements and arguments than this model? Although one would hope this to be the case, passions run high in these disagreements. For instance, the eminent biologist Richard Dawkins has said, "In order not to believe in evolution you must either be ignorant, stupid or insane" (Gilder, 2001). That doesn't sound very respectful, but one of his opponents, William Dembski, has characterized Dawkins as "virulently against religion of any stripe and uses evolution as a club to beat religious believers" (Humes, 2007). 
Those who comment in the more popular media about these issues are, if anything, more direct. Thus, Sam Harris, a well-known atheist and defender of evolutionary theory, has commented that "there is no worldview more reprehensible in its arrogance than that of a religious believer" (Harris, 2006). On the other hand, Phyllis Schlafly, who defends teaching Creationism in public schools, notes "Darwin's influence on Hitler's political worldview, and Hitler's rejection of the sacredness of human life" (Schlafly, 2008), thus implying some significant connection between evolutionary biology and Hitler. While using bigger words, this way of disagreeing may not be much of an improvement over the five-year-old's model we hoped to avoid. Clearly enough, passions run high in disagreements over religion, science, and the relationship between the two. But if a few rules are observed, these disagreements can be considerably more productive and, rather than generating more hostility between those debating, may even result in mutual respect. Although this may not always happen, it is a good thing to keep in mind when discussing philosophical issues, particularly those that have the potential to challenge people's most important beliefs, and therefore where people's feelings are most easily hurt. Here are some suggestions, then, to make such conversations less hostile and more productive.
1. Remember that an argument does not have to be a confrontation. Presumably, those arguing should be more interested in finding out the correct answer, if possible, than in just winning the argument.
2. Be nice. As obvious as that may sound, you have a better chance of a useful encounter by being pleasant and relaxed, rather than entering into a conversation bristling and nasty.
3. Be fair. If you make a mistake, recognize it. If you make a factual error, concede the point and determine how it affects the overall argument; if the issue is one of interpretation, spell out the relevant meanings of terms and explain how the argument, or its conclusion, might still be saved.
4. Have a sense of humor. It should be remembered that the conversation is simply that, and one must keep it in perspective.
5. Be willing to concede another's point. If your opponent defeats you in an argument, recognize it rather than dogmatically persisting in trying to defend the indefensible.
6. Back up claims. If you make a factual or evidence-based claim, provide some support for it by offering some indication of why someone should accept your claim.
7. Be consistent. If you argue for a particular position on the basis of specific claims, you should stick to those claims or say why you have changed them.
8. State things clearly. It takes only a small amount of time to proofread your work to spot typographical errors, mistakes, clumsy expressions, incoherent claims, logical gaps, and other problems.
9. Be succinct. If a point can be made in a sentence or a paragraph, don't use two or three. But don't be so brief as to leave out important information.
One might use the Golden Rule to sum these rules up: You should argue with someone in the way you would like them to argue with you. That means taking the other person seriously and keeping the argument focused on the issues involved. Following these rules won't change the fact that there is a disagreement involved; respecting someone is much different than agreeing with that person. But by sticking to the issues, there is much less chance of the argument becoming nothing more than an exchange of insults. 
Conclusions from a Controversy Wonders of the world, such as the Great Wall of China, are certainly easier to explain than the purported occurrence of miracles. Undoubtedly, the debate among evolutionary biologists, creationists, and advocates of Intelligent Design will continue. As we have seen, some have suggested a way out of this debate by dividing scientific questions from religious and spiritual questions into separate magisteria; we have also seen objections, raised by various parties to the dispute, as to whether this could even be done. A second way out, similar in some ways to Gould's proposal of Nonoverlapping Magisteria, involves the notion of naturalism, and in particular the approach known as methodological naturalism. Methodological naturalism is the view that the explanations one seeks should be natural explanations; given some phenomenon to explain—whether why litmus paper turns red when dipped into an acid or whether there are ghosts—the methodological naturalist seeks explanations from nature and the laws of nature. Imagine you go to see an amazing magician, and he proceeds to make a volunteer from the audience disappear before your eyes. It is, of course, possible that the magician has done exactly that and genuinely possesses powers that violate the laws of nature as we understand them. It is also possible that you were fooled by some sleight of hand or some trick. The methodological naturalist proceeds upon the assumption that the laws of nature are not violated very often—if ever—and seeks an explanation in terms of everything else nature has seemed to indicate about people, about magic, and about illusions. Assuming some natural explanation is discovered, you decide that it is much more likely that the magician fooled you rather than actually made a person disappear before your eyes. To go from the claim that this is how we should proceed in our scientific investigations—adopting methodological naturalism—to the further claim that there are no other explanations available is to take a second step, and so propose a worldview known as metaphysical naturalism. Metaphysical naturalism is the idea that nature is all there is, and that any explanation that requires something beyond or outside of nature—something supernatural—has to be false. Many scientists accept methodological naturalism while either rejecting, or being unwilling to accept, metaphysical naturalism. But it is easy to get the two confused. Imagine someone going back to the 1300s with a battery-operated DVD player and television monitor. Even the most learned and brilliant people of that era would regard what was being shown as "miraculous." Yet we know that—other than going back in time—no laws of nature were violated, and that there was no miracle involved, just some very sophisticated technology that wouldn't be invented until many hundreds of years later. The methodological naturalist may not be able to give the explanation in terms of nature and its laws but will assume there is one to be discovered. A methodological naturalist may well think that the evidence for a miracle can be explained in terms of nature, and that once we have that explanation, we may not be willing to call it a "miracle." But he or she will not reject a miracle as a possibility; the metaphysical naturalist, on the other hand, will deny the very possibility of a miracle. 
One simply adopts a method (the methodological naturalist), whereas the other adopts an assumption about what the world is really like (the metaphysical naturalist). It is important to keep these two views distinct, even though some critics of evolutionary biology regard them as the same. The difference, although a bit subtle at first, is very significant. The methodological naturalist will not rule out supernatural forces (God, angels, etc.) but will seek to provide an explanation without using them. The metaphysical naturalist will rule out supernatural forces, and so not only doesn't use such things as God and angels in providing an explanation but will assert that they do not, and cannot, exist. Adopting methodological naturalism does not, however, require adopting metaphysical naturalism. Evolutionary biology does adopt methodological naturalism, and although many biologists are atheists and agnostics, many also believe in God and follow traditional religious doctrines. The idea sketched here offers a somewhat different approach to these issues than does Gould's use of magisteria. Here we have more of a philosophical view that when one does science, one does not use supernatural explanations if natural explanations are available. But that does not imply that such supernatural things do not, or cannot, exist. To return to the influential terminology of Thomas Kuhn: Evolutionary biology—descent with modification, or natural selection—is unquestionably the reigning paradigm in contemporary biology and the various disciplines that rely on its results. Working within this paradigm, biologists and scientists in associated fields have accomplished many remarkable things: discovered penicillin and virtually eradicated many diseases, such as smallpox; mapped the entire genetic system of the human being (the Human Genome Project); discovered the structure of the molecule that contains the genetic information of all living organisms (DNA); and in the last 100 years raised the life expectancy of a person living in the United States from about 50 years to 78 years (a 56% increase). More generally, those working within the model we have been calling methodological naturalism have changed our lives in dramatic ways; imagine your life without television, the transistor, the World Wide Web and the Internet, or the personal computer. Astronomers, again employing methodological naturalism as a working assumption, now indicate that the age of the universe is approximately 13.75 billion years and the diameter of the observable universe is at least 90 billion light-years (that is, traveling at the speed of light, it would take at least 90 billion years to go across the observable universe). These are extraordinary accomplishments, but again, it is clear that there is no requirement that one who adopts methodological naturalism accept the philosophical view of metaphysical naturalism. Science operates on the basis of the best possible explanation, given the evidence, while recognizing that any current theory is always subject to being changed, revised, or even overthrown. Some object to evolution by saying that it is a "theory"; biologists will agree that it is a theory—as are gravity and our current conception of the atom—but they also maintain that it is the best currently available theory. 
That means it may be wrong, either in some or many details, or even entirely wrong; just as the geocentric model, which placed the earth at the center of the universe, was eventually discarded, the model the vast majority of today's working biologists use may also eventually be discarded. Evolutionary theorists will argue that YEC and ID are not scientific views because they do not provide claims that are testable or falsifiable. YEC and ID object to evolutionary theory because it is incomplete and cannot give a complete description of the origins and development of life on earth. As always, the philosopher will wish to continue the debate, by examining the methods used by those engaged in that debate, what counts as evidence, how well confirmed each side's claims are given that evidence, and what prospects for future research and investigation each promises. One thing remains clear: Philosophy has a contribution to make to this discussion, by making clear what the questions are and what will count as answers to those questions. Ch 3 What We Have Learned
* Justifying knowledge claims has generated different and contrasting epistemologies, such as correspondence and coherence theories of knowledge.
* Since Descartes, philosophers have regarded it as necessary to respond to skepticism.
* Understanding what scientific claims involve, and how they are stated and tested, is an important philosophical component of the scientific enterprise.
Some Final Questions
1. Identify three different kinds of things you think you "know." How certain are you that they are true? What must you do to justify those three things?
2. Pretend you are a professional astrologer. Give an example of the kind of prediction an astrologer might make. Why do you want an example that cannot be shown to be false?
3. If certain claims cannot be shown to be based on our current understanding of science, are there other reasons we might still want to believe those claims? Give an example of such a claim, and explain why someone might believe it for reasons other than those science offers.
Web Links
Types of Knowledge. A concise description of different kinds of knowledge we may claim to have: http://www.theoryofknowledge.info/typesofknowledge.html
Skepticism. A clear, short account of skepticism can be found here: http://www.wsu.edu/~dee/GLOSSARY/SKEPT.HTM
The Vocabulary of Epistemology. A useful listing of many of the technical terms used in discussions of knowledge: http://humanknowledge.net/Philosophy/Epistemology.html
Perception and the Myth of the Given. A fairly detailed account of perception and knowledge, with a good account of Wilfrid Sellars's "myth of the given," is provided here: http://www.iep.utm.edu/epis-per/#SH3b
Free Will and Determinism. A quick summary of views on free will, causality, and determinism: http://www.trinity.edu/cbrown/intro/free_will.html
Evolution and Its Critics. A vast amount of information in support of the theory of evolution can be found here: http://pandasthumb.org/
Young Earth Creationism is advocated here: http://www.icr.org/
A clearinghouse of information on Intelligent Design can be found here: http://www.intelligentdesign.org/
