
Tuesday, May 17, 2011

Thurow on Cognitive Science and Religious Belief (Part Three)



(Part One, Part Two)

This post is the last in a brief series on Joshua Thurow’s article:

“Does Cognitive Science Show Belief in God to be Irrational? The Epistemic Consequences of the Cognitive Science of Religion” (2011) International Journal for Philosophy of Religion

It is part of a broader series of posts on evolutionary debunking arguments (EDAs). These arguments claim that if a belief-forming faculty is the product of a process that does not track the truth with respect to the relevant class of propositions, then any beliefs produced by that faculty are unjustified.

Thurow’s article focuses on evolutionary accounts of religious belief and the implications of such accounts for the rationality of religious beliefs. We finished the last part by outlining Thurow’s second (and more robust) version of the religious belief debunking argument. He calls it the CSR Process Defeater Argument.

Although this argument has successfully weathered some criticism, Thurow thinks it is susceptible to another line of criticism. Let’s see what this is.

(You might like to keep part two, with the process defeater argument, open in another tab or window for reference purposes).


1. Propositional and Doxastic Justification
The process defeater argument, in the form we have been considering, has a relatively simple structure. It begins by specifying that people believe in God in a non-inferential or basic manner due to the presence in them of a cognitive faculty (the HADD) that generates such beliefs. It then points out that the cognitive faculty in question is not a reliable producer of such beliefs because it would produce such beliefs even if they were not true. It concludes that religious belief is unjustified.

It might be tempting to object to the argument on the grounds that it commits a kind of genetic fallacy: It impugns people’s beliefs on the grounds that they originate in the HADD, but fails to address the possibility that they might have other (justified) grounds for holding those beliefs.

Tempting as it seems, this response is mistaken. To see why, it is useful to distinguish between propositional and doxastic justification. Your belief is propositionally justified when you have good reasons for believing as you do. Your belief is doxastically justified when the reasons you actually use to justify your belief are good reasons. The distinction is subtle, but crucial, because a belief might be propositionally justified even when it fails to be doxastically justified.

How does the distinction apply to the argument at hand? Well, the idea is that while people may have good reasons at their disposal for justifying their religious beliefs, they do not actually rely on those reasons. The actual grounding for their beliefs (the one they rely on) is non-evidential and is attributable to the HADD (or similar faculty) and that grounding, as we have seen, is unjustified. Obviously, this claim could not cover all religious believers, but maybe it covers a good proportion of them?


2. Engaging with the Real Reasons for Belief
Not so fast, says Thurow. He doesn’t think the argument works even when targeting doxastic justification. The reason is that cognitive scientists’ claims about the origins of religious belief are unrealistically general.

Take, for example, defenders of the HADD-theory. According to them, certain strange events tend to be attributed to supernatural or divine agents due to the presence of a HADD. As such, their claims only cover a general kind of religious belief, not a theologically and culturally specific kind such as Christianity. The general belief uses abstract, minimally counter-intuitive concepts that are filled in by a whole host of contextual and cultural factors.

When justifying their specific beliefs, religious believers will appeal to a wide diversity of reasons including, but not limited to: they think they have witnessed miracles; they believe the Bible is reliable; they think certain of their prayers get answered; the world seems to have been designed for certain theologically significant purposes; and so on. Such reasons are necessary to move from the general to the specific.

This is a problem for the process defeater argument. To show that a believer’s specific religious beliefs are doxastically unjustified, an opponent would have to engage with all those reasons on their own merits, i.e. by engaging with the arguments and principles used to defend those beliefs. Some of those reasons might be bad, but learning this would be no different from the usual philosophical game. And it would imply that the cognitive science theories by themselves cannot change the contours of the philosophical debate over the rationality of religious beliefs.

For those of you referring back to the argument itself, Thurow’s objection seems to attack premise (7). Still, he goes on to claim that it does not directly threaten basic belief religious epistemologies. I’m not really sure what his argument for this is since this part of the article is brief and simply describes a number of basic belief positions.

My guess is that Thurow is pointing out that basic belief style epistemologies (particularly of the Plantingan variety) appeal to epistemically possible models of warrant and that these models can only be challenged using concepts not advanced by the CSR process defeater argument. According to such models, it is possible, for all we know, that God created us with cognitive faculties that allow us to have direct experiences of his existence. And since many believe they have had such experiences, they are warranted (absent defeaters) in believing in his existence. Such a model can be challenged on the grounds that we have no good reason for thinking God would create us with such a faculty, but this is very different from saying that our religious belief-forming cognitive faculties were by-products of evolution.


3. A Final Rejoinder
Even if all this is successful, one might still object to religious beliefs on the grounds that our argument-evaluating faculties are also unreliable. For example, one might argue that we are biased or predisposed towards finding arguments in favour of a belief in God and so we shouldn’t even trust the way in which we evaluate arguments for religious beliefs. This is certainly not implausible. Hugo Mercier and Dan Sperber have recently put forward the hypothesis that our reasoning faculties have evolved not to pursue the truth but to persuade others of conclusions we already accept.

Thurow responds to this line of argument in a couple of ways. First, even if we favourably evaluate arguments for the existence of gods in general, a specific argument for the existence of a specific god might easily be undermined. Indeed, this seems to be true of the HADD across the entire range of agents (both natural and supernatural). And second, confirmation biases presumably work across a diverse array of propositions. Some of these biases might work against religious beliefs and thus make them more difficult to sustain.


4. Conclusion
To sum up, debunking arguments maintain that certain beliefs are unjustified because they are produced by unreliable belief-forming processes. The CSR process defeater argument is an example of such an argument. It specifically targets the rationality of religious beliefs. Although the argument is a strong one, and survives several lines of attack, it is not completely persuasive. This is because modern scientific theories of religious belief formation describe such belief formation in an unacceptably general manner: they fail to engage with the real reasons that people offer for specific religious beliefs. The only way to engage with those reasons is to perform the usual philosophical analysis of the strengths and weaknesses of these reasons. In doing that, one must abandon the debunking argument.

Monday, May 16, 2011

Thurow on Cognitive Science and Religious Belief (Part Two)


(Part One)

This is the second part in a brief series on Joshua Thurow’s article:
“Does Cognitive Science Show Belief in God to be Irrational? The Epistemic Consequences of the Cognitive Science of Religion” (2011) International Journal for Philosophy of Religion
This is part of a broader series of posts on evolutionary debunking arguments (EDAs). These arguments claim that if a belief-forming faculty is the product of a process that does not track the truth with respect to the relevant class of propositions, then any beliefs produced by that faculty are unjustified.

Although he does not use the terminology of the EDA, Thurow’s article is very definitely a contribution to the burgeoning literature on this topic. As we saw in part one, he focuses specifically on the implications of so-called by-product theories of religious belief formation. According to these theories, religious beliefs are by-products of cognitive faculties that have evolved for other purposes. The classic example is the HADD-theory, which maintains that belief in divine agency is the product of a hyperactive agency detection device.

Thurow’s goal in the first two-thirds of his article is to present a robust version of a religious belief debunking argument based on the by-product theory. I introduced his first version in the previous entry. Things seemed to be going well for this version: it had been challenged from a couple of different directions and appeared capable of withstanding the assault.

But alas things are not so easy. In this part, I’ll present Thurow’s own criticism of this first version of the argument. I will then present his alternative version of the argument (the “process defeater argument”) that responds to this criticism.


1. Problems for the Reliability Test
The version of the debunking argument discussed in part one proposed the following test for the reliability of a belief-forming process:


  • (6) If a process would produce a belief in X even if X did not exist/occur, then that process is unreliable.


This reliability test impugns the HADD since it would produce belief in God or gods even if such beings did not exist. The test has already survived one type of criticism, but Thurow thinks there is another type of criticism that it does not seem to survive.

The criticism employs an analogy with mechanical or computational devices created by human beings. Thurow points in particular to the example of an astronomical device that gives us the locations and distances to all the planets and moons in our solar system. This device is programmed by human beings who have solid evidence for all the relevant coordinates and distances. As a result, we who use the device will trust in what it tells us and it looks like we are within our epistemic rights to do so.

But here’s the problem: the device whose outputs we are trusting does not pass the reliability test proposed by Thurow. Once again, if it turned out (perhaps per impossibile) that the planet Mars was not in the location specified by the device, the device would still specify that location. This is because the device is not sensitive to real world changes. Nevertheless, it seems like our trust in the device remains warranted because it was designed in accordance with sound design principles.

The analogy is apt because according to theists we are in a similar position with respect to our own belief-forming faculties. They, after all, believe that these faculties were designed by God, a being who surely knows of his own existence and can reliably program us to believe in his existence. And we, therefore, can be justified in believing in his existence.

This, of course, echoes the Plantingan position.


2. Responding to the Objection
What are we to make of this objection to the reliability test? Maybe very little. When it comes to the astronomical device, our trust seems warranted because we know that the designers of such devices rely on sound principles and evidence. When it comes to our own cognitive faculties, we know no such thing.

Imagine if you came across a device, carelessly strewn on the heath, which purported to give you the locations and distances to all the planets and moons in our solar system. Upon closer investigation, you discover that the device takes no inputs from the external environment (e.g. no telemetry from satellites), that it just produces outputs in the form of the information described. Should you trust such a device? Maybe. But only if you had independent means of confirming or validating its results.

It seems like we are in a similar position when it comes to belief in God as produced by the HADD. We just have a device that produces a particular kind of belief as an output (even in cases where the object of the belief is absent). So unless we have some independent means of verifying or confirming the reliability of that output, we have nothing that would count as the basis of a warranted belief.

It’s important to see that this response does not lead to global scepticism. Thurow is not claiming that we should only trust the output of our innate faculties when we have independent evidence for the reliability of those faculties. He is claiming that in the specific case where we have a faculty that produces a belief in X even when X is not present, we should suspend judgment about the reliability of such a faculty.


3. A New Argument
All of the preceding discussion leads Thurow to draft a new version of the debunking argument. He calls it the “CSR (Cognitive Science of Religion) Process Defeater Argument”. It looks like this (note: I’ve reordered the premises and added an intermediary conclusion to the version that appears in Thurow’s article):


  • (7) If theory T is true, then religious beliefs are produced and sustained by process P, which is a basic belief forming process.
  • (8) There is no independent process to validate the reliability of P (from 7)
  • (9) Process P has the following feature: if religious beliefs were not true (i.e. no god existed), then P would still produce religious beliefs.
  • (10) If the process by which a belief X is formed and sustained is structured in such a way that if X were false, the process would still generate belief that X (and the process is not an inductive argument), then we should suspend judgment about the reliability of that process with respect to X, in the absence of independent evidence for the reliability of the process.
  • (11) Therefore, we should suspend judgment about the reliability of process P with respect to our religious beliefs.
  • (12) If we should suspend judgment about whether the belief-forming process we use is reliable with respect to X, then we are not justified in believing X.
  • (13) Therefore, if theory T is true, our religious beliefs are not justified.




Most of this argument makes sense in light of the preceding discussion. Premise 10 now states the new reliability test and adds the qualifications that result from the challenges and objections to the original version. Premise 12 seems like a sensible epistemic principle: one cannot justifiably believe in something about which one must suspend judgment. And the overall conclusion does indeed seem to follow.
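For readers who like to see the logical skeleton laid bare, the validity of the argument can be checked with a toy formalisation. The following is my own sketch, not Thurow’s: it compresses premises (8)-(10) into the intermediary conclusion (11), and the proposition names are invented shorthand.

```
-- Propositional sketch of the CSR Process Defeater Argument.
-- Checks only that conclusion (13) follows from the premises.
variable (TheoryT : Prop)      -- theory T is true
variable (ProducedByP : Prop)  -- religious beliefs are produced by process P
variable (Suspend : Prop)      -- we should suspend judgment about P's reliability
variable (Justified : Prop)    -- our religious beliefs are justified

example
    (p7   : TheoryT → ProducedByP)   -- premise (7)
    (p11  : ProducedByP → Suspend)   -- premises (8)-(10), yielding (11)
    (p12  : Suspend → ¬Justified)    -- premise (12)
    : TheoryT → ¬Justified :=        -- conclusion (13)
  fun t => p12 (p11 (p7 t))
```

Nothing here vouches for the premises, of course; the sketch only confirms that the dispute must be over premises (7)-(12), not over the inference.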

So have we done it now? Have we constructed a watertight debunking argument based on the cognitive science of religion? The answer is “no”; there is another line of objection that undermines this version of the argument. We’ll see what it is in the final part.

Saturday, May 14, 2011

Thurow on Cognitive Science and Religious Belief (Part One)



I’ve recently been covering evolutionary debunking arguments (see index of posts here). These are a sub-class of the more general naturalistic or causal debunking arguments. Such arguments suggest that if a belief (X) is produced by a causal process (P), and if that process is not reliable or truth-tracking with respect to X, then belief in X is unjustified or unreliable. Evolutionary debunking arguments obviously focus solely on the causal process of evolution.

Debunking arguments of this sort can be deployed to cover a range of beliefs. Joshua Thurow’s recent article:
“Does Cognitive Science Show Belief in God to be Irrational? The Epistemic Consequences of the Cognitive Science of Religion” (2011) International Journal for Philosophy of Religion

is concerned with the impact of evolutionary explanations of our religious belief-forming faculties on the rationality of religious belief (specifically, belief in God).

I’m going to summarise Thurow’s argument over the next few posts. Since a large portion of what Thurow says in introducing his argument has been covered elsewhere in the series on debunking arguments, I’m going to try and cut straight to the chase. In this part I’ll outline his simple version of the debunking argument. In the next part, we’ll consider an objection to this version and develop a stronger one.

The only thing I will say now, in the interests of setting the scene, is that although Thurow acknowledges that there are three basic types of theories in the cognitive science of religion (adaptationist, by-product, and exaptationist), his analysis is limited to by-product theories. But he thinks his arguments can, pardon the pun, be exapted to cover the other theories as well.

The by-product theory suggests that our religious beliefs are the by-products of cognitive faculties that have evolved for other purposes. The best-known and most widely-cited example is the faculty for agency detection. This faculty is what allows us to attribute events and states of affairs to the actions and intentions of other agents. It produces belief in supernatural agents because it is hyperactive (hence, it is sometimes called the hyperactive agency detection device or HADD).


1. Thurow’s First Version of the Argument
One of the nice things about Thurow’s article is that although he ultimately rejects the use of debunking arguments in challenging religious beliefs, he does try to give a decent formulation of the argument. He does this in two parts. First, he develops a relatively simple version of the argument. He then finds this to be deficient and formulates a stronger version. We’ll look at the simpler version first. It runs like this:


  • (1) If theory T is true, then religious beliefs are produced and sustained by process P (in this case as a by-product of other cognitive faculties or “Pbp”).
  • (2) Process P is unreliable and does not make use of good evidence.
  • (3) If the process by which a belief is formed and sustained is unreliable and does not make use of good evidence then that belief is unjustified.
  • (4) Therefore, religious belief is unjustified.


The structure of this argument should be familiar to anyone who has read the other entries on debunking arguments. One nice feature is that premise (3) is agnostic as to whether epistemic internalism or externalism is to be preferred.

The key to the argument is premise (2). Focusing on the by-product theory (or process Pbp), we must ask: what grounds do we have for thinking that this process produces unreliable and evidentially deficient beliefs? The answer, according to Thurow, comes from the following argument (this is my interpretation of his reasoning):


  • (5) If the by-product theory is true, then even {if there were no God or gods, we would still believe in their existence}.
  • (6) If a process would produce a belief in X even if X did not exist/occur, then that process is unreliable or uses poor evidence.
  • (7) Therefore, process Pbp is unreliable and does not make use of good evidence.


The First Argument (Slightly Cleaned-up)


There are two key premises here. Premise (5), which is a counterfactual claim about the nature of the by-product process of forming beliefs; and premise (6), which proposes a principle for testing the reliability of a belief-forming process.
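To make the chain of reasoning explicit, here is a toy formalisation of how the sub-argument (5)-(7) feeds the main argument (1)-(4). This is my own sketch, not Thurow’s, and the proposition names are invented shorthand.

```
-- Sub-argument: premises (5) and (6) give (7), the unreliability of Pbp.
variable (ByProduct : Prop)       -- the by-product theory is true
variable (Counterfactual : Prop)  -- we'd believe in gods even if none existed
variable (Unreliable : Prop)      -- process Pbp is unreliable
variable (Justified : Prop)       -- religious belief is justified

example (p5 : ByProduct → Counterfactual)
        (p6 : Counterfactual → Unreliable) :
        ByProduct → Unreliable :=
  fun h => p6 (p5 h)

-- Main argument: given (7) and the epistemic principle (3),
-- religious belief is unjustified (conclusion (4)).
example (p7 : Unreliable)
        (p3 : Unreliable → ¬Justified) :
        ¬Justified :=
  p3 p7
```

Again, the sketch only establishes validity; the substantive work lies in defending premises (5) and (6), which is what the next two sections take up.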

Premise (5) would usually be defended by reference to the various kinds of experiment that cognitive scientists perform on the HADD. These experiments reveal that the HADD produces a belief in the existence of agents even when they are not around. That said, premise (5) can still be challenged by some forms of theism. Premise (6) can also be challenged on the grounds that it is not a good test. We’ll consider both of these objections briefly.


2. The Anselmian Objection
The first objection comes from the Anselmian theist. According to them, God cannot fail to exist because God is a necessary being. This means that the first part of the counterfactual proposed in premise 5 (the first part of the counterfactual within the squiggly brackets, that is) is impossible.

Why is this an issue? Well, it is traditionally supposed that counterfactuals with impossible antecedents (so-called counterpossibles) have trivial truth values. Hence they pose no significant challenge to the views being challenged. This point is often marshaled in defence of Divine Command Theories of morality (for example, the classic Euthyphro-inspired objection “what if God commanded something terrible...” is said to have an impossible antecedent and hence not a true objection to the theory).

There is a by-now standard response to this kind of objection. It is to point out that many disputes in philosophy turn upon the acceptance or rejection of necessary propositions, and that there doesn’t appear to be anything suspect or trivial about these disputes.

There is another kind of objection that the theist can make. It is to argue that we depend necessarily on God for our existence (and for our belief forming faculties) and so once again the counterfactual required by Thurow’s arguments has an impossible antecedent.

This objection can be responded to using the same response as was adopted above, but there is also another kind of response that is particular to this objection. It is to point out that the reliability of a belief-forming faculty is distinct from its dependence on something else, and that it is thus right and proper to investigate the former by imagining scenarios in which the latter does not hold true.

Thurow illustrates this with an example. Imagine Jones, an ordinary man who, due to wishful thinking, believes that there is a beer in his fridge. Now suppose that there really is a beer in his fridge and that this beer is pressed down upon a button. The button is linked to some explosive device such that, if Jones removes the beer from the fridge, he will be instantly blown up.



In this scenario, two facts appear to be true: (i) Jones’s existence depends upon the actual presence of a beer in the fridge; and (ii) Jones’s belief that there is a beer in the fridge is formed by an unreliable process (that of wishful thinking). This suggests that the reliability of a belief that P is not always sustained by a dependency relationship between P and the believer in P. This, according to Thurow, suggests that even if there is some kind of dependency relation between religious believers and the object of their beliefs, it is still right and proper to question the processes through which they form such beliefs using counterfactuals of the kind outlined in his argument.


3. The Reliability Test
The other potential bone of contention with Thurow’s initial defence of the debunking argument is the actual test it proposes for reliability, which was:


  • (6) If a process would produce a belief in X even if X did not exist/occur, then that process is unreliable.


The problem is that this test seems not to apply to certain kinds of inductive (Bayesian) inferences. Here’s an example from the philosopher Jonathan Vogel:

Two policemen confront an armed mugger who is standing some distance away. One is a rookie and one is a veteran. The rookie attempts to disarm the mugger by firing a bullet down the barrel of the mugger’s gun. The chances of pulling this off are virtually nil. The veteran knows what the rookie is trying to do. When it comes to the actual firing of the shot, the veteran can’t see the outcome. However, based on his years of experience, and his knowledge of the chances of success, he believes (correctly as it turns out) that the rookie probably missed.

I suspect most people will think that the veteran’s belief, in this kind of scenario, was reasonable. He is making a plausible inference about the likely success of the rookie’s shot based on his background knowledge acquired after years of experience.

The problem for Thurow is that the veteran’s belief would be impugned if we were to use his proposed reliability test. After all, the veteran would have made the same kind of inference even if the rookie’s shot had been successful.

Thurow concedes that this is a counterexample to his proposed reliability test, but he is undeterred for two reasons. First, the counterexample only covers beliefs formed through inductive inference. The kinds of religious belief we are interested in here are basic or non-inferential in nature. Second, even if those weren’t the relevant kinds of belief, he reckons an alternative test would still yield the same result because there is a significant disanalogy between the veteran’s belief in Vogel’s case and religious belief formed using, say, the HADD. What is this disanalogy? It is that the veteran has a good inductive argument for his belief, and the believer using the HADD does not. After all, inferring divine agency from strange events does not, without further supporting argument, warrant the belief in question.


4. Where to Next?
So far so good for the proponent of the debunking argument. The crucial premise of the original version can be defended using another argument and the premises of this additional argument can in turn be defended from two objections. We might think we’re home and dry by now. But this is not the case. As Thurow points out, there is a persuasive reason for abandoning his proposed reliability test. We’ll see what that is in part two.

Monday, May 9, 2011

Evolutionary Debunking Arguments (Index)

Much to my own surprise, I've ended up writing quite a number of posts on evolutionary debunking arguments. In case you've stumbled across this page in your travels through the great world wide web, you should be made aware that evolutionary debunking arguments are not what you might initially think. They are a fairly specific class of philosophical arguments that create problems for the warrant and justification of our beliefs; they are not arguments that aim to debunk the theory of evolution.

Here's an index for all the posts I've written on this topic.


1. David Enoch on the Epistemological Challenge to Metanormative Realism

2. Guy Kahane on Evolutionary Debunking Arguments

3. Darwin and Moral Realism: The Survival of the Iffiest

4. Griffiths and Wilkins on Evolutionary Debunking Arguments

5. Thurow on Cognitive Science and the Rationality of Religious Belief


Saturday, May 7, 2011

Griffiths and Wilkins on Evolutionary Debunking Arguments (Part Two)

Would anyone trust in the convictions of a monkey's mind?
(Part One)

This post is the second in a brief series on the paper "When do Evolutionary Explanations of Belief Debunk Belief?" by Paul Griffiths and John Wilkins (GW - again, apologies for the abbreviation). In this paper, GW argue that it is possible to respond to evolutionary debunking arguments (of the sort covered here) by constructing a Milvian Bridge, i.e. showing how truth-tracking can be complementary to evolutionary success.

In the previous entry, I outlined the basic elements of GW's argument. The last thing I discussed was their claim that our commonsense beliefs are not debunked by evolutionary explanations. In other words, their claim that a Milvian Bridge can be constructed to cover commonsense beliefs. The key question now is how much further can the Milvian Bridge be extended. GW argue that it can be extended to cover scientific beliefs, but not ethical or religious beliefs.

Although I agree with GW about scientific and ethical beliefs, I am not entirely convinced by their support for the evolutionary debunking of religious beliefs. My contention is that the argument they offer in defence of scientific beliefs could easily be co-opted by the defender of religious beliefs. Whether I am right in this depends heavily on whether I am correct in my interpretation of the argument they offer in support of scientific beliefs, so I turn to that first.

(Note: I am not going to cover GW's discussion of ethical beliefs since I have covered that topic at length before)


1. From Commonsense to Science 
As noted last time, although we can be reasonably confident that our cognitive mechanisms do not fundamentally mislead us about the nature of the objects and entities with which we interact on a daily basis, the kinds of beliefs we have about such objects have no ultimate ontological significance. This is in stark contrast to scientific beliefs about the nature of such objects and entities (and more besides) which, while maybe not representative of the ultimate truth, are thought to get us a good deal closer to the ultimate level.

How can these scientific beliefs be justified? Surely, even if the proponent of the evolutionary debunking argument accepted GW's point about commonsense beliefs, they could still maintain that scientific beliefs are debunked. Scientific beliefs, they will say, take us beyond the realm of commonsense, and while it may be true that our cognitive mechanisms have evolved to track the truth within the realm of commonsense, this gives us no ground for thinking that those same cognitive mechanisms can extend us beyond that realm.

GW offer two points that count against the debunker's arguments in this regard. The first seems slightly weak, and I don't think GW mean for it to count for much, but I'll mention it anyway. It is that scientific beliefs, unlike the kinds of commonsense or intuitive beliefs that may have some evolutionary salience, are not the product of one organism's innate cognitive mechanisms. Scientific beliefs are cognitive innovations, built upon the shoulders of giants, and spread by cultural diffusion.

I take it that while the possibility of cognitive innovations being spread through cultural diffusion has some significance, it doesn't really count against the debunker's arguments. Why not? Because there are many beliefs that are spread by cultural diffusion but that might be thought to lack the status of knowledge. Indeed, religious beliefs may be a classic example of this.



GW's second point is rather more interesting and significant. It is that we can be confident in the content of our scientific beliefs because they are arrived at via a method that is itself justified by commonsense standards. GW refer to this as an indirect, as opposed to a direct, Milvian Bridge. I think their argument has a certain amount of appeal. Speaking for myself, I can certainly say that when I first learned about double-blind testing it seemed like an obviously correct process for removing biased or distorted interpretations of experimental results.

Since I think this point has significant ramifications, I want to try to sketch out their reasoning in slightly more formal terms. I call this the "indirect Milvian Bridge argument":

  • (1) Our commonsense beliefs are warranted due to the fact that they are produced by cognitive mechanisms that have evolved to track the truth within the commonsense realm (premise, from previous argumentation).
  • (2) If a set of beliefs X is likely to be warranted, and a set of beliefs Y can be derived using standards set by X, then Y is also likely to be warranted (indirect Milvian bridge principle).
  • (3) Scientific beliefs can be derived using standards set by commonsense beliefs.
  • (4) Therefore, scientific beliefs are likely to be warranted.

I hope this is an accurate reflection of GW's argument. Note that the content of scientific beliefs need not be consistent with commonsense; all that matters is that the method used to derive those beliefs is consistent with commonsense. Indeed, scientific beliefs are quite often counter-intuitive.


2. Debunking Religious Beliefs
As noted in the intro, I'm going to skip over GW's discussion of ethical beliefs. One point worth noting is that GW endorse the view held by Street and Kahane that a possible response to evolutionary debunking arguments in ethics is to reject realist conceptions of ethical truth. This endorsement is significant because GW query at the end of their article whether a similar strategy might be available to the religious believer. But I'll leave this issue to the side in order to focus solely on their argument that religious beliefs are debunked by evolutionary explanations.

GW support this argument by reference to some of the leading theories on the evolutionary origin of religious beliefs. Broadly speaking, there are two main categories of such theories (i) those that maintain that religious beliefs confer some sort of evolutionary benefit; and (ii) those that maintain that religious beliefs are a by-product of other cognitive mechanisms that conferred some kind of evolutionary benefit. There might also be a third category that combines both of these approaches (i.e. first a by-product, then an adaptation).

An example of a theory belonging to the first category is that of David Sloan Wilson. He argues that religious belief was selected for due to its potential to enhance social cohesion and prosocial behaviour. Examples of theories belonging to the second category would be those of Barrett, Boyer and Atran. They argue, for instance, that belief in a divine agent is a by-product of a cognitive mechanism for detecting agency (sometimes called the "hyper-active agency detection device" or HADD).

GW argue that neither of these theories can be used to support the existence of a Milvian Bridge for religious belief. Why not? Because in neither case is there any suggestion that religious beliefs were the product of truth-tracking processes. In Wilson's case, the beliefs are selected for their social benefits, not for their ability to track the mind-independent truth. In the case of by-product theories, the beliefs are produced by a mechanism with a propensity for making type 1 (false positive) errors.


3. An Objection
Although I am certainly inclined towards their conclusion, I think GW's argument against religious beliefs is a little too quick. In particular, I worry about their dismissal of by-product theories. They seem to accept, too readily, that beliefs in the existence of a divine mind will be the result of a type 1 error by the relevant cognitive mechanism.

Given my earlier formulation of the indirect Milvian bridge argument, it will probably come as no surprise to learn that this forms the backbone of my objection. I'm inclined to ask: If we are allowed to build an indirect bridge from the realm of commonsense to the realm of science, then why can't we build a similar bridge from the realm of commonsense to the realm of the divine? Here's what I have in mind.

I presume that our beliefs about the existence of other agents are not massively erroneous (i.e. that, even though the rate of type 1 errors might be high, our HADD still picks out real agents more often than not). I do so on the grounds that, following GW's earlier arguments, other agents are part of our commonsense realm and our beliefs in this realm are likely to be truth-tracking.
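A quick Bayesian calculation may help to show why this presumption is coherent. All of the numbers below are hypothetical, chosen purely for illustration: the point is just that a mechanism with a substantial false-alarm rate can still pick out real agents "more often than not", provided real agents are common enough in the situations that trigger it.

```python
# Illustrative sketch (hypothetical numbers): even a detection mechanism
# with a high false-positive rate can be right more often than not when
# it fires, if the base rate of real agents is high enough.

def posterior_agent(base_rate, hit_rate, false_alarm_rate):
    """P(agent present | mechanism fires), by Bayes' theorem."""
    p_fire = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return (hit_rate * base_rate) / p_fire

# Suppose real agents are present in 40% of triggering situations; the
# mechanism almost always fires on real agents (0.95) and quite often
# misfires on wind, shadows, etc. (0.30).
p = posterior_agent(base_rate=0.4, hit_rate=0.95, false_alarm_rate=0.3)
print(f"P(agent | detection) = {p:.2f}")  # ≈ 0.68: right more often than not
```

Of course, whether the HADD actually satisfies anything like these figures is an empirical question; the sketch only shows that a high type 1 error rate and overall reliability are compatible.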

Given this presumption, I think it is plausible that our method for identifying agents in the commonsense realm could (maybe using other criteria set by our commonsense beliefs) be used to derive beliefs about the existence of other minds, including the divine mind. To quote Griffiths and Wilkins talking about scientific beliefs:
"If evolution does not undermine our trust in our cognitive faculties, neither should it undermine our trust in our ability to use those faculties to debug themselves - to identify their own limitations, as in perceptual illusions or common errors in intuitive reasoning."
Quite so, but why assume that we can't debug the HADD and still arrive at a belief in the existence of a divine mind? 

I suspect there are two types of response that GW might make to this. 

First, they could argue that they already acknowledge this possibility since they accept (at the very end of their article) that debunking is not disproving and that other reasons could be adduced in support of religious belief. I think that's right, but then I'm forced to wonder why we need the kind of argument that GW offer in support of scientific beliefs. Surely the concern of the debunker in both cases is with the possibility of justifiably moving beyond the commonsense realm; and surely the response, in both cases, is that an indirect Milvian bridge can be built? 

Second, they could argue that the problem with the HADD is that it is, contrary to my presumption, massively erroneous. That might be true too, but then I can't see why this wouldn't undermine their argument in defence of our commonsense beliefs. Surely other agents are part of our commonsense realm, and surely our cognitive mechanisms would have evolved to track the truth about such entities?

Okay, that's all I have to say on this for now. Hopefully, John or Paul might pop up in the comments and offer some critique of what I've said.

[Addendum: I'd like to add that I just came across this paper which seems to address the religious belief issue at considerable length.]

Griffiths and Wilkins on Evolutionary Debunking Arguments (Part One)

Would anyone trust the convictions of a monkey's mind?


Last week, I put up a series of posts on Guy Kahane's article "Evolutionary Debunking Arguments". In the comments to one of them, John Wilkins said he would be interested in my comments on a related paper that he wrote with Paul Griffiths entitled "When do Evolutionary Explanations of Belief Debunk Belief?". This post is the first in a two-part series that attempts to do exactly that.

Griffiths and Wilkins (GW - I trust you'll forgive the abbreviation) argue that evolutionary explanations do not debunk commonsense and scientific beliefs, but that they do debunk moral and religious beliefs. In this post I'll try to summarise the basic elements of their argument. In the next post, I'll look in more detail at their arguments relating to scientific and religious beliefs. If you're interested in the critical commentary, skip to part two.


1. Debunking Arguments and the Milvian Bridge
Following Kahane, GW accept that evolutionary debunking arguments have the following basic structure:

Causal Premise: S's belief that P is caused by the evolutionary process X
Epistemic Premise: The evolutionary process X does not track the truth of propositions like P.
Conclusion: Therefore, S's belief that P is not justified (warranted).

As GW point out, the easiest way to undermine a debunking argument of this sort is to attack the epistemic premise, i.e. to argue that the evolutionary process does track the truth with respect to the relevant class of propositions. They refer to this response, cleverly, as constructing a Milvian Bridge.

The Battle of the Milvian Bridge

The name comes from the famous battle of the Milvian Bridge which is often pinpointed as the origin of the emperor Constantine's conversion to Christianity. The reference is relevant in that Constantine's success at the battle is often attributed to his belief in (the Christian) God's existence. Assuming this belief to be true, as traditionally was the case, the idea is that Constantine's pragmatic success can be linked to his true beliefs. Thus the idea promoted by GW is that sometimes evolutionary success can be linked to warranted belief by means of a Milvian bridge:

Milvian Bridge Principle: X facts are related to the evolutionary success of X beliefs in such a way that it is reasonable to accept and act on X beliefs produced by our evolved cognitive faculties.

GW think a Milvian bridge can be built for commonsense and scientific beliefs, but not for moral and religious beliefs.


2. Levels of Explanation
One of the reasons that evolutionary debunking arguments seem compelling is their reliance on the assumption that fitness-tracking and truth-tracking are alternative forms of explanation, not complementary ones. Thus it seems that if a cognitive bias or heuristic can be accounted for in terms of its fitness-enhancing capabilities, this rules out or lessens the probability that it is truth-tracking.

This, GW argue, is a mistake: fitness-tracking and truth-tracking are not alternatives. A fitness-tracking explanation takes place at a different level from a truth-tracking explanation. When we ask "why was trait A (seemingly) selected over trait B?", we are asking a question at the most abstract level of evolutionary explanation: the population level. At this level, one determines fitness by looking at the heritability, frequencies and fitness functions of a particular trait.

The truth-tracking explanation takes place at a lower and more specific level. It concerns how an organism actually interacts with its environment. To say that a cognitive mechanism tracks the truth is akin to saying that a pair of claws are efficient flesh-tearing machines. There is no reason why a cognitive mechanism cannot be both truth-tracking and fitness-enhancing; just as there is no reason why a pair of claws cannot be both efficient flesh-tearers and fitness enhancers.




3. Trade Offs and Constraints
To say that cognitive mechanisms can be both truth-tracking and fitness enhancing is not to say that they actually will be truth-tracking. GW argue that cognitive mechanisms will only be truth-tracking within certain constraints. They give two examples of such constraints.

Cognition is clearly a costly enterprise. Estimates vary, but the brain takes up approximately 20% of the human body's oxygen consumption despite accounting for only 2% of its mass. Because it is so costly, GW argue it is unreasonable to assume that cognition plays no role in shaping human behaviour. At the same time, it needs to be borne in mind that resources spent on cognition cannot be spent on, for example, tissue maintenance or gamete production.

If the opportunity cost involved in cognition is taken into account, then it might turn out that many of the biases and quirks in human cognition are less arbitrary than they first appear. Indeed, Gerd Gigerenzer and his colleagues have been arguing for years that many human biases and heuristics are best understood as reasonable accommodations between cost and accuracy. In other words, our evolved cognitive mechanisms can be seen to track truth with relative efficiency subject to various costs.

In addition to this, the ability of our evolved cognitive mechanisms to track truth may be limited by the fact that on some tasks involving decision-making under uncertainty it is impossible to avoid making some kinds of error. Abstractly, these tasks require the organism to produce appropriate behavioural responses to signals of uncertain meaning. In such a task there are two basic errors the organism can make: (i) a type 1 (false positive) error; and (ii) a type 2 (false negative) error.

Each of these errors comes with an associated type of risk. Suppose the signal in question is the colouring of certain types of food and the appropriate behavioural response is either to eat the food or discard it. The organism might assume that bright colours indicate poisonous foods. In this case, an organism that makes a large number of false positive errors will run the risk of discarding a large number of consumable foodstuffs. An organism that makes a large number of false negative errors will run the risk of being poisoned.

But, and here's the crucial point, because uncertainty is part and parcel of the cognitive task, evolutionary processes will be unable to avoid making some kind of error: any reduction in the risk of type 1 errors will increase the risk of type 2 errors, and vice versa. So the best that evolution can do is to achieve the most reasonable balance between the types of error that the organism makes.
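The trade-off described above can be made vivid with a small simulation. Everything here is hypothetical (the costs, the base rate of poisonous food, and the noise model are invented for illustration); the point is simply that moving the decision threshold in one direction always buys fewer type 2 errors at the price of more type 1 errors, so selection can only tune the balance, never eliminate both.

```python
# Illustrative sketch (hypothetical numbers): an organism classifies
# brightly coloured food as poisonous (reject) or safe (eat) from a
# noisy brightness signal. No threshold avoids both error types.

import random

random.seed(0)

COST_FALSE_POSITIVE = 1.0   # discard edible food (a missed meal)
COST_FALSE_NEGATIVE = 20.0  # eat poisonous food (possibly fatal)

def simulate(threshold, trials=100_000, base_rate=0.1):
    """Return (type 1 rate, type 2 rate, expected cost per encounter)."""
    fp = fn = 0
    for _ in range(trials):
        poisonous = random.random() < base_rate
        # Noisy signal: poisonous food tends to be brighter on average.
        signal = random.gauss(1.0 if poisonous else 0.0, 1.0)
        rejected = signal > threshold
        if rejected and not poisonous:
            fp += 1          # type 1: discarded good food
        elif not rejected and poisonous:
            fn += 1          # type 2: ate poison
    cost = (fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE) / trials
    return fp / trials, fn / trials, cost

for t in (-0.5, 0.5, 1.5):
    fp_rate, fn_rate, cost = simulate(t)
    print(f"threshold {t:+.1f}: type1={fp_rate:.3f} type2={fn_rate:.3f} cost={cost:.3f}")
```

Because eating poison is (by assumption) far costlier than a missed meal, the cost-minimising threshold is a cautious one that tolerates many type 1 errors — which mirrors GW's point that a bias toward false positives can itself be the reasonable accommodation between error types.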

Once again, there can be truth-tracking, but only subject to constraints.


4. The Milvian Bridge for Commonsense Beliefs
So far, GW have presented three key claims: (i) that the easiest way to respond to an evolutionary debunking argument is to construct a Milvian bridge, i.e. show that true beliefs can be linked to pragmatic success; (ii) that fitness-enhancing and truth-tracking explanations are not alternatives to one another; and (iii) that our cognitive mechanisms can track truth subject to certain constraints.

What sorts of implications can be drawn from these three claims? The first implication, according to GW, is that our commonsense beliefs are overwhelmingly likely to have tracked the truth, because an understanding of the commonsense realm is highly likely to be linked to evolutionary success. Now we must be careful about this for two reasons. First, we need to understand what GW mean by "commonsense belief"; second, we need to understand their views on the ontological significance of a commonsense belief.

As regards meaning, GW say that a commonsense belief is an everyday belief that guides action. I take it they intend by this to refer to our beliefs about the mid-sized objects and entities that we interact with on an ongoing basis: our fellow humans, basic foodstuffs, our bodily limbs, rocks, trees, chairs, cars, cabbages, kings and so on.

As regards the ontological significance of our everyday beliefs, GW make an interesting and important observation: the conceptual realm that is created by our commonsense beliefs has no ultimate ontological significance. Indeed, there is a certain relativity or flexibility to such conceptual schemes.

Lorenz with Geese

One example they use to illustrate their point is drawn from the field of ethology (the study of animal behaviour). They cite Konrad Lorenz's famous work on birds, which suggests that birds do not perceive or conceive of their fellow birds as particular individuals in the way that we do. Instead, they conceive of a set of stimulus features that mark something as the appropriate object of a suite of behaviours. In doing so, the birds make no essential mistake or error about the ontological reality of their fellow birds. There is nothing necessarily wrong about our conceptual scheme or the birds'; they are both valid.

To say that our commonsense beliefs have no ultimate ontological significance is not to say that we lack the means to acquire knowledge of the more fundamental levels of ontology. That is what science allows us to do, but our confidence in our scientific beliefs can be derived from our confidence in our commonsense beliefs, as we shall see in part two.

Sunday, May 1, 2011

Darwin and Moral Realism: The Survival of the Iffiest


I’ve covered Sharon Street’s evolutionary debunking argument against objective (mind-independent) moral reality before. I’ve also covered some responses to the argument, along with some more general reflections on arguments of this sort. I’d recommend reading some of those posts before tackling this one.

In this entry, I’ll be covering the following article:
Skarsaune, K.O. “Darwin and Moral Realism: Survival of the Iffiest” (2011) 152 Philosophical Studies 229
It’s a direct response to Street’s Darwinian dilemma. So a quick recap on her argument is our first port of call.


1. Street’s Darwinian Dilemma
Non-natural moral realists believe that moral properties (values, rights, wrongs etc) exist in some mind-independent realm. Further, these realists tend to believe that we acquire knowledge of this realm through the exercise of our intuitive evaluative judgments. In other words, our intuitive evaluative judgments are somehow in tune with this mind-independent realm.

Now we all know (irrespective of our metaethical proclivities) that most of our intuitive evaluative judgments are the products of a complex evolutionary history. That is to say, most of our intuitive judgments can be said to have come about due to their fitness enhancing abilities. So, for example, our belief that we have certain duties to protect our offspring, or our enthusiasm for reciprocal forms of altruism, can be explained in terms of fitness enhancement.

But this creates a problem for the defender of moral realism. It implies that our evolved judgments have somehow managed to coincide with the mind-independent moral truth.

The thoughts set out in the preceding paragraphs can be expressed as an evolutionary debunking argument:
(1) Causal Premise: Our intuitive evaluative judgments are the products of complex evolutionary processes.
(2) Epistemic Premise: Evolutionary processes are fitness-enhancing; they do not track the truth about mind-independent moral properties.
(3) Therefore, our intuitive evaluative judgments are unjustified.

Street thinks that in responding to this argument the realist is confronted with a dilemma. As with all dilemmas, there are two horns on which they can impale themselves:

First Horn: They can argue that there is a relation between evolutionary processes and the mind-independent moral truth (which seems implausible).
Second Horn: They can argue that there is no such relation, in which case they land themselves in an unwelcome form of scepticism.

Skarsaune thinks the realist can embrace either horn of the dilemma, no matter how pointy they may appear to be. To be precise, he argues that the realist can provide an account of the relation between evolutionary processes and moral truth; and that if this account fails, the realist could happily embrace the sceptical conclusion. He also makes some interesting concluding remarks about the nature of moral realism. We’ll look at all three of these elements in what follows.


2. Pain and Pleasure
Skarsaune’s willingness to embrace the two horns of the dilemma stems from his belief that he can provide a pre-established harmony account of the relationship between evolved evaluative judgments and the moral truth. If you have read my series on David Enoch’s response to Sharon Street, this will be a familiar idea; however, I think Skarsaune’s account is slightly more persuasive.

Enoch focused on the correlation between evolutionary goals such as survival and reproduction and deeper moral truths like the intrinsic goodness or badness of pain; Skarsaune focuses directly on the evolutionary utility of pain and pleasure.

He begins by making the following substantive claim:

  • P: Pleasure is usually good (for an agent) and pain is usually bad (for an agent).

This claim is agent-relative and non-absolute. It accepts that there may be exceptional cases in which pain might be good for the agent. This possibility does not matter for Skarsaune’s argument, provided such cases are relatively rare.

P can be restated using the language of reasons as follows:

  • P*: The fact that x would give the agent (S) pleasure is usually a reason for S to bring about x; likewise, the fact that x would give S pain is usually a reason for S to avoid bringing about x.

Skarsaune argues that two plausible inferences can be made from P (or P*). The first of these maintains that if P is true, then the first horn of Street’s dilemma is tenable. That is to say, a connection between evolutionary processes and moral truth can be found. The second inference maintains that if P is false, then the second horn of Street’s dilemma is tenable. That is to say, the realist has nothing to fear if P is false because this would imply that an awful lot of what we think we know about morality and about reasons for action would be in error. Realists usually accept such risks of error (indeed, one reason for being a realist is that one accepts the risk of error).



3. The Evolutionary Utility of Pain and Pleasure
Turning our attention to the first inference, we need to understand Skarsaune’s confidence in the connection between evolution and pain and pleasure. Surely, we might argue, evolutionary processes direct us toward that which is fitness enhancing, not that which is intrinsically good or bad?

Yes, responds Skarsaune, but one way in which evolution directs us toward that which is fitness-enhancing is by making things that are evolutionarily beneficial pleasurable and things that are evolutionarily detrimental painful. (You might like to read this series I did on Paul Draper’s evidential argument from evil for related ideas).

It’s easy to see how this is the case in relation to pleasure: the belief that something is pleasurable provides motivation to act; those individuals who took pleasure from evolutionarily beneficial activities were motivated to act in evolutionarily beneficial ways; as a result they tended to leave more offspring than those who did not take pleasure from such actions. Thus, the fact that we associate pleasure with certain activities plays a part in evolutionary explanations of our existence.

So far so good, but where do we go from here? Well, we simply point out that P is true - that pain and pleasure are part of the essential moral fabric of the universe - and so evolutionary processes can track the truth of moral states of affairs. Problem solved; dilemma dissolved. Right?

Not so fast. Doesn’t this really only imply that evolutionary processes create value? In other words, that evolutionary processes have made certain things valuable to us, and not that evolutionary processes have somehow managed to direct us towards the independently determined moral truth?

Skarsaune has a response. It is true that evolution has created a type of value by using the pleasure/pain mechanism, but we need to be clear about what this is. Evolution has merely made it the case that we find certain states of affairs to be valuable; it has not made it the case that pleasurableness itself is a good. The goodness of pleasure (and badness of pain) is independent of evolution. The distinction is subtle, but essential.

So there is, in effect, a pre-established harmony between the pleasure-seeking (and pain-avoiding) mechanism used by evolution and an independently specified moral truth. This allows the realist to embrace the first horn of Street’s dilemma.


4. What if we are wrong about P?
Not content to rest on the foregoing argument, Skarsaune also thinks he can show that a moral realist should be undeterred by the second horn of Street’s dilemma. The realist might be forced into that horn by someone arguing that P is wrong: that pain and pleasure are not usually good/bad for the agent pursuing them.

Skarsaune asks us to assume, for sake of argument, that this critic is correct. What would follow from this? Only that most of our beliefs about practical reason and morality are false. And presumably if this is the case, we’ll have bigger things to worry about than the plausibility of moral realism.

For example, suppose it were really the case that pleasure did not supply you with a reason for action. This would mean that most of your beliefs about what you have reason to do would be false. Maybe that's not significant since it relates to the agent-relative aspects of pleasure and pain. What about the agent neutral aspects? Well, similarly, the falsity of P would deprive us of many reasons for acting in the interests of others since most of those reasons relate to making the lives of others more pleasurable.

Either way, if P is false, then a lot of what we take for granted is false.


5. Realism and Mind-Independence
Even if Skarsaune’s argument to this point has been convincing, there is one final objection. Indeed, this is an objection raised by Sharon Street in her original presentation of the dilemma. It runs roughly along the following lines:
Sure, we can say that pleasure and pain are valuable and can be tracked by evolution, but in saying this aren’t we renouncing our commitment to moral realism? After all, realism is usually understood to be the view that moral facts are mind-independent. But pain and pleasure are very clearly mind-dependent. So to place them at the centre of our moral theory is to switch from realism to anti-realism.
Skarsaune disagrees. He thinks Street only makes this argument because she has an impoverished conception of moral realism. Realism is not committed to the view that moral facts are entirely mind-independent. We have to be more discriminating in our understanding of what the realists think is independent from what.

To be precise, we need to be more discriminating in our breakdown of the various mental states from which a moral fact can be independent. Skarsaune argues, citing the examples of Nagel, Parfit and Shafer-Landau, that realists do not think moral facts hold independently of all mental states. They think that moral facts hold independently of beliefs and judgments, not independently of affective states.

I don’t know what to make of that claim. In some ways, I don't care what those defending realism actually believe. It is often the case that individuals hold beliefs that are inconsistent with the theories they defend. What matters to me are the implications of those beliefs. I think that if what Skarsaune says is true, then there is no important difference between realist and anti-realist theories. This would be an odd result, but I'd be glad to hear that defenders of realism are coming round to my position.

Thursday, April 28, 2011

On Evolutionary Debunking Arguments (Part Three)

(Part One, Part Two)

This post is the third in a short series of posts on the following article:


As we learned in part one, an evolutionary debunking argument (EDA) is an argument that attempts to undermine the warrant or justification for a particular belief by pointing out its evolutionary origins. All such arguments begin with a causal premise which specifies how evolution brings about the belief in question; they follow it up with an epistemic premise arguing that evolutionary processes do not track truth; and they thereby conclude that the belief is unwarranted.

We saw in part two how such arguments are sometimes employed in disputes in normative ethics. The example given came from the work of Joshua Greene and Peter Singer. Both of these authors seemed to argue that deontological intuitions could be undermined by an EDA. This fact could be marshalled in support of utilitarian principles.

In response, it was argued that Singer and Greene’s argument is difficult to sustain since it needs to show that their preferred utilitarian principles do not draw upon other debunked intuitions. In other words, we need to be given some reason for thinking that global EDAs are not possible.

In this entry we consider whether global EDAs are possible.



1. Joyce and Street
In recent times, two authors in particular have pushed the idea of a global EDA. One of them is Richard Joyce; the other is Sharon Street. (I’ve discussed Street’s work at considerable length elsewhere on this blog, should you want more detail than you’ll be getting here). Michael Ruse should probably get an honourable mention as well.

Joyce argues that all our moral judgments can trace their origin to cultural and environmental influences affecting the hominid line. If we had, say, evolved from the social insects, we would have come with a completely different set of pre-packaged moral commitments.

Joyce thinks this won’t do. On semantic grounds, Joyce maintains that moral discourse is committed to a type of absolutism, i.e. our moral discourse purports to provide us with a set of reasons for action that apply to all times, places and subjective dispositions. The contingency implied by modern evolutionary theory is diametrically opposed to this kind of absolutism. Thus we are forced to embrace a form of error theory about morality. (Joyce thinks we can still be happy with pragmatic, subjective reasons for action).

Street makes similar claims, but arrives at a different implication. She thinks that moral realists (particularly of the non-natural variety) should be deeply troubled by evolutionary history.

This history implies that many of our evaluative beliefs are directly moulded by the pressures of survival and reproduction. For example, altruism towards kin can be readily explained through evolutionary game theory. Despite this, realists must still believe that somehow these beliefs line-up with abstract moral truths. But surely this is incredible? Wouldn’t it be too much to think that the selective pressures of evolution just happened to coincide with the abstract, causally inert moral truth?

Street thinks this argument provides good reason for rejecting metaethical realism and embracing some form of antirealism (constructivism in particular). This position is not nihilistic or sceptical about moral truth. It just thinks that moral truths are not mind-independent.

Note that neither Joyce nor Street quite goes “all the way” with their debunking. Joyce still thinks it is rational to act in accordance with our subjectively perceived self-interest; and Street thinks moral truth can still exist. It might be possible to go even further with the debunking and point out that all normative beliefs (including beliefs about epistemic norms) are undermined by evolution. This is, effectively, what Alvin Plantinga does in his argument against evolutionary naturalism.


2. Responding to the Global EDA
At this stage its worth identifying the potential responses to EDAs by proponents of ethical objectivism/realism. There are three of them, and they should be unsurprising to anyone familiar with epistemological debates of this sort:

  • They can say that no evaluative beliefs are affected by the argument.
  • They can say that some evaluative beliefs are affected by the argument.
  • They can say that all evaluative beliefs are affected by the argument.

The third option seems unattractive for a variety of reasons. As noted above, if the proposed scepticism leaks into other normative domains then it’s basically impossible to rationally justify anything. The first option looks equally unappealing. Someone wishing to make this response would need to argue that evolutionary processes really do track moral truth (see here for a version of that response).

The second option is probably the most attractive but it is precariously balanced. Its defender needs to show why certain beliefs are unaffected. Basically, this requires that they show how the evaluative belief they wish to protect originates in or is supported by considerations that override evolutionary history. It is this kind of position that interests Kahane since it is maintained by the likes of Singer and Greene.

Consider once more Singer’s position. He thinks that an EDA can undermine deontological intuitions but not utilitarian ones. How can he be so sanguine? Because he thinks utilitarianism is supported by rational reflection that is not the outcome of our evolutionary past.

Does this kind of response work? Here is where the role of reflective equilibrium (RE) in normative reasoning might be important. RE proposes a kind of test for ethical beliefs. The test is coherentist in nature: it begins with a set of moral principles, tests them against a range of scenarios, and then modifies them in accordance with what seems reasonable, usually appealing to intuition along the way.

Such an approach to normative reasoning might be uniquely susceptible to an EDA. Why? Because the equilibrium could be based on debunked intuitions. If that is how Singer ultimately justifies his utilitarian principle then he could be in trouble.

Wednesday, April 27, 2011

On Evolutionary Debunking Arguments (Part Two)

(Part One)

This post is the second in a short series of posts on the following article:


As we learned in part one, Kahane’s article sets out to examine the use of evolutionary debunking arguments (EDAs) in normative and meta ethics. An EDA is an argument which attempts to show that the evolutionary origin of a particular belief undermines the warrant for holding it.

We’ve only really considered the use of such arguments in the abstract to this point. In this post we’ll look at an actual example of such an argument being used in a normative debate.


1. Deontology vs. Utilitarianism in the Trolley Problem
Most people reading this will be aware of the widespread use of trolley-problem thought experiments in ethics. The experiments are designed to show how people’s moral intuitions can vary as a result of seemingly insignificant changes in circumstances.



In the classic example, the trolley problem requires you to decide between killing one person in order to save five, or letting the five die. In one scenario, the killing of the one person is an indirect consequence of another action (flipping a switch or lever); in the other, it is a direct consequence of your own actions (e.g. pushing someone off a bridge).

Most people, when asked what they would do in these scenarios, say they would be willing to indirectly kill the one in order to save the five in the first scenario, but they would not be willing to directly kill the one in order to save the five in the second scenario.

From a utilitarian perspective, these responses are difficult to explain. The utilitarian calculation is surely the same in both: one person dies, five people live. Killing the one person is clearly merited in both cases, isn’t it?

From a deontological perspective, the responses might be a little easier to explain. There is an absolute injunction against an action which directly kills another; there is no such injunction against an action which indirectly kills another. No problem.

For some reason, the majority of humanity seems to intuitively back the deontological position. Are they right to do so?


Joshua Greene


2. Singer and Greene: Debunking Deontology
Peter Singer and Joshua Greene think not. They reckon that the deontological intuition can be debunked using an EDA. Greene in particular seems to reason as follows:

  • (1) Our commitment to the deontological intuition in the trolley problem is merely due to the fact that “up close and personal” violence was common in our environments of evolutionary adaptation (EEAs) and so an aversion to direct violence was selected for; indirect methods of killing are more evolutionarily recent and have not been selected against.
  • (2) Evolution does not track the truth of evaluative propositions.
  • (3) Therefore, we are not justified (or warranted) in our commitment to the deontological intuition.

There are several things that could be said about this argument. For one, the causal premise (1) could easily be challenged (as such explanations often are). For another, as noted in part one, the argument seems to assume metaethical realism/objectivism. If the person who is committed to deontology is not a realist, then they will be unswayed by this argument.

We will not focus on these points here. Instead, we’ll address the supposed implications of this argument. No doubt, both Singer and Greene take it that this argument somehow supports utilitarianism. But clearly this is not a straightforward implication, however seductive or appealing it may seem to be.

To successfully infer that, one would have to show that the commitment to utilitarian principles is not also undermined by this argument. In other words, one needs to provide some reason for thinking that the EDA does not spread to all of our evaluative principles.

This raises the potentially unwelcome possibility of global debunking arguments. We’ll discuss these in part three.

Tuesday, April 26, 2011

On Evolutionary Debunking Arguments (Part One)

Regular readers of this blog will know that I have previously covered some articles looking at the use of evolutionary debunking arguments in metaethics (see here and here, for examples).

I recently finished reading the following article which also looks at the use of such arguments:


I must not have read the abstract closely enough before downloading this one, since I was expecting Kahane to offer a fuller critique of the use of such arguments, but it turns out that wasn’t really what he was interested in.

Nevertheless, the article is a good one and I thought I might share some of its details here. One thing I particularly liked about it was Kahane’s attempt to elucidate the general structure of such arguments and to show how they are used in normative and meta- ethics.

In this first part, I’ll cover Kahane’s general analysis of debunking arguments. In the next part I'll consider their use in normative ethics.


1. An Introduction to Debunking Arguments
First things first, we need to be clear about the nature of a debunking argument (DA). Obviously, “debunking” is, in its everyday usage, intended to denote the practice of undermining or exposing some set of beliefs as being false. Given that this seems to be a common goal in philosophy, there could be many philosophically interesting DAs worth considering.

For present purposes, the focus is on what might be called causal-DAs. These are arguments that try to undermine some belief or set of beliefs by explaining its causal origins. Such arguments are, in fact, widespread. You probably use them all the time.

To consider an obvious example, anyone who has studied Marxism will know that Marx seemed to think (or at least, can be interpreted as thinking) that offering a causal-historical explanation of an ideology could be an effective way of undermining that ideology.

Now, expressed in these terms, such an argument is patently unsuccessful. It is an example of the genetic fallacy: just because an ideology has a particular causal origin does not mean that the ideology is false. But this in itself doesn’t mean that causal DAs are without merit. They just need to be reinterpreted in a more subtle way - as arguments that don’t undermine the truth of a particular belief, but do undermine the justification (or warrant) that the believer might have for holding it.

Here’s an example. Suppose Bob believes that there is a particular object (X) outside his house. Now suppose further that we learn that Bob has a rather peculiar process for determining what to believe is outside his house. Every morning he flips a coin, and if the coin comes up heads, he believes there is an X outside his house. Surely, such a causal explanation for a belief undermines the warrant or justification for holding that belief?

Contrast this with an alternative causal explanation. According to this explanation Bob acquires the belief that X is in front of his house through visual perception of the object. Surely this causal explanation (assuming no additional defeaters) supports the justification or warrant for Bob’s belief?

We can refer to the difference between these causal explanations in terms of processes that are “on track” (i.e. track the truth of whatever proposition is under consideration), and processes that are “off track” (i.e. don’t track the truth of whatever proposition is under consideration). This is illustrated below.



Note that in the example given, the “off track” process is purely random. A more common variety of “off track” process might be one that is biased in a particular direction.
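The on-track/off-track distinction can be made vivid with a small simulation of Bob's two belief-forming processes. This is purely illustrative code; the specific numbers carry no philosophical weight.

```python
import random

def off_track_process(fact):
    # Bob's coin flip: the belief is formed independently of the fact.
    return random.random() < 0.5

def on_track_process(fact):
    # Ordinary perception (assuming no defeaters): the belief tracks the fact.
    return fact

random.seed(42)
trials = 10_000
facts = [random.random() < 0.5 for _ in range(trials)]

off_accuracy = sum(off_track_process(f) == f for f in facts) / trials
on_accuracy = sum(on_track_process(f) == f for f in facts) / trials

print(on_accuracy)                 # 1.0 -- perception always gets it right here
print(0.45 < off_accuracy < 0.55)  # True -- the coin does no better than chance
```

The coin-flip process would deliver the same verdicts whether or not there was an X outside the house, which is exactly what it means for a process to be "off track".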


2. The General Structure of (Evolutionary) Debunking Arguments
Building upon the preceding example, we can specify the general template or structure that is shared by all causal DAs. It looks something like the following:

Causal Premise: S’s belief that P is caused by causal process X.
Epistemic Premise: Causal process X is off track.
Conclusion: The belief that P is unjustified (or unwarranted).

As you can see, this is an argument that could work with any type of belief and any allegedly off track causal process. The concern in this article is to specifically examine arguments that focus on the evolutionary process and its implications for our evaluative beliefs.

Such arguments will have the following structure:

Causal Premise: S’s evaluative belief that P is caused by the process of evolution.
Epistemic Premise: The process of evolution does not track the truth of evaluative propositions of type P.
Conclusion: S’s belief that P is unjustified (or unwarranted).

Before we consider specific examples, there are three things to note about this style of argument.

First, the causal premise refers to the process of evolution in general; it does not refer to any specific mechanism of evolutionary change. Although it is fair to say that most attention has been paid to adaptive explanations that appeal to the mechanism of natural selection, genetic drift or other processes could also be targeted.

Second, the epistemic premise needs to be supported in a particular way. It must be shown that the off track process (in this case evolution) completely removes the influence of any potentially on-track processes. If there are on-track processes that also contribute to the causal explanation of the belief, then they might restore justification or, at the very least, not completely undermine it.

Third, and this is one of Kahane’s key observations, the argument assumes that S is an ethical realist/objectivist. To be more precise, it assumes that S believes in mind-independent moral properties. Someone who believes that moral properties are constructed out of whatever evaluative attitudes we happen to have will be unpersuaded by this argument.

Okay, that’s enough for now. In part two we’ll consider an example of an EDA drawn from the field of normative ethics.

Friday, August 13, 2010

Enoch on The Epistemological Challenge to Metanormative Realism (Part 3)

David Enoch, on the left

This post is part of a short series on David Enoch's article "The Epistemological Challenge to Metanormative Realism". Parts one and two are available here and here.

In the previous part, we outlined the precise nature of the epistemological challenge facing non-natural ("Robust") moral realists. We can briefly restate it as follows:

  • Robust realists think that there is some coincidence between our moral beliefs and moral truth. They owe us some explanation of how this could be the case given that, by their own lights, moral truths are independent of our judgments and are causally inert.

In this part, we will see how Enoch responds to the challenge.

He starts with some methodological points.


1. The Plausibility Game
As outlined above, the challenge is an explanatory one. But there is a problem with this: there could be brute, unexplainable facts. There is certainly no obvious logical contradiction in the notion of a brute fact. Given this possibility, robust realism's lack of an explanation for the coincidence simply deducts a few plausibility points from the overall theory. It may still win the plausibility game.

That said, robust realists shouldn't shirk the challenge. If they can provide an explanation, and if their theory is more plausible than alternatives, it would be all the better. And when looking for an explanation we follow the standard rules of the explanation game: we try to satisfy a set of explanatory criteria and we compare and contrast competing explanations.

Three additional points need to be made about the kind of explanation we are looking for.

First, Enoch thinks that it is important to bear in mind that the coincidence between our moral beliefs and moral truth is not all that striking. No moral realist thinks that we always get things right or that our intuitions are infallible. They would agree that sound moral reasoning requires special training and sensitivities. (Note that the same is true for mathematical Platonists.) And because the correlation is weak, a relatively weak explanation is all that is required.

Second, we must accept the possibility that a fallible reasoning mechanism could become more refined and accurate in its judgments over time, e.g. by eliminating inconsistencies, increasing coherence, drawing analogies and so on. There could even be some evolutionary story about how, say, the more primitive primate reasoning mechanism was refined into the more sophisticated homo sapiens mechanism. Could the same not be true of the portion of the reasoning mechanism responsible for moral judgments?

Third, there are two traditional ways to explain a correlation between two sets of facts. First, we can say that all B-facts are constituted by A-facts. Second, we can say that all B-facts are (causally) responsible for A-facts. Obviously, neither of these approaches is an option when it comes to robust realism.

However, there is another possibility: a third factor explanation. In other words, an explanation that shows how A-facts and B-facts are linked by a third factor C that is responsible for both A-facts and B-facts. Pre-established harmony explanations are of this sort, and Enoch wants to offer a pre-established harmony explanation for the troubling coincidence.
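The third-factor structure can be sketched numerically: let a common factor C noisily produce both the A-facts and the B-facts, and the two will correlate even though neither causes the other. The 0.9 link strengths below are arbitrary illustrative numbers, nothing more.

```python
import random

random.seed(7)

def sample():
    c = random.random() < 0.5                    # third factor C (e.g. survival-conduciveness)
    a = c if random.random() < 0.9 else (not c)  # A-fact, noisily tied to C
    b = c if random.random() < 0.9 else (not c)  # B-fact, noisily tied to C
    return a, b

pairs = [sample() for _ in range(10_000)]
agreement = sum(a == b for a, b in pairs) / len(pairs)

# A and B never interact directly, yet they agree roughly 82% of the time
# (0.9 * 0.9 + 0.1 * 0.1), well above the 50% chance baseline.
print(agreement > 0.7)  # True
```

The structural point is just the one in the text: a common cause can explain a correlation in a domain where neither constitution nor direct causation is available to the robust realist.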


2. Survival as a Moral Good
Enoch's explanation is roughly as follows.

Let's begin with some assumptions about what is or is not a moral good (or, more generally, a moral truth). For example, let's assume that pain is morally bad and pleasure is morally good (not always and everywhere, but for the most part). Let's further assume that survival and reproduction are, in some sense, moral goods. This may be because they are correlated with the more basic moral good of the absence of pain.

This is not a strong assumption. Survival and reproduction do not trump all other moral goods, but they are usually better than the alternatives (death and infertility), in an all-things-considered sort of way.

Now, obviously, survival and reproduction are the key selection pressures in evolutionary history. So it would be unsurprising to find that evolved beings like ourselves have beliefs and desires that are correlated with them.

But since those selection pressures are in turn correlated with some moral truths (namely, that survival and reproduction are moral goods) we have an explanation of the puzzling coincidence between our normative beliefs and the mind-independent, causally inert moral truth.

In other words, what is evolutionarily beneficial is in pre-established harmony with what is morally good, and what is evolutionarily beneficial causally shapes our cognitive faculties. This is, in effect, an inversion of Street's Darwinian Dilemma. It shows how robust realists have nothing to fear from evolutionary processes.

While you are digesting all of this, bear in mind that Enoch does not think that the correlation that needs to be explained is particularly impressive. So it does not matter if survival is not a primary or essential moral good. All that matters is that there is some slight correlation between normative beliefs and moral truths.


3. Too Good to be True?
Isn't this explanation just a little too convenient? Isn't there something fishy about it? For starters, why think that survival is a moral good?

Enoch offers a few responses. First, as noted above, it may be because it is itself correlated with other more basic moral goods, such as the absence of pain. Second, it may be true that some creatures are particularly evil and their continued existence would be bad. That, he thinks, does not really matter. As long as survival is good for creatures like us -- even if only because it makes other moral goods possible -- it is enough.

Maybe it is, but even so, hasn't Enoch just replaced one puzzling coincidence with another? I mean, isn't it somehow miraculous that evolutionary selection pressures just happened to align themselves with moral truths? This is a problem arising from the lack of counterfactual robustness inherent in Enoch's explanation: if things had been different...

Again, Enoch has a few responses. First, it is not clear that things could have been different. Could evolutionary processes really have aimed at something other than survival and reproduction?

Second, Enoch thinks it significant that he has reduced several puzzling coincidences -- between different normative beliefs and different moral truths -- to just one puzzling coincidence -- between the central evolutionary aim and a moral good. In the context of the plausibility game outlined above, this helps to raise realism's attractiveness.

Finally, brute luck surely lurks behind other cognitive faculties that are shaped by evolutionary forces, e.g. those used in mathematical reasoning. So there is a level playing field when it comes to this issue.



4. Concluding Thoughts
So that's it; that's Enoch's solution to the epistemological challenge. He agrees that there are still concerns, but thinks his response can help to increase the plausibility of robust realism. In the final section of his article, he considers whether his solution could generalise to other contexts in which similar epistemological challenges are faced.

Unfortunately, he doesn't say very much. He says that its generalisability will depend on whether an analogue of the "goodness of evolutionary aims" can be found in other domains.

Maybe I can say a few words. As noted in part two, mathematical Platonism is the other theory that clearly faces a similar challenge. But, in some ways, I think the challenge may be easier to meet in that domain than it is in the case of morality.

Why do I think this? Well, I suppose I have in mind the fact that physical reality seems to be structured in a mathematical way. This is certainly the assumption underlying the science of physics. And since we have causal interactions with that physical reality, the fact that our mathematical beliefs line up with mathematical truth is not particularly surprising.

Now there are gaps in this explanation. Certainly, the more abstruse aspects of number theory may have little connection with the physical structure of reality (I'll have to plead ignorance here). But that's not too worrying since, as Enoch noted, the correlation is relatively weak: not everyone is a good mathematician.

The question we need to consider is whether the response to the challenge in the case of mathematical Platonism is any better or worse than Enoch's response. It might be better in that the fact that evolutionary processes are subject to mathematical physical laws is less surprising than the fact (if it is a fact) that they are subject to moral laws. I certainly find the latter to be odder than the former, although I can't articulate the reason for my discomfort.

On the other hand, it may just be that I have found the analogy Enoch was alluding to.

Any thoughts?

