On Disability: Should We Be Ableists?

Remember when some members of the Society of Christian Philosophers lost their minds over Richard Swinburne’s claim that homosexuality is a disability (see here)? Part of their outrage was a response to the idea that disability is a bad thing, something to avoid, remedy, or cure. Apparently, that’s “ableism,” something we are called to see as bad or oppressive. But I don’t think so.

As I see it, human beings are creations with a nature: There is a human form that we all share, something that directs us and our bodily matter to certain ends. For example, it is a natural fact that humans have two ears. Our ears have a telos, which, obviously enough, gives us the actuated ability to hear at the normal, human level. If a human being becomes deafened, then he has a privation; that is to say, something went wayward in his natural development or within his body. His deafened ears are thus not a variation of human hearing, but a deviation, a sort of defect from how they’re supposed to be or how they’re supposed to function in virtue of his humanness. In his case, deafness is a bodily condition that impairs the natural function of his ears; and thus, he has a disability.

Notice that this account of disability does not depend on social or environmental conditions, measures for equity, or hurt feelings. What matters here is whether the natural state or function is impaired by a bodily condition. Not much else is relevant. Thus, on this account, even though there are deaf communities that have adapted well to their affliction, creating their own languages and cultures, which doubtless have their respective beauty, the idea that deafness is not a disability, or that deafness is merely a different way of being, is still wrong (see here and here). True, sometimes disability is a part of a person’s identity, as it is for some deafened persons (e.g., the culturally Deaf), but this is a social identity, which is always and necessarily subordinate to human identity. Consequently, neither deafness nor any other disability is a part of our human nature or identity. Of course, in saying this, we might hurt some feelings, and that always sucks, but to say something to the contrary, while still believing my depiction of human nature, is a false compassion. When we lie or cater to delusion, we only cooperate with and perpetuate falsehood, and that is contrary to right action.

In any case, given the above, to the extent that a condition is a disability, it is in need of correction, betterment, or assistance, preferably a cure. And why? Because goodness is found in living and functioning to our natural ends: It is good that humans can see, walk, hear, and think to the capacity and degree to which we are intended. Duh.

What baffles me the most about this outrage is that the persons who objected to Swinburne’s depiction of disability are themselves Christian. What the heck is that? I mean, seriously, if they are Christian, and not just progressivists draped in Christian gowns, they should believe that God created us, that the world is orderly and rational, that conditions such as blindness and deafness are negative consequences of the Fall, and that these conditions should be healed. I mean, heck, such healing is the sort of thing Jesus himself did – a lot. His acts of healing were not just miracles, mere attempts to wow his audience and suggest his divinity, no; they were also acts of kindness and mercy, which implies that such acts of healing are, in fact, good. I am therefore baffled to hear Christians balk at a fellow Christian who depicts disability in a negative light, as if disability were not, in fact, a physical evil.

Get a grip, people. Disability sucks.

35 Comments

  1. “Notice that this account of disability does not depend on social or environmental conditions, measures for equity, or hurt feelings.”

    What’s the account, exactly? Is it, “A disability is an impairment of a body part’s natural function”?

    If that’s it, what is an impairment, exactly? It’s true that your account has nothing explicitly environmental in it, but I wonder if it sneaks environmental stuff in implicitly. I think that will depend on what determines normal levels of ability – for instance, you talk about normal levels of hearing. That might be environmental, or social.

  2. Catholic Hulk,

    I’m a bit reluctant to reply, given previous exchanges, but I’ll give it a shot. I don’t believe that humans were *intended* for anything, but I don’t think that’s required for proper function, illness, disability, etc. In any case, I’ll just say that:

    1. It’s not the case that it’s always a good thing that a human being sees, walks, etc., normally. It’s very easy to find hypothetical scenarios in which it’s better for a person to have some sort of malfunctioning, and/or better for others that a person have some sort of malfunctioning, even if that’s of course not the most common situation in reality.
    In other cases, it would be neither good nor bad, but neutral.
    2. Even when, in a specific case, curing an illness, malfunctioning, etc., would be a good thing (the most common case), that does not imply that attempting to cure it by some means X would be a good thing. Sometimes (historically, pretty often) the attempted cure would be much worse than the disease. Sometimes, it’s not known which would be worse. Sometimes, the means X is not clearly effective at curing the illness based on the available info, but it’s extremely likely to cause suffering to the person on whom it’s used; sometimes, a lot of suffering.
    3. Even when curing an illness would be a good thing and there is a means that would bring it about without much suffering, it does not mean that there is a moral obligation on the part of the sick person or on the part of others to use such means and cure it. We don’t have a general moral obligation to bring about any good.

    In case none of the above conflicts with anything you said, I still want to make those points, because I think they are relevant in this context; otherwise, readers are likely to misunderstand your post.

    On that note, you said, “to the extent that a condition is a disability, it is in need of correction, betterment or assistance, preferably a cure.” But you don’t explain what you mean by “is in need,” and it’s not clear. Are you saying that someone has a moral obligation?

    Another case: “conditions such as blindness and deafness are negative consequences of the Fall, and that these conditions should be healed.”
    Here, the difficulty is the use of the passive voice in the moral claim. You say that they “should be healed.” But you don’t say who should heal them – if anyone. Are you claiming there is a moral obligation to cure them?

    • Angra,

      Concerning 1, you muddle the senses of goodness. It is always and necessarily good, that is, naturally good, for a person not to have any malfunction, to be free of disability. Now suppose a sudden environmental condition produced noise disabling to human ears. In this case, deaf or deafened people would have an advantage over the rest, which is a sort of good for those deafened people, but it is still not good (naturally) for them to be deaf or deafened. Their deafness is still a bad thing qua human nature, even if it gives an advantage within this environment.

      Concerning 2, the claim is just that they should be cured; I didn’t suggest that any means to cure them is good or legitimate.

      Concerning 3, yes, other things being equal, there is a moral obligation to restore ability or repair the dysfunction, because that is what is good for us as human beings. We ought to do the good; and what is good is the good for us qua human beings. We can exercise prudential judgement about the timing of this obligation to restore (as in the scenario of 1), but that only postpones the effort to fulfill our obligation. Of course, nothing about this suggests we can force repair or restoration onto a person, but all other things being equal, the disabled person is still obliged to seek repair or restoration.

  3. Catholic Hulk,

    Concerning 1, I’m talking about the sense that has a chance of being relevant in this context. If there is some sense of “good” in which it’s always good not to have any malfunctioning, that sense does not seem relevant. In some cases, it’s still bad for the person not to have some malfunctioning (my reply to 3 gets to the crux of the matter).

    Concerning 2, the claim that “they should be cured” is a passive-voice moral claim, and the context does not allow a reader to ascertain who is supposed to have the moral obligation to cure. In other words, you say that they should be cured. My question would be: by whom? Or, more to the point, who has a moral obligation to cure them?

    Concerning 3, I do not agree. I don’t think there is an obligation to do what’s good for us as humans, even if there is a sense of “good” in which it’s always and necessarily good not to have any malfunctioning.
    This is a matter of how one tests general moral theories (i.e., theories that specify a range of conditions in which a person would have an obligation, would be acting in a praiseworthy manner, etc.). I would say that, in general, a proper way to test moral theories is to construct hypothetical scenarios in which one’s own moral sense yields clear verdicts, and see whether the theory passes the test. If you have another way of testing general moral theories, I would ask what it is.
    Now, the general theory you implicitly propose holds that:

    a. It is always and necessarily good, that is, naturally good, for a person not to have any malfunction.
    b. Other things being equal, there is a moral obligation to restore ability or repair the dysfunction, because that is what is good for us as human beings.

    As long as the “other things being equal” part is not intended in a very restrictive way that in practice precludes the theory from making any predictions (rendering it untestable, but also of no use), it seems clear to me that your theory fails the test.

    Here’s a test:

    Let’s say that Alice has no libido. In fact, she feels no sexual attraction whatsoever towards anyone or anything. Surely, something is malfunctioning in some part of her brain/mind. Yet it may well turn out that she has no interest in fixing that, even if she could. Perhaps there is a drug that can effectively deal with it (set this in the future if needed), and it’s very inexpensive or even free of charge. But she simply does not want to have any sexual attraction, which she considers would be a distraction from other things she likes to do, like learning more philosophy (for instance).

    It’s pretty clear to me that she’s not doing anything immoral by choosing not to seek treatment, all other things equal (e.g., no aliens nuking the planet if she fails to seek treatment).

    But that’s merely one example, out of a zillion. Generally speaking, it seems clear to me that there is no moral obligation to fix one’s own illnesses just for its own sake or because it’s in some sense good, even if in some cases there is such obligation (e.g., not to risk spreading an infectious disease).

    Granted, you might say your moral sense yields a different verdict in this example and the others.

    If so, I guess we’ll just disagree because we have clearly different moral senses. I would still ask readers to use their own moral senses to assess the scenarios in question and test your general theory in cases in which the verdict of their own moral senses is clear, rather than accept it untested.

    • Concerning 2, the moral obligation to heal would fall upon the most appropriate person within the situation. Here I use the principle of subsidiarity to guide me. If the person himself can cure it, then the obligation falls on him. Otherwise, it falls to the family. If they can’t do it, then to the community, charity, church, or state.

      Concerning 3, I’m unsympathetic to your so-called moral sense. I suspect that your “sense” depends upon unstated, antecedent beliefs that need to be explicitly stated and justified. So I ask: why, on your view, is she not doing anything immoral? I noticed that, in your analysis, you did not consider whether Alice has an obligation independent of what she wants and likes to do. That’s no small oversight; your test focuses on much more liberal concerns than my own.

      As for your example, I’d say she is obliged, because sexuality and sexual desire are an important part of human nature and experience. Of course, the strength of this moral obligation can be questioned, but I think that she does have an obligation to restore herself to proper, human function, if she can.

  4. Concerning 2, okay, that clarifies it. My answer is as in the other case: I reckon that there is no such general obligation of self-healing because it’s allegedly good for oneself, even if the means were available. There might be such an obligation because, say, one has obligations to others that one can’t meet without healing first. But that’s not usually the case.

    Concerning 3, your use of “so-called” suggests that you’re questioning whether we have a moral sense in the first place. But without that, we would have no access to moral knowledge.
    As for your question, “why is it the case that she is not doing anything immoral?”, I don’t have any particular reasons. I just reckon by my own sense of right and wrong (or moral intuitions, or conscience, or whatever one calls it) that it’s not immoral, simply by contemplating the scenario and not getting the verdict “immoral.” If someone claimed that it is immoral, then I would ask for reasons, and depending on the reasons, my moral sense might yield a different verdict.

    If you believe that that is not a proper way of testing general moral theories (be it some version of Thomism, or utilitarianism, or Christian morality, or whatever), I would ask how you go about testing general moral theories.

    “I noticed that, in your analysis, you did not consider whether Alice has an obligation independent of what she wants and likes to do. That’s no small oversight; your test focuses on much more liberal concerns than my own.”
    If you have any candidates, I would like to ask what they are.

    “As for your example, I’d say she is obliged, because sexuality and sexual desire are an important part of human nature and experience”
    It’s definitely not important to her. Or rather, it’s important to her not to have them.
    Sure, there are sexual matters that are morally important (e.g., it’s immoral for a person to rape others for fun, and generally just to rape others). But I can tell that by means of my own sense of right and wrong, which tells me also that Alice does not behave immorally. So, whatever sense of “important” you have in mind, it does not seem to create a moral obligation as far as I can tell.

    Here’s some further examples:

    a. Let’s say Bob chooses to have a small tattoo, because he likes the way it looks.
    A tattoo causes damage to the skin, and malfunctioning of skin cells. According to the theory you seem to be proposing, tattoos would always be immoral, regardless of size, location, etc. (which would perhaps affect how immoral they are), when they’re done just because the person likes the way they look, out of their own free will, or probably even whenever they’re not done to prevent some evil.
    But that again does not pass the test for me. Bob did nothing wrong, unless he had some further obligation to others (e.g., he had agreed not to have tattoos in exchange for something), but in general, there seems to be no such obligation.

    b. Let’s say that Mary decides to take a contraceptive pill, because she does not want to get pregnant, but she wants to have sex. All other things equal, there’s nothing immoral about that. Sure, Catholics usually disagree, but I don’t see any good reason to think they’re right, and my conscience yields a “not immoral” verdict. Surely, that causes a malfunctioning of the reproductive system, even if reversible. But why would she have an obligation not to take the pill?
    Saying that sexuality and/or reproduction is an important part of human nature would lead to the question: “Important in which sense, that is relevant here?”
    For Mary, it’s important to have sex. It’s also important not to reproduce (at least for now, or anymore, or ever, etc.), so the pill helps her achieve something that is important to her.

    Granted, you could say that tattoos and contraceptive pills are usually immoral. But as before, my sense of right and wrong yields a different assessment.

    • “I reckon that there is no such general obligation of self-healing because it’s allegedly good for oneself, even if the means were available.”

      CH, say that someone decides to live a life of celibacy, say a priest or nun giving themselves over wholly to God. If they have a low libido, do they still have an obligation to restore it, making sure they are inclined against their vow of celibacy? Or does the promise made to God change the situation, in a way that makes it different from Angra’s scenario?

    • To add: if so, does that mean that there is something wrong (to some degree) with making a vow of celibacy? I would see that as like promising to God that you won’t eat or won’t drink.

  5. Billy,

    A usual Catholic position – which seems to be Catholic Hulk’s – is that there is a general moral obligation to fix things in our bodies that are malfunctioning, all other things equal, and also a similar obligation not to cause any parts of our bodies to malfunction. However, that’s “all other things equal”, so there is room for exceptions (e.g., they generally hold it’s not immoral to be vaccinated even if that injures the skin), and also it does not include refraining from using some function, without causing damage (e.g., refraining from having sex).
    Still, in my assessment, that usual Catholic stance is clearly false, as the examples I’ve given (i.e., tattoos, contraceptive pills, a person with no sex drive) show.

  6. Hi Angra,
    Let me ask you some methodological questions. Your position here seems to be that moral theories are ‘clearly false’ if they (seem to) have implications in specific cases that are counter-intuitive. Or, at least, implications that your own conscience or intuitions tell you are not acceptable. I don’t have any problem with your appeal to intuitions or reflective equilibrium (or whatever). However, all moral theories–including your own, I’m sure, if you have one–have implications that many people would regard as counter-intuitive. Is utilitarianism ‘clearly false’ because it implies that very thinly distributed happiness over a huge number of people could justify torturing one person for a lifetime?

    Your own claim that there’s nothing wrong with getting a small tattoo is probably only philosophically defensible under some general theoretical principle that will have some counter-intuitive implications of its own. Maybe you’re relying on something like Mill’s harm principle. Suppose that the tattoo someone just “likes” covers her entire body with images of people raping babies. No one ever sees it, because she’s always wearing her burqa and niqab, so there’s no effect on anyone else. Or what if I just “like” having my eyes removed and replaced with marbles? What about consensual cannibalism? Is there still nothing here that we might intuitively have some obligation or right to fix or undo if we can? These seem to be much like your tattoo example in principle; they differ only in degree.

    Is your view that all moral theories that have counter-intuitive implications are ‘clearly false’? In that case, you’re committed to an extreme skepticism or nihilism (which is itself counter-intuitive). Or do you just think that all theories with implications that you personally find to be counter-intuitive are ‘clearly false’? In that case, you’re just very arrogant (and there’s no way that your own moral principles or theories don’t have some implications that you’d admit to be counter-intuitive for you).

    Alternatively, you might just be appealing to some un-theorized collection of intuitions of yours. But why should that have any rational significance for others – for example, traditionalist Catholics, who can just point out that they intuit that contraception is clearly wrong, or people interested in constructing viable theories, who will take these conflicting intuitions as the beginning of a philosophical inquiry rather than the end of one?

    Anyway, the usual thing to do once we realize that our favorite theory has some counter-intuitive implications would be to look for some kind of reflective equilibrium. Maybe we have to tweak the theory a little or re-interpret or reject some intuitions. I don’t see why a Catholic couldn’t do that in the kinds of cases that you mention. (For example, maybe a small tattoo is only just-barely-immoral, or maybe it’s not really immoral at all, because it doesn’t have any significant effect on natural functioning. Obviously being blind or mentally retarded or losing a limb is a different matter.)

    The idea that blindness (for example) is very bad, something we should try to fix if we can, is a very intuitively compelling idea. Don’t you share that intuition? I mean, we can quibble about the details, but if you’re interested in reflective equilibrium, this seems like the kind of intuition that a reasonable theory might have to accommodate somehow. The reasonable strategy – or one reasonable strategy – would then be to look for ways to make sense of this very strong intuition. Look for ways to make it clearer or more precise, look for ways to formulate it so that it doesn’t have silly or immoral implications.

    But instead your immediate reaction is to start quibbling over the precise scope of “other things being equal” (or whatever). That’s not very interesting or useful. Any position in philosophy is open to niggling objections – e.g., there’s _some_ possible world where being paralyzed from the neck down would be in _some_ sense better than being able to move your limbs. You seem to be mainly interested in making a bunch of debater’s points.

    Why not try to offer something constructive? If you have some alternative theory about the normative status of health or natural functioning, you could explain the theory. If you have some alternative moral theory that you think has no counter-intuitive implications, you could explain that theory. Or if you agree with the basic point Hulk was making – that it’s bullshit to pretend that disabilities are really great, or not worse than healthy normal functioning – you could help to tighten up his argument a little. Or if you actually think there’s _no_ sense in which we should be trying to cure things like blindness or mental retardation (etc.) you could explain why you think that. These would be more interesting and valuable contributions.

    • Hi Jacques,

      While I agree that, for any moral theory, one can find many people who will find some of its verdicts counterintuitive, I don’t think this is a big problem, just as it’s not a big problem that many people will reckon by their own lights that evolution, or an Earth billions of years old, is very improbable. The epistemic sense of those people is malfunctioning, and similarly, the moral sense of many people malfunctions in some cases, so that alone would not be enough to be a problem.
      Granted, there is the issue of assessing one’s own moral sense; it’s trickier, but I think one can to some extent check for biases.

      That aside, I would say that when I assess utilitarianism by my own lights, I reckon it’s false – at least, any version that I’m familiar with and that makes actual predictions in specific scenarios, so that it’s testable. But the same goes for any known first-order moral theory. My position is that they’re all false. They’re all generalizations that may approach the correct verdicts in some or even many cases, but there are cases where they get it wrong.

      That alone would not make them useless, just as classical physics is a model that gets some things wrong (and I’d say almost certainly even present-day models), and it’s very useful. A theory can be false if taken as a whole, but still be a generally good approximation to the truth.

      In the case of moral theories, a potential hypothetical example would be as follows: let’s say that moral theory M1 passes the moral sense test in 999 out of 1000 cases in which our moral sense delivers a clear verdict (who counts as “ours” can be problematic, but leaving that aside for now), and there is a case C where the moral sense yields no clear verdict, but M1 does. I think it’s reasonable to tentatively say M1 probably got it right, barring other specific reasons to suspect otherwise.

      Alas, I don’t think actual moral theories are like that. I’m inclined to say they get it wrong much more often than that, and moreover, they do not seem to yield clear verdicts when our moral sense doesn’t. So, I reckon moral theories presented so far are not of much use.
      That’s unsurprising, for a number of reasons:
      For example, it took a very long time for physics to make any significant progress, even for the speeds and sizes of the inanimate objects we deal with in daily life; the human mind is a more complex object of study than those; psychology is just getting started; and on top of that, people care a lot about morality, and moral beliefs are influenced by religions and other ideologies (which creates ingroup/outgroup conflict). So it’s extremely difficult to come up with a general theory that is a good approximation. At this moment, I think the best method we have is just to use our moral sense directly, but it’s a problem when ideological commitments and/or indoctrination mess with it.

      “Your own claim that there’s nothing wrong with getting a small tattoo is probably only philosophically defensible under some general theoretical principle that will have some counter-intuitive implications of its own.”

      But that would be, in my assessment, putting the cart before the horse, so to speak: what we have are the specific assessments. General theoretical principles are generalizations based on them, and they are tested against individual cases. You would need a lot of specific and clear cases like the tattoo one in order to support a general theoretical principle, and if it fails in one of them, it is almost certainly false (there is the possibility of error, but it’s very low in clear cases if one has no specific reason to distrust the assessment); if it fails in more, it’s false.

      On the other hand, the moral assessment in the specific case does not need the general principle – on the contrary, it’s the proper way of testing proposed general principles.

      “Maybe you’re relying on something like Mill’s harm principle.”

      Not at all. I’m using my moral sense. It’s how humans have normally made moral assessments, in most cases, for tens of thousands of years.

      “Suppose that the tattoo someone just ‘likes’ covers her entire body with images of people raping babies. No one ever sees it, because she’s always wearing her burqa and niqab, so there’s no effect on anyone else.”

      When I reckoned that Bob did nothing wrong, I was thinking of standard cases, and talking about the purported obligation not to cause damage to one’s skin in all cases.
      Now, I said “unless he had some further obligation to others,” and after further consideration, perhaps I shouldn’t have said “to others,” but simply that he did nothing wrong unless there is some further reason why it’s wrong, other than the fact that the tattoo causes damage to the skin.

      With regard to your case, that’s not at all related to whether it’s a tattoo, or whether it causes damage to the skin. Suppose instead of a tattoo, she covers her entire body in paint with those images, and she keeps doing it all her life, without harming the skin. Her actions won’t be any better. If what she’s doing is wrong, it has to be for some further reason.

      Now, is it wrong?

      You don’t give me enough info, but it seems to me that someone does see the tattoo: whoever made it. Regardless, if it’s not a tattoo, your scenario does not allow me to understand her motivation and actions very well. Is she painting those images on her own body, by herself? How, and why? If she’s being forced to wear a burqa, it’s unlikely she could do that. If she’s not being forced, why is she promoting Islam?
      In any case, she’s an odd character, and I would say there probably is a risk that others will see it (“probably” being my guess at what a real situation like that would be like; you could construct scenarios with negligible risk). If there is no such risk, I don’t know whether her behavior is immoral. I would need more information. This is not a case in which my moral sense yields a clear verdict, perhaps due to insufficient info, since I can’t just use background info as I do in real cases, due to the extremely unusual setup.

      What is clear is that something is wrong with her brain/mind in the sense that it’s ill, malfunctioning, etc., but it’s not clear that it’s immoral.

      “Or what if I just ‘like’ having my eyes removed and replaced with marbles?”

      If you like that, you’re mentally ill.
      If you go through with it, I don’t think that’s immoral per se. It might be immoral if, say, your behavior will predictably cause much suffering to others who don’t deserve it. But then again, maybe it would not be immoral, because the sort of malfunctioning of your mind involved in doing that prevents you from being morally responsible for your actions. At any rate, it’s not immoral if, say, you know you’re the last person on the planet, no one else will ever contact you, etc. It’s just crazy.

      “What about consensual cannibalism?”

      That’s immoral on the part of the cannibal, in realistic cases and barring insanity of the sort that would preclude moral responsibility. (Of course, if you consider unrealistic hypothetical scenarios, it’s sometimes wrong and sometimes not, as with everything, as long as you don’t fix a motivation in the hypothesis. So, generally, please assume an “all other things equal” scenario.)
      It’s not immoral on the part of the victim. It’s insane.

      “Is there still nothing here that we might intuitively have some obligation or right to fix or undo if we can?”

      I didn’t say that. I said it was false that we always have a general obligation (even “all other things equal”) to fix our own bodies. I did not say we never have it. Still, your examples do not provide cases like that, because:
      1. In the rape of babies case, the matter has nothing to do with any bodily malfunctioning. If she has an obligation, it’s not related to how her skin is functioning, but to the message in it.
      2. In the case of having your eyes removed, you may have an obligation because of the pain you would inflict on others. Again, if you know (i.e., beyond a reasonable doubt) that you’re the last person on the planet, etc., I reckon you have no obligation not to remove your eyes, let alone to put them back (with a machine) if you already lost them. On the other hand, clearly something in your brain/mind is very seriously malfunctioning if you do that. But that does not entail immoral behavior.
      3. The cannibal has an obligation (all other things equal, etc.) to refrain from engaging in cannibalism, consent or not. But that’s not about the proper functioning of the cannibal’s own body (well, except his brain, but that’s not why it’s wrong, or not primarily, and to the extent it is, again, it’s because of danger to others).
      4. The victim of the cannibal is very probably doing nothing wrong, though he is seriously mentally ill. Perhaps, if he can be morally responsible, he is doing something wrong insofar as his actions will cause a lot of suffering to people who don’t deserve it, and he has no justification for inflicting it.

      These seem to be much like your tattoo example in principle; they differ only in degree.

      They do not seem that way to me. But in any case, our moral sense makes assessments on the basis of a zillion variables, many of which are generally not known to us, so if the verdict is different, I would say the cases are clearly different.
      Still, in the cases you present, there are clear disanalogies, as I’ve pointed out above.

      Is your view that all moral theories that have counter-intuitive implications are ‘clearly false’?

      Our moral intuitions are not infallible. But if a theory collides with them in cases in which my moral sense yields clear verdicts, and I see no good specific reason to doubt my moral sense (that there is conflict with a proposed theory isn’t a good reason), then sure, I would reckon it’s false. Clearly.

      In that case, you’re committed to an extreme skepticism or nihilism (which is itself counter-intuitive).

      Not at all. I’m committed to rejecting all present-day first order moral theories. A basic version of intuitionism might be on the right track, if we leave the metaphysical baggage aside, and we just go with the intuitions. But that’s epistemology, not a first-order ethical theory.

      Or do you just think that all theories with implications that you personally find to be counter-intuitive are ‘clearly false’?

      I use my own lights to test theories. I can’t jump out of my brain so to speak. But I can also look into the development of my moral intuitions and check for potential sources of biases and malfunctioning, to some extent. I’m still not infallible, but the exceptions to known moral theories (at least the ones I know about; if you have another one and want to ask about it, please let me know) are so ubiquitous that yes, I reckon they’re clearly false – except for those that are not testable.

      I would like to ask you: how do you go about testing a first-order moral theory?

      In that case, you’re just very arrogant (and there’s no way that your own moral principles or theories don’t have some implications that you’d admit to be counter-intuitive for you).

      But I do not have a moral theory or a set of explicit moral principles on which I base my moral assessments. I use my moral sense on specific situations.

      Alternatively, you might just be appealing to some un-theorized collection of intuitions of yours.

      I use my intuitions on specific cases, rather than appealing to a collection, but sure, that’s the way to go.

      But why should that have any rational significance for others–for example, traditionalist Catholics, who can just point out that (for example) they intuit that contraception is clearly wrong, or people interested in constructing viable theories, who will take these conflicting intuitions as the beginning of a philosophical inquiry rather than the end of one?

      Why should my assessment that the Moon Landing happened sway a Moon Landing conspiracy theorist who neither denies any observations nor is unaware of them, but interprets them differently?
      Clearly, his brain is not functioning properly. But that’s clear to me (and to many others), not to him.
      The Catholics in question seem to have a similar problem, due to the damage caused by Catholic indoctrination. I used to be indoctrinated like that. Perhaps, they can get out of there too: just think about the scenario without looking at it from the perspective of Catholic moral theories of any sort. But maybe some are too damaged. What do I know?
      At any rate, I wasn’t aiming to persuade people who really have such different intuitions. I made it clear that I appealed to the intuitions of readers. I wasn’t telling them they should believe me because I say so: instead, I asked them to make their own assessment.

      So, no, I don’t think the fact that I make the assessment should sway them. But they should realize Catholicism is false in so many ways, and make their own assessments correctly. Of course, I do not expect to persuade traditional Catholics. At most, I hope to convince one or two readers that the approach to morality proposed by them puts the cart before the horse. Also, hopefully, I’ll get one or two people to reckon that the general theory that there is a moral obligation of self-healing, because of the damage to one’s own body, is not true. (Then again, given how internet debates go, that’s a faint hope; it’s more likely that none of us will persuade anyone.)

      Anyway, the usual thing to do once we realize that our favorite theory has some counter-intuitive implications would be to look for some kind of reflective equilibrium.

      1. I don’t have a favorite theory. I was indoctrinated in Catholicism, but that’s long gone.
      2. What do you mean by “reflective equilibrium”?

      Maybe we have to tweak the theory a little or re-interpret or reject some intuitions.

      Maybe, but if so, then we realize the theory is false. It’s just that a false theory might still be a good approximation to the truth in many cases. I don’t think that any version of Catholicism is.

      I don’t see why a Catholic couldn’t do that in the kinds of cases that you mention. (For example, maybe a small tattoo is only just-barely-immoral, or maybe it’s not really immoral at all, because it doesn’t have any significant effect on natural functioning. Obviously being blind or mentally retarded or losing a limb is a different matter.)

      The “just barely immoral” hypothesis is still wrong, and they’re blaming people for things that aren’t immoral.
      The second hypothesis does avoid that particular example, yes.
      My response in that case would be to go with the other examples I presented – namely the person with no sex drive, and contraception.

      The idea that blindness (for example) is very bad, something we should try to fix if we can, is a very intuitively compelling idea. Don’t you share that intuition?

      Actually, I don’t share it.
      For the sake of others, perhaps there is an obligation, but it’s not clear.
      For example, if I’m blind, I increase the risk of accidents. It’s not my fault because there is no cure. But what if it were?
      Let me give you an example: would it be morally okay to go out walking with our eyes closed?
      No, it would not be. But then again, a blind person may have decades of experience dealing with her condition, and might not be causing a significant increase in any risks. So, perhaps not.
      I would say it’s intuitively unclear to me. I’m inclined to say that the increased risks to others are not compelling enough to impose an obligation. But it varies with the situation.

      Yet, if you’re the last person on the planet (no chances of others coming ever, etc.), you’re blind and you have a machine that can fix that, I find it counterintuitive to say that you would behave immorally if you choose not to fix your blindness. So, at any rate, if and when there is an obligation, it seems to be due to risks to others, not in terms of a purported self-healing obligation.

      Any position in philosophy is open to niggling objections–e.g., there’s _some_ possible world where being paralyzed from the neck down would be in _some_ sense better than being able to move your limbs. You seem to be mainly interested in making a bunch of debater’s points. Why not try to offer something constructive? If you have some alternative theory about the normative status of health or natural functioning, you could explain the theory.

      I do not have a theory, but I think that debunking theories that are generally used to blame people who don’t deserve to be blamed, by people who mistakenly believe the theory is true, is actually constructive. If I can get some people to also stop using theories and trying to use their moral sense instead, I would say that’s even more constructive, all other things equal.
      I mean, I think it’s constructive to the extent that anything we say on the internet can be constructive. It’s more likely that our words will fail to persuade people. There are those with a talent for persuading many, but alas, that’s nearly always not rational persuasion (e.g., many different stripes of leaders).

      Or if you agree with the basic point Hulk was making–that it’s bullshit to pretend that disabilities are really great, or not worse than healthy normal functioning–you could help to tighten up his argument a little.

      I don’t know if it’s bullshit. It’s a mistake, generally. But then again, I find Catholic Hulk’s argument to be an instance of similar harmful arguments, because they promote false theories, and are used to blame people who deserve no blame.
      People on the left who make some usual claims about disabilities are also wrong, but I usually wouldn’t risk engaging them, as civil discussion is extremely unlikely to follow, and I don’t like being demonized by enraged heaps of people.

      Or if you actually think there’s _no_ sense in which we should be trying to cure things like blindness or mental retardation (etc) you could explain why you think that.

      I do not believe we have a moral obligation, in general, to try to find cures for illnesses. At least, most of us have other obligations in our lives. All other things equal, it would be great if there were a cure for those conditions.
      By the way, not a cure, but here’s an example of science bringing some happiness into the world (related to this):

      https://www.youtube.com/watch?v=XSD7-TgUmUY

      These would be more interesting and valuable contributions.

      Yes, but I do not have the means to cure things like blindness or mental retardation. I’m better at debunking bad theories.
      That said, if you’re going for more valuable contributions, this sort of exchange is very probably not that, even when one is right. Neither you nor I have any good chance of making a big difference by talking on the internet, in my assessment, or even a bigger difference than we might achieve in other ways. But I don’t think I have an obligation to maximize the difference I make, and I’m no hero. 🙂

  7. Hi Angra,

    Fascinating answers. Now that I understand your position better I think it’s pretty irrational 🙂 First off, it seems to me that, as reflective beings, we’re hard-wired to seek general moral rules or principles underlying our intuitive judgments about cases.

    Suppose I find myself intuiting that it’s okay for people to get small tattoos, or to eat whatever kind of ice cream they prefer, or to decide for themselves whether to return phone calls… and so on. I also intuit that it’s not okay for people to run around swinging chainsaws, or to force other people to eat ice cream when they don’t want to, or to prevent others from answering phone calls when they do want to… and so on. If I’m an intelligent and thoughtful person, I’m going to wonder at some point _why_ cases in the first class are morally permissible, in my judgment, but those in the second are not. It’s going to seem very strange, or just unintelligible, that it’s some kind of brute fact that cases A, C, F and G are morally permissible while cases B, D and E are not. For one thing, I will surely have the _intuition_ that properties like moral permissibility are not fully explained by the fact that it’s chocolate ice cream I want to eat, or the fact that I want to eat ice cream rather than pizza (or whatever). I’ll naturally want to know what properties are common to cases in the first class only such that those ones differ in moral status from cases in the second class. (Would you deny that this is a natural and reasonable thought?)

    In addition, it’s psychologically unrealistic to claim that we can just have intuitions about cases without thereby naturally forming general moral beliefs. Imagine you see some kids torturing a cat on a Tuesday. Are you saying that you just intuit some proposition like “It’s wrong for those kids to torture that cat on this Tuesday” without quite naturally having a more general intuition like “It’s wrong to intentionally harm animals just for fun”? At the very least, I’d think that almost anyone would be prepared to accept (on intuitive grounds) the second proposition given exposure to a few such cases. Or are you saying that even when you do intuit such general principles you try to resist them and just retain more specific intuitions about cases? That seems psychologically very difficult. And why bother?

    Notice also that, if you grant we can intuit generalizations, we can also experience seeming incoherence between intuitions about cases. For a reflective person, it’s going to be very hard to avoid trying to resolve or understand these conflicts by constructing general principles. For example, I have the intuition that it’s wrong for the kids to torture the cat but I also have the intuition that similar behavior might be okay in a lab where scientists are trying to figure out how to cure a serious human disease. Now it might seem that my intuitions conflict: behavior of type B is both wrong and not wrong. I’m not going to rest with that result, since I like to have attitudes that seem rational to me. So I might think something like “B is wrong when the motivation for B is sadism but not when the motivation for B is to help people”.

    Is there a way for a thoughtful person to avoid that kind of thinking altogether?

    You say that you’re able to “check for potential sources of biases and malfunctioning, to some extent” and thus avoid skepticism. (I take it that’s your goal in this quotation?) But how can you reasonably believe that you’re able to do that unless you have some principled basis for distinguishing between biases and reliable intuitions, proper functioning and malfunctioning? Suppose you intuit that contraception is not immoral, and after “checking” somehow you conclude that this intuition is reliable and not the result of your own indoctrination into a false liberal ideology. How did you check? I guess you could try this: “My intuition that contraception is not wrong is reliable given that contraception is not wrong”, judging that it’s not wrong on the basis of your intuition that it’s not wrong. But that’s circular. And you know that Catholics can reason in a different, equally acceptable or unacceptable circle. Unless you have some independent reason for thinking that you are epistemically or morally superior to Catholics (and other people) you now seem to be rationally committed to skepticism.

    Theory might help here. If you had a theory, you might find out that (a) your theory coheres better with your intuitions than the Catholic theory coheres with Catholic intuitions, or (b) you’re able to answer questions about _why_ certain things are morally permissible and impermissible whereas an atheoretical Catholic (for example) has no answers to these questions, or (c) your theory is compatible with a reasonable explanation of _why_ your intuitions count as evidence for moral judgments whereas the other theory is not. These are ways in which you might reasonably regard your position as more rational than its rivals. (There are other ways too, I think, once you allow yourself a theory.) But if you have no theory I don’t think you can avoid skepticism once you reflect more carefully on your epistemic situation.

    I think the preceding gives you some sense of what I mean by “reflective equilibrium”. (Sorry, I thought that was familiar jargon.) We test principles and theories by reference to intuitions, among other things, with the aim of finding some kind of overall coherence. There’s no reason to always give greater weight to judgments about cases, since those judgments may be less plausible (intuitively, even) than certain generalizations. In trying to find equilibrium we may sometimes end up rejecting intuitions in favor of principles–perhaps because those principles enable us to explain and justify a wider range of intuitions. We don’t just say that the theory must be false because it conflicts with an intuition, or that the intuition must be unreliable because it conflicts with a favorite theory. Thus, the mere fact that a theory has some counter-intuitive implications is not a good reason for concluding that the theory is “clearly false”. Or not that it’s _simply_ false, anyway. A more reasonable attitude will often be that the theory is an approximation of the truth (as you seem to allow is possible). So then we try to refine the theory so that it’s a better approximation. This seems more reasonable and fruitful than simply rejecting all theories wholesale. (The method doesn’t require you to “jump outside your own brain” or use standards other than your own.)

    I’m puzzled by this:

    “If I can get some people to also stop using theories and trying to use their moral sense instead, I would say that’s even more constructive, all other things equal.”

    I guess you mean that they should use the “moral sense” that they _would_ be using were it not for their various ideological biases, the “moral sense” that you take yourself to have given your own lack of ideological bias. A few worries about this:

    (1) There seems to be no reason for believing that any natural or pre-ideological moral sense exists. There seems to be good reason for believing that everyone acquires moral intuitions through the same kind of social processes of conditioning and indoctrination. You have moral intuitions which are strangely consistent with a highly specific liberal moral code that everyone in our society has been taught from pre-school onwards (by the media, the schools, social pressure). I guess you could say that you were just lucky to be born in a society where natural and non-ideological intuitions are finally being expressed and encouraged. But this would be implausible for reasons I’ve noted above. Or you could say that you yourself acquired these moral intuitions in some way that had nothing to do with your being acculturated within this kind of society, with its liberal moral code, so that it’s just a coincidence; that too would be implausible though.

    (2) If there is that kind of moral sense, it seems likely that it operates contrary to your own moral intuitions, that you’re the one who suffers from ideological distortion. In the vast majority of societies throughout history, and in much of the world today, we find lots of basic moral judgments inconsistent with the liberal moral code of our own society–e.g., “sexism”, “racism”, “homophobia”, etc. An analogy: the fact that almost everyone, at all times, has been heterosexual is good evidence that the natural or pre-ideological “sexual sense” of most people is heterosexual, not homosexual. Perhaps on reflection you should be more concerned that you’re the one who may be harming people by passing false moral judgments. Or your goal should be to get people _not_ to rely on any natural pre-theoretical moral sense; but then in order to make that a reasonable goal you’d need to offer some kind of theory that could serve as a better basis for moral judgments.

    (3) Moral theories can’t be isolated from other kinds of theories. Catholics accept general moral principles because they have a certain theory about human nature and origins, etc. And given those other theoretical beliefs (often based on non-moral intuitions) it makes sense for them to accept this kind of moral theory and use it when deciding how to live their lives. So if you want them to “stop using theories” in the domain of morality without being just stupid or irrational, you’ll have to convince them to change their theories about lots of other things. It won’t be enough to convince them that the non-moral aspects of Catholicism (for example) are false or unreasonable. You need to offer some other non-moral theory that’s at least as plausible, and which does not also make it rational to accept the kinds of moral judgments you think are false and harmful. And one possibility–actually a pretty likely outcome, I suspect–is that if you can persuade people to give up such non-moral theories, they’ll naturally begin to doubt that their “moral sense” is (a) reliable or (b) compatible with whatever non-moral theory you’re proposing. For example, if the non-moral theory you get them to accept is strictly naturalistic, believing that kind of theory will (probably) make it reasonable to regard our own moral intuitions as illusions; so this may lead to a general undermining of moral beliefs and motivation (as we can observe, I think, in the history of the west in the last few centuries and especially the last few decades). So the net result of your strategy might just be that people stop caring about morality, which might be at least as harmful as the use of moral theories you don’t like.

    And this also seems puzzling:

    “So, no, I don’t think the fact that I make the assessment should sway them. But they should realize Catholicism is false in so many ways, and make their own assessments correctly.”

    Why _should_ they realize this? I think in order to explain why they should–i.e., why they should realize it as rational thinkers–you’ll need to appeal to general principles about truth or reasonableness or evidence or knowledge. It won’t be enough to merely say (for example) that it’s not true that Jesus was God incarnated, or that it’s not reasonable to believe that He was, etc. But the situation with epistemological theories is just like the situation with moral theories. All general principles about truth or rationality seem to have counter-examples. And even your own epistemic claims above–e.g., that we should regard as false any theory that is sufficiently counter-intuitive–are open to those kinds of objections. Do you have a double standard here? Do you think it’s okay to accept epistemic theories despite counter-intuitive implications, but not moral theories? Or are you saying instead that people (rationally/epistemically) should hold certain beliefs even though we have no way of explaining that obligation in terms of general principles?

    • Hi, Jacques

      First off, it seems to me that, as reflective beings, we’re hard-wired to seek general moral rules or principles underlying our intuitive judgments about cases.

      I don’t know that we’re wired specifically for that. We’re wired to look for patterns, but I’m not sure we’re wired specifically to look for patterns in moral judgments. Moreover, we don’t always look for patterns consciously. We just do. And often, we get it right. Sometimes, we overgeneralize. Even Newton’s physics went too far in its generalizations. So do present-day models, at least when someone claims they hold in all cases.

      Suppose I find myself intuiting that it’s okay for people to get small tattoos, or to eat whatever kind of ice cream they prefer, or to decide for themselves whether to return phone calls… and so on. I also intuit that it’s not okay for people to run around swinging chainsaws, or to force other people to eat ice cream when they don’t want to, or to prevent others from answering phone calls when they do want to… and so on. If I’m an intelligent and thoughtful person, I’m going to wonder at some point _why_ cases in the first class are morally permissible, in my judgment, but those in the second are not. It’s going to seem very strange, or just unintelligible, that it’s some kind of brute fact that cases A, C, F and G are morally permissible while cases B, D and E are not. For one thing, I will surely have the _intuition_ that properties like moral permissibility are not fully explained by the fact that it’s chocolate ice cream I want to eat, or the fact that I want to eat ice cream rather than pizza (or whatever). I’ll naturally want to know what properties are common to cases in the first class only such that those ones differ in moral status from cases in the second class. (Would you deny that this is a natural and reasonable thought?)

      First, even if it were okay because it’s ice cream, that would be a generalization from a specific instance. But yes, that seems counterintuitive. You’re already generalizing, looking for a pattern, considering other cases even unconsciously, so that does not look like the pattern.
      Second, when you think about it a little bit more, you’ll realize it will bottom out at some point. The question is where. But that’s particularly difficult to figure out. Moreover, if we go too far, there are the problems of vagueness and slight differences among people. Our respective moral senses will be very similar if they work properly, but when it comes to scenarios very different from our daily lives, sometimes there will be no fact of the matter: we’ll have no answer, or we’ll go in different directions, and there will not be a way to resolve the problem.

      In addition, it’s psychologically unrealistic to claim that we can just have intuitions about cases without thereby naturally forming general moral beliefs.

      But that’s not what I’m claiming. What I’m saying is that we have intuitions about more specific cases, and that’s the way to test more general theories.
      Now, even the more specific cases are not fully specified (there is some generality to them; we just don’t intuitively consider all the variables as relevant), and we also tend to intuitively look for patterns. So while I don’t know that there’s a specialization in the moral case, we make general hypotheses intuitively if we have time to dedicate to that (not if we’re, say, struggling every day to survive). But the way to test those more general hypotheses is, again, to go back to the more specific cases, which is where we do have a system that yields intuitive verdicts.

      In addition, it’s psychologically unrealistic to claim that we can just have intuitions about cases without thereby naturally forming general moral beliefs. Imagine you see some kids torturing a cat on a Tuesday. Are you saying that you just intuit some proposition like “It’s wrong for those kids to torture that cat on this Tuesday” without quite naturally having a more general intuition like “It’s wrong to intentionally harm animals just for fun”? At the very least, I’d think that almost anyone would be prepared to accept (on intuitive grounds) the second proposition given exposure to a few such cases. Or are you saying that even when you do intuit such general principles you try to resist them and just retain more specific intuitions about cases? That seems psychologically very difficult. And why bother?

      Yes, one tends to seek patterns and generalize, and exposure to cases and testing is probably something one does unconsciously sometimes, as part of the pattern-seeking process. One can do further testing consciously, looking for other cases of torture for fun, to see whether the generalization passes the test. It seems this one does.

      Notice also that, if you grant we can intuit generalizations, we can also experience seeming incoherence between intuitions about cases. For a reflective person, it’s going to be very hard to avoid trying to resolve or understand these conflicts by constructing general principles. For example, I have the intuition that it’s wrong for the kids to torture the cat but I also have the intuition that similar behavior might be okay in a lab where scientists are trying to figure out how to cure a serious human disease. Now it might seem that my intuitions conflict: behavior of type B is both wrong and not wrong. I’m not going to rest with that result, since I like to have attitudes that seem rational to me. So I might think something like “B is wrong when the motivation for B is sadism but not when the motivation for B is to help people”.

      But there is no conflict between the intuitions. Those are different behaviors. Behavior includes motivation (if you use another terminology, pick your terms, but the facts don’t change).
      If your generalization was that it’s always wrong to inflict pain, your scientists case (if you really intuit it’s okay) presents a counterexample, so that was an overgeneralization (unless you have specific reasons to doubt your intuitions about the scientists). But if your generalization is that it’s always wrong to inflict pain for fun, then the scientists case does not falsify it.

      Is there a way for a thoughtful person to avoid that kind of thinking altogether?

      Not that I know of. But I would never suggest trying to avoid that. Testing our generalizations is normal and fine. It may be useful. It’s definitely not in conflict with my position at all.

      You say that you’re able to “check for potential sources of biases and malfunctioning, to some extent” and thus avoid skepticism. (I take it that’s your goal in this quotation?) But how can you reasonably believe that you’re able to do that unless you have some principled basis for distinguishing between biases and reliable intuitions, proper functioning and malfunctioning?

      Well, suppose that I have a certain moral intuition on a matter, but it’s something that is in line with traditional Catholic teaching, and an intuition that other people who are not traditional Catholics generally do not seem to have. That would make me suspect that, perhaps, my moral sense was damaged on the matter due to Catholic indoctrination. Of course, I can’t jump out of my mind so to speak, so I already used my own lights to conclude that Catholicism is not a proper source of morality (I used not only my moral sense, but also my assessment, on non-moral matters, that Catholicism is generally not a proper source of beliefs, and that the procedure of trying to get to truth (moral or not) from an allegedly revealed source is a bad method).

      I would have to consider the matter more carefully, consider other cases of people acting on similar motivations, etc., and see whether my intuition eventually changes or not. If it doesn’t, then perhaps the coincidence with Catholic thought was just a coincidence. Or maybe my moral sense is so damaged that I can’t appeal to healthy parts of it to try to correct the problem. That would be bad, but usually not the case.

      Suppose you intuit that contraception is not immoral, and after “checking” somehow you conclude that this intuition is reliable and not the result of your own indoctrination into a false liberal ideology. How did you check? I guess you could try this: “My intuition that contraception is not wrong is reliable given that contraception is not wrong”, judging that it’s not wrong on the basis of your intuition that it’s not wrong. But that’s circular. And you know that Catholics can reason in a different, equally acceptable or unacceptable circle. Unless you have some independent reason for thinking that you are epistemically or morally superior to Catholics (and other people) you now seem to be rationally committed to skepticism.

      I wasn’t indoctrinated in a liberal ideology, so that’s my way of checking. I was only indoctrinated in Catholicism, and was actually told that contraception was wrong, and I used to believe it unreflectively because it was the Catholic position. So, I’m pretty sure it wasn’t because of indoctrination in a liberal ideology.
      In general, it’s a matter that one needs to test on a case-by-case basis. Granted, there might be cases in which the brain malfunctioning is so extensive that one can’t get out of it by resorting to non-damaged parts. Such is life. But that’s not likely to be the case.

      And I’m not committed to skepticism. I managed to get out of Catholicism. Others might too. But what if someone is too damaged?
      Well, then such is life.

      Theory might help here. If you had a theory, you might find out that (a) your theory coheres better with your intuitions than the Catholic theory coheres with Catholic intuitions, or (b) you’re able to answer questions about _why_ certain things are morally permissible and impermissible whereas an atheoretical Catholic (for example) has no answers to these questions, or (c) your theory is compatible with a reasonable explanation of _why_ your intuitions count as evidence for moral judgments whereas the other theory is not. These are ways in which you might reasonably regard your position as more rational than its rivals. (There are other ways too, I think, once you allow yourself a theory.) But if you have no theory I don’t think you can avoid skepticism once you reflect more carefully on your epistemic situation.

      But the assessment of which theory coheres better is also an intuitive epistemic probabilistic assessment. Your previous claim was “And you know that Catholics can reason in a different, equally acceptable or unacceptable circle. Unless you have some independent reason for thinking that you are epistemically or morally superior to Catholics (and other people) you now seem to be rationally committed to skepticism.” That’s not a good argument, and for that matter, a mirror argument yields the same problem for you, because they reckon that their theory coheres better, so how is your epistemic intuition better than theirs? How do you know, without an independent source?

      I think the preceding gives you some sense of what I mean by “reflective equilibrium”. (Sorry, I thought that was familiar jargon.)

      It is, but it’s used in more than one sense.

      We test principles and theories by reference to intuitions, among other things, with the aim of finding some kind of overall coherence. There’s no reason to always give greater weight to judgments about cases, since those judgments may be less plausible (intuitively, even) than certain generalizations.

      Sure, sometimes that happens. But in the moral case, our moral sense does seem to be one that yields judgments about cases, and we generally don’t have alternative methods that appeal to other, stronger intuitions, so that’s the usual and proper way of testing theories. Even so, of course we can have reasons to suspect that our moral sense is malfunctioning sometimes (e.g., the Catholic indoctrination example). I have no problem with that. Remember, I said I would test theories against intuitions, but in order to conclude that they’re false, it has to be a case in which one does not have specific reasons to suspect the moral sense is malfunctioning.

      We don’t just say that the theory must be false because it conflicts with an intuition, or that the intuition must be unreliable because it conflicts with a favorite theory.

      But I don’t say that. I say generally first-order moral theory is tested against intuitions in more specific cases, because we’re comparing a generalization that comes from secondary sources (either generalizing from cases, or worse, some alleged revelation, etc.) with our judgments about cases where we can use our moral sense.

      Thus, the mere fact that a theory has some counter-intuitive implications is not a good reason for concluding that the theory is “clearly false”.

      Sure, if we have no good reason to think the moral sense is failing in those particular cases.

      A more reasonable attitude will often be that the theory is an approximation of the truth (as you seem to allow is possible).

      But an approximation to the truth may also be clearly false. It’s just not so far away from the truth in some cases. Those things are not incompatible. But in the case of Catholic theory, it’s actually not a good approximation.

      So then we try to refine the theory so that it’s a better approximation. This seems more reasonable and fruitful than simply rejecting all theories wholesale.

      Sure, if the theory can be refined. But if the central tenet is false, then that’s that. Maybe you can come up with some other theory with some similar features, no problem.

      I guess you mean that they should use the “moral sense” that they _would_ be using were it not for their various ideological biases, the “moral sense” that you take yourself to have given your own lack of ideological bias. A few worries about this:

      No, the moral sense would already be an improvement. For example, a Christian may think something is wrong because Jesus said so, instead of trying to figure out whether it’s wrong by contemplating scenarios. One improvement would be: assume we don’t know anything about Jesus, and try to think about right and wrong.
      Maybe his indoctrination has already damaged his moral sense and he’ll have the wrong intuition. But there are plenty of cases in which it would not fail. Moreover, there are cases in which, by thinking about many cases, the incorrect intuitions will end up going away.
      And what happens when it’s not fixed?
      Then it’s not. But it’s an improvement at least.

      There seems to be no reason for believing that any natural or pre-ideological moral sense exists. There seems to be good reason for believing that everyone acquires moral intuitions through the same kind of social processes of conditioning and indoctrination.

      While our moral sense develops in a social environment (we are, after all, social animals), the claim that it is wholly the product of conditioning and indoctrination is in my assessment improbable, and if true it would support a moral error theory (whether epistemic or substantive).
      But that’s another matter, and this is taking too long.

      You have moral intuitions which are strangely consistent with a highly specific liberal moral code that everyone in our society has been taught from pre-school onwards (by the media, the schools, social pressure).

      That would be curious if true, though it’s extremely improbable. For example, I reckon that it would be epistemically irrational on my part (or on the part of at least nearly all left-wingers) to believe that Jenner is a woman. That’s more than enough to get me vilified, demonized and completely condemned in most leftist venues. And it’s merely an example. Want to talk about massive immigration from predominantly Muslim countries? (And so on.)
      Generally, rightists tend to classify me as a leftist, attributing to me plenty of beliefs I do not have. Leftists tend to classify me as a rightist, attributing to me plenty of beliefs I do not have. Such is life.

      I guess you could say that you were just lucky to be born in a society where natural and non-ideological intuitions are finally being expressed and encouraged.

      No, I was not. I was born in traditional Catholicism. I managed to get out.

      But this would be implausible for reasons I’ve noted above. Or you could say that you yourself acquired these moral intuitions in some way that had nothing to do with your being acculturated within this kind of society, with its liberal moral code, so that it’s just a coincidence; that too would be implausible though.

      It would be. I guess leftists might also argue that it’s improbable that I just got my rightist moral code by luck.
      The truth is, I do not have any such ideological match, which is part of the reason I get condemned and demonized by many on the left and right when they get to know some of my beliefs. Such is life.

      If there is that kind of moral sense, it seems likely that it operates contrary to your own moral intuitions, that you’re the one who suffers from ideological distortion. In the vast majority of societies throughout history, and in much of the world today, we find lots of basic moral judgments inconsistent with the liberal moral code of our own society–e.g., “sexism”, “racism”, “homophobia”, etc.

      Many beliefs were false because they were based on false non-moral beliefs. That’s not a failure of their moral sense; the problem is the non-moral beliefs on which they were basing their judgments.
      In many other cases, yes, the moral sense was damaged, but it was damaged precisely by indoctrination in religion or some other ideology, and one can think about it and try to figure out where the failure lies (i.e., the unreliable and bad source).

      (3) Moral theories can’t be isolated from other kinds of theories. Catholics accept general moral principles because they have a certain theory about human nature and origins, etc. And given those other theoretical beliefs (often based on non-moral intuitions) it makes sense for them to accept this kind of moral theory and use it when deciding how to live their lives.

      I don’t agree it makes sense entirely, but it’s less clearly absurd based on those other beliefs. But then again, some of those other beliefs are themselves absurd. Obviously (it should be obvious), Jesus did not walk on water. They should realize that. No philosophical argument for God (even if successful) would make such events probable.

      So if you want them to “stop using theories” in the domain of morality without being just stupid or irrational, you’ll have to convince them to change their theories about lots of other things.

      I don’t agree, but I readily concede, after decades of online exchanges, that persuading Catholics, or Christians generally, or Muslims, or Marxists, or other sorts of leftists or activists, or… nearly any person defending an ideological stance on the internet, is beyond my capabilities.

      It won’t be enough to convince them that the non-moral aspects of Catholicism (for example) are false or unreasonable.

      It should be, actually. They should realize that even on a theistic framework, some of those beliefs are not rational to hold. But again, I do not claim I have the capability to persuade them.

      Why _should_ they realize this? I think in order to explain why they should–i.e., why they should realize it as rational thinkers–you’ll need to appeal to general principles about truth or reasonableness or evidence or knowledge.

      Purely as an example, they should make their own assessment and conclude that willingly following OT laws in Ancient Israel (stoning people, etc.) would have been immoral. In most cases, even by their own moral senses they would reckon that, which makes them feel uncomfortable.
      That’s not to mention non-moral issues (e.g., of course Jesus did not resurrect or walk on water, and they should reckon that, just as Moon Landing conspiracy theorists should reckon that the Moon Landing wasn’t a hoax).

      Or are you saying instead that people (rationally/epistemically) should hold certain beliefs even though we have no way of explaining that obligation in terms of general principles?

      Clearly, just as people clearly should not believe in Moon Landing conspiracy theories, or in Young Earth Creationism, etc.; one does not need a general epistemic theory to realize that that’s true. And yes, they disagree. It happens.

  8. A few quick questions before I try to get into this further. You write:

    “What I’m saying is that we have intuitions about more specific cases, and that’s the way to test more general theories.”

    Okay, but you also say that in your view _all_ moral theories are false. And as far as I can tell, the reason is simply that all such theories have implications that are (to you) counter-intuitive. Or that they have too many counter-intuitive implications. Is that right? In that case, you’re saying that we test theories by comparing them with intuitions _and_ that when we do so, we find that every theory put forward so far turns out to be unacceptable. And given this reasoning you reject all moral theories.

    Is that what you’re saying? And yet, at the same time, you appear to grant that some “general hypotheses” are also intuitively plausible. So in trying to find “patterns” or achieve coherence between our various intuitions, we are going to be trying to fit general principles together with judgments about cases (among other things). And you yourself do this, right? But then how is that different from constructing a moral theory? Are you saying that when you do this you just _never_ achieve any kind of consistent overall view which includes general principles–but you nevertheless feel that your position is more rational than the alternatives? That seems very weird to me. Or are you saying that you _have_ achieved some consistent overall view of this kind, but it’s not a “theory”? Or are you saying that it is a “theory” but that, unlike all the other ones, yours is perfectly internally coherent and has no (or few) counter-intuitive implications? Sorry, I’m just very confused about what you’re saying.

    And I remain equally confused about why the Catholic-type position that Hulk is defending is supposed to be “clearly false”. My impression was that you were rejecting his position because, in your view, its principles have counter-intuitive implications. You appear to deny this:

    “But I don’t say that. I say generally first-order moral theory is tested against intuitions in more specific cases, because we’re comparing a generalization that comes from secondary sources (either generalizing from cases, or worse, some alleged revelation, etc.) with our judgments about cases where we can use our moral sense.”

    Explain this to me again, then. Suppose that Hulk uses his moral sense (or what he takes to be his moral sense) to conclude that blindness is something we should try to cure. He has similar intuitions about other cases, like paralysis or retardation. Now he generalizes: “When people have some condition C that prevents or significantly disrupts natural functioning we should try to get rid of C”. He finds that this generalization fits well with most of his “intuitions in more specific cases”, though he’s not entirely sure what to say about small tattoos. I say this is not a big deal. He can reasonably treat the handful of problem cases you mention as puzzles, not counter-examples; or he can just expect that he’ll refine his principles a little bit. Are you rejecting his position because you just assume from the outset that his moral sense must be malfunctioning, because he’s Catholic?

    Finally, a comment about the “mirror argument”:

    “That’s not a good argument, and for that matter, a mirror argument yields the same problem for you, because they [e.g., Catholics] reckon that their theory coheres better, so how is your epistemic intuition better than theirs? How do you know, without an independent source?”

    It’s not necessarily true that they do think this, or would think it on reflection. It could well turn out that, after getting into the details, many Catholics would recognize that their theory is less coherent than mine; or I might realize that mine is less coherent than theirs. Logic can be a neutral or shared standard that people with different moral schemes or world-views can appreciate. Catholics might have just the same logical standards as non-Catholics. Of course, some people might not care as much about coherence as others, but that’s another matter. If I realized that their position really _was_ just as coherent by my standards as my own, and just as good by any other epistemic standards I care about, that _would_ incline me to skepticism. You dismiss this point about disagreement without explanation. Why is it “not a good argument”? Disagreement of this kind is never a good reason for skepticism?

  9. Oh, and a few other things too 🙂

    “But then again, some of those other beliefs are themselves absurd. Obviously (it should be obvious), Jesus did not walk on water. They should realize that. No philosophical argument for God (even if successful) would make such events probable.”

    You can just assert, however many times you like, that this “should be obvious” or that no argument “would make such events probable”. This is not to explain why it’s improbable, let alone “absurd”, let alone how any rational person is supposed to be able to just intuitively know that it is without having to appeal to any general principles of epistemic rationality. It’s really _not_ obvious to lots of apparently sane and intelligent and well-informed thinkers that Jesus didn’t walk on water. It’s certainly not obvious to _me_ that no argument for God could make such events probable.

    As it happens, I think it’s not especially improbable that Jesus walked on water. As far as I’m aware, I think that because I think the probability of theism is pretty high quite apart from any Christian claims or stories, and I figure that if theism is true God would probably need to be incarnated as a human being at some point… and so on. So it appears to me (in the absence of defeaters, as far as I know) that some philosophical arguments actually do make this kind of thing somewhat probable. I wasn’t raised in a religious household or culture–just the opposite, actually. So it seems that you’re just asserting without any real argument that I must be dense or irrational (or something). Is that really what you’d say? And it doesn’t give you pause that, if you like, we could get into a fairly intelligent debate about the probability of such claims, that I could (probably) give you some arguments that you’d have to admit were not so bad? Because that really would be an absurdly arrogant attitude, really similar to the dumbest kind of religious fundamentalism. Can it really be that you aren’t concerned about that? Anyway, I would like to know _how_ you think it is that people can just grasp what is and isn’t rational without making use of any kind of “theory” about rationality or justification.

  10. “Purely for example, they should make their own assessment and conclude that following OT laws in Ancient Israel willingly (stoning people, etc.) would have been immoral. In most cases, even by their own moral senses they would reckon that, which makes them feel uncomfortable.”

    How do you know that Catholics who feel “uncomfortable” about stoning adulterers, for example, are really making “their own assessment” using “their own moral senses”? A likely explanation for those feelings is that they’ve been raised in a culture that’s still strongly shaped by Christianity, so that their intuitions are basically Christian even though those intuitions make little sense given other things they now believe (and don’t believe). Their ancestors–Vikings or Romans, say–wouldn’t have had these kinds of intuitions.

    I guess your view is that there are two kinds of people in the world: those whose “own moral senses” align with yours, and who are therefore free of bias or indoctrination, and a few billion others, whose “own moral senses” don’t align with yours, but would if they weren’t biased or indoctrinated. This is coherent, but it doesn’t explain why the people in the second group “should” believe things you believe. What’s the meaning or force of that “should”? How does it square with “ought implies can”, for example? Can we understand the idea that someone born into a devout Islamic society where stoning is widely felt to be totally normal and acceptable, who has never been exposed to any reasons for questioning these attitudes, is nevertheless obligated to believe something that seems to him, even on careful reflection, to be false and baseless and incompatible with other things that, even on careful reflection, he takes to be really important truths? That doesn’t make a lot of sense to me (partly because I don’t think this person could psychologically come to hold these attitudes).

    Also I wonder how this position of yours fits with your initial objection to Hulk’s claim that disabilities should be corrected or cured if possible. Your objection was that in many cases it seems that these disabilities don’t bother the people who have them and don’t affect others much, and so there seems to be no reason why anyone would be obligated to fix or cure them. (At least that seemed to be the objection. Am I misunderstanding?)

    But if we aren’t obligated in general to fix our illness or defects–provided that we don’t mind them, and others aren’t much affected–then it seems to follow that we’re not obligated in general to be rational thinkers (even assuming for the sake of argument that you’re right about what that would involve). Why should we have an obligation to fix or cure our epistemic defects if we have no such obligation to fix or cure our physiological ones? You could say that the epistemic defects are harmful, in the way that you think certain moral theories are harmful. But it’s easy to construct cases (parallel to your low-libido or small tattoo cases) where it seems that the person who has some epistemic defect doesn’t mind or even prefers to have that defect, and other people aren’t affected in any important way.

    So I think you should say that only _some_ people should reject Moon Landing theories and Catholic theology, etc. This will probably be a fairly small minority of humans throughout history, or even right now. (Has anyone really been harmed or threatened by the fact that some people believe the Moon Landing was a hoax?) Reflective equilibrium uber alles! Maybe there’s some principled difference between these two putative obligations, but I don’t know how we could figure that out without appealing to theories about obligation, theories about epistemic obligation and moral obligation, etc.

  11. Okay, but you also say that in your view _all_ moral theories are false. And as far as I can tell, the reason is simply that all such theories have implications that are (to you) counter-intuitive. Or that they have too many counter-intuitive implications. Is that right? In that case, you’re saying that we test theories by comparing them with intuitions _and_ that when we do so, we find that every theory put forward so far turns out to be unacceptable. And given this reasoning you reject all moral theories.

    That’s a sufficient reason, because I see no good reason to think that my moral sense is failing. If it were just one case, the theory still wouldn’t be passing the test, but I guess there is a slight probability that my moral sense failed in that particular case and I couldn’t detect a reason. Several such failures, however, is just too improbable.

    By the way, it’s a sufficient reason, but there are others. For example, there are a number of moral theories out there. Even without using my moral sense, I can immediately tell that either they’re all false, or they’re all false except for one, given that they are mutually incompatible.

    Are they at least good approximations to moral truth?
    Well, some of them are so different from each other that, at best, they would be good approximations in their verdicts, not in their essential features. At most one class of them is right on that point, and the others very wrong – a class here can contain a number of similar theories.

    So enter my moral sense, and they all turn out to be false – or unfalsifiable, if they make no predictions.

    What else can I do?

    Let’s see: I can try to assess how well they cohere with the rest of my beliefs. The answer is: they don’t cohere well at all, even if they’re modified. In fact, they’re radically alien, and even before I test them, I would expect them (the falsifiable ones, that is) to be false. Why?
    They seem way too simplistic. We are complex social animals, and we have almost certainly evolved very complex rules of behavior. I would expect that we would have a means of detecting those rules (and violations of them), but not that the rules would be simple at all (evolution is messy), nor that they’d be transparent to us. In other words, the inner workings of our rule-detection sense would be opaque. We can make hypotheses and test them, but getting all of the details right is extremely difficult, and human psychology is in its infancy – it’s not even properly looking for such rules.
    Now, what about morality?
    Well, the rules that we got from evolution are the moral rules, it seems to me. Now, let’s be clear. I’m not saying that there is some analytical reduction (no “naturalistic fallacy”), nor that rules are all there is to morality (e.g., supererogatory behavior is not a matter of breaking or respecting rules).

    In fact, it seems to me (tentatively) that rules in turn may well reduce to properties of minds (e.g., predispositions to behave in such and such ways, etc.), so take the rules talk as a way of speaking that will hopefully be clear enough; but if you think it’s not, then let’s just say that moral assessments (e.g., Jack is behaving immorally when he’s torturing birds for fun) are descriptions of complex mental properties.
    But whether there are irreducible rules or not, the rules we got (or the specific mental states we normally care about when we talk about morality) are very complex things, and it’s extremely difficult to get a theory right.

    Instead, what best coheres with the rest of my beliefs is – of course – that the best way to make moral assessments is by means of our sense of right and wrong, trying to correct it as we can by looking for improper sources, etc.; but still, the basic means is our moral sense, not a theory.
    In the future, there may well be good approximations, and perhaps even a true theory. What do I know? But I reckon we’re a very long way from there (i.e., centuries if not more, barring friendly strong AI because the AI could study humans much better than we can and then all bets are off).

    Is that what you’re saying? And yet, at the same time, you appear to grant that some “general hypotheses” are also intuitively plausible. So in trying to find “patterns” or achieve coherence between our various intuitions, we are going to be trying to fit general principles together with judgments about cases (among other things). And you yourself do this, right? But then how is that different from constructing a moral theory?

    It’s less general than some theories, but at any rate, I don’t object to constructing theories. I object to putting the cart before the horse and using the theory instead of our moral sense when the latter yields clear verdicts and there is no good reason to suspect it’s failing.
    Let me give you an example. A general hypothesis I got is that it’s always immoral for a person to rape other people for fun. I concluded so because I generalized and then was unable to find counterexamples. But if I make some generalization X, and eventually I do find a counterexample (i.e., it collides with the verdict of my moral sense), I reckon it’s false or very probably false, unless I have some specific good reason to think my moral sense is failing, and of course the fact that X says otherwise (X was actually a generalization from cases in which I used my moral sense in the first place!) is not a good reason to think so.

    Are you saying that when you do this you just _never_ achieve any kind of consistent overall view which includes general principles–but you nevertheless feel that your position is more rational than the alternatives? That seems very weird to me. Or are you saying that you _have_ achieved some consistent overall view of this kind, but it’s not a “theory”?

    I have achieved views like “it’s always immoral for a person to rape others for fun”, “hurt others for fun”, etc., but that’s hardly what usually is considered a theory.
    And yes, I think this position is more rational than the alternatives I’ve seen. It’s not just my position, by the way. Intuitionists take this approach (though I usually disagree with them about metaethics, I think their general approach is correct).
    Ethical intuitionism, however (the epistemic part of it), is at most an epistemological theory, not a first-order theory.

    Explain this to me again, then. Suppose that Hulk uses his moral sense (or what he takes to be his moral sense) to conclude that blindness is something we should try to cure. He has similar intuitions about other cases, like paralysis or retardation. Now he generalizes: “When people have some condition C that prevents or significantly disrupts natural functioning we should try to get rid of C”. He finds that this generalization fits well with most of his “intuitions in more specific cases”, though he’s not entirely sure what to say about small tattoos.

    Is he not sure, or does he reckon intuitively that it’s not true?
    And what about contraception?
    Or bigger tattoos?

    I say this is not a big deal. He can reasonably treat the handful of problem cases you mention as puzzles, not counter-examples; or he can just expect that he’ll refine his principles a little bit. Are you rejecting his position because you just assume from the outset that his moral sense must be malfunctioning, because he’s Catholic?

    No of course not. I make the assessments myself first, and conclude that he’s wrong. My moral sense says otherwise, and I can’t find any good reason to think my moral sense is failing. It’s surely not liberal indoctrination. I don’t adhere to ideologies, and I’m fairly equally condemned from all sides, even if for different reasons.
    Then I reckon that yes, the probable cause of the malfunctioning of his moral sense is his Catholicism. But if it’s something else, so be it. I know him much less than I know myself; there may be sources of bias I’m not aware of.

    It’s not necessarily true that they do think this, or would think it on reflection.

    It’s very probable that nearly all of them will, which I can tell from experience (I’ve talked to many intelligent and educated Catholics) and from reading what others have said (e.g., Catholic philosophers).
    I would suggest you test that theory yourself, trying to talk intelligent, preferably philosophically informed Catholics out of Catholicism (or other Christians, or Muslims, or Marxists, etc.).
    However, that is not the point. Even if it turns out that Catholics in particular do not react in that way, others do.
    For example, I can try to go with what coheres better with my intuitions (if not moral, then my epistemic intuitions, my beliefs, etc.), and as I pointed out, I reckon the way to go is using our moral sense and trying to look for biases, as explained. If your “independent source” argument against my position were correct, then how do you know, without an independent source, that you’re correct and I’m not – or someone else? If you don’t like the example of Catholics, pick another: you can always find plenty of people committed to some religion or other ideology you reject who will say that something else coheres better with their intuitions. If you don’t believe me on that, I would recommend that you test that hypothesis yourself, by trying to persuade other people.

    Catholics might have just the same logical standards as non-Catholics.

    A key problem is that theory is underdetermined by observations, and this is not a problem of logic (at least, not transparently) but of probabilistic assessment. A YEC can be consistent and very intelligent, after all.

    Of course, some people might not care as much about coherence as others, but that’s another matter.

    Yes, that’s another matter. But what’s not another matter is that people can be coherent and have vastly different views on many, many issues. It’s again the issue that theory is underdetermined by observations. There are infinitely many theories that fit observations, and a zillion that are humanly comprehensible.

    If I realized that their position really _was_ just as coherent by my standards as my own, and just as good by any other epistemic standards I care about, that _would_ incline me to skepticism. You dismiss this point about disagreement without explanation. Why is it “not a good argument”? Disagreement of this kind is never a good reason for skepticism?

    Actually, it depends on the case, but that’s also something one has to assess by one’s own lights (there is nothing else!), and logical consistency is not enough.

    However, you say “and just as good by any other epistemic standards you care about”. So, here’s an issue: if you’re talking about an explicit epistemic theory, they may well have a different one (some people sure do), without being inconsistent. If you’re talking about your own intuitive probabilistic assessments, that’s fine, but they can say theirs are different.

    In the end, you’re still placing your standards first when testing their theory. They can of course say they put their standards first when testing yours. In the end, we all rely (of course) on our own faculties.

    You can just assert, however many times you like, that this “should be obvious” or that no argument “would make such events probable”. This is not to explain why it’s improbable, let alone “absurd”, let alone how any rational person is supposed to be able to just intuitively know that it is without having to appeal to any general principles of epistemic rationality. It’s really _not_ obvious to lots of apparently sane and intelligent and well informed thinkers that Jesus didn’t walk on water. It’s certainly not obvious to _me_ that no argument for God could make such events probable.

    The fact that they’re intelligent is not the point. There are pretty intelligent YECs, for that matter.

    As it happens, I think it’s not especially improbable that Jesus walked on water. As far as I’m aware, I think that because I think the probability of theism is pretty high quite apart from any Christian claims or stories, and I figure that if theism is true God would probably need to be incarnated as a human being at some point… and so on. So it appears to me (in the absence of defeaters, as far as I know) that some philosophical arguments actually do make this kind of thing somewhat probable. I wasn’t raised in a religious household or culture–just the opposite, actually. So it seems that you’re just asserting without any real argument that I must be dense or irrational (or something).

    You’re being irrational about it. That’s not the same as saying that you are irrational in general.

    And I remain equally confused about why the Catholic-type position that Hulk is defending is supposed to be “clearly false”. My impression was that you were rejecting his position because, in your view, its principles have counter-intuitive implications. You appear to deny this:

    But that’s one reason why it’s clearly false: it makes predictions that, when tested against our method for testing theories (i.e., the moral sense), do not pass the tests. And that happens in several cases, where I see no good reason to even suspect my moral sense is failing. There was no indoctrination, by the way, other than the Catholic one.

    Is that really what you’d say? And it doesn’t give you pause that, if you like, we could get into a fairly intelligent debate about the probability of such claims, that I could (probably) give you some arguments that you’d have to admit were not so bad?

    Given that theory is underdetermined by observations, of course you could give complex and consistent arguments, and fairly intelligent ones, since you are intelligent. But a smart YEC can do that too. That’s not the point. The problem is the probabilistic assessment. That would make the argument bad. On the other hand, it would not be bad in the sense that it can still be consistent and sophisticated.

    That said, if you want to give it a shot, I’m all for testing theories, including the theory that I would say they’re not so bad – again, they’d be bad in terms of your probabilistic assessment, not their sophistication.

    Because that really would be an absurdly arrogant attitude, really similar to the dumbest kind of religious fundamentalism.

    Of course it would look like that to you. A YEC could make a similar argument. You’re mistaken, but there is not much we can say about it.

    Can it really be that you aren’t concerned about that? Anyway, I would like to know _how_ you think it is that people can just grasp what is and isn’t rational without making use of any kind of “theory” about rationality or justification.

    But I do not have a theory, and that’s what you seem to be asking for. What I can say is that we have generally reliable, but flawed, faculties. Among them, we have the ability to make epistemic probabilistic assessments, including assessments about the probability of the beliefs of other people, or their rationality.
    For example, people on a jury are told to assess whether something has been established “beyond a reasonable doubt”. Nearly all of them have no theory about rationality, and if they did, they wouldn’t be using it anyway, since that’s not how people go about normally assessing pieces of evidence. They just look at it and they intuitively reach a verdict which is in no way logically necessitated by the information they received.

    How do you know that Catholics who feel “uncomfortable” about stoning adulterers, for example, are really making “their own assessment” using “their own moral senses”? A likely explanation for those feelings is that they’ve been raised in a culture that’s still strongly shaped by Christianity, so that their intuitions are basically Christian even though those intuitions make little sense given other things they now believe (and don’t believe). Their ancestors–Vikings or Romans, say–wouldn’t have had these kinds of intuitions.

    Yes, they would have had those intuitions with regard to many biblical laws – not all of course, but many for sure.
    But I’m saying they’re using their own moral sense because they’re doing just that, rather than appealing to some theory or another. I guess what you might want to ask is how I know their moral sense is not malfunctioning in those cases, due to indoctrination? Well, I make my own assessment, as I already explained, and I see that they’re right. And I don’t see any good reason to think that when they actually depart from their indoctrination and aren’t applying any theories, their moral sense has been damaged.
    At any rate, the question of why I think they’re right is a different question from the one about why I think they’re using their moral sense: they are because they’re making an intuitive judgment, not one based on a theory.

    I guess your view is that there are two kinds of people in the world: those whose “own moral senses” align with yours, and who are therefore free of bias or indoctrination, and a few billion others, whose “own moral senses” don’t align with yours, but would if they weren’t biased or indoctrinated.

    That’s a caricature of my views. No, everyone has a moral sense, and it’s generally reliable most of the time. Disagreement is salient because it matters to us, but it happens against a vast background of agreement. But following general theories of morality at this stage of their development is something people usually do not do (even if they believe they do, in their daily lives they just go with intuition, most of the time at least), and when they do, it leads them to error.

    This is coherent, but it doesn’t explain why the people in the second group “should” believe things you believe. What’s the meaning or force of that “should”?

    I’m certainly not going to try to give a definition of “should”. I have never seen a correct theory of that. But I would say it’s pretty obvious (well, it is to me, and it should be to you) that a person who has looked at the evidence carefully should not be a YEC (for example), in the epistemic sense of “should”. And the same happens to, say, beliefs that Jesus walked on water, etc.

    I don’t need to use the word “should”, by the way. I would say that they’re being epistemically irrational if they believe such things. In my assessment, it (very probably) necessarily follows that they (epistemically) should believe otherwise, but no matter, it’s enough for me to say they’re being epistemically irrational.

    But if we aren’t obligated in general to fix our illness or defects–provided that we don’t mind them, and others aren’t much affected–then it seems to follow that we’re not obligated in general to be rational thinkers (even assuming for the sake of argument that you’re right about what that would involve).

    In the sense of epistemic “should”, I think it’s either analytical or at any rate a brute fact about epistemic obligations that one epistemically should not be epistemically irrational.

    In the moral sense of “should”, that does not seem to be the case; if you’re talking about moral obligations, those arise instead from the dangers irrational thinking often represents to others.

    Why should we have an obligation to fix or cure our epistemic defects if we have no such obligation to fix or cure our physiological ones?

    That seems to mix epistemic and moral “should”.
    I think “You shouldn’t be epistemically irrational” is, in the epistemic sense of “should”, tautological or a brute fact.
    Whether there is a moral obligation to avoid irrational thinking (or to try) is more complicated. It seems to depend on the case.

    You could say that the epistemic defects are harmful, in the way that you think certain moral theories are harmful. But it’s easy to construct cases (parallel to your low-libido or small tattoo cases) where it seems that the person who has some epistemic defect doesn’t mind or even prefers to have that defect, and other people aren’t affected in any important way.

    Yes, indeed, it’s easy. In those hypothetical cases, I reckon it’s not the case they morally should not have those irrational beliefs. I do think they epistemically should not have them, but if you like, I will just say they’re being epistemically irrational by having them.

    Then again, irrational beliefs about morality often involve (in real world conditions) significant risks to others, though I do think there are real-world cases in which (very probably) they’re not being immoral.

    So I think you should say that only _some_ people should reject Moon Landing theories and Catholic theology, etc.

    Epistemically, they all should reject Moon Landing conspiracy theories. But if that doesn’t follow from the fact that it’s epistemically irrational (but I think it does), then I would still say they’re being epistemically irrational. I don’t know whether they morally should, but probably at least most of them morally should not believe that.
    The Catholic example is more complicated (e.g., people in the Middle Ages with no access to better info, though there probably still is an obligation when it comes to some moral beliefs), but in the present world, people with access to the internet, who live in a modern society, etc., epistemically should reject claims of walking on water, etc., and morally should do so as well (very probably, at least in most cases), barring the sort of brain malfunctioning that makes them not guilty (there are some sorts of malfunctioning that do and some that don’t, but that too is something I only reckon by considering cases).

    This will probably be a fairly small minority of humans throughout history, or even right now. (Has anyone really been harmed or threatened by the fact that some people believe the Moon Landing was a hoax?)

    Yes, of course, as long as we count the consequences of people acting on a belief among the consequences of the belief, which is reasonable. The point is that if a person believes that it was a hoax, there is a fairly good chance that he’ll act on that belief, which usually consists in propagating it, thus unjustly tarnishing the reputation of many other people, which is immoral.

    Maybe an overreaction?

    http://nerdist.com/lets-all-remember-the-time-buzz-aldrin-punched-a-conspiracy-theorist-in-the-face/

    Still, not all conspiracy theorists do the same amount of damage. Most of them talk to a few people, whereas a few dedicate a lot of resources to spreading the belief, so I would say there are different degrees of immorality. In some cases, it might be minimal, and – perhaps – in some cases, they have an excuse. As usual, I would say it depends on the case.

  12. Ethical intuitionism, however (the epistemic part of it) is at most a theory of epistemology, not a first order theory.

    Sorry, I misspoke. Instead of “the epistemic part of it”, that should read “leaving aside the metaethical claims”.

  13. Jacques,

    Regarding the sense of right and wrong and how to properly make moral assessments, I’d like to add a few points, and ask you some questions about your position.
    Throughout at least most of human history, at least most humans had no access to the correct general moral theory. That’s because if the correct theory is one that exists today (or that existed at some point in time), then that theory did not exist in most places where most humans lived throughout most of history (that’s an understatement, but at least that much holds).
    What would have been the proper method for them to go about making moral judgments?
    Think of someone living 1000 years ago in Australia, China or South America – for example -, or 10000 years ago in North America, Tibet, or India (i.e., the places; of course, Australia didn’t exist 1000 years ago, India did not exist 10000 years ago, etc.), or again through most of human history.
    Granted, some of them had time to come up with general theories. But how would they go about coming up with a theory but by generalizing from their intuitions in specific cases? (not 100% specific, of course, but considerably so).
    Moreover, the vast majority had no time for that. They had to make a living for themselves and their families. They couldn’t spend time theorizing. They could intuitively come up with some partial generalizations sometimes, but not a general theory. Moreover, those generalizations would also be generalizations from their own intuitive judgments on specific cases (what else?) – well, unless of course they went with the local religion/ideology, but that either came from people just making things up, or else from generalizations by other people, based on their own sense of right and wrong and their own assessments. The problem, of course, is that there is a good risk of overgeneralization in those cases, and there seems to be no particular reason to give up one’s own judgments and use the traditional ones instead when they differ.

    In any case, what do you think they should have done?
    Moreover, without a sense of right and wrong, how do you think they could possibly have come to know about right or wrong in the first place? If it’s tradition, religion, or whatever, how do you think tradition, religion or whatever could possibly have come to know about those things, if not by people using their senses of right and wrong? Reason? If by “reason” one means deductive reasoning, then that’s not going to properly yield moral conclusions without moral premises (save for analytical reduction of moral to non-overtly moral language, but if there is one, we do not know it and neither did they, so they couldn’t have used it). If they had moral premises, how did they come to know them without a sense of right and wrong?

    If it’s not deductions but probabilistic reasoning, again how would they assess the probability of moral claims without a sense of right and wrong?

  14. We seem to be talking past each other–I think? Despite a lot of talk. Let me begin with one of your last comments:

    “But how would they go about coming up with a theory but by generalizing from their intuitions in specific cases?”

    I have no idea! In case it’s not clear, I have no objection to the idea that people come up with moral theories (if they do) by precisely this method. They generalize from intuitions about cases. On the other hand, I think that’s not the _only_ thing they can do. They can also go back and forth between generalizations and “intuitions in specific cases”. Once they have some generalizations, and they find that these seem to yield further reasonable conclusions and fit well with other generalizations as well as other things–for example, meta-ethical theories or epistemological theories or cosmological theories or whatever–then they might reasonably treat those generalizations as being at least as plausible as one or two or seven intuitions about specific cases which may appear to be counter-examples to those generalizations. So a reasonable thinker might on occasion accept some generalizations even though they have some counter-intuitive implications, and give up the intuitions about specific cases that seem to conflict with those generalizations. If you allow that all this is possible, and possibly reasonable, we have no disagreement here in the abstract about methodology or rationality.

    “unless of course they went with the local religion/ideology, but that either came from just people making things up, or else from generalizations by other people, based on their own sense of right and wrong and their own assessments”

    How do you know these are the only two possibilities? You just know that the Ten Commandments, say, are just things that people “made up”? That no Higher Power has ever transmitted reliable moral knowledge to human beings? Well, anyway, it doesn’t much matter in this context since I’m certainly not denying that people have a moral sense, or that they can use their moral sense to figure out how they should act. So I don’t really know what to make of your question “without a sense of right and wrong, how do you think they could possibly have come to know about right or wrong in the first place?” I’m also not sure that “reason” in some broad sense couldn’t be a source of moral knowledge. Maybe there are rules for mutually beneficial co-operation that just make sense to people given certain basic interests and preferences that almost all of them have, and moral axioms or morally acceptable dispositions develop over time as a result. But, again, it doesn’t really matter since I have no objection to the idea of a moral sense.

    Regarding the claim that people (morally) should not believe in conspiracy theories (for example) you say “it depends on the case”. Okay, so there’s no general obligation not to believe in these theories. But what determines whether a person in a given situation has this obligation or doesn’t have it? I don’t know how there could be any answer to that question without appealing to some system of principles, which to me would seem to constitute an epistemic theory. Now I add a further point that I hope isn’t too controversial: when a person’s obligations (or lack thereof) are fixed by some system of principles, the person must be somehow aware of those principles (and aware that they apply to him, etc.). For example, I can’t be morally obligated to uphold the laws of Sweden if I have no awareness of those laws. Knowing which laws are the laws of Sweden is a necessary condition for being capable of obeying Swedish laws, and being capable of obeying them is a necessary condition for being obligated to obey them. Likewise, if disbelieving p is an obligation for some thinker T in situation S, but not for T* in S* — or if the strength of the obligation varies from T in S to T* in S* — then T has to know the principles that relevantly distinguish T in S from T* in S* with respect to p. Just a sketch of the argument, but I think this implies that people do after all need some kind of moral theory in order to be obligated or not obligated in the way you’re describing. Or, at the very least, a rational observer has to rely on some theory in order to believe that people have either kind of obligation in some cases but not others. And if the observer thinks (like you) that all theories are false, it seems hard to understand how the observer could rationally think that either kind of obligation holds in just some cases. 
Are we now just disagreeing over whether to call a system of principles fine-tuned to the extent possible to be coherent with intuitions and other beliefs a “moral theory”? (If so, what would a moral theory be in your opinion?)

    “In those hypothetical cases, I reckon it’s not the case they morally should not have those irrational beliefs. I do think they epistemically should not have them, but if you like, I will just say they’re being epistemically irrational by having them.”

    As before you seem to be just asserting that such beliefs are irrational without offering any explanation. People who believe in Catholicism or conspiracy theories occupy a vast range of epistemic situations. They have all kinds of differing capacities and knowledge that might constrain what counts as a rational belief for them (or, for that matter, might greatly augment the scope of rational belief). I really have no idea what you have in mind here — what it might be that _makes_ these kinds of beliefs irrational for any arbitrary person in these classes of people. Is it just the content alone? We’re supposed to just realize, using some ‘epistemic sense’, that the hypothesis of a human being walking on water or being resurrected is ‘absurd’ or very improbable? Because I simply don’t have this intuition or knowledge or whatever it may be. (Am I just an irrational thinker, in some way that I fail to notice?) Or is there supposed to be some specific chain of reasoning or evidence available to all of those people (in some realistic sense of “available”) such that we can rightly fault them as thinkers for failing to appreciate the chain? Not rhetorical questions. I just honestly have no idea what you’re thinking. Using my ‘epistemic sense’, if I have one, my judgment is that almost anything can be non-irrationally believed by someone somewhere. I don’t even know why I should think that believing in Catholicism, for example, would be just irrational for someone like me. (Certainly I’ve toyed with it from time to time.) And I actually _do_ believe in some ‘conspiracy theories’.

    Likewise, you say I’m “being irrational” in thinking that the probability of Jesus walking on water isn’t low, or that certain arguments for theism could make that fairly probable, but you don’t say why. I’ve been thinking about these topics for a few decades, doing my best epistemically, as far as I can tell, and my opinion is that some arguments for theism are pretty compelling. And I also have the opinion that, given theism, the probability of incarnation is pretty high. Of course you know that lots of other trained philosophers and other smart thoughtful people have similar opinions. We’re supposed to just accept your assertion that all of this is not just false but “irrational”? Even though you have no real idea _why_ I hold these opinions, e.g., which arguments I’m talking about or why I think they can be defended against various objections? I wonder what the point of this assertion could be. Anyway, I’d suggest that your own attitude here is pretty irrational. You’re offering a very strong assessment of the beliefs of lots of other people that you know to be basically competent and well informed thinkers without knowing how they came to hold those beliefs. That’s weird!

    “people on a jury are told to assess whether something has been established “beyond a reasonable doubt”. Nearly all of them have no theory about rationality, and if they did, they wouldn’t be using it anyway, since that’s not how people go about normally assessing pieces of evidence. They just look at it and they intuitively reach a verdict which is in no way logically necessitated by the information they received.”

    Maybe we’re just talking past each other here too. Suppose they have to decide how much credence to give to testimony. They figure that three similar stories from people who appear to be honest and unbiased count for more in their overall assessment than five similar stories from people who have a vested interest and a track record of making things up. And they think all this testimony counts for less overall than video footage and DNA evidence that points the other way. I’d say that in making this kind of judgment they’re relying on general principles about how to assess the reliability of testimony and how to weight different bits of evidence of various kinds. I call that kind of thing a “theory”. Do you have some more rigorous definition in mind? At any rate, it’s more than just a bunch of intuitions and “looking”.

    “if I make some generalization X, and eventually I do find a counterexample (i.e., it collides with the verdict of my moral sense), I reckon it’s false or very probably false, unless I have some specific good reason to think my moral sense is failing”

    Earlier I tried to represent your methodology by saying that (it seems to me) you reject moral theories when they conflict with intuitions. I said that if we want reflective equilibrium this is too simple. You then replied that you _don’t_ do this, that you’re not rejecting (for example) Hulk’s theory _just_ because it conflicts with various intuitions. But here you seem to be saying that this is after all just the test that you have in mind. The only difference seems to be that you check to make sure your moral sense isn’t “failing”. Well, if that’s the test, then again I ask why we aren’t able to go back and forth, as I suggested above: test theories against intuitions, and intuitions against theories? I don’t see you offering any answer to this question. Why are intuitions (plus the intuition or belief that one’s moral sense is working normally) always supposed to have more rational significance than generalizations? Why is occasionally favoring theory over “moral sense” some kind of mistake, “putting the cart before the horse”? (Why can’t the theory be the horse sometimes?)

  15. oops — typo: ” I don’t know how there could be any answer to that question without appealing to some system of principles, which to me would seem to constitute an epistemic theory.” should be: “a moral theory”

  16. We seem to be talking past each other–I think?

    That’s very frequent in online discussions in my experience, so that might be at least part of what’s happening. I’ll try to clarify the best I can given time constraints.

    I have no idea! In case it’s not clear, I have no objection to the idea that people come up with moral theories (if they do) by precisely this method. They generalize from intuitions about cases. On the other hand, I think that’s not the _only_ thing they can do. They can also go back and forth between generalizations and “intuitions in specific cases”.

    Sure, that would make sense. But the basis was the intuitions from specific cases, and it would not make sense to reject said intuitions when they collide with a generalization made from other intuitions, unless there is a specific reason to doubt the specific intuitions.
    Let me give you an example. Let’s say that Alice is a scientist, trying to figure out the frequencies of light that correspond to green objects. She asks people to tell her whether an object is green – an object that they see with usual, daylight-like light -, and when they say it is, she measures the frequencies the object is reflecting.
    On the basis of that, she comes up with a general hypothesis. In order to test it, she asks again people to tell her whether some objects are green, and she checks whether the theory matches people’s answers.
    But if there is some object that – say – reflects a combination of other frequencies and not what her hypothesis predicts, she ought to realize her hypothesis is false – even though it yields correct verdicts in many cases, so it’s an approximation to the truth – and try to fix it, or ditch it if it’s too bad. What she should not do is insist that the object is not green and conclude that the eyes of the subjects (picked randomly, who all report they have normal color vision, and who are looking at the object in standard conditions) are malfunctioning, let alone use that theory instead of her color vision to tell whether an object is green.

    Granted, in the case of a moral sense, the conditions aren’t so easy to check, so there is a chance of undetected sources of failure. But when one does not find any, the most probable scenario is that the generalization was an overgeneralization – and so, it’s false, even if it may still be a good approximation.
    If the theory fails in more cases and she can’t detect a probable source of malfunctioning in any, it’s time to ditch it, or change it radically.
    In any event, as a general method, intuitions come first, and theories are tested against intuitions.

    Once they have some generalizations, and they find that these seem to yield further reasonable conclusions and fit well with other generalizations as well as other things–for example, meta-ethical theories or epistemological theories or cosmological theories or whatever–then they might reasonably treat those generalizations as being at least as plausible as one or two or seven intuitions about specific cases which may appear to be counter-examples to those generalizations. So a reasonable thinker might on occasion accept some generalizations even though they have some counter-intuitive implications, and give up the intuitions about specific cases that seem to conflict with those generalizations. If you allow that all this is possible, and possibly reasonable, we have no disagreement here in the abstract about methodology or rationality.

    “Possible” is a very low bar, in terms of metaphysical possibility, but generally, that course of action would usually not be reasonable as long as they have generalized from moral intuitions, and as long as they can’t find a likely source of malfunctioning in specific cases (see my color example above), though it may be so when there are specific reasons (not the generalization!) to suspect that the intuitions are failing.

    What about other theories, like epistemological theories?
    One difficulty is that such theories do not conflict with first-order moral theories. If it seems they do, then it seems those were a combination of epistemic and first-order moral theories (barring perhaps analytical reduction of moral statements to non-overtly moral ones, but I don’t think any theory actually has such a reduction without being massively wrong, at least for now and perhaps not ever), so one can reject the moral part.

    Still, there might be indirect cases, e.g., a theory that is well supported and actually provides good reasons to suspect that in some specific cases, our moral intuitions will fail for some reason. But the idea is to carefully separate in those cases what the epistemological theory says, and what the first-order ethics theory says, and be careful with that.
    A similar point applies to metaethical theories, except that those do have first-order parts sometimes, in which case they can be tested against intuitions too – aside from cases in which they give good reasons to think specific moral intuitions will fail in some cases.

    How do you know these are the only two possibilities? You just know that the Ten Commandments, say, are just things that people “made up”? That no Higher Power has ever transmitted reliable moral knowledge to human beings?

    It’s pretty clear to me, but not the point, for the following reasons:
    1. Even if a powerful superhuman person had given some people commands, etc., they would still have to find a way to assess whether the person in question is good, truthful, etc. Without a sense of right and wrong, how can they distinguish between a supervillain, demon or whatever you call it, and a morally good creator? By means of preexistent theories? But then we would be back to the question of how they came up with those theories, unless they had a sense of right and wrong first.
    2. We can consider the cases of everyone who lived before the Ten Commandments, or after them, but in places and societies where they were unknown, etc., so the issue remains.

    Well, anyway, it doesn’t much matter in this context since I’m certainly not denying that people have a moral sense, or that they can use their moral sense to figure out how they should act.

    Okay, but you said earlier:

    (1) There seems to be no reason for believing that any natural or pre-ideological moral sense exists. There seems to be good reason for believing that everyone acquires moral intuitions through the same kind of social processes of conditioning and indoctrination. You have moral intuitions which are strangely consistent with a highly specific liberal moral code that everyone in our society has been taught from pre-school onwards (by the media, the schools, social pressure).

    That was not true of me, but leaving my case aside, the point is that even if the normal development of moral intuitions requires social interaction with other agents, etc., that does not imply there is no general human moral sense (e.g., the normal development of the sense of sight requires light, but there is still a human sense of sight). And if there were no such sense and it all came from indoctrination, the question is: how do people in societies in which there is not even a claim that a law was handed down by a powerful being figure out what’s morally correct?

    I’m also not sure that “reason” in some broad sense couldn’t be a source of moral knowledge. Maybe there are rules for mutually beneficial co-operation that just make sense to people given certain basic interests and preferences that almost all of them have, and moral axioms or morally acceptable dispositions develop over time as a result.

    Right, but in that case, “over time” is over evolutionary time, and the issue is “given certain interests or preferences”.
    If you prefer not to call that “a moral sense”, but “reason”, and it’s only a terminological matter, no problem. But other than that, my point is that moral statements are not logically derived from nonmoral statements (again, barring analytical reduction, but either there is no such reduction, or it’s unknown, so that does not make a difference). They need to start with some moral assessments, and they do not come from deductive reasoning from nonmoral stuff. As for probabilistic reasoning, they would still need some moral sense.

    But let me try a different argument:

    Let’s say that there are two other planets in the Local Group where advanced civilizations developed. On Earth 2, 2-squids are smart and advanced, and evolved from something like squids. On Earth 3, 3-elephants evolved from something like elephants. While they are all social beings, their basic interests and preferences will be quite different in many respects. There will be overlaps, of course, since they have to solve similar problems after all, but one can’t expect that they would come up with morality by means of reason, even if they’re good at reasoning. They would probably have some analogues to morality – say, 2-squid-morality and 3-elephant-morality, if you like – but reason alone will not give them morality.
    Personally, I would say that they probably would have a sense of 2-squid-right and 2-squid-wrong, etc., but even if you want to call it “reason”, the point remains that at least some basic assessments are made non-inferentially and intuitively.

    Okay, so there’s no general obligation not to believe in these theories. But what determines whether a person in a given situation has this obligation or doesn’t have it?

    That question can be interpreted in different ways. If you mean to ask “how do we go about ascertaining whether a person has an obligation”, etc., I can tell you I contemplate the specific scenario, think about predictable consequences, intent, etc., and my sense of right and wrong yields a verdict. Then, I can try to check whether there are specific reasons to think it might be malfunctioning.
    If you’re asking about the truth makers, I can try to hypothesize on the basis of my intuitive assessments in specific cases, but all I have is that they very probably have something to do with the risks for the person in question and for others; I don’t have an alternative method for figuring that out.

    I don’t know how there could be any answer to that question without appealing to some system of principles, which to me would seem to constitute an epistemic theory.

    I’m not sure how or why you would do that, but I’ll try an alternative angle: in most human societies, the vast majority of people did not have such a theory. Even today, most people do not seem to have anything like that. How do they ascertain whether a person has an obligation, if a theory like that is required? Do they need to come up with a theory first? I don’t think I understand how you think people actually do these sorts of things. Could you elaborate, please?

    Now I add a further point that I hope isn’t too controversial: when a person’s obligations (or lack thereof) are fixed by some system of principles, the person must be somehow aware of those principles (and aware that they apply to him, etc.). For example, I can’t be morally obligated to uphold the laws of Sweden if I have no awareness of those laws.

    You may not know the laws of Sweden, but you may well have a moral obligation to know them and to uphold them. Or you may not be consciously aware of them even when you know them – though perhaps you mean one doesn’t need to be aware at all times, only able to become consciously aware upon deciding to think about it? At any rate, you can still be obligated to learn the laws of Sweden and uphold them.

    But leaving that aside, there is a very big difference between knowing right and wrong and knowing the general principles about right and wrong. In fact, I think our moral sense generally allows us to know our moral obligations, but it doesn’t allow us to know the general system of principles on which they’re based. We might try to figure out some of those principles by generalizing, testing, etc., if we have enough time. But that’s not a requirement for knowing right and wrong, and the correct system of principles is not something that nearly everyone in nearly every human civilization knew.

    Knowing which laws are the laws of Sweden is a necessary condition for being capable of obeying Swedish laws, and being capable of obeying them is a necessary condition for being obligated to obey them.
    But maybe you do not know them yet are capable of knowing them, and then you are capable of obeying them – you just need to inform yourself first.

    Likewise, if disbelieving p is an obligation for some thinker T in situation S, but not for T* in S* — or if the strength of the obligation varies from T in S to T* in S* — then T has to know the principles that relevantly distinguish T in S from T* in S* with respect to p.

    That’s not “likewise”. It’s very different. Knowing right from wrong does not require knowing the general principles.

    For example, if that were required, then it seems to me that most people in most human societies would not have had any moral obligations, since they did not know the principles in question.
    In fact, most people had no moral theory, or had a false one – and one so false on the basics that it didn’t even come close to being true; I can tell that without even using my moral sense, just from the mutual incompatibility of the theories.

    But if you don’t find that argument persuasive, here’s another one:

    Let’s consider a philosopher like, say, Richard Joyce. He is a moral error theorist. He believes he has no moral obligations. He certainly is not aware of general principles that distinguish his obligations, and it seems clear to me that he does not know them, either. But that does not imply he has no moral obligations. Of course, he has moral obligations, even though he does not know the general principles.

    Now, I think he probably knows that he has obligations in specific cases, because he still has a moral sense. But he surely does not know the general principles.

    I guess you might think that even if he’s not aware of the general principles and can’t consciously bring them to his mind or write them down, he still knows them unconsciously. But that would be very different from your argument. In fact, that would seem to be a sense of right and wrong, that yields verdicts in specific cases but of course works on the basis of some more general principles – which are not consciously accessible.

    Or perhaps you believe that Richard Joyce has no moral obligations? But that seems too improbable, even if you leave aside the fact that the same would apply to most people in most societies, if it were true.

    Just a sketch of the argument, but I think this implies that people do after all need some kind of moral theory in order to be obligated or not obligated in the way you’re describing.

    I think that your thesis here is false. But one important thing I’d like to highlight is how I went about arguing that it’s false: I offered counterexamples, and I picked them by means of my own sense of right and wrong. For example, I reckon by my own sense of right and wrong that Joyce has plenty of moral obligations, and then I raise that example, etc.

    Or, at the very least, a rational observer has to rely on some theory in order to believe that people have either kind of obligation in some cases but not others.

    I don’t think that’s true, either. Nor do I see why it would follow. Your sketch of an argument makes some false claims about what is required for moral obligation. It does not seem to support any further claims in this context, such as the claim that a rational observer has to rely on some theory in order to believe that people have either kind of obligation in some cases but not others.
    I hold that I rationally use my sense of right and wrong – like most people in most societies did – and that tells me that sometimes people have obligations, and sometimes they do not. I don’t see anything in your argument – which, again, posits a false principle – to change my take on that.

    And if the observer thinks (like you) that all theories are false, it seems hard to understand how the observer could rationally think that either kind of obligation holds in just some cases.

    I think all present-day theories are false. But I don’t know why it’s hard for you to understand that. I believe we have a moral sense that is a generally reliable guide to moral truth. At least in most cases, it will allow us to make distinctions and see when an obligation holds, and when there is no obligation.

    Are we now just disagreeing over whether to call a system of principles fine-tuned to the extent possible to be coherent with intuitions and other beliefs a “moral theory”? (If so, what would a moral theory be in your opinion?)

    We’re not just disagreeing about that. For example, you sketched an argument to some conclusion I think is false, and I argued above why it’s false (i.e., Joyce, people with false theories, etc.). But it may be that in addition to actual disagreements, we are partially talking past each other.
    Here, I was thinking of a moral theory as something (roughly) general enough to yield verdicts (and so be tested) in at least many different situations.

    As before you seem to be just asserting that such beliefs are irrational without offering any explanation.

    Yes, explanations bottom out at some point.
    Someone with access to the internet, who has read arguments, etc., can coherently tell me that they believe that tomorrow Jesus will come, or that an alien spacecraft will take them to some paradise, or that they know that Jesus talks to them, that they have seen conclusive evidence that vaccines cause autism – and consistently insist on that after looking at the evidence –, that the Earth is less than 15000 years old – and consistently insist after they look at scientific publications, etc., and so on.
    I reckon they’re being epistemically irrational. Of course, if a person living in an almost uncontacted tribe in the Amazon has some weird beliefs about planes, I’m not going to be so sure they’re being epistemically irrational – perhaps, someone deceived them? Perhaps, they were told a story they didn’t have time to think about? Who knows? But the more I know them, the better I can make an assessment. On the other hand, if they tell me that the spirits are out there, on the beach, and they can actually see them, etc. (a real example), I will conclude that their brain is malfunctioning in some way.

    And if I’m a juror, the defendant can always consistently claim that he was framed by aliens from another planet. I would reckon that it would be epistemically irrational on my part to assign non-negligible probability to such claims, and would by my own lights – my epistemic intuitions – conclude that it’s beyond a reasonable doubt that he’s guilty – or not, depending on the case, but still by my own intuitions.

    People who believe in Catholicism or conspiracy theories occupy a vast range of epistemic situations. They have all kinds of differing capacities and knowledge that might constrain what counts as a rational belief for them (or, for that matter, might greatly augment the scope of rational belief). I really have no idea what you have in mind here — what it might be that _makes_ these kinds of beliefs irrational for any arbitrary person in these classes of people. Is it just the content alone?

    No, it’s not the content alone. Remember, I said that “The Catholic example is more complicated (e.g., people in the Middle Ages with no access to better info, but still there probably is an obligation when it comes to some moral beliefs), but in the present world, people with access to the internet, who live in a modern society, etc., epistemically should reject claims of walking on water, etc….”

    So, it depends on the information available to a person.

    We’re supposed to just realize, using some ‘epistemic sense’, that the hypothesis of a human being walking on water or being resurrected is ‘absurd’ or very improbable?

    Yes.

    Because I simply don’t have this intuition or knowledge or whatever it may be. (Am I just an irrational thinker, in some way that I fail to notice?)

    Don’t you?
    If you hear someone on the street yelling and saying someone stole her car, you would be inclined to think that’s likely what happened, unless you have good reasons to suspect she’s lying.
    If someone on the street tells you they just saw a dead person, smelly and all, recover and walk away, you’re going to assign (intuitively) an astronomically slim probability, and believe that either they’re lying or their brain is not working properly. If you don’t have this intuition despite living in our world, something is wrong with your epistemic probabilistic assessments. But it’s much more likely that the malfunctioning only happens when Jesus or something like that gets involved.

    Granted, you think it’s more likely in the case of Jesus. Well, there is more evidence. But just negligibly more. How do I know that? It’s an intuitive probabilistic assessment, of course! 😉
    I could speculate about the causes of my assessment, if we were discussing that, but again, this is an obvious assessment. Yes, it’s not obvious to many other people; such is life. But I have good reason to think they’re biased. I don’t have a good reason to think I am – which again, I reckon intuitively by contemplating scenarios! What can I say? I know of no other method, and alternative proposed methods either in the end are not alternative methods (it’s what I’m actually doing), or seem to be epistemically improper (again, intuitive assessment, as usual).

    Using my ‘epistemic sense’, if I have one, my judgment is that almost anything can be non-irrationally believed by someone somewhere.

    Sure, but it’s not the case for a resurrection or walking on water for people who live in the present-day world, in a developed country, etc.

    And I actually _do_ believe in some ‘conspiracy theories’.

    I’d need more information. Which ones?

    Likewise, you say I’m “being irrational” in thinking that the probability of Jesus walking on water isn’t low, or that certain arguments for theism could make that fairly probable, but you don’t say why.

    Yes, and the same would happen with the YEC who has already seen the evidence and consistently insists. I don’t have a psychological theory about what’s going wrong with your brain in particular, or his brain.

    And I also have the opinion that, given theism, the probability of incarnation is pretty high. Of course you know that lots of other trained philosophers and other smart thoughtful people have similar opinions. We’re supposed to just accept your assertion that all of this is not just false but “irrational”?

    As I said before, I’m not remotely saying that people should believe me because I say so. I’m saying that they epistemically should believe it on their own lights, making their own intuitive epistemic probabilistic assessments. If they don’t, well of course I will not be able to persuade them.

    Even though you have no real idea _why_ I hold these opinions, e.g., which arguments I’m talking about or why I think they can be defended against various objections?

    That’s not true. The claim that I have no idea goes way too far. I’ve read the arguments of plenty of Christians, including very intelligent Christian philosophers.
    Moreover, even if your arguments are so radically different, I do know by my own assessments that there is no argument that would make that probable to a rational person with present-day knowledge and time to think about it (and you have both). I don’t need to consider every single possible argument in order to make an assessment like that. And I’m pretty sure that you don’t do that, either.

    Still, once this debate ends (not during it, because it has already taken too long for me, and it’s becoming an issue), I offer to listen to your arguments and reply to them in a civil manner, in a venue of your choosing. I predict, however, based on both our exchange so far and my interactions with other people on the internet, that you would find all of my replies bad in one way or another and not be in the least persuaded, whereas I would not be in the least persuaded by your arguments. But such is life. I am willing to listen, though.

    Maybe we’re just talking past each other here too. Suppose they have to decide how much credence to give to testimony. They figure that three similar stories from people who appear to be honest and unbiased count for more in their overall assessment than five similar stories from people who have a vested interest and a track record of making things up. And they think all this testimony counts for less overall than video footage and DNA evidence that points the other way. I’d say that in making this kind of judgment they’re relying on general principles about how to assess the reliability of testimony and how to weight different bits of evidence of various kinds. I call that kind of thing a “theory”. Do you have some more rigorous definition in mind? At any rate, it’s more than just a bunch of intuitions and “looking”.

    Maybe we are talking past each other. I said that they’re not relying on a theory about rationality, about what’s beyond a reasonable doubt, etc., and I was talking about a general theory that would allow them to reach a verdict.

    Sometimes they do have very partial hypotheses about the comparative weight of testimony, etc. At other times, they do not. In this case, they probably did not. In other words, the jurors did not go in with a prior theory that explicitly said “three similar stories from people who appear to be honest and unbiased count for more in their overall assessment than five similar stories from people who have a vested interest and a track record of making things up”. Rather, they intuitively made the assessment and, if asked to explain their verdict, came up with that generalization on the spot.

    Moreover, even when they do have general hypotheses – whether they had them before or came up with them on the spot – at least for the most part they’re not relying on them to make the assessments.

    For example, let’s say they have the theory you mention about the witnesses. They still have to make an assessment about whether an account is honest and unbiased if they want to apply the theory. But that is something they assess intuitively. In other words, they make an intuitive, nonconscious probabilistic assessment that a person is being honest and is not biased.
    However, that’s not at all good enough. Applying the theory would not allow them to properly compare the testimonies, since the question is not merely whether A’s testimony weighs more than B’s. Rather, it’s about things like how much more; and generally, they would need to make an epistemic probabilistic assessment (not with numbers, but still precise enough to assess reasonableness) about what happened, for which the hypothesis you mentioned is of no help, or at best only a little.

    The same goes for DNA evidence and video footage.

    For example, let’s say that the defendant’s lawyer points out that it’s consistent with the observations that the video was manufactured by a computer and the DNA evidence was planted. That is of course consistent. But would it be a reasonable doubt? The jurors are not at all relying on explicit generalizations. They can ask experts, but then they have to intuitively reckon how probable it is that some expert, witness, etc., is telling the truth. All of that happens intuitively and unconsciously.

    Granted, there is of course room for conscious deliberation. But it proceeds on the basis of prior intuitive probabilistic assessments – e.g., the generalizations that you mention – and it’s also followed and interwoven with intuitive probabilistic assessments all around. The final verdict is also an intuitive probabilistic assessment about what is a reasonable doubt. They implicitly rule out – say – that perhaps a matrix overlord framed the defendant, or aliens from another planet that are studying humans, etc.; if the defendant’s lawyer were to mention those alternatives, they’d rule them out explicitly, but in any case, they’d first need to make an intuitive probabilistic assessment, using their own epistemic sense.

    Earlier I tried to represent your methodology by saying that (it seems to me) you reject moral theories when they conflict with intuitions. I said that if we want reflective equilibrium this is too simple. You then replied that you _don’t_ do this, that you’re not rejecting (for example) Hulk’s theory _just_ because it conflicts with various intuitions. But here you seem to be saying that this is after all just the test that you have in mind.

    The “just because it conflicts with intuitions” framing implicitly disparages the method. What I’m saying is that I reject the theory because it makes false claims. And of course I reckon that it makes false claims by comparing its predictions, or claims, or whatever you call them, with the intuitions. But what I also do (which is why it’s not only because it conflicts with intuitions) is try to figure out whether there is a specific reason to distrust my intuitions on the matter. If I find no specific reason, I go with the intuitions, because our moral sense is generally the proper method for making moral assessments.
    So, I would say I conclude it’s false because it makes false claims; and I reckon that it makes false claims because I compare those claims with the verdicts of my moral sense – the normal, proper way of making moral assessments – and they are in conflict. And then, when I look to other sources and try to figure out whether there are good specific reasons to think my moral sense is malfunctioning, I don’t find them.
    That’s the usual and proper way of making moral assessments and testing moral theories, in my assessment. So, we do seem to disagree on this part, rather than only miscommunicate.

    The only difference seems to be that you check to make sure your moral sense isn’t “failing”. Well, if that’s the test, then again I ask why we aren’t able to go back and forth, as I suggested above: test theories against intuitions, and intuitions against theories?

    No, that’s not how I test it. I test it by looking for potential causes of damage, such as indoctrination in some ideology/religion (not the case), or bias because I or someone close to me is being condemned (not the case), and so on.
    Of course, in order to do that, I have to rely on my epistemic intuitions. I can’t jump out of my mind.

    I don’t see you offering any answer to this question. Why are intuitions (plus the intuition or belief that one’s moral sense is working normally) always supposed to have more rational significance than generalizations?

    Because the latter puts the cart before the horse, so to speak. Again, general moral theories are generalizations from intuitive assessments in specific cases. The proper way to test them is against such intuitions, at least in general. As a parallel, I offer the case of Alice, the scientist studying color (see above).

    Why is occasionally favoring theory over “moral sense” some kind of mistake, “putting the cart before the horse”? (Why can’t the theory be the horse sometimes?)

    As in the color case, the general method is to test theory against intuitive verdicts, since our moral sense is our normal and proper way of making moral assessments. But sometimes, under weird light conditions, an object might appear green to us but it’s not green, and a general theory measuring frequencies might get it right. In that case, something is interfering with the functioning of our color vision. So, yes, that might happen. But still, the proper way to test the color theory is to go with color vision, and just try to exclude sources of error, like weird light conditions. I’m saying a similar story holds for morality, only it’s considerably more difficult both to come up with general hypotheses and to look for potential sources of error. But the basic direction of testing is the same.

  17. ME: I’m certainly not denying that people have a moral sense, or that they can use their moral sense to figure out how they should act.

    YOU: Okay, but you said earlier:

    ‘(1) There seems to be no reason for believing that any natural or pre-ideological moral sense exists.’

    Right, I don’t think there’s any reason for believing in a _natural_ or _pre-ideological_ moral sense (though I don’t rule it out). This is consistent with believing, as I do, that “people have a moral sense”. But for all I know, my own moral sense is the result of a very specific acculturation and isn’t “natural”. Your earlier arguments often seem to be directed against someone who thinks there’s no such thing as a moral sense; that’s clearly not my view, though, since I’ve been assuming from the outset that almost everyone does have one. (It’s needed for reflective equilibrium, for example, and my first comment was an appeal to that method.)

    “I do know by my own assessments that there is no argument that would make that probable to a rational person with present-day knowledge and time to think about it (and you have both). I don’t need to consider every single possible argument in order to make an assessment like that. And I’m pretty sure that you don’t do that, either.”

    So your “assessments” are based on “present-day knowledge” in some rational way; you’re relying on inferences from things that people in the present are supposed to know or reasonably believe. You think through this, that and the other thing and come to the conclusion that it’s absurd to think Jesus walked on water (for example). At least I have no other way of understanding this and similar things you’re saying. But then how can it also be that you just “intuit” the absurdity of such claims? This seems incoherent, if the epistemic intuition you’re talking about here is similar to the intuition that (for example) it’s wrong to hurt people just for fun.

    For example:

    “Yes, explanations bottom out at some point.
    Someone with access to the internet, who has read arguments, etc., can coherently tell me that they believe that tomorrow Jesus will come, or that an alien spacecraft will take them to some paradise, or that they know that Jesus talks to them, that they have seen conclusive evidence that vaccines cause autism – and consistently insist on that after looking at the evidence –, that the Earth is less than 15000 years old – and consistently insist after they look at scientific publications, etc., and so on.
    I reckon they’re being epistemically irrational.”

    You’re saying there’s no further explanation possible for the claim of irrationality. Such beliefs are just obviously irrational, and if someone doesn’t recognize that he’s just irrational himself. But then you go on to allude to a bunch of specific facts about our epistemic situation that are surely meant to _explain_ the irrationality of such beliefs for people like us. At least I don’t know what else would be the point of mentioning the internet or whatever. Of course, these facts don’t really explain anything, but they do seem to be meant as indications of the general kind of thing that would explain your claim of irrationality.

    Do you think a person can believe that p by “intuition” when he believes that p on the basis of all kinds of specific bits of knowledge or evidence that most people until recently didn’t have? Maybe I’m not understanding what the role of “present-day knowledge” is supposed to be in relation to this inexplicable “intuition”.

    I really have no idea what “present-day knowledge” is supposed to make it absurd to believe in the miracles or the supernatural, etc. Or what specific knowledge we have that enables us to dismiss as _absurd_ the very idea that arguments for theism could make miracles fairly probable. (Or do you deny that if theism is probable miracles are not so improbable?)

    Also some of these examples don’t fit well with your view that intuitions always count for more rationally than theories or hypotheses or principles. That’s not how scientific reasoning works, for example. Scientists regularly hold on to theories that conflict with lots of strong pre-theoretical intuitions (for example, because the theories appear to be very predictive). Quantum mechanics seems to have some really counter-intuitive implications, but scientists don’t generally respond by denying quantum mechanics.

    So this seems wrong, or equivocal:

    “In any event, as a general method, intuitions come first, and theories are tested against intuitions.”

    They come first temporally, since we usually don’t have a theory before we have intuitions on the topic. It doesn’t follow that they forever come first epistemically. And in fact, theories are sometimes tested against intuitions and sometimes used to test the reliability of intuitions or intuitive faculties. To use one of your examples, a meta-ethical subjectivist might come to accept subjectivism for good reasons–e.g., it seems true, it fits with the available evidence, it fits with other things he believes about the world, it explains inferences that seem valid–and then regard his own objectivist intuitions about ethics as some kind of mistake or illusion. I’m not a subjectivist, but I don’t think it’s just a _mistake_ for the subjectivist to systematize his thinking in this way rather than some other way.

    “If someone on the street tells you they just saw a dead person, smelly and all, recover and walk away, you’re going to assign (intuitively) an astronomically slim probability, and believe that either they’re lying or their brain is not working properly.”

    Sure. On the other hand, I also have reasons for thinking that it’s fairly probable (or at least, far from absurd) things we would normally rightly consider highly improbable happen occasionally. These include reasons for theism, which I take to greatly raise the odds of those kinds of things happening. (But not often; in fact these reasons make it very improbable that they’d happen often.) Of course, it’s a separate question whether the specific resurrection we’re talking about here was actually an instance of that kind of thing.

    “Let’s consider a philosopher like, say, Richard Joyce. He is a moral error theorist. He believes he has no moral obligations. He certainly is not aware of general principles that distinguish his obligations, and it seems clear to me that he does not know them, either. But that does not imply he has no moral obligations. Of course, he has moral obligations, even though he does not know the general principles.”

    I’d simply deny that someone like this _really_ believes that he has no moral obligations, unless maybe he’s a psychopath. (I think believing in an error theory is irrational, Moore-paradoxical or something.) He acts as if he has all kinds of obligations. If he grades his students’ papers by arbitrarily writing numbers without reading, he probably feels guilty and thinks he did something wrong. Then he turns around and says, in class or in publications, that he doesn’t think he has any obligations. But that’s just empty talk. His philosophy doesn’t reflect his real beliefs, as manifested in his actions and mental life. On the other hand, if he _really_ doesn’t believe that he has any moral obligations, and he _really_ has no idea which rules are supposed to apply to his own behavior, then I don’t find it “clear” that he does have obligations anyway. Do psychopaths have moral obligations, assuming that they really have no understanding of moral rules or ability to distinguish them from arbitrary conventions? I doubt it.

    “You may not know the law of Sweden, but you may well have a moral obligation to know the law, and to uphold it. Or you may be unaware of it even if you know it, though perhaps you mean they don’t need to be aware at all time, but you need to be able to become consciously aware when they decide to think about it? At any rate, you can still be obligated to learn the laws of Sweden and uphold them.”

    You can’t be obligated to uphold it _while_ not knowing it, though you could at that time be obligated to get to know it. But what if there is no law of Sweden, or no way for you to find out what it is? Then you’re not obligated to get to know it either. Now add that whether you’re (supposedly) obligated to paint your house red depends on the laws of Sweden. Maybe there’s something in Swedish law to the effect that every house owner has to paint his house red unless he’s already done his military duty or he has more than three dependents (or whatever). Well, you’re just not obligated in that case–not obligated to uphold the law or to find out what the law says so that then you’d be able to uphold it (and obligated to). Likewise, suppose that my moral obligation not to harm innocent people in a given situation depends on all kinds of further factors, and it depends in ways that are fixed by principles (i.e., the principles that explain why harming innocents is okay in some cases but not others, obligatory in some cases and impermissible in others). Then I say my moral obligations are fixed (specified, limited) by principles; the relevant principles comprise a theory (unless, again, you have some very specific concept of a moral “theory”). But _you_ say no one knows of any reasonable workable theory, and you also _seem_ to think that no one (including you) is able to figure out any such theory. So I think you should conclude that no one has any moral obligations. Again, we don’t just “look” and “intuit” that x is wrong in cases C1, C2 and C15 but not-wrong in C4 and C7. Ordinary moral reasoning (ignoring law and philosophy) involves looking for generalizations that explain the relevant common properties, building systematic belief systems on that basis. To the extent that our beliefs about these things remain unsettled or incoherent, our ordinary moral judgments are unsettled or uncertain. Think of debates about abortion.
We can’t agree on whether personhood is the relevant criterion, or whether a fetus is a person at a given developmental stage, etc. And that’s why, unsurprisingly, thoughtful people tend to disagree or be uncertain about the moral status of abortion. If just “looking” and “intuiting” were enough, this wouldn’t happen. Anyway, the point is that _if_ there is no known theory and no way at present to invent or discover one, there’s also no way for us to know which obligations we have or where they end. Alternatively, we do know that, to some extent, because to some extent we know some reasonable moral theory.

    I wonder why you think no one had one in the past. If people accepted Catholic moral teachings, for example, they were (in effect) accepting a moral theory. The more intellectual ones were able to actually state and rationally defend the theory, appealing to certain moral intuitions and explaining away the ones that seemed to conflict. (E.g., those other intuitions are the result of sin, or human ignorance.) The less intellectual ones at least knew enough about the general practical rules implied by the theory, and could use those fairly competently in deciding what to do. I’m sure that was also the case for traditional Jewish or Muslim or Hindu or Confucian cultures, and many others. Again I wonder what exactly a “theory” is for you. If it has to be something like quantum mechanics then, sure, no one has ever had the moral equivalent of that kind of thing. But what I’m describing is clearly more than just a “moral sense”. It’s a rational system of principles with implications for practical reasoning.

  18. Right, I don’t think there’s any reason for believing in a _natural_ or _pre-ideological_ moral sense (though I don’t rule it out). This is consistent with believing, as I do, that “people have a moral sense”. But for all I know, my own moral sense is the result of a very specific acculturation and isn’t “natural”. Your earlier arguments often seem to be directed against someone who thinks there’s no such thing as a moral sense; that’s clearly not my view, though, since I’ve been assuming from the outset that almost everyone does have one. (It’s needed for reflective equilibrium, for example, and my first comment was an appeal to that method.)

    Some of them were, but I was covering considerable ground, and some of my arguments were also directed against the alternative you seem to be proposing (I just wouldn’t have called that a moral sense, but that’s just notation) – namely, a moral sense that is not built into normal adult human organisms, but results from specific acculturation. This is not to say that the proper development of a moral sense does not require social interaction. It does. But for that matter, the proper development of the human sense of sight requires specific conditions (in particular, it requires light), and that’s not a problem for the view that there is such a thing as a human sense of sight. Of course there is one.

    Now, I don’t know what you mean by “natural”, but the sort of moral sense I have in mind is a system or systems (specially dedicated to morality, or to part of the moral domain, or perhaps also with other functions; this is not crucial) that allows us to make generally reliable moral assessments, and which, when working properly, yields intuitive verdicts that do not depend on specific culturally prevalent moral beliefs, but are species-wide.

    I guess my arguments did not convince you, but let me explain in greater detail why (almost certainly) one of the following is true:

    A. There is such a sense, or
    B. An epistemic moral error theory is true, or
    C. A substantive moral error theory is true.

    By the way, I believe it’s A, to be clear.
    To argue for that, I’ll assume B and C are false, and argue that A is true.
    Why?
    People generally have the means of making true moral assessments – not infallibly, but usually, and at least in their daily lives (because B and C are false).
    But if not a moral sense as I described, then what are the sources of moral knowledge?
    A book like the Bible couldn’t do without the moral sense, because – leaving aside the fact that it’s terribly inaccurate, full of falsehoods, etc. – there are the following insurmountable difficulties:

    I. There are plenty of societies where there is no Bible, and no other allegedly revealed book, or for that matter, any alleged revelation. Yet, people do have the means of knowing right from wrong.
    II. Even if a powerful person handed down a book of rules, people would need a method of ascertaining moral truth prior to the handing over of the rules – one that would allow them to, say, distinguish between a benevolent being and a supervillain.

    So, what else could be the source of moral knowledge?
    The ability for deductive reasoning alone is not going to do it, because there is no way to go from nonmoral premises to moral conclusions while reasoning validly – or if there is, no one knows it yet.
    The ability to make probabilistic assessments still requires a previous shared understanding of moral concepts if it is to be used to assess the probability of moral claims, but the concepts themselves have to come from somewhere. Moreover, an ability to properly make probabilistic assessments involving morality already is a sense of right and wrong of the sort I’m positing. It cannot come entirely from indoctrination, etc., because then there would be no way to assess the indoctrinated norms, and moreover, the concepts would be culturally variable (i.e., the meaning of “wrong” or similar claims). That’s why I gave the example of the 2-squids and the 3-elephants (the aliens).

    So your “assessments” are based on “present-day knowledge” in some rational way; you’re relying on inferences from things that people in the present are supposed to know or reasonably believe. You think through this, that and the other thing and come to the conclusion that it’s absurd to think Jesus walked on water (for example). At least I have no other way of understanding this and similar things you’re saying. But then how can it also be that you just “intuit” the absurdity of such claims? This seems incoherent, if the epistemic intuition you’re talking about here is similar to the intuition that (for example) it’s wrong to hurt people just for fun.

    That’s not at all incoherent, and I already gave you examples.

    Let me try again: Alice is a juror. The defendant’s lawyer, Missy, points out that all events are consistent with her client being framed by advanced aliens from another planet. Missy offers no evidence, but points out that it’s consistent that the aliens may have been stealthy, so that we failed to detect them. Alice reckons that that is true, but that it does not introduce a reasonable doubt. Her epistemic probabilistic assessment of Missy’s alternative hypothesis is that it’s astronomically slim. In particular, it would be epistemically irrational not just to believe it, but even to be in doubt about it.

    In fact, when jurors (or professional judges, depending on the system) make assessments of what is a reasonable doubt, etc., they’re implicitly ruling out plenty of alternatives (finitely many if you only count those humanly comprehensible, but still a gazillion), as too improbable.

    Here’s another example: A YEC claims that the Earth is less than 15000 years old (or 10000 years, whatever). He is consistent and has looked at the evidence, but says maybe Lucifer planted fossils, or Yahweh tests our faith, or gives a complex explanation that involves no alleged deception. But plenty of people properly reckon that the Earth is far older. How do we know that, given that theory is underdetermined by observations?
    Well, we make intuitive probabilistic assessments. And just as in the case of the alien hypothesis and the jury, we reckon also that the YEC is being epistemically irrational. And so on.

    You’re saying there’s no further explanation possible for the claim of irrationality. Such beliefs are just obviously irrational, and if someone doesn’t recognize that he’s just irrational himself. But then you go on to allude to a bunch of specific facts about our epistemic situation that are surely meant to _explain_ the irrationality of such beliefs for people like us. At least I don’t know what else would be the point of mentioning the internet or whatever. Of course, these facts don’t really explain anything, but they do seem to be meant as indications of the general kind of thing that would explain your claim of irrationality.

    Well, given my history of interactions online, I don’t believe I have a significant shot at convincing Christians (for example), or people who debate me online. After our interaction so far, I reckon convincing you is much less probable than it was before. But it’s interesting to argue, and perhaps one can convince readers.
    My explanations about those facts are in part meant to highlight them to you or to readers, or even to inform people who haven’t thought of them (it depends on the case), so that they make their own intuitive assessment: hopefully, their assessments will be proper.
    But other than that, there is a more direct reason I explained that I brought up the internet, etc. You asked what my position was, and you also gave an interpretation of it that was wrong in many ways. I want to clear things up. For example, even people who agree with me that beliefs that Jesus walked on water, resurrected, etc., are irrational for people living in the present day in the West, might think that I’m being epistemically irrational if they come to believe that I believe that humans in all circumstances would be epistemically irrational if they believed in such events. I don’t want that to happen, so I want to clarify my views when they are misconstrued, and even explain them when someone asks, at least as time permits.

    Do you think a person can believe that p by “intuition” when he believes that p on the basis of all kinds of specific bits of knowledge or evidence that most people until recently didn’t have? Maybe I’m not understanding what the role of “present-day knowledge” is supposed to be in relation to this inexplicable “intuition”.

    Yes, indeed. My point is not that they will not make an assessment based on the bits of information (or lots of information) that they get. Rather, my point is that the assessment on the basis of that information will be an intuitive probabilistic assessment.

    Take, for example, Alice the juror again. When the trial begins, she does not have the belief that the defendant – say, Jack – is guilty. But as prosecutor Juan presents the pieces of evidence, she eventually becomes convinced. Of course, she factors in the information. And she needs conscious deliberation to focus on what she wants to evaluate, etc., but she has to make intuitive probabilistic assessments along the way, and all the time. For example, she reckons that the witness is very probably not lying. And she surely rules out aliens framing the defendant: with no specific evidence backing up that claim, its probability is stuck at no more than the prior, and it’s negligible.

    You say a person believes “on the basis” of all kinds of specific bits of knowledge. Well, I would say there are a lot of observations that she needs to assess; that will give her some knowledge. But observations – this is crucial – do not uniquely determine a theory. In fact, there are infinitely many hypotheses consistent with all observations; the humanly comprehensible ones are very probably finitely many, but certainly still a gazillion. It’s only because of her ability to make intuitive probabilistic assessments that she can make sense of the observations, etc.
    But she does not go around explicitly applying Bayes’ Theorem and updating her probabilities conditional on events like the prosecutor saying such-and-such, the alleged witness doing so, etc.; that’s all unconscious and intuitive processing. Even when she’s thinking about the matter consciously, unconscious assessments just come to her all the time, and allow her to end up delivering a verdict.
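    The kind of unconscious updating described above can be made explicit with a toy calculation (all numbers here are invented purely for illustration, not a model of actual cognition): when the courtroom evidence is equally consistent with the mundane hypothesis and the alien-framing hypothesis, an explicit application of Bayes’ Theorem leaves the astronomically slim prior essentially untouched.

```python
# Toy Bayesian sketch (invented numbers, for illustration only).
# Two exhaustive hypotheses: the mundane one ("guilty") and the
# alternative Missy offers ("framed by stealthy aliens").

def posterior(prior_h, likelihood_h, prior_alt, likelihood_alt):
    """P(H | E) by Bayes' Theorem over two exhaustive hypotheses."""
    joint_h = prior_h * likelihood_h
    joint_alt = prior_alt * likelihood_alt
    return joint_h / (joint_h + joint_alt)

prior_aliens = 1e-12            # astronomically slim prior
prior_guilty = 1 - prior_aliens

# The observed evidence is consistent with (equally likely under) both
# stories, since the aliens are stipulated to be undetectably stealthy:
p_evidence_given_aliens = 1.0
p_evidence_given_guilty = 1.0

p = posterior(prior_aliens, p_evidence_given_aliens,
              prior_guilty, p_evidence_given_guilty)
print(p)  # remains about 1e-12: mere consistency doesn't raise the probability
```

    The point of the sketch is only that consistency with the evidence contributes nothing by itself; the alternative hypothesis stays stuck at its negligible prior unless some evidence favors it specifically.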

    Another example I gave: I’m walking on the street, and someone tells me that she was robbed. I intuitively assess a lot of variables, and reckon that she’s telling me the truth. But if she had instead told me that she had been robbed by aliens in a flying saucer, or by someone flying à la Superman, I would believe she’s either lying or mistaken. And it’s proper of me to do that. But what I observed is logically consistent with the hypothesis that it was aliens, or the guy flying like Superman. I just rule that out intuitively. It’s way too improbable. It would be unreasonable to have doubts about the fact that she’s stating falsehoods. Maybe she’s not doing so deliberately, but she is. In fact, it would be epistemically irrational on my part to believe her (again, in the realistic scenario in which I’m walking on the street; I think we have all sometimes encountered people making no sense, even if there is no apparent contradiction – maybe there would be if we stuck around long enough, but that’s not the point). It would also be epistemically irrational of you to believe her, or of anyone else (again, absent weird evidence of aliens, etc.).

    I reckon the same happens with claims that Jesus walked on water, resurrected, etc. It would be epistemically irrational on my part to believe them. And also it would be epistemically irrational on the part of other people living in our modern world (adults of normal intelligence or more), not in conditions of abject poverty or anything extreme.

    I really have no idea what “present-day knowledge” is supposed to make it absurd to believe in the miracles or the supernatural, etc. Or what specific knowledge we have that enables us to dismiss as _absurd_ the very idea that arguments for theism could make miracles fairly probable. (Or do you deny that if theism is probable miracles are not so improbable?)

    While I reckon theism is astronomically improbable, I also reckon that any specific claim of miracles is also very improbable even assuming theism, going by the usual examples of miracles and not by some definition (on the other hand, there are definitions of miracles on which they wouldn’t be so improbable; if so, the point remains, because I’m assessing the probability of specific claims).

    Might it be that they’re more likely than on nontheism?
    Sure, but still very improbable.
    For example, if I assume theism is true, I reckon it’s still irrational to give any significant weight to the defendant’s suggestion that, say, the body is no longer where it was because she came back to life and walked away.

    Also some of these examples don’t fit well with your view that intuitions always count for more rationally than theories or hypotheses or principles. That’s not how scientific reasoning works, for example. Scientists regularly hold on to theories that conflict with lots of strong pre-theoretical intuitions (for example, because the theories appear to be very predictive). Quantum mechanics seems to have some really counter-intuitive implications, but scientists don’t generally respond by denying quantum mechanics.

    We need to be careful here.
    Scientists know that the theories are predictive, and they also know that our intuitions about what sort of behavior one could expect from particles and the like do not hold in the realm of the very small, or very fast, or big, or massive. But those are not the epistemic intuitions I’m talking about. Scientists also make intuitive probabilistic assessments on the basis of information available to them. Part of that info conflicts with what some of them would have expected before. But it happens. They’ve observed it. In fact, by accepting the observations, they’re intuitively ruling out hypotheses such as, say, that they were confounded by a demon that messed with the instruments.

    That said, I think there is a tendency in some (maybe many) scientists (not in the social sciences) to attribute to some theories a bit more probability than they should, especially in matters such as QM or GR, but it’s nothing like believing in people walking on water.

    They come first temporally, since we usually don’t have a theory before we have intuitions on the topic. It doesn’t follow that they forever come first epistemically.

    I’m not making a deductive argument, so it does not follow. But the color analogy should be clear enough. The theory is based on a generalization from the verdicts of our system for detecting colors. It should not be put before that system; that’s putting the cart before the horse. Similarly for the moral theories based on our moral sense.

    And in fact, theories are sometimes tested against intuitions and sometimes used to test the reliability of intuitions or intuitive faculties.

    Not under normal conditions, if the theory is a hypothesis based on generalizations from intuitions in that precise domain. That would not be proper.
    If the theory is a generalization from observations about judgments about greenness and other color properties, and there is an object that under normal conditions people say is green but the theory says it’s blue and yellow, the theory needs to be either modified or ditched, because it’s almost certainly wrong.
    Now, a theory based on intuitions in one domain could perhaps be used to test the reliability of intuitions in another domain. But that’s another matter.

    To use one of your examples, a meta-ethical subjectivist might come to accept subjectivism for good reasons–e.g., it seems true, it fits with the available evidence, it fits with other things he believes about the world, it explains inferences that seem valid–and then regard his own objectivist intuitions about ethics as some kind of mistake or illusion. I’m not a subjectivist, but I don’t think it’s just a _mistake_ for the subjectivist to systematize his thinking in this way rather than some other way.

    The term “meta-ethical subjectivist” has been used, and is used, in so many different ways (e.g., see the SEP article on moral antirealism) that I don’t know what sort of theory you have in mind. But if – say – the subjectivist reckons from apparent disagreement that we have no reliable moral sense, and goes from there to conclude that apparent disagreement is miscommunication or something like that, then in any event he is not making an assessment from a first-order theory based on generalizations from our moral intuitions on specific cases, in order to rule out assessments made by our moral sense in specific cases under normal conditions.
    Rather, he’s looking at morality in a nonmoral way, and studies how humans – himself included – make moral assessments.
    In a way, he’s doing what alien scientists studying humans might do (though they very probably wouldn’t get the wrong conclusions).

    To go back to the color analogy, what the subjectivist is doing (if I get your scenario right; if not, I ask you to elaborate) is not like using a theory that is a generalization from observations about judgments about greenness and other color properties to say that an object is blue and yellow even though people generally regarded as having normal color vision say it’s green under normal conditions. Rather, what the subjectivist is doing is like studying a population, seeing that their color-like judgments vary widely, and reckoning that there is no reliable human color sense. It’s a very different sort of behavior and testing.

    Sure. On the other hand, I also have reasons for thinking that it’s fairly probable (or at least, far from absurd) things we would normally rightly consider highly improbable happen occasionally. These include reasons for theism, which I take to greatly raise the odds of those kinds of things happening. (But not often; in fact these reasons make it very improbable that they’d happen often.) Of course, it’s a separate question whether the specific resurrection we’re talking about here was actually an instance of that kind of thing.

    Yes, but that’s what I’m saying. You do find a resurrection to be an extremely improbable event, given only that info. You raise the probability from almost zero to high or whatever under the assumption of theism but only in the case of Jesus, or very few cases. What I’m saying is that you’re making the wrong probabilistic assessment in that case, but it’s not that you fail to make proper probabilistic assessments about resurrections in general. Generally, you do it right.

    I’d simply deny that someone like this _really_ believes that he has no moral obligations, unless maybe he’s a psychopath.

    Some part of his brain believes he has moral obligations – he has a moral sense, after all. I’m not inclined to think that counts as knowledge in his case, because he believes those feelings are not linked to obligations: he thinks obligations carry metaphysical demands that are not met. He proposes fictionalism, so he makes “pretend” moral judgments.
    But maybe you think those are actual moral judgments, and somehow that counts as conscious knowledge?

    In any case, we don’t have to consider that case specifically. Take a moral error theorist who is not a fictionalist but an eliminativist. He’s not going to say to himself “It’s my obligation”, etc., and then act upon that. So, either he doesn’t know his obligations, or if he does, he knows them unconsciously.

    On the other hand, if he _really_ doesn’t believe that he has any moral obligations, and he _really_ has no idea which rules are supposed to apply to his own behavior, then I don’t find it “clear” that he does have obligations anyway. Do psychopaths have moral obligations, assuming that they really have no understanding of moral rules or ability to distinguish them from arbitrary conventions? I doubt it.

    If a psychopath rapes a child just for pleasure, is he not behaving immorally?

    Anyway, I don’t really know that psychopaths have no ability to distinguish moral rules from arbitrary conventions. If it is true that they do not, that gives me another way to show that we have a general, species-wide sense of right and wrong. Otherwise, how is it that non-psychopaths distinguish between the two? It’s not because society says so. Smart psychopaths can also tell what people in their society generally believe about moral obligations (still, this is not the main argument for a species-wide sense, as psychopaths actually might be able to tell for all I know; I gave the argument above).

    But regardless, I would use the case of error theorists who are not psychopaths (and not fictionalists, if you prefer) as an example of people who don’t know their obligations consciously.

    All that aside, I don’t need moral error theorists to make my point. I can just point out that most people in most societies (and that’s a serious understatement) did not have conscious access to a general theory on the basis of which they would make moral judgments properly (more below).

    Now add that whether you’re (supposedly) obligated to paint your house red depends on the laws of Sweden. Maybe there’s something in Swedish law to the effect that every house owner has to paint his house red unless he’s already done his military duty or he has more than three dependents (or whatever). Well, you’re just not obligated in that case–not obligated to uphold the law or to find out what the law says so that then you’d be able to uphold it (and obligated to).

    But suppose that the government publishes that obligation online, in newspapers, etc., so you read it and you know it. Surely, in order to be obligated to abide by Swedish law, you don’t need to know any general principles or theories underlying it, let alone be able to consciously access such principles.

    Likewise, suppose that my moral obligation not to harm innocent people in a given situation depends on all kinds of further factors, and it depends in ways that are fixed by principles (i.e., the principles that explain why harming innocents is okay in some cases but not others, obligatory in some cases and impermissible in others). Then I say my moral obligations are fixed (specified, limited) by principles; the relevant principles comprise a theory (unless, again, you have some very specific concept of a moral “theory”). But _you_ say no one knows of any reasonable workable theory, and you also _seem_ to think that no one (including you) is able to figure out any such theory.

    You seem to be conflating two different senses of “theory”, but if you’re not, in any case you misunderstand my view.
    So, here goes:

    Given that I think there is a moral sense that yields results in specific cases, and that that moral sense has a way of functioning in order to yield its verdicts, I would say that there are some more general rules that the moral sense applies to specific cases. I do not think the rules are likely to be remotely as general as current theories about what the rules are often say; they might be more like an extremely long and casuistic book. But regardless of how general they are, I don’t think the relevant rules comprise a theory, in any usual sense of the word.

    A theory is something that – among other conditions – we hold consciously, whereas the more general rules do not need to be consciously accessible, and I don’t think they are. We can, however, apply our sense of right and wrong to specific cases, and theorize about what the more general rules might be. That’s all fine, but knowledge of the theory – and, just to reduce the risk of misunderstandings, I mean the explicit stuff we come up with – is in no way required for knowledge of our moral obligations. At most, we just need to know that we have an obligation in some situation, etc. (assuming that we do need to know; I’m not sure about the psychopaths, but the point isn’t central).

    So I think you should conclude that no one has any moral obligations.

    I don’t agree at all (as you probably expected by now ;-)). I think we have moral obligations, and moreover, in normal specific cases, we know our moral obligations, by means of our sense of right and wrong.

    Again, we don’t just “look” and “intuit” that x is wrong in cases C1, C2 and C15 but not-wrong in C4 and C7. Ordinary moral reasoning (ignoring law and philosophy) involves looking for generalizations that explain the relevant common properties, building systematic belief systems on that basis.

    This I deny. Ordinary moral assessments are intuitive, like ordinary epistemic probabilistic assessments. While we do reckon what’s morally wrong, obligatory, etc., or what is probable, unreasonable, etc., we do not have conscious access to the workings of our moral sense or our epistemic sense.

    If just “looking” and “intuiting” were enough, this wouldn’t happen.

    First, moral disagreement happens under a wide background of agreement. Disagreement is salient because we care about it. But agreement is much more common. And moral judgments are, generally, intuitive and not based on theory, and result in agreement. Otherwise, social life would not be possible.
    Second, intuitions arise on the basis of contemplating cases described in nonmoral terms. People may call a case by the same name (e.g., “abortion”), but actually disagree about the nonmoral properties of the case, and that also is a source of different assessments.
    Third, there are situations that simply aren’t clear for our moral sense: I think there are non-consciously accessible more general rules, but there is a degree of fuzziness, so that might be a source (though I don’t think that’s likely).
    Fourth, abortion is a case in which there is a source of damage to many people’s moral sense – namely, false theories. Our moral sense is fallible.

    Anyway, the point is that _if_ there is no known theory and no way at present to invent or discover one, there’s also no way for us to know which obligations we have or where they end.

    That is not true, and there is no reason to think it is. We don’t have enough information yet. I’d say not even close. Perhaps some day we will. Perhaps not. But either way, it’s clear that that does not prevent us from knowing our obligations intuitively, in the vast majority of cases.
    The same goes for epistemic assessments, by the way.

    Here’s an example: consider people who lived in most human societies in the past. Surely, they did not have the means to discover the general theory, but they had moral obligations. Or do you think they had a means of discovering the correct general theory?
    Just consider what you seem to be demanding: take any of the present-day theories (the ones that make predictions, not the unfalsifiable ones). They are complex philosophical theories.
    Imagine a male hunter (and maybe a bit of a gatherer) in Australia (i.e., what’s now Australia), 25000 years ago, a very long time before there was even writing, who had to spend his life hunting to sustain himself and his group. Or a female gatherer (and maybe a bit of a hunter), etc.

    Surely, those people did not have the intellectual tools to come anywhere near knowing one of the present-day theories – which I think are all false, but even if one were true.
    In fact, they never had any theory. They made moral judgments intuitively (like people do today in nearly all cases, which are not salient, but leaving that aside), using their sense of right and wrong. But if having the means to discover a theory were required for moral obligations, they would not have had any. Of course, they did have moral obligations.
    Now, they also might have intuitively come up with some generalizations sometimes, no doubt. But they did not have a general theory, and in any event, those generalizations came after they made the intuitive moral judgments, using their sense of right and wrong. That’s how they knew right from wrong in the first place.

    I wonder why you think no one had one in the past. If people accepted Catholic moral teachings, for example, they were (in effect) accepting a moral theory.

    Partially, because most people didn’t know them very well. But in any case, I never said no one had one in the past.

    What I said was: “For example, if that were required, then it seems to me that most people in most human societies would not have had any moral obligations, since they did not know the principles in question.
    In fact, most people had no moral theory, or had a false one – and so false on the basics that didn’t even come close to being truth; I can tell that without even using my moral sense, just by the mutual incompatibility of theories. ”

    Take, for example, people living in Australia 40000 years ago. Surely, they did not have the tools to develop any present-day theory, so even if one of the present-day theories were correct, they did not have it.
    But did they have any theory at all?

    Almost certainly not in the sense of a general theory that yields verdicts in at least most common situations.
    What they did have were some very, very partial generalizations, probably some true and some false ones. But like humans have always normally done, they made moral judgments using their own moral sense, without conscious access to the rules it’s applying, and without having the time and cognitive resources – not even close – to figure out those inner workings (i.e., the general rules).

    Most human societies were small bands of hunter-gatherers. Do you really think they went around using a general theory when they made moral judgments?
    Most human societies that weren’t like that were still pre-writing, and nearly everyone lacked the time or cognitive resources to do anything like what either of us is doing now in terms of thinking about human psychology and/or philosophy. And we – very clearly, to me – do not remotely have the resources to figure out the general rules by which our sense of right and wrong works; it’s not even close. Maybe after centuries of serious research in human psychology, someone will figure it out (leaving aside, as always, strong general AI, because then all bets are off). Maybe.

    Now, we can in fact generalize from specific cases, and come up with some true general hypotheses. For example, any adult person has a moral obligation not to rape people for fun. Well, if you’re right about psychopaths, not even that. But that’s very, very little, in terms of degree of generality. It’s hardly a theory, and it surely is not a theory that tells us the actual rules.

    The less intellectual ones at least knew enough about the general practical rules implied by the theory, and could use those fairly competently in deciding what to do.

    I would say that that simply is not how human psychology works, and how humans have nearly always made moral judgments. They did it with their sense of right and wrong, not with knowledge of a general theory that they would apply in each case. Do you actually think hunter gatherers go around making judgments like that?

    I’m sure that was also the case for traditional Jewish or Muslim or Hindu or Confucian cultures, and many others.

    That’s still a tiny minority of societies.
    But that aside, of course they were also all false theories, even if some of their verdicts were correct. But surely, nearly all of the inhabitants simply did not have the means to critique them on a philosophical level and come up with the true one.

    Again I wonder what exactly a “theory” is for you.

    As I said, I was thinking of a moral theory as something (roughly) general enough to yield verdicts (and so be tested) in at least many different situations.
    But given one of the arguments you made now (i.e., the one where you said that the relevant principles comprise a theory), then it seems you’re talking about something fairly complete too.

    In my assessment, that’s a matter for perhaps centuries or more of research in human psychology, given the extreme complexity of the human brain/mind, and in particular the probable extreme complexity of the system of rules or principles. Add to that the (from my perspective, clear) failure of extremely simplistic attempts so far, and the fact that psychologists usually aren’t focused on that, or even philosophically informed enough to try with any chance of success. Maybe combined teams of psychologists and philosophers could do it (general AI aside), but still, it’s pretty clear to me no one is even close.

  19. “If a psychopath rapes a child just for pleasure, is he not behaving immorally?”

    Well, if he really has no understanding or appreciation of any moral rules or values, then I don’t see how he could be. If a shark bites off your leg, is it behaving immorally? A psychopath who rapes a child is still doing something really horrible, something we should try to prevent, something that would normally be a violation of moral rules (if the being doing it were a normal moral agent). But if behaving immorally means behaving in some way that legitimates moral blame, I find it natural to say the psychopath is blameless. (Though I’m not sure that all or many so-called “psychopaths” really are like this psychologically.)

    “Scientists know that the theories are predictive, and they also know that our intuitions about what sort of behavior you could expect from particles and the like do not hold in the realm of the very small, or very fast, or big, or massive.”

    How do they “know” this? Because they already accept a (counter-intuitive) theory about the very small (etc.) and they think it’s more plausible that the theory is true than that “our intuitions” on this topic are reliable or accurate. In other words, they give epistemic priority to theory over intuition. The intuitions in question were initially intuitions about objects or stuff in general. (Intuitively, for any x, either x is determinately here or it’s determinately not-here.) But now scientists decide that these intuitions, which intuitively seemed to be just common sense regarding stuff in general, are really just common sense about certain specific kinds of objects.

    “Imagine a male hunter (and maybe a bit of a gatherer) in Australia (i.e., what’s now Australia), 25000 years ago, a very long time before there was even writing, who had to spend his life hunting to sustain himself and his group. Or a female gatherer (and maybe a bit of a hunter), etc. Surely, those people did not have the intellectual tools to come anywhere near knowing one of the present-day theories – which I think are all false, but even if one were true.”

    I really have no definite idea about what these kinds of people might have thought or how they might have lived. Did they really think or behave “morally”? I’m pretty sure they had instincts and some basic emotions and social bonds similar to ours. Beyond that, who knows? For all I know, their version of “morality” was just a bunch of tribal traditions and taboos taken for granted with no reflection. Maybe they were eating people in other tribes or enslaving them and raping them and they thought all of that was just fine, just because things had always been that way (as far as they knew). Or maybe they never gave any thought to whether it was okay or not. I just have no idea, and it wouldn’t surprise me if there was little or nothing in their psychology or social life corresponding to this “moral sense” that you’re describing.

    But I also don’t understand why this matters. Your argument on this point seems to be that (A) such people did make reasonable moral judgments, but (B) they had no moral theory, therefore (C) moral theory isn’t necessary for making moral judgments. Even if the premises are true, I don’t think I ever denied them, or the conclusion (if that’s your conclusion). My point was that we, now, are inclined to theorize and we can make more reasonable moral judgments by using a method of reflective equilibrium. In this method, theory can sometimes take priority over intuition; therefore, your objections to Hulk’s position aren’t so convincing. That was where I started, anyway. Since then, I’ve claimed that having moral obligations requires knowing (or being able to know) a moral theory of some kind; but you can make moral judgments and act morally without having obligations, it seems to me.

    “My point is not that they will not make an assessment based on the bits of information (or lots of information) that they get. Rather, my point is that the assessment on the basis of that information will be an intuitive probabilistic assessment.

    Take, for example, Alice the juror again. When the trial begins, she does not have the belief that the defendant – say, Jack – is guilty. But as prosecutor Juan presents the pieces of evidence, she eventually becomes convinced. Of course, she factors in the information. And she needs conscious deliberation to focus on what she wants to evaluate, etc., but she has to make intuitive probabilistic assessments along the way, and all the time. For example, she reckons that the witness is very probably not lying. And she surely rules out aliens framing the defendant: with no specific evidence backing up that claim, its probability is stuck at no more than the prior, and it’s negligible.”

    Okay, so when you initially claimed that people nowadays should be able to just “intuit” that Catholicism is false, you did not mean that (for example) if their brains are functioning properly it will just seem to them that it’s false. Rather, you meant that if their brains are functioning properly _and_ they have all kinds of (as yet largely unspecified) information _and_ they reason adequately on that basis _then_ it will just seem to them that it’s false. It will seem that way in light of some possibly quite complex and sophisticated deliberation or inference, weighing of competing hypotheses, etc. Is that right? Because in that case this “epistemic intuition” seems _very_ different from paradigms of “intuition” as we normally use the term, e.g., it just seems to me intuitively that people shouldn’t hurt others just for fun. And it sounds a _lot_ like what I was earlier describing as theorizing.

    And I take it that you’re claiming we do this kind of thing–whatever we call it–when we make moral judgments and think about morality? Because, if that’s your view, it seems you should doubt that hunter gatherers in the stone age were doing anything like that with the information available to them. (Though maybe they were–who knows?)

    Right-wingers make fun of liberals who appeal to “the current year”. E.g., “Wow, just wow, this asshole is saying blacks commit more crimes than whites. It’s 2017 people!” I assume you don’t think “the current year” is a good form of argument. But it looks as if you’re doing something similar. You just assert theism is “astronomically” improbable, that Catholicism is “absurd”, and so on, without explaining any of the specific information or inferences (or whatever) that are supposed to enable people to reach these strong conclusions. Instead you just say that we have the internet now. You really did say that! You are clearly a very smart guy–or gal, just maybe–but that’s a very strange thing to say. It certainly would faze me to realize that very smart people who also have the internet regard these things as probable, even after decades of careful reflection and dealing with arguments from very smart opponents. I certainly would hesitate to just say that those people are “being irrational”, or that their brains must be malfunctioning. I think (and find it intuitively obvious) that there can be reasonable disagreement about these kinds of topics. They’re hard, they’re abstruse, they’re remote from normal human experience; if evolution is true, we probably didn’t evolve mental capacities geared to reliable intuiting wrt these topics… etc.

    And it’s suspicious that you’re not able to set forth the specific reasoning that you allude to here. If it’s reasoning that any normal person in this society (with internet access) should be able to access or invent, it should also be reasoning that you can state in a pretty adequate way in a blog comment. Is there some reason you can’t just say what it is that we all can know or reasonably believe now, that would enable us to reach your conclusions? Not being facetious–I really would be curious to know what you think it is, since I have no idea. For example, what did we learn in the last 2500 years that now enables us to realize that no cosmological argument makes theism probable, or that no argument could make miracles probable?

    Interestingly, my own sense is that “present-day knowledge” makes supernaturalism or transcendentalism far more plausible than it might have been 100 years ago. We have more experience with science and naturalistic explanation, and we’re finding that over and over there seem to be fundamental features of reality and experience that just don’t fit into that paradigm. We can’t even imagine how they could fit. The internet, for example, seems to illustrate and amplify puzzles about minds and information and consciousness that wouldn’t have been so clear in the past. Another example is the moral sense itself. I think it’s going to be very hard to come up with any coherent naturalistic explanation for how that could exist, and how it could have any normative force. The less plausible naturalism, the more plausible supernaturalism. Not yet an argument for theism, of course, but it’s not _absurd_ to think along these lines and end up with intuitions that you’d dismiss as just malfunctioning.

    Also I wonder if this line of thought is compatible with what you said about scientists dealing with intuitions: when it comes to QM, you say, they recognize that intuitions appropriate to one domain aren’t reliable when dealing with the QM domain. Okay. But then why can’t a Catholic philosopher say that ordinary empirical intuitions about biological death or water aren’t reliable when it comes to the domain of theology or metaphysics? (For what it’s worth, I find it intuitively absurd to suppose that those kinds of intuitions would be reliable there; but I know it’s not worth much since my intuitions must be all messed up given that I disagree with you.)

    “I. There are plenty of societies where there is no Bible, and no other allegedly revealed book, or for that matter, any alleged revelation. Yet, people do have the means of knowing right from wrong.
    II. Even if a powerful person handed down a book of rules, they need a method of ascertaining moral truth prior to the handing over of the rules, and which would allow them to, say, distinguish between a benevolent being and a supervillain.

    So, what else could be the source of moral knowledge?”

    But, again, I don’t deny that intuition is a source of moral knowledge. Of course it is. But it’s not the _only_ source of moral knowledge. For example, once you construct a set of principles on the basis of intuitions, you might gain new knowledge by applying those principles to new cases, getting counter-intuitive results, and choosing to reject some of the intuitions with which you began.

    Are you sure that people in _all_ societies or even _most_ societies did “have the means of knowing right from wrong”? People who were eating their enemies or doing mass human sacrifices or wiping out entire tribes just because they wanted their land? I think the actual behavior of many groups in the past is often pretty vile and disgusting, if we judge using our current “moral sense”. Not just their behavior, but their principles or values (if they had any). Now, of course, people did have ways of distinguishing what _they_ took to be “right and wrong” or some roughly–maybe only very roughly–analogous distinction, e.g., honorable and shameful. But if that’s all we’re talking about, the evidence cuts against your claim that everyone has some kind of _natural_ human moral sense. If we do, it’s very often undeveloped or ignored; or else its dictates are so minimal and vague that it becomes unclear why it should matter much.

    I didn’t mean to say that people need an “alleged revelation” or a book of rules in order to have some kind of moral theory. I’m sure the theory could be encoded in oral tradition or myth; or it could be implicit in customs and practices that reflective members of the group can interpret and articulate, or could if they were pressed to explain themselves. But, again, I don’t insist that everyone was always moral, if “moral” refers to some fairly specific set of attitudes or dispositions like ours.

    True, even if there was some revelation people would need to use their own moral sense in order to decide whether it was really coming from a benevolent being or a super-villain. But, again, I’m not claiming that we can think or act morally _without_ intuition or moral sense; instead I’m claiming that we can’t think or act morally (in anything like our current sense of the term) with _only_ intuition or moral sense and _without_ theorizing and reflective equilibrium, etc. And it seems you might actually agree with me on this–which would be truly absurd after all this back and forth! Since you seem to be defining “intuition” so that a complex conscious deliberative process of “factoring” all kinds of recently discovered empirical data can count as “intuition”. In that case, yes “intuition” or “moral sense” is the only thing we need and the only thing we could have. But I’m guessing it won’t be so easy for us to agree.

  20. Jacques:

    The shark is not morally guilty. It’s not a moral agent at all. But it’s not clear to me that a psychopath is also not a moral agent. I guess it depends on the psychology of psychopaths, but they can learn moral truths from others.
    Still, psychopaths are not a central issue, so I’ll leave it at that.

    How do they “know” this? Because they already accept a (counter-intuitive) theory about the very small (etc.) and they think it’s more plausible that the theory is true than that “our intuitions” on this topic are reliable or accurate.

    Actually, they know those intuitions do not hold because they conduct experiments, and they make their own intuitive assessments about what’s going on in the experiments; in particular, they properly assess that some stuff is behaving in a way that conflicts with some of our intuitions about what sort of behavior you could expect from particles and the like.
    But again, they’re not rejecting their intuitive probabilistic assessments. My point is that they go by their own epistemic intuitions, and that leads them to set aside some other intuitions.
    As I said, though, sometimes I think they go too far in assigning too high a probability to some of the models. But on the other hand, they’re on firm ground when they reject the view that some particles behave as some of our intuitions would have it.

    In other words, they give epistemic priority to theory over intuition. The intuitions in question were initially intuitions about objects or stuff in general. (Intuitively, for any x, either x is determinately here or it’s determinately not-here.) But now scientists decide that these intuitions, which intuitively seemed to be just common sense regarding stuff in general, are really just common sense about certain specific kinds of objects.

    Indeed, but notice that when scientists do that, they’re relying on their epistemic intuitions that lead them to reckon that some other intuitions are not likely true in those realms. This is in no way in conflict with anything I’m saying.

    I really have no definite idea about what these kinds of people might have thought or how they might have lived. Did they really think or behave “morally”?

    Sure, since they were the same species as us. Why would you call their behavior not moral behavior?
    They made moral judgments, they punished wrongdoers, etc. They were no psychopaths.

    For all I know, their version of “morality” was just a bunch of tribal traditions and taboos taken for granted with no reflection. Maybe they were eating people in other tribes or enslaving them and raping them and they thought all of that was just fine, just because things had always been that way (as far as they knew).

    Maybe some of them thought so, because of error about the sorts of minds those other people had (e.g., demonizing or dehumanizing the outgroup), due to group bias, or religion, or whatever. Or maybe they just weren’t doing so. Maybe it depends on the group. However, that’s not the point. The point is that they behaved immorally sometimes and not immorally at other times, etc. They were moral agents. And they had a way of knowing their moral obligations, which certainly was not a theory like you suggested.

    Or are you saying that maybe they weren’t moral agents, and nothing they did was immoral?
    That would be extremely improbable. They were very close to us. Morality comes from a very long evolutionary history. It’s not new. Even chimps have some rules and enforce them, albeit pretty imperfectly.

    Or maybe they never gave any thought to whether it was okay or not. I just have no idea, and it wouldn’t surprise me if there was little or nothing in their psychology or social life corresponding to this “moral sense” that you’re describing.

    That would be again extremely odd. Even chimps have something like chimp-morality. And these were humans. There is no way our moral sense could have evolved so quickly, and we do have one – again, barring an error theory.

    But I also don’t understand why this matters. Your argument on this point seems to be that (A) such people did make reasonable moral judgments, but (B) they had no moral theory, therefore (C) moral theory isn’t necessary for making moral judgments. Even if the premises are true, I don’t think I ever denied them, or the conclusion (if that’s your conclusion). My point was that we, now, are inclined to theorize and we can make more reasonable moral judgments by using a method of reflective equilibrium.

    You have questioned more than once that there is a sense of right and wrong like the one I’m describing.
    I, on the other hand, hold that there is, and that theories are properly tested against it. As for reflective equilibrium, I’m not sure how you’re construing it yet, so I don’t know to what extent, if any, I deny it.

    Just to be clear, I’m not saying that there is no room for moral generalizations, or that they’re not useful (more below).
    Rather, my position is, as I explained, that generalizations from the verdicts of our sense of right and wrong in specific cases are properly tested against the verdicts of our sense of right and wrong in specific cases. If they do not pass the test, then unless there is good reason to think that our moral sense is malfunctioning in the specific case in question, it’s very likely or certain that the generalization is wrong. How likely or certain depends on the case, but that is generally how one falsifies a moral generalization from specific cases, or an even more general theory constructed by other means instead of generalization from specific cases (e.g., improper means such as deductions from religious claims).

    So, how can generalizations (whether very partial or more complex ones) be of use?
    There are a number of ways, but purely for example:

    1. Let’s say that our moral sense does not yield a clear verdict on whether X is immoral. If we have a generalization that has passed many tests so far and has not failed any of them, and that says X is immoral, then that provides some good evidence that X is immoral. How good the evidence is depends on factors such as how many tests the generalization passed, or what we can tell about the reasons for our moral sense not to yield a clear verdict in the case, but it’s something.

    2. Moral discussions with other people.
    For example, let’s say that Bob reckons that Y is immoral, but Alice reckons that Y is not immoral. So, Bob considers the matter more carefully, but he still reckons that it is immoral. How would he go about trying to convince Alice?
    Well, it would depend on why she’s likely to be making a mistaken assessment, from Bob’s perspective.
    But one reason she may have failed to realize that Y is immoral is that when contemplating Y and using her moral sense, she’s leaving aside some morally relevant factor.
    More precisely, when we contemplate a certain action in order to assess whether it’s immoral, we need to consider non-moral properties of the action (e.g., intent, expected result), and then our moral sense yields a verdict. But it may turn out that Alice does not know some of the non-moral properties of Y (so, in her head, she’s contemplating something different from what Bob is contemplating) that happen to be the ones that make Bob’s moral sense yield the “immoral” verdict – say, property P, to simplify (it could be more properties).
    So, it would be useful for Bob to point out to Alice that Y has property P – or to argue that it does, in case Alice denies it.
    But then again, it would be very difficult for Bob (in terms of time consumption and ability to keep Alice’s attention) to just randomly list zillions of properties of Y in the hope that one or more of them will trigger an “immoral” verdict in Alice’s moral sense.
    But suppose that Bob has some tentative generalizations from other cases, and reckons that properties such as expected consequences, intentions, etc., are likely to play a significant role in moral judgments. Then Bob needn’t just list random properties of Y. Instead, he can focus on the properties he thinks have a good chance of being the properties that triggered his own moral assessment of immorality. The better his generalization is, the better he can narrow the list of candidates.
    Now, a previous generalization from other cases is not the only way to narrow down the candidates. Another way would be to try to see at what point in his contemplating Y he reckoned it was immoral, so the candidate would be the property he was considering at that point. But that might be very difficult to do – maybe Y just looked obviously immoral to Bob from the start. So, a generalization can come in handy.

    Granted, also, that might not work. For example, it may be that the property Bob lists is enough for an “immoral” verdict given other properties of Y that Bob knows about, but Alice lacks knowledge of other properties. But it’s a start. It might not be easy to convince her; in fact, he might not succeed in the end, but surely, it’s generally a lot better to be able to narrow the list of properties instead of just picking randomly among the gazillion properties of Y.

    In this method, theory can sometimes take priority over intuition; therefore, your objections to Hulk’s position aren’t so convincing. That was where I started, anyway. Since then, I’ve claimed that having moral obligations requires knowing (or being able to know) a moral theory of some kind; but you can make moral judgments and act morally without having obligations, it seems to me.

    But the “therefore” does not actually follow. In other words, “theory can sometimes take priority over intuition” does not entail that my objections are not convincing.
    In fact, I said repeatedly that theory can take priority over intuitions, if we have good reasons to believe that the latter are failing in a specific case, e.g., when they were influenced by religious or other ideological indoctrination. The point is that “can sometimes” is a very low bar.
    What I’m talking about is the general proper method of testing theory – namely, against intuitions. It’s not that the method is infallible. But it’s the way to go in general, and I don’t see any good reason to suspect this method in this case.

    Okay, so when you initially claimed that people nowadays should be able to just “intuit” that Catholicism is false, you did not mean that (for example) if their brains are functioning properly it will just seem to them that it’s false. Rather, you meant that if their brains are functioning properly _and_ they have all kinds of (as yet largely unspecified) information _and_ they reason adequately on that basis _then_ it will just seem to them that it’s false. It will seem that way in light of some possibly quite complex and sophisticated deliberation or inference, weighing of competing hypotheses, etc. Is that right? Because in that case this “epistemic intuition” seems _very_ different from paradigms of “intuition” as we normally use the term, e.g., it just seems to me intuitively that people shouldn’t hurt others just for fun. And it sounds a _lot_ like what I was earlier describing as theorizing.

    Even in the case she has to decide on, there is an intuition at bottom, and along the way, as I was saying. While she may have to think about complicated matters, they do not determine her conclusions. That is crucial. She intuitively reckons that the probability that the defendant engaged in such-and-such behavior is extremely high, even though all of the pieces and bits of info she has in mind do not entail it, and are entirely compatible with, say, aliens framing him, the matrix overlords framing him, all of the witnesses framing him, etc.

    As for Catholicism, I think it should look to her like the alien framing theory, in the sense that it warrants immediate dismissal on the basis of already available info (and even if we leave moral intuitions aside entirely).

    And I take it that you’re claiming we do this kind of thing–whatever we call it–when we make moral judgments and think about morality? Because, if that’s your view, it seems you should doubt that hunter gatherers in the stone age were doing anything like that with the information available to them. (Though maybe they were–who knows?)

    They were making intuitive assessments of probability, and of morality. They had much less time to consider different variables, so their judgments were what their epistemic and moral senses respectively would yield on the basis of the very little contemplating they had time for.
    Still, they weren’t that bad at epistemic probabilistic assessments; else, we would not be here – they would have been killed by predators, or starved to death, or something.

    Right-wingers make fun of liberals who appeal to “the current year”. E.g., “Wow, just wow, this asshole is saying blacks commit more crimes than whites. It’s 2017 people!” I assume you don’t think “the current year” is a good form of argument. But it looks as if you’re doing something similar. You just assert theism is “astronomically” improbable, that Catholicism is “absurd”, and so on, without explaining any of the specific information or inferences (or whatever) that are supposed to enable people to reach these strong conclusions.

    Yes, and Alice the juror also implicitly holds that it’s extremely improbable that the defendant was framed by aliens, or by demons, and so on. So improbable is it, that she does not even consider that that would be a reasonable doubt.
    Nothing wrong with that.
    To be clear, it’s not that I can’t argue against Catholicism. What I’m saying is that people should reckon immediately that it’s false, not that I can’t give an argument if I so choose. I will make here the same offer I made with regard to an earlier suggestion: after this exchange is over, if you would like to debate Catholicism, I offer to do so in a civil manner in any venue of your choosing. But that’s not going to change my immediate assessment, which places Catholicism in a similar category to Moon Landing conspiracy theories, aliens framing defendants, demons doing so, etc. (to be clear, I’m not saying their probability is the same, only that it’s negligible and it’s beyond a reasonable doubt that they’re false; but one of them might be a zillion times less probable than another).

    Instead you just say that we have the internet now. You really did say that!

    No, I never said “we have the internet now”. The way you put it suggests I was suggesting that what makes the difference was precisely the internet. I’m not going to speculate about when it became the case that it should have been obvious that Jesus did not resurrect, etc. I make an intuitive probabilistic assessment. But when I did mention the internet, it was just an example of the sort of world we live in. I would have made a similar point if we didn’t have the internet… well, not really, since I would not be talking to you or to anyone with whom I could have this sort of debate, but I mean aside from that. Take it as a representative of our world. If you prefer, I’ll mention instead cars, TV, radio, smartphones, modern medicine, genetics, and knowledge of common descent and evolution in general (for example; the list is far, far longer). The resurrection of Jesus just doesn’t fit in that world – not in the sense of a logical contradiction (as usual, theory is underdetermined by observations), but just that, obviously, that’s not our world.

    Yes, yes, it’s not obvious to lots of other people, even very smart ones. :-) I get that, of course. There are also people to whom it’s not obvious that 9-11 wasn’t caused by the American government, that the Moon Landing is not a conspiracy, that evolution is not a hoax, lie or whatever, etc. Such is life. But I still reckon (intuitively!) that they’re not being rational (again, if they have access to the world I’m talking about, and all of the info that, even unconsciously, they’re picking up from their environment).

    It certainly would faze me to realize that very smart people who also have the internet regard these things as probable, even after decades of careful reflection and dealing with arguments from very smart opponents.

    Well, it would not faze me, since I realize that that does happen. In fact, even in the case of Young Earth Creationism, it happens. I’m just saying they’re being irrational, not that they do not do that, or that the fact that they do fazes me. Irrationality is abundant in this world, in many cases due to ideology/religion.

    I certainly would hesitate to just say that those people are “being irrational”, or that their brains must be malfunctioning. I think (and find it intuitively obvious) that there can be reasonable disagreement about these kinds of topics. They’re hard, they’re abstruse, they’re remote from normal human experience; if evolution is true, we probably didn’t evolve mental capacities geared to reliable intuiting wrt these topics… etc.

    I do not think that they’re being irrational when, say, they study their opponents’ arguments and look for logical flaws, or when they look for logical inconsistencies in their view. But I do think they are failing to make proper probabilistic assessments. That’s what is malfunctioning – and I don’t think that those subjects are hard, in a sense that is relevant here.

    An analogy: a smart person can come up with a very sophisticated theory that aliens are abducting people, experimenting on them, etc.; still, it’s not rational to buy it. Sure, if one wants, has time, etc., one can make a more detailed analysis, but it’s not required.

    And it’s suspicious that you’re not able to set forth the specific reasoning that you allude to here. If it’s reasoning that any normal person in this society (with internet access) should be able to access or invent, it should also be reasoning that you can state in a pretty adequate way in a blog comment.

    It’s not reasoning. It’s an intuitive probabilistic assessment, as in the case of a complex theory of alien abductions, or a Moon Landing conspiracy theory, or a 9-11 conspiracy theory, etc.

    So, let’s say that instead of telling me that I’m not able to set forth the specific reasoning in the case of Jesus’ walking on water, or resurrecting, etc., you made the same point in connection with aliens’ not abducting someone, or framing a defendant, or being held in Area 51, or a Moon Landing conspiracy theory, or a 9-11 conspiracy theory, etc.
    What could I do?

    I could say it’s not reasoning, but that I reckon immediately that it’s false by means of an intuitive probabilistic assessment, and that if I wanted to play along, I could consider the specifics of the theory set forth by you or someone else. I would still be making intuitive probabilistic assessments along the way, pretty much in the way Alice the juror does when considering what the prosecutor, witnesses, etc., have to say; but given that in that case I would be contemplating a lot of different bits and pieces of data, making repeated assessments, etc., it seems likely, based on what you said before, that you would consider that to be a sort of reasoning, not something intuitive.

    I don’t consider those to be opposites; I would be reasoning, but – as always, even if that’s not transparent to most people – making intuitive probabilistic assessments all along. And of course, you could deny those, challenge the reasonableness of my assessments, and so on.
    Now, you’re a very intelligent guy (not a gal, for sure! – I’m a guy too, btw), you’re good at philosophy, and you’re used to internet debates. I’m pretty sure (though this is also underdetermined by observations!) that you can come up with a sophisticated theory of alien abductions, or pick a consistent 9-11 conspiracy theory and defend it consistently, etc., and if you did so, the exchange would get ridiculously long – it’s already far longer than I was expecting when I posted -, and in the end, you could always say I haven’t provided good reasons – since, at some point or another, I would be relying on bare epistemic intuitions.
    A similar outcome would happen if I decided to engage Catholicism in more detail.

    So, as before, after this debate about proper methods of testing moral theories finishes – else, it’s just unmanageably long for me, I’m afraid – I offer to debate Catholicism with you in a venue of your choosing. But that would be a debate just for the sake of it, so to speak, as if I were challenging a theory involving alien abductions, Moon Landing stuff, etc. Sure, you could make a number of good points in the exchange (about any of those matters, or a zillion others), but that would tell me about your intelligence and debate abilities, and nothing relevant about the theories in question, which I would still reckon are obviously false.

    Yes, yes, it’s not obvious to many other people, I do get it. 😀
    It happens, and many of them will believe that I’m being irrational myself by finding it obvious or holding that they’re being irrational. For that matter, there are very intelligent Christian philosophers who believe that there is no non-culpable nontheism, or non-culpable assessment that Christianity is false, or even non-culpable failure to reckon that Christianity is true after being exposed to some arguments (or whatever), and even that “plausibly” people deserve infinite punishment for some of those things. Such is life.

    Interestingly, my own sense is that “present-day knowledge” makes supernaturalism or transcendentalism far more plausible than it might have been 100 years ago.

    I don’t know what “supernaturalism” means.
    One way would be to go with some sort of ostensive definition, like pointing at hypothetical objects and saying those are supernatural, and pointing at other objects and saying they’re not, etc. In this sense, I would say that supernaturalism is obviously false today (to a person being rational and living in the present world), and while I think that was pretty probably the case 100 years ago too, I’m slightly less sure because I have much less information about that world than about ours.
    An alternative is to go with a stipulative definition. Alas, as far as I’ve seen, those definitions fall into at least one of the following categories:
    1. Contradictory ones.
    2. Definitions that do not do the intended work, even rendering all sort of ordinary objects supernatural.
    3. Definitions that reduce the supernatural to God or God and very few objects, and would exclude most things usually called “supernatural”.
    4. Definitions that are in terms of other problematic terms.

    In any case, I don’t know what you mean by the term. The same goes for “naturalism”. Could you clarify, please?

    As for transcendentalism, also I don’t know what you mean. I’d like to ask for clarification.

    We have more experience with science and naturalistic explanation, and we’re finding that over and over there seem to be fundamental features of reality and experience that just don’t fit into that paradigm. We can’t even imagine how they could fit. The internet, for example, seems to illustrate and amplify puzzles about minds and information and consciousness that wouldn’t have been so clear in the past. Another example is the moral sense itself. I think it’s going to be very hard to come up with any coherent naturalistic explanation for how that could exist, and how it could have any normative force.

    I don’t agree, but this is becoming too long.
    But for example, with regard to the moral sense, I think the expression “normative force” is problematic here, but the moral sense resulted from evolution. And the best understanding of its etiology and its workings that I have found is – in my assessment, and perhaps surprisingly – that held by an anonymous internet poster who goes by the name “Bomb#20” on a forum called “freeratio.org”. Most of the posters there are leftists, and these days he’s mostly engaging them rather than talking about the moral sense, but if you feel like having your hypotheses on the matter (or on other matters too, perhaps) challenged by someone a lot smarter than I am – and crucially, with a lot more scientific knowledge – that would be a place to try, if you can just ignore most of the other posters, that is. I actually didn’t know about our moral sense years ago; I got that idea from his posts, though he explains all of this much better.

    Also I wonder if this line of thought is compatible with what you said about scientists dealing with intuitions: when it comes to QM, you say, they recognize that intuitions appropriate to one domain aren’t reliable when dealing with the QM domain. Okay. But then why can’t a Catholic philosopher say that ordinary empirical intuitions about biological death or water aren’t reliable when it comes to the domain of theology or metaphysics? (For what it’s worth, I find it intuitively absurd to suppose that those kinds of intuitions would be reliable there; but I know it’s not worth much since my intuitions must be all messed up given that I disagree with you.)

    They can say many things, but it’s not the empirical intuitions about death or water (whatever that means) that are failing. Rather, it’s the epistemic probabilistic assessment that is going wrong.
    In the case of QM, what is happening is that some intuitions about how things should behave turn out to be extremely improbable in the realm of the very small, by the scientists’ own epistemic intuitions, and on the basis of their observations in many experiments.
    So, in other words, the probability that particles behave in some ways appears low at first, but then it goes up as we observe the results of the experiments.
    On the other hand, nothing of the sort is happening in the case of resurrections. Take a look again at the case of the person on the street who tells you that someone just came back to life, or walked on water, etc., or the defendant who claims that a corpse is not there because the person resurrected and left. Surely, you would not conclude that, because perhaps God intervened and so our ordinary empirical intuitions might not hold, those events are probable, or even not extremely improbable. No, the prior of the resurrection of Jesus is also absurdly low. But in this case, there is nothing else raising the probability enough to make it not absurdly low.
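    The contrast between the two cases can be put roughly in Bayesian terms. Here is a minimal sketch; the numbers are purely hypothetical, chosen only to illustrate the shape of the two situations (a low prior repeatedly raised by strong evidence vs. a negligible prior with nothing comparable raising it), not actual probability estimates:

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E), where
# P(E) = P(E|H) P(H) + P(E|~H) P(~H).
def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# QM-style case: a counterintuitive hypothesis starts with a low prior,
# but repeated experiments strongly favor it, so the posterior climbs.
p = 0.01  # hypothetical prior
for _ in range(5):  # five independent confirming experiments
    p = posterior(p, 0.9, 0.1)

# Resurrection-style case: an astronomically low prior, with no evidence
# strong enough to raise it, leaves the posterior negligible.
q = posterior(1e-12, 0.5, 0.1)

print(p)  # climbs above 0.99
print(q)  # stays on the order of 5e-12
```

    Nothing hangs on the particular numbers; the point is structural: strong, repeated evidence can rescue a low prior (as in QM), but absent such evidence, the posterior stays about as negligible as the prior.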

    Your sense is not failing generally: despite your assertions about the supernatural (whatever that is), theology, etc., you still do not reckon that there is a serious chance that the corpse walked away, or that the stranger on the street talking about a recent resurrection or about someone walking on water is making true claims (I hope!). So, generally, you still properly reckon those things are extremely improbable. You just raise the probability to a non-negligible or even “probable” level (I don’t know what probability you give them) without good reason (yes, I get that you think I’m irrational in thinking so; it happens. When this exchange ends, I’m willing to listen to your arguments on the matter and reply to them if you want, though I predict our respective positions will not be altered).

    But it’s not the _only_ source of moral knowledge. For example, once you construct a set of principles on the basis of intuitions, you might gain new knowledge by applying those principles to new cases, getting counter-intuitive results, and choosing to reject some of the intuitions with which you began.

    I don’t say it’s the only source of moral knowledge. But rejecting the original intuitions would be justified only if there is specific reason to think they’re failing in those cases.

    Are you sure that people in _all_ societies or even _most_ societies did “have the means of knowing right from wrong”?

    Generally, yes. Always, no, since:
    1. There are psychopaths, and their means only go as far as learning from others. If others got it wrong (even if they had the means to get it right), psychopaths can’t get it right.
    2. The moral sense is fallible.
    3. What I’m saying is that the moral sense at least generally allows us to tell right from wrong in some situation or another, but in order to make the assessment, we need to consider the situation in question, and it may well be that the real situation that people are facing turns out to be very different from the situation on which they make their moral assessments.

    People who were eating their enemies or doing mass human sacrifices or wiping out entire tribes just because they wanted their land?

    You earlier suggested (if I got your position right; else, please clarify) that the means of knowing right from wrong was needed for immoral behavior. Now you seem to indicate you suspect they did not have the means. Are you suggesting their behavior was not immoral?

    I think the actual behavior of many groups in the past is often pretty vile and disgusting, if we judge using our current “moral sense”. Not just their behavior, but their principles or values (if they had any). Now, of course, people did have ways of distinguishing what _they_ took to be “right and wrong” or some roughly–maybe only very roughly–analogous distinction, e.g., honorable and shameful. But if that’s all we’re talking about, the evidence cuts against your claim that everyone has some kind of _natural_ human moral sense. If we do, it’s very often undeveloped or ignored; or else its dictates are so minimal and vague that it becomes unclear why it should matter much.

    When interacting daily within their tribes, they probably did tell right from wrong correctly most of the time, though I think that religion may have done significant damage. So, yes, generally, they had the means to tell right from wrong. Generally.

    When they behaved in those repugnant ways (and I’d say immoral), in many cases they did so while demonizing their opponents. They believed that their opponents had certain properties that would make them evil monsters of some sort.
    Does that make their behavior not immoral?
    No, but it seems to me that in many cases it makes it less immoral than it would have been for someone who knew the nonmoral facts better, though there are plenty of cases in which their behavior was just atrocious.
    I would say that their moral sense was probably damaged.

    That aside, I do think that apparent moral disagreement cuts against the hypothesis that there is a human moral sense as I have roughly sketched it (not sure what “natural” means), and in fact it’s the strongest argument against it. Several years ago, I thought it was strong enough, and was not a moral realist. After further considerations – actually prompted by an exchange with Bomb#20, who defended the human moral sense – I realized I was wrong – in fact, I had jumped to conclusions on the matter – and that the moral sense is extremely probable. But if I’m wrong now, then I would go with an error theory, as I explained before.

    I didn’t mean to say that people need an “alleged revelation” or a book of rules in order to have some kind of moral theory. I’m sure the theory could be encoded in oral tradition or myth; or it could be implicit in customs and practices that reflective members of the group can interpret and articulate, or could if they were pressed to explain themselves. But, again, I don’t insist that everyone was always moral, if “moral” refers to some fairly specific set of attitudes or dispositions like ours.

    I was ruling out other potential sources, because without a moral sense like the one I described, there is no place to even start.

    But, again, I’m not claiming that we can think or act morally _without_ intuition or moral sense; instead I’m claiming that we can’t think or act morally (in anything like our current sense of the term) with _only_ intuition or moral sense and _without_ theorizing and reflective equilibrium, etc.

    Yes, we still disagree. But I do think that without making general hypotheses – also by means of our epistemic intuitions! – it would be very difficult to engage in moral talk, trying to persuade others, etc. (see my example above). But I think we could still generally make (i.e., in most daily cases) proper moral assessments using our sense of right or wrong.

  21. “In fact, I said repeatedly that theory can take priority over intuitions, if we have good reasons to believe that the latter are failing in a specific case, e.g., when they were influenced by religious or other ideological indoctrination.”

    Are you saying that when we learn some intuition was influenced by religion or ideology that’s a good reason for believing that the intuition isn’t reliable? Because if so that seems wrong. Some religions or ideologies might be true, or truer than what people would think had they not been “indoctrinated”, so in those cases intuitions influenced in that way wouldn’t be unreliable for that reason alone. Also I wonder why you think Catholicism (for example) isn’t based on intuitions. More generally, it appears that people have always spontaneously believed in some kind of supernatural or divine or spirit world. Maybe religions get created because they reflect deep-seated natural and spontaneous intuitions that people have (though not only that, of course). In that case, it would seem to be rational by your own standards for people to rely on those kinds of intuitions in constructing beliefs about another domain, morality, and judging that some intuitions in that domain–e.g., that it’s not wrong in any way to get a tattoo–are not reliable or significant.

    I want to ask again for some sketch, at least, of the kind of reasoning that you take to be available to anyone like me or you with our “present-day knowledge” that would enable us to “intuit” that Catholicism is “absurd”, theism is “astronomically improbable”, or that no argument for theism could make miracles probable, etc. You’ve said that the basis for this supposedly obvious conclusion is a bunch of information and deliberation that all of us now have, together with some kind of intuitive assessment of all of that. So apparently it’s not just a matter of thinking about Catholicism and having the thought “Absurd!” Rather there is some fairly definite information and reasoning that’s meant to generate that thought, and it’s supposed to be information and reasoning that any normal person in our situation can access or invent. (If not, how could its results be “obvious” to someone like me or you?) So, again, what is it? Or, if it’s impossible to briefly state it here, why is that and how is that fact consistent with the role it’s supposed to play for us?

    Sorry I’m not able to address all of the many other points you’re making here. It’s not that I have nothing to say 🙂 Or that they’re not worth addressing. Just hoping to focus on something smaller and more manageable if possible.

  22. Are you saying that when we learn some intuition was influenced by religion or ideology that’s a good reason for believing that the intuition isn’t reliable?

    Sort of, but let’s be careful, because it depends on the sort of influence.
    For example, an intuition about what the religion/ideology in question holds, what those who adhere to it generally believe, etc., can be developed on the basis of studying a religion/ideology, and surely that’s not an undue influence. But when it comes to the development of our moral sense, yes, religion/ideology is not a proper source, just as it is not a proper source when it comes to general knowledge about the world. Now, that is not to say that religions/ideologies are always wrong. People sometimes put into them things that they justifiably believe, alongside others that they do not. So, not everything they hold is false. But they’re generally not reliable, so yes, that’s what I’m saying.

    Because if so that seems wrong. Some religions or ideologies might be true, or truer than what people would think had they not been “indoctrinated”, so in those cases intuitions influenced in that way wouldn’t be unreliable for that reason alone.

    I don’t think they might be true, unless you have a very low bar for “might”, in which case any consistent theory one makes up just because might be true out of sheer luck – but that does not make it reliable.
    Granted, religions/ideologies are not made up entirely like that, so they include more or fewer elements of truth. The problem is unreliability: they also include false stuff. Of course, believers do not believe so, they disagree, and so on.

    Also I wonder why you think Catholicism (for example) isn’t based on intuitions.

    The way I see it, it’s akin to why I think Moon Landing conspiracy theories, or 9-11 conspiracy theories, or alien abduction theories, are not based on intuitions. Well, some of the people who claim to hold them lie, but sure, it looks intuitive to most of those who claim to hold them and sincerely do so, if that’s what you’re asking. But something in their epistemic sense is not working properly. People are not abducted by aliens. The Moon Landing happened. And Jesus did not walk on water, raise the dead, or resurrect.

    More generally, it appears that people have always spontaneously believed in some kind of supernatural or divine or spirit world.

    I still don’t know what you mean by “supernatural”, but if you’re talking about agents with superhuman powers, afterlife, or things that are usually part of what we tend to call “religion” (but there are exceptions), then sure, those kinds of beliefs have been prevalent historically. Most humans do have religion, historically and today. And those are human predispositions that are generally unreliable, as they lead to false beliefs.

    For example, Catholicism is not compatible with, say, the beliefs about the Greek gods, or the Norse gods, etc. In fact, religions generally have origin stories that are incompatible with each other. So, at some point in the past, they’re generally based on a story that someone made up and lied about – by claiming to be a witness, or by claiming he was told that by other people, etc. – unless he was being utterly irrational and did not realize he was making up stories about superhuman beings creating stuff, going to war with each other, punishing and/or rewarding humans, or whatever.

    The history of religions is a history with all sorts of made-up stuff about nonexistent entities with superhuman powers.

    How do people come to believe those stories?

    It depends on the person, but most of the time, people come to believe that simply because their parents/elders say so. But they’re still wrong.

    Maybe religions get created because they reflect deep-seated natural and spontaneous intuitions that people have (though not only that, of course). In that case, it would seem to be rational by your own standards for people to rely on those kinds of intuitions in constructing beliefs about another domain, morality, and judging that some intuitions in that domain–e.g., that it’s not wrong in any way to get a tattoo–are not reliable or significant.

    That would not be rational by my own standards.
    First, religions are a case in which we do have good reason to mistrust beliefs involving superhuman powers, given the vast track record of mistakes. I never said intuitions are infallible, or that all human intuitions are on par.
    Second, I reckon by my own epistemic intuitions that they’re all extremely improbable.

    I want to ask again for some sketch, at least, of the kind of reasoning that you take to be available to anyone like me or you with our “present-day knowledge” that would enable us to “intuit” that Catholicism is “absurd”, theism is “astronomically improbable”, or that no argument for theism could make miracles probable, etc.

    Those are quite different things, and each of them would take a lot of time to address if one wants to do philosophy about it, but let me briefly consider each:

    1. Catholicism:

    That involves things like the resurrection of Jesus, raising the dead, walking on water, etc. It’s like asking me why I think Moon Landing conspiracy theories, or 9-11 conspiracy theories, or alien abduction theories, are absurd. They just are, on a proper epistemic probabilistic assessment, and just as jurors should dismiss alien framing theories on the spot (even implicitly; they’re not going to do philosophy about it), I hold Catholicism should (epistemic “should”; moral issues are more complicated) be rejected on the spot, perhaps not by children, but by adults.
    That said, of course intelligent philosophers can come up with sophisticated arguments defending it, and be consistent about it, since theory is not determined by observations in this case – as in all or nearly all actual cases – and simply reject the intuitive probabilistic assessments of their opponents, no matter how sophisticated their opponents’ analysis gets. As I said, I offer to play if you want a debate on Catholicism: you argue for it, and I reply to your arguments. I don’t see Catholicism as a live option, of course, so it would be an intellectual exercise and/or game. I can play, though it would take some time, and I would address Catholicism in much more detail than just saying it’s clearly false.

    2. Theism:

    I think this one deserves some more thinking. At any rate, I offer to discuss/debate this one as well.
    But let me be clear: there is a huge difference between not holding that it’s astronomically improbable, and holding that it’s true or probably true. I reckon it’s astronomically improbable, but after considering the matter carefully, and – of course – then using my epistemic intuitions to make an assessment.

    3. Miracles:

    What I said was that “any specific claims of miracles is also very improbable even assuming theism, going by usual examples of miracles, and not by some definition (on the other hand, there are definitions of miracles in which they wouldn’t be so improbable; if so, the point remains because I’m assessing probability of specific claims).”

    You’ve said that the basis for this supposedly obvious conclusion is a bunch of information and deliberation that all of us now have, together with some kind of intuitive assessment of all of that. So apparently it’s not just a matter of thinking about Catholicism and having the thought “Absurd!” Rather there is some fairly definite information and reasoning that’s meant to generate that thought, and it’s supposed to be information and reasoning that any normal person in our situation can access or invent.

    That’s not it with regard to Catholicism.

    On matters such as the alien abductions, framing by demons, or people raising the dead, resurrecting or walking on water, having the thought “absurd” is pretty much the right response.
    Again, Alice the juror needn’t bother with that sort of theory even if the defendant’s lawyer claims the defendant was framed by aliens from another planet, or that the missing corpse resurrected and walked away. That’s not enough to introduce reasonable doubt. But that is not to say one can’t play along and entertain that sort of thing.
    As I said, if you want me to play along and debate traditional style so to speak (well, kind of), I’m willing to do that, after the rest of the debate is over.

    On the other hand, I don’t think it’s a matter of thinking of theism and having the thought “absurd!” – well, it might be, but I wouldn’t be inclined to think a person who lives in our present-day world is being epistemically irrational if they do not do so.

    Or, if it’s impossible to briefly state it here, why is that and how is that fact consistent with the role it’s supposed to play for us?

    It’s neither required nor brief in the case of Catholicism, but as I mentioned, after we’re done with the rest of the stuff, pick your venue and argue for Catholicism, and I will argue against it.

    Maybe I will begin by pointing out that it includes, among others, a claim of a resurrection. Miracle or not (whatever that means), I attribute to those a negligible probability, as you usually do as well (e.g., the defendant saying the corpse resurrected and left, a person claiming that on the street, etc.), and then debate as if I were trying to convince you – though it would be more for the fun of the debate. 🙂

    Sorry I’m not able to address all of the many other points you’re making here. It’s not that I have nothing to say 🙂 Or that they’re not worth addressing. Just hoping to focus on something smaller and more manageable if possible.

    I hear you :-), though I think a debate on any of those issues (e.g., Catholicism) would likely become extremely long and complex. But if you pick one of the issues I mentioned above – or some other, perhaps narrower, stuff – I would agree to debate it; I’m not sure what the policy on topic limitations in this venue is, but if it’s okay and you want to debate it here, I’ll do so.

  23. Maybe I can frame one of my worries a bit differently in light of some of these comments.

    You’ve said a few times that you can’t “jump out of your brain” (or “head”?). For example in reply to my point about disagreement or conflicting intuitions. What does that mean to you?

    One interpretation would be that you think you can’t know anything about how things are in the objective world outside your mind. For example, you can’t compare how things appear to you with how things really are apart from how they appear to you. But that would be self-defeating for you, since your assessment of the rationality of your own beliefs and those of others depends on claims about which intuitions (really, objectively) are reliable or proper–e.g., you claim that people with Catholic intuitions must be biased, or their faculties must be malfunctioning somehow. This is a claim about how things really are in the world, not just about how things appear within your own brain. So I assume this isn’t what you mean.

    Another interpretation would be something like this: You can’t reason about what is reasonable, or anything else, except by relying on your own beliefs and intuitions (etc.) Now that’s true, of course–you can’t think without your thoughts, basically. Fair enough! But then I’m not sure how this deals with the problem of disagreement. After all, you believe that intuitions can be unreliable, that other people in fact have faulty intuitions, etc. And it seems like just reflecting on your own beliefs about intuition in general could make it reasonable for you to wonder whether your own are less reliable than they seem or feel, in cases where people who seem like epistemic peers disagree with you despite knowing all the arguments back and forth. In fact you admitted somewhere up there that you used to regard disagreement as a problem. I’m not suggesting you have to be skeptical, but why isn’t disagreement a good reason to think that others can _rationally_ disagree with you (about Catholicism or theism or morality, for example)? Maybe this bothers me because I don’t see how we can fairly charge people with “being irrational” or not believing what they “should believe” in cases where the supposed defect is something over and above all facts about how things appear to the subject, even on careful reflection–e.g., his intuitive faculties work in some way that, as a matter of fact, is not reliable.

    “But when it comes to the development of our moral sense, yes, religion/ideology is not a proper source, just as they’re not a proper source when it comes to general knowledge about the world … Most humans do have religion, historically and today. And those are human predispositions that generally are unreliable, as they lead to false beliefs.”

    Science also leads to lots of false beliefs, of course, as does pre-scientific common sense. Among other things, many or most religions lead to the belief that (a) there is some kind of objective moral order, (b) the universe is set up in such a way that we can understand its most important forces and features, and so (c) we have a moral sense that is generally reliable. Are these false beliefs? And if they’re not false, and people correctly hold these beliefs on the basis of intuition, why couldn’t they reasonably come to believe all kinds of other things, some true and some false, some fair approximations of truths relative to an earlier time in human development or a different cultural understanding of how truths are conveyed? Again I’m not sure how you support this kind of claim even under your own standards for reasonable belief.

    “…religions are a case in which we do have good reason to mistrust beliefs involving superhuman powers, given the vast track record of mistakes.”

    Doesn’t science–or, for that matter, reasoning about the topic of this discussion–have a similar track record of mistakes (or what we take to be mistakes)? You could say that science has a better track record for success, but that seems questionable. Lots of success on topics that maybe don’t matter much in the grand scheme might count for less epistemically than one or two fundamental insights on topics of supreme importance–e.g., whether there is a moral order, whether the universe is intrinsically meaningful and personal, whether our moral sense is reliable, etc. If the question is about the epistemic status of religiously based intuitions, and that turns on some assessment of the epistemic status of religion, it’s not legitimate to presuppose an epistemic difference on the basis of anti-religious intuitions. (Or you could just say that you have no interest in answering this question rationally–you just know or intuit the epistemic difference–but that’s not what you seem to be doing here.)

    “On matters such as the alien abductions, framing by demons, or people raising the dead, resurrecting or walking on water, having the thought “absurd” is pretty much the right response.”

    I guess I’m just a flake. To me it seems not so unlikely that there are aliens out there, or that some of them might abduct humans once in a while. We take dolphins out of the ocean to do experiments. It’s not so unlikely to me that some of those aliens might have evolved powers that we might as well call “supernatural” or “god-like”. Are there demons? Intuitively, I find this quite plausible too. There’s a lot of evil in this world that seems to be best explained by something like demonic influence.

    If some people have totally different intuitions, that could well be because they’ve been acculturated within a naturalistic, humanistic, mechanistic society. And maybe this is the best explanation for the fact that belief in a spirit world, life after death, miracles (etc) seems to be far more common and spontaneous in human history than the intuition that such things are absurd.

    “How do people come to believe those stories? It depends on the person, but most of the time, people come to believe that simply because their parents/elders say so. But they’re still wrong.”

    Maybe, but isn’t this also how almost everyone has always learned basic moral rules? If the moral sense is natural (i.e., not learned by acculturation) it still seems that it’s activated and developed by parents and elders laying down the law, conditioning kids, just asserting that X is wrong, etc. And yet you think the moral sense is real and reliable, while you dismiss what appears to be an equally universal (or nearly universal) human tendency to believe in spirits, gods, other worlds. I think human history and psychology are evidence for an innate or natural religious sense, and I take that as pretty strong evidence that there’s a religious reality to which it’s attuned. You don’t buy any of that? And you don’t think there’s some problem of parity wrt your views on the moral sense?

  24. Just wanted to add–up there I said:

    “Maybe this bothers me because I don’t see how we can fairly charge people with “being irrational” or not believing what they “should believe” in cases where the supposed defect is something over and above all facts about how things appear to the subject, even on careful reflection–e.g., his intuitive faculties work in some way that, as a matter of fact, is not reliable.”

    In other words, I don’t like the idea that being rational or thinking as S should think would seem to require S to jump out of his own head! But that’s what externalism seems to require.

    • Another interpretation would be something like this: You can’t reason about what is reasonable, or anything else, except by relying on your own beliefs and intuitions (etc.)

      Yes, this is what I meant.

      But then I’m not sure how this deals with the problem of disagreement. After all, you believe that intuitions can be unreliable, that other people in fact have faulty intuitions, etc.

      Yes, but I think human moral and epistemic intuitions are generally reliable, and when they fail, there are generally ways for people to get around the problem – though always relying, of course, on other intuitions, etc. I do think it’s possible for agents to have no way out. But I think that is usually not the case for humans, though there are exceptions. To use an extreme example, a person in an asylum might believe he’s Napoleon, and may not have a way around it.

      And it seems like just reflecting on your own beliefs about intuition in general could make it reasonable for you to wonder whether your own are less reliable than they seem or feel, in cases where people who seem like epistemic peers disagree with you despite knowing all the arguments back and forth.

      I think one can consider the cases in which there is disagreement, and then try to look for sources of error in their beliefs and in yours. Regarding whether a person is an epistemic peer, I think that that might depend on the matter one is addressing. But it’s also something one can assess – usually.

      In fact you admitted somewhere up there that you used to regard disagreement as a problem. I’m not suggesting you have to be skeptical, but why isn’t disagreement a good reason to think that others can _rationally_ disagree with you (about Catholicism or theism or morality, for example)?

      It’s not that I think others can’t reasonably disagree with me sometimes. They might, due to having different information (we always do), or due to having thought about different pieces of info we have, for different reasons, and so on.
      However, that does not prevent me from reckoning that in some cases, they’re not being reasonable – or even that I wasn’t.

      For example, in the case of moral realism, I was giving too much weight to moral disagreement, and the reason was that I was under the false impression that disagreement was far more frequent than it is. That’s probably due to a human tendency to find moral disagreement salient, but as a result, I made the wrong assessment. Moreover, I was unfortunately not happy during much of the debate (my interlocutor and I started off on the wrong foot, it seems; I should have been more cautious), and it took me a while to understand his points, assess them properly, and generally weigh more factors – I did so after the exchange was over, with a cool head.

      So, I made an epistemic mistake, but I managed to get out of it.

      Another case: I was actually raised a Catholic. As a child, it was rational of me to believe what I was told, but I should have realized as a young teen that Catholicism was false – it ought to have been obvious to me. What happened?
      As far as I can tell, I failed to contemplate the matter as I would other things, due to giving undue weight to what my parents and teachers told me. My bad.
      Over time, I’ve improved my ability to avoid falling for the same things, and I also check previous beliefs, look for errors, etc.

      Now, there might be cases that I just can’t get out of, but that seems very improbable – not only in my case, but also in the case of most people. But – of course – I make that assessment by my own faculties too!

      Maybe this bothers me because I don’t see how we can fairly charge people with “being irrational” or not believing what they “should believe” in cases where the supposed defect is something over and above all facts about how things appear to the subject, even on careful reflection–e.g., his intuitive faculties work in some way that, as a matter of fact, is not reliable.

      That’s an interesting objection. I will try to address it.
      I think there are at least three relevant matters here:

      1. Does failure of epistemic rationality entail failure of epistemic obligation?
      2. Does failure of epistemic rationality require that one can fix the problem by one’s own lights? (i.e., does epistemic “ought” imply “can”?)
      3. Do I hold that, e.g., Catholic philosophers do not have a way out, by their own lights?

      1. On the first issue, it seems to me that the answer is probably affirmative; but in this case, my intuitions aren’t so clear. For example, the crazy man who believes he’s Napoleon holds that belief irrationally – I would say. Should he believe otherwise, in the epistemic sense of “should”?
      I think probably so, but perhaps I’m mistaken. If so, then I would say he’s still being irrational.
      2. I don’t think so. The crazy person is an example.
      3. I tend to think they very probably can get out of it. They would need to engage intuitions that are working, and look at the matter from different perspectives, but sure.

      In other words, I don’t like the idea that being rational or thinking as S should think would seem to require S to jump out of his own head! But that’s what externalism seems to require.

      But do you think it’s more likely that the insane person is being rational?
      Or perhaps you think they do not count for some reason?
      At any rate, I think Catholics generally can but won’t change their minds, but I think there may be those who are too damaged. It happens.
      But do you think otherwise?
      If so, I think that raises morally problematic issues: if Catholics are like that and are being reasonable, why not, say, Wahhabis or others who would execute people for apostasy, blasphemy or adultery?
      You might argue that Catholicism has a number of advantages. But the fact is that the vast majority of Catholics – or other Christians, or Muslims, etc. – are not philosophers, and do not engage in any philosophical reasoning. They just go with what they were told. If Catholics are being reasonable, why not the others? And if the others are being epistemically reasonable, would their behavior be immoral? As long as you hold that internalism is true in the moral case as well as the epistemic case, it seems to me that people acting on rationally held beliefs would not be acting immorally.

      We can go in the other direction as well. Do you think some (or many) leftists may not have a way out?

      Among other things, many or most religions lead to the belief that (a) there is some kind of objective moral order, (b) the universe is set up in such a way that we can understand its most important forces and features, and so (c) we have a moral sense that is generally reliable. Are these false beliefs?

      I’m not sure about (b) – we might be looking at a very small part of reality, and our grasp of physics might be far away from the fundamentals, for all I know -, but I believe (a) and (c) are true, and that at least in the environment around us, we can understand how things work, even if how deep we can go is limited.
      However, I don’t think this is at all the result of any specific religion. Rather, if it’s included in a religion, it’s because it’s part of the normal human background knowledge, and those beliefs are cross-cultural (at least, implicitly; people may not explicitly state them). For example, all or nearly all religions posit the existence of humans, but surely that’s not unreliable. My point is that generally, common background knowledge does get into religions. But religions also generally include all sorts of false claims about interventions of superhuman agents. We already know those claims are generally false (even if I leave aside my assessment that they all are, at least nearly all are).
      Also, we know that in the case of morality, religions that do posit specific moral beliefs are generally unreliable and encode false beliefs that are then very difficult to dislodge, because people believe them as part of their religions. That includes things like demonizing or dehumanizing their neighbors, or some really awful things in terms of domestic laws.

      And if they’re not false, and people correctly hold these beliefs on the basis of intuition, why couldn’t they reasonably come to believe all kinds of other things, some true and some false, some fair approximations of truths relative to an earlier time in human development or a different cultural understanding of how truths are conveyed? Again I’m not sure how you support this kind of claim even under your own standards for reasonable belief.

      I’m not following you here. I say they’re true. But sure, they might reasonably come to believe false things, under the right conditions. But I do not see why this is a problem for my position regarding Catholicism or some other specific beliefs.

      Or maybe your point is about whether religious beliefs are true?
      I’m saying that religions are generally unreliable, not that they always get things wrong (actually, if they always got things wrong, they would be excellent guides to truth – we’d just have to believe the opposite of what they say!).
      But I suspect I might be missing your point here. I’d like to ask for clarification.

      Doesn’t science–or, for that matter, reasoning about the topic of this discussion–have a similar track record of mistakes (or what we take to be mistakes)?

      No, it does not. Leave aside Abrahamic religions – not that I think they’re relevantly different, but their number is tiny: there are thousands of religions out there, and all of them or nearly all got all or nearly all of the claims wrong (whether it’s Zeus, Thor, or the spirits of the ancestors messing around with stuff).
      On the other hand, science tends to get things generally right. That’s how we’re communicating now, but it’s one example among many. Science is hugely successful.
      Another difference is that science is mostly self-correcting: the errors are eventually weeded out, at least when science is done properly (I’m not counting social sciences here; in that case, things get more complicated).
      There is one issue on which scientists tend to get it wrong: sometimes, they assign too high a probability to their mathematical models (i.e., their usual interpretations) being true, rather than good approximations in some particular cases.

      (Or you could just say that you have no interest in answering this question rationally–you just know or intuit the epistemic difference–but that’s not what you seem to be doing here.)

      Actually, in the end that is what we rationally do; theory is underdetermined by observations, so we intuit the answer at some point – e.g., I’m intuitively holding it’s likely that the accounts of the existence of religions in the past and their description is largely correct, etc.

      I guess I’m just a flake. To me it seems not so unlikely that there are aliens out there, or that some of them might abduct humans once in a while. We take dolphins out of the ocean to do experiments. It’s not so unlikely to me that some of those aliens might have evolved powers that we might as well call “supernatural” or “god-like”. Are there demons? Intuitively, I find this quite plausible too. There’s a lot of evil in this world that seems to be best explained by something like demonic influence.

      I think those two things are very different. I too think it’s not unlikely that aliens are out there. But any claims about alien abductions seem clearly false. It’s even less probable when it comes to demons – and this is not to say that the probability of the alien abductions is anything but negligible.
      But if your assessment is different, we could get into a long argument to probably no avail, so I’m going to leave that one aside.

      Maybe, but isn’t this also how almost everyone has always learned basic moral rules?

      Actually, I think they get some examples from their parents, but most of the beliefs and the understanding of the rules are intuitive, and people have to reckon what their obligations are in specific cases.
      But I don’t see why this would be a problem.
      I’m not saying that what the parents or elders usually claim is generally unreliable. It depends on the case. It definitely is when it comes to claims about superhuman agents – the track record of those claims is clear -, and that extends to moral claims based on the alleged intervention of those agents (very probably because false nonmoral beliefs lead to unreliable moral assessments based on them, among other causes).
      But in the moral case, people tend to get it right most of the time, and in any event, people can correct errors (usually, at least) using their own moral senses.
      Now, it’s true that sometimes they get it vastly wrong, and often that’s due to false nonmoral beliefs – many due to religion -, or to moral errors encoded in religion. But as I said, I don’t see why this should be a problem. Could you clarify, please?
      If you’re saying that moral assessments are also generally unreliable, I do not agree; but if I thought so – as I did a long time ago – then yes, that would work in my assessment as an argument against moral realism, though these days I would be inclined towards an error theory. By the way, I don’t think that that’s incoherent or paradoxical, etc., and further, there would be no generally good reason to go against our intuitive moral sense even if we thought there are no moral properties, etc., since it would still be part of our personal preferences. But in my assessment, that would not be a proper position on the basis of the available evidence.

      I think human history and psychology is evidence for an innate or natural religious sense, and I take that as pretty strong evidence that there’s a religious reality to which it’s attuned. You don’t buy any of that? And you don’t think there’s some problem of parity wrt your views on the moral sense?

      Sure, there is a track record in which the vast majority of claims about superhuman agents are false. There seems to be a human tendency to see agency more often than there is, and that may be part of the problem (there are good evolutionary reasons for that), but I don’t have a full account of the psychology of religious beliefs. In any event, the general failure is pretty clear. There is no similar reason to think our moral sense is generally unreliable.
      Additionally (though this is not required, given that we have no specific reason to generally distrust our moral sense), there are reasons to expect from an evolutionary perspective that we would have a generally reliable moral sense. And some smart social aliens would probably have something akin to it, though not quite the same – an alien analogue. On the other hand, there is no similar evidence for the intervention of superhuman agents in human societies – and in fact, plenty of evidence against that.

  25. Jacques,

    Sorry, I realize I picked the wrong word: the track record of traditional claims when it comes to claims about superhuman agents (namely, about the existence of them) is that they’re reliably false, not unreliable in the sense that they might be true or false. I think I made it clear that I think they’re generally false, but given my previous usage of the term “unreliable”, that part of my post might look confusing. My bad.

    A similar point applies to moral claims based on those alleged interventions, such as claims of obligations to worship them to avoid punishment, etc. Those aren’t true.
    On the other hand, other claims religions make are sometimes true, and sometimes false, including moral claims not directly related to the purported intervention.

  26. “I do think it’s possible that agents do not have a way out. But I think usually that is not the case of humans, though there are exceptions.”

    I’m not sure what a way out could be. I think even God would be in roughly the same epistemic situation. I don’t think having no “way out” is a problem. Instead I think that recognizing that no one has a way out (in that sense) has implications for what counts as reasonable or justified belief. The most any possible thinker could hope for is something like reflective equilibrium. If you’ve done that–or, more precisely, if you’re able to understand that idea, and it appears to you on reflection that you’ve approximated it–then there’s just no possible or conceivable epistemic task or obligation left for you. (Standards are even lower, I assume, for people who can’t understand this ideal.) This is why I don’t understand the idea of rating someone as “being irrational” or flouting some “obligation” epistemically or morally when he’s done his best to achieve equilibrium (or it just appears to him on reflection that he’s done his best).

    “But the fact is that the vast majority of Catholics – or other Christians, or Muslims, etc. – are not philosophers, and do not engage in any philosophical reasoning. They just go with what they were told. If Catholics are being reasonable, why not the others? And if the others are being epistemically reasonable, would their behavior be immoral?”

    Sure, I allow that some murderous jihadists (for example) might be just as rational and might behave just as morally as better kinds of people. Not all, of course. But just as I think you can be perfectly rational and end up with false beliefs, I think you can behave morally–insofar as that has to do with trying your best to act in ways that seem right on reflection–and still end up doing terrible things that other people would be morally obligated to prevent if they could. As an internalist I don’t think we can rate people’s behavior as immoral given merely facts that transcend anything internal even on reflection. I guess that’s partly because I assume judgments of immorality are directed at things like intention, motivation, beliefs and desires of the agent. If I were to learn in the afterlife that, as a matter of fact, every time I ate salad I was causing someone to be wrongly imprisoned on Alpha Centauri a billion years later, I wouldn’t think I had behaved immorally. There would be some sense (maybe) in which I did something that was morally bad, but not blameworthy or wrong–and I think being immoral or behaving immorally implies blameworthiness or wrongness.

    “2. I don’t think so. The crazy person is an example.
    3. I tend to think they very probably can get out of it. They would need to engage intuitions that are working, and look at the matter from different perspectives, but sure.”

    About 2: The crazy person is crazy, in having wildly false beliefs. Maybe I’m departing from ordinary language but I hesitate to say he’s irrational if he really has done his best to make sense of how things appear to him, and he has no “way out” of his world-view. On the other hand, it could be that he hasn’t done that. I’d be surprised if most people with paranoid delusions achieve the same degree of reflective equilibrium as Catholic philosophers.

    About 3: This is probably true for some Catholics, but as far as I can tell many are really doing their very best, engaging all intuitions that appear relevant to them, etc. And still they’re Catholics. It’s true many are also just going with what they were told. But that’s true of almost everyone on most topics, for example morality. Most people have no real argument for the wrongness of pedophilia, and haven’t really thought about it. Their intuitions are (very probably) shaped by acculturation and could be changed if culture changed–look how intuitions about homosexuality or marriage have changed in a few decades or less, and rock-bottom common sense about being a man or woman appears to have been blown apart with just a few years of propaganda, bullshit and pseudo-science, etc. I’d say many ordinary people really do not have the ability in the relevant sense to think their way out of these beliefs. In some sense, maybe–e.g., had they been born in a different society, educated and acculturated very differently, trained to think more rigorously and so on. But in that sense we might also speculate that Sid Vicious has the “ability” (or capacity, or ability to acquire the capacity?) to be the world Scrabble champion. It seems irrelevant.

    “…there are thousands of religions out there, and all of them or nearly all got all or nearly all of the claims wrong (whether it’s Zeus, Thor, or the spirits of the ancestors messing around with stuff).”

    Sheer number of claims may not be as important as whether the most fundamental or central claims are true, and how important those truths are. For instance it might be very important to know that the universe is living or personal, in which case many of these thousands of religions would be right about something important, while atheists or naturalists would (typically) be wrong about that even if right about lots of other more trivial stuff. Also I’m just not sure that all these religions are wrong–about Zeus and Thor, for example. To me it seems fairly plausible that there are spirits or super-human persons in reality. So while the ancient Greeks might have been wrong about some details–maybe because their culture was a bit primitive in some ways–they might have correctly identified something real that they interpreted as Zeus. (And the Norse identified this thing with Thor, the Hindus with Indra, etc.) Likewise I have no trouble with the idea that our ancestors live on somehow, “messing around with stuff”. And given that the human consensus over very long periods and unrelated cultures appears to be strongly in favor of such a world-view, I take that to strongly suggest that these are natural and reliable intuitions (similar to the moral sense you accept). It’s also similar to what children naturally believe, it seems. Why assume that it’s all wrong rather than an approximation of some important truths? Likewise ancient Greek science and philosophy was wrong on lots of details but not a bad approximation of some important truths about some things.

    And even if some specific conceptions of Zeus or Thor or ancestor spirits really are clearly false or irrational by our lights, we’ll find lots and lots of comparable things if we make a list of all the scientific posits and hypotheses going back to pre-Socratic times or even just early modernity.

    “On the other hand, science tends to get things generally right. That’s how we’re communicating now, but it’s one example among many. Science is hugely successful.”

    Well, how _are_ we communicating now? Science and technology have a lot to do with it, but all of that is parasitic on fundamental mysteries or magic such as intentionality and consciousness and “information”. If religions say that “In the Beginning was the Word”, or that we are spirits not just bodies, or that there’s some kind of higher or ultimate spirit-mind-soul… then that could well be a true claim about things that we can dimly intuit through science and technology (and maybe ancient or ‘primitive’ people intuited directly). Never mind the internet. I take just regular speech and thinking to support belief in immaterial minds or souls, and maybe other “weird” things. Current science and technology can be reasonably seen as a primitive incarnation or manifestation of ancient religious ideas, and ever stronger cumulative evidence over time for some very different world-view that might be more like that of the ancients than anything people in the modern west think is plausible. (Also just a very sketchy sketch of an argument, of course.)

    Another sketchy point: if we’re trying to find a “way out” epistemically, we might look to basic biological or medical markers in comparing people or cultures. Whose intuitions are more likely to be functioning properly–all those people in the past who made babies and defended themselves and often succeeded in expanding their territory and spreading their genes and memes? or the pathetic sickly people of the deracinated modern west, who tend to have one or two kids or none, who won’t even _intellectually_ take their own side when their lands are being invaded and conquered, their history and heritage pissed on? I’d bet on the intuitions of the first group, the healthy life-affirming human animals not the suicidal self-loathing ones.

    “Another difference is that science is mostly self-correcting: the errors are eventually weeded out, at least when science is done properly (I’m not counting social sciences here; in that case, things get more complicated).”

    Well, eventually maybe they are! But it often takes centuries, and if our current scientific beliefs are any guide, there have also been lots of reverses and failures to correct things. There may be a difference of degree here, but maybe that’s because religious truth is a harder domain epistemically for human beings than physics or chemistry (which wouldn’t be surprising if there were religious truths). I think this is probably your best avenue for drawing a distinction. But it’s hard to make the argument clear and convincing partly because it’s harder to say what could count as a correction or evidence of a correction.

    I don’t agree that religious claims about the existence of superhuman agents are false. I’m just not sure. I like the idea that important dimensions of reality (represented in ancient myths and religions) are closed to us right now because of our own ideology and anti-religion. The gods are silent because we disrespect them. Or they’re not silent, but we just don’t want to listen. Maybe this is because I think there’s a divine or supernatural sense as well as a moral sense. Also it seems intuitively fairly plausible to me. I take human nature and the history of religious belief and the spontaneous belief of children in a personal or animate world as good reason to doubt any naturalistic or scientistic intuitions I find myself having. I take the ever weirder discoveries of science to suggest that reality is probably far weirder than most scientists or scientistic thinkers think. But this is probably not something that we can usefully argue about.

  27. I’m not sure what a way out could be. I think even God would be in roughly the same epistemic situation. I don’t think having no “way out” is a problem. Instead I think that recognizing that no one has a way out (in that sense) has implications for what counts as reasonable or justified belief.

    I’m not sure you understood my point. I’m not talking about a way out of a person’s brain/mind of course!
    What I’m saying is that even if a few intuitions are not working properly (or some part of our own mind, etc.), usually there is a way out. We don’t have a single intuition. We have the resources to – for example – try to figure out whether we’re making a mistake, by looking at the matter from another perspective, contemplating different scenarios, and so on.
For example, I was giving too much weight to the instances of moral disagreement, because I thought they were much more prevalent than they actually were (as a percentage of the cases in which different people make moral assessments), and also because in many cases I compounded that by failing to properly consider cases of moral disagreement that resulted from nonmoral disagreement.
But partly after reading B20’s arguments – not immediately, because we weren’t getting along then and I was in a “defense” mood, so to speak, but later, after the exchange ended – and partly after reading more about the subject and thinking about potential sources of error – something that might be causing my epistemic probabilistic assessments on the question of whether there was a moral sense to go awry – I came to realize that the human tendency to see moral disagreement as salient could be one such source. I took a closer look at how people regularly behave, and realized that disagreement happens over a much, much wider background of agreement.
I also incorporated information about what could be expected from evolution, as well as information from other sources on what’s called “inferential distance” (though it’s not only about deductions, but generally about information and how to assess it), and realized that a considerable percentage of the disagreement could be attributed to that. That is something I probably should have realized earlier, but perhaps failing to see it reinforced the belief I already had because of the apparent (to me) prevalence of disagreement, and that’s also a general bias that probably clouded my judgment. Personal factors (maybe seeing myself as a very smart guy who had figured out that there was no objective morality!) probably also affected my judgment.

    Now, that was many years ago, and it took me a while to get out of it. But I did get out of it. Of course, I didn’t do it without reading from/talking to others, etc. (see above and previous posts), but the point is that I had enough cognitive resources left to fix the problem, and make a proper assessment later.

    The most any possible thinker could hope for is something like reflective equilibrium. If you’ve done that–or, more precisely, if you’re able to understand that idea, and it appears to you on reflection that you’ve approximated it–then there’s just no possible or conceivable epistemic task or obligation left for you. (Standards are even lower, I assume, for people who can’t understand this ideal.) This is why I don’t understand the idea of rating someone as “being irrational” or flouting some “obligation” epistemically or morally when he’s done his best to achieve equilibrium (or it just appears to him on reflection that he’s done his best).

    That is a different matter, or rather, it mixes two different ones.

If I’m reading your understanding of reflective equilibrium properly (if not, please clarify), I reckon you (probably!) would say that on the matter of moral realism, I had not reached reflective equilibrium. If that is so, then I’m saying that most people (including people who have spent years or decades thinking about a matter) have (probably!) not reached reflective equilibrium. They would be able to get around their mistakes (in most cases) by thinking about the matters in some different ways, which might require considerable effort – especially on the emotional front, perhaps – but that route is available to them.

A different issue is whether it’s possible for an agent to be irrational if they had reached that sort of equilibrium. It seems to me the answer is intuitively affirmative, but perhaps that’s because you and I see irrationality in different ways. For example, if Bob suffers some sort of brain damage that makes him attribute high probability to the hypothesis that the people around him are out to get him – even if a normal human would not reckon that at all, and even if, as one would expect, he gets it wrong almost always – I would say that his brain is malfunctioning in a way that causes epistemically irrational beliefs. That he does not have a way out is not the issue.
But if I’m reading your points correctly, you disagree. I’m guessing this disagreement (probably!) runs deep. But at any rate, it’s not what I was trying to get at when I said agents usually do have a way out – though I should have said “adult human agents”, because that’s what I meant to talk about, and the general matter is much more complicated.

    Sure, I allow that some murderous jihadists (for example) might be just as rational and might behave just as morally as better kinds of people. Not all, of course. But just as I think you can be perfectly rational and end up with false beliefs, I think you can behave morally–insofar as that has to do with trying your best to act in ways that seem right on reflection–and still end up doing terrible things that other people would be morally obligated to prevent if they could.

No doubt people can rationally end up with false beliefs that lead them to do terrible things. But that’s not what I was trying to get at. My point was that, going by what you were saying, it appeared to me – and still does – that some of those jihadists (or not jihadists, but people who execute others for blasphemy, or kill their sister for “honor” under some set of beliefs, etc.) could be (and some probably would be) not only holding their beliefs rationally, but also behaving in a non-immoral fashion when acting upon them.
    That has the consequence that they do not deserve punishment. While that would not be a problem for the justification of fighting them, it would be a problem for the justification of punishing them if caught. Would that possibility not create (at least, in many cases) reasonable doubts about whether they deserve punishment?
If so, it might be that they are still legally punishable – there may well be no reasonable doubt as to whether they committed a certain offense – but there would be a serious question about the justice of the punitive laws.

    As an internalist I don’t think we can rate people’s behavior as immoral given merely facts that transcend anything internal even on reflection.

    I actually agree with that.

    I guess that’s partly because I assume judgments of immorality are directed at things like intention, motivation, beliefs and desires of the agent.

    I tend to agree, though it might be that the things that go into the “like” here are different in your understanding and mine.

    If I were to learn in the afterlife that, as a matter of fact, every time I ate salad I was causing someone to be wrongly imprisoned on Alpha Centauri a billion years later, I wouldn’t think I had behaved immorally.

    Once again, I agree.

    There would be some sense (maybe) in which I did something that was morally bad, but not blameworthy or wrong–and I think being immoral or behaving immorally implies blameworthiness or wrongness.

    More agreement!

    I suspect there might be some misunderstanding going on, because you seem to be bringing these up as points of disagreement.

    About 2: The crazy person is crazy, in having wildly false beliefs. Maybe I’m departing from ordinary language but I hesitate to say he’s irrational if he really has done his best to make sense of how things appear to him, and he has no “way out” of his world-view. On the other hand, it could be that he hasn’t done that. I’d be surprised if most people with paranoid delusions achieve the same degree of reflective equilibrium as Catholic philosophers.

If I got your understanding of reflective equilibrium properly, I don’t think Catholic philosophers have probably achieved that, though no doubt most of them have thought deeply about their religion and about some arguments.
But that aside, the point of bringing up the crazy man is not that he achieved reflective equilibrium, but that he does not have the cognitive resources to get out of his beliefs. Chances are he does not even have the means to attempt to achieve it. He does have many normal beliefs, and some of his faculties work more or less normally – so, in particular, he can still do everything from eating to going to the toilet, and even do basic math – but he also has some serious brain defects.

    About 3: This is probably true for some Catholics, but as far as I can tell many are really doing their very best, engaging all intuitions that appear relevant to them, etc. And still they’re Catholics.

I thought I was doing my best too (in re: moral realism). And I definitely was making an effort. But it wasn’t enough. Later – more effort, more reading, etc. – I did better. Much of it has to do with putting aside one’s emotions and realizing that one may be biased towards protecting one’s own reputation (before others, and even before oneself), which may well involve defending the beliefs one has defended for years or decades.

To be clear, I know Catholic philosophers who are very intelligent, knowledgeable, good at logic, etc., who dedicate a lot of time to thinking about atheistic arguments, who are good at spotting errors in deduction, and so on; but even then, I’m pretty sure they’re making abysmally bad epistemic probabilistic assessments when it comes to things like walking on water, raising the dead, the resurrection, etc., and on a number of matters pertaining to their religion. I also think most of them at least very probably can do better. But they would have to look at matters from a very different perspective, reduce their level of defensiveness – which I think is even instinctive in humans, so that is difficult but doable – and generally try to look at the matter while really not trying to win (not even unconsciously; a serious conscious effort is probably required for that).

    But if that is not so (I’m saying it’s probable, not certain), I would be inclined to say that their beliefs about the resurrection, etc., are still epistemically irrational (but see above; we seem to disagree on what it takes for someone to be rational).

    Most people have no real argument for the wrongness of pedophilia, and haven’t really thought about it. Their intuitions are (very probably) shaped by acculturation and could be changed if culture changed–look how intuitions about homosexuality or marriage have changed in a few decades or less, and rock-bottom common sense about being a man or woman appears to have been blown apart with just a few years of propaganda, bullshit and pseudo-science, etc.

But I don’t think the lack of an argument is the problem: intuitive assessments suffice, and arguments are usually attempts (even if unconscious) at generalizing from specific cases, trying to pin down what’s wrong with a behavior, etc., and are usually wrong. Now, I think there is nothing wrong with same-sex relations per se (as with other relations, circumstances can make them wrong), but my assessment is intuitive, and I think the intuitions in the other direction were distorted.
Also, I think many of the judgments on hot topics are probably the result of an implicit defense of a person’s own social standing. In the ancestral environment, a person shunned by his or her group would very probably have died, so it’s unsurprising that humans have a tendency to share the views of the group when failing to share them results in social condemnation, even when it’s epistemically irrational to hold them (pretending to have those views without actually having them would be costly and less effective from an evolutionary standpoint).

    Side note (perhaps):

When it comes to what you call common sense about being a man or a woman, I’m probably at odds with both the left and the right, though more with the left (probably). I actually think there is a way in which a woman might have male sexual organs, or vice versa. Probably – I have alternative hypotheses, and this one is tentative.
But consider this: let’s say we could transplant brains or parts of brains, and Alice’s brain (or much of it) gets transplanted into an otherwise male body. She still has a female mind, in terms of predispositions, interests, etc. (I know many people deny that there is sexual dimorphism in human psychology, but they’re mistaken). Would the resulting person be a man, or a woman?
Let’s consider a different hypothetical case: a person with male sexual organs for some reason (something malfunctions) develops a female mind (and of course, the corresponding female brain structure, at least the relevant parts). Would that person be a man, or a woman? I’m inclined to say a woman. I just think it’s improbable that at least the vast majority of people who claim to be women but have male sexual organs are like that.

(Regarding XY vs. XX chromosomes: in normal and in nearly all actual cases, they’re crucial to the development of a human organism as male or female, and to its further development as a man or a woman. But I think they’re irrelevant when it comes to truth-making.)

    I’d say many ordinary people really do not have the ability in the relevant sense to think their way out of these beliefs. In some sense, maybe–e.g., had they been born in a different society, educated and acculturated very differently, trained to think more rigorously and so on.

    Here we disagree.
I think people (not all, but most) do have the ability in the relevant sense to think their way out of those beliefs, at least if one construes thinking one’s way out broadly, including reading arguments from others. If only they made a significant conscious effort not to get immediately morally outraged by what the other person is saying, and instead took a close look in a calm manner, I think that given sufficient time (within what a person in the West tends to have) they would probably get out of them.

    In some sense, maybe–e.g., had they been born in a different society, educated and acculturated very differently, trained to think more rigorously and so on. But in that sense we might also speculate that Sid Vicious has the “ability” (or capacity, or ability to acquire the capacity?) to be the world Scrabble champion. It seems irrelevant.

    Yes, I agree that that seems irrelevant.

    Sheer number of claims may not be as important as whether the most fundamental or central claims are true, and how important those truths are. For instance it might be very important to know that the universe is living or personal, in which case many of these thousands of religions would be right about something important, while atheists or naturalists would (typically) be wrong about that even if right about lots of other more trivial stuff.

Actually, not so much, since there are plenty of religions in which the first of the gods emerged from some impersonal stuff. But at any rate, I don’t think the belief that the universe is living or personal is more central a claim to many religions than the beliefs about the deeds of the gods, which they somehow take to justify their tribe’s belief that they are the chosen, superior, better people, etc., or their claims to the land, etc.; those seem more central in many cases, or at least no less central.
At any rate, the point remains that religions are normally bad at making claims involving the actions of superhuman agents. In fact, we can see that in that particular field, the claims are regularly false. So are the moral claims based on them in a rather direct manner (e.g., claims about which superhuman agent they have an obligation to worship lest their enemies defeat them, etc.).

    Also I’m just not sure that all these religions are wrong–about Zeus and Thor, for example. To me it seems fairly plausible that there are spirits or super-human persons in reality.

    There might be superhuman agents like very advanced aliens, but they do not intervene in human history, and in any case, religions made up entire detailed stories about the actions of some of those agents and made them central tenets, and they’re false.

    On a related note, you said earlier:

    To me it seems not so unlikely that there are aliens out there, or that some of them might abduct humans once in a while. We take dolphins out of the ocean to do experiments. It’s not so unlikely to me that some of those aliens might have evolved powers that we might as well call “supernatural” or “god-like”.

    Advanced aliens may well be out there, but the reason some humans take dolphins is that our crude technology does not allow us to learn all of that and more from them by means that are undetectable to the dolphin and cause no trouble, like nanomachines and/or micro, bug-like machines. The abductions would not be required if the aliens didn’t want to be seen. And if they did not care whether they’re seen, we would have lots of video footage, plenty of reliable witnesses, etc.
    Another reason why we can tell that the claims of abductions are false is that they describe the aliens as some sort of flesh-and-blood humanoid. But that’s not what the aliens would look like. They very probably would not have two arms, two legs, etc. (not a very efficient design for the task), but in any case, they definitely would be robots. If the alien civilization is not an only-robot civilization, it’s a civilization that has robots and AI that they can send to do those tasks, without taking unnecessary risks (even if very low, that’s something an advanced entity that lives for millions of years would factor in), or even doing uncomfortable stuff.

    Moreover, we do know enough about physics to know certain things (usually attributed to the alien spacecraft) are just not doable by any craft, no matter how advanced. And there are several other reasons, but it’s getting too long.

    And given that the human consensus over very long periods and unrelated cultures appears to be strongly in favor of such a world-view, I take that to strongly suggest that these are natural and reliable intuitions (similar to the moral sense you accept).

These are not reliable intuitions, clearly, given that the claims about the intervention of superhuman agents are made-up stories. Moreover, as I pointed out, there are evolutionary reasons to think we would have a generally reliable moral sense (or a similar sense; we got morality; other advanced social agents would get some analogue), whereas there are also good evolutionary reasons to have an overzealous agency-detection mechanism (e.g., oversimplifying, the ancestor who thought it was a lion when it was the wind got scared; the one who thought it was the wind when it was a lion got eaten and didn’t get to pass on her genes), as well as a tendency to have false beliefs when adherence to those beliefs is seen as one of the tokens of group membership – on which an ancestor’s life would depend – and so on (i.e., there are more reasons, but that should be good enough, and this is taking too long).

    It’s also similar to what children naturally believe, it seems. Why assume that it’s all wrong rather than an approximation of some important truths?

It’s not an assumption. It’s an assessment. I already explained some of the reasons for that assessment (or a hypothesis about what causes my intuitive assessment). We can already see that religions generally make wildly false claims about the intervention of superhuman agents. There are also other, evolutionary reasons to expect that those would not be reliable methods of approximating any important truth. But leaving them aside, the simple systematic falsity of the claims about the involvement of superhuman agents is very strong evidence.

That aside, I would have to say that intuitively, that sort of story just does not fit with the world as science explains it. People who lived 10,000 years ago (for instance), who did not know what caused storms or the movement of the sea, who had no idea what the Moon or the Sun were, and who did not know of the systematic failure in question, may not have been irrational in buying those beliefs. Or maybe they were. I don’t have enough info to be sure. But in our world, it’s simply absurd (I don’t want to sound hostile, but I want even less to be dishonest). And yes, I know many intelligent people do not see it as absurd.

    And even if some specific conceptions of Zeus or Thor or ancestor spirits really are clearly false or irrational by our lights, we’ll find lots and lots of comparable things if we make a list of all the scientific posits and hypotheses going back to pre-Socratic times or even just early modernity.

I don’t think pre-Socratic stuff is science in the same sense as present-day science (real science, not pseudo-scientific stuff), at least in nearly all cases. I’m talking about a systematic study, with some accepted methods, etc.
As for the claim that we’d find “lots and lots” of comparable things, that’s vague, but in any case it’s not the proper way to look at it. Science generally gets things right, and sometimes wrong. It is also generally self-correcting. We might find “lots and lots” of false things, but they constitute a small percentage compared with the things science gets right. On the other hand, religious claims about the involvement of superhuman agents are at least nearly always false (I would say always, but at least nearly always).

    Well, how _are_ we communicating now? Science and technology has a lot to do with it, but all of that is parasitic on fundamental mysteries or magic such as intentionality and consciousness and “information”.

    Certainly we’re not communicating thanks to religion, and again, the claims about the involvement of superhuman agents are at least nearly always false.
    On the other hand, science had to get a lot of things right to make this communication possible.

    Another sketchy point: if we’re trying to find a “way out” epistemically, we might look to basic biological or medical markers in comparing people or cultures. Whose intuitions are more likely to be functioning properly–all those people in the past who made babies and defended themselves and often succeeded in expanding their territory and spreading their genes and memes? or the pathetic sickly people of the deracinated modern west, who tend to have one or two kids or none, who won’t even _intellectually_ take their own side when their lands are being invaded and conquered, their history and heritage pissed on?

    That depends on the intuition. Could you give some specific examples?
    If you’re talking about beliefs in the intervention of superhuman agents (e.g., helping them in their wars of conquest), they got it wrong. Then again, some Western tribes today (some on the left, some on the right) exhibit the same sort of false beliefs. They are partly based on faulty intuitions.

Also, expanding their territory often involved having false beliefs about other tribes (that they were subhuman, etc.), and often those beliefs were held in an epistemically irrational manner, because failure to hold them would likely have resulted in social isolation, and humans are instinctively driven to avoid that. In fact, some Western tribes today (whether on the left or on the right, and whether theistic or atheistic) still exhibit these sorts of shortcomings, even though they seem generally less willing to engage in genocidal violence.

    But perhaps you’re talking about some moral intuitions?
    If so, we can discuss the matter, one at a time. But I think they’re not so different.

    I’d bet on the intuitions of the first group, the healthy life-affirming human animals not the suicidal self-loathing ones.

I don’t know which intuitions you’re talking about, exactly, but I’d say it depends on the intuition. I do think some of the failures are common to all cultures, including present-day Western tribes, and I think they aren’t as self-loathing as you think in most cases. It may appear that way to you, but when you look deeper, they tend to loathe the people in the other tribe, not themselves, most of the time.

    At any rate, perhaps this is an interesting matter that we could discuss, and on which we’re not going back and forth. Could you present one or two cases (i.e., which intuitions), so that we can talk about them in particular?

    Well, eventually maybe they are! But it often takes centuries, and if our current scientific beliefs are any guide, there have also been lots of reverses and failures to correct things. There may be a difference of degree here, but maybe that’s because religious truth is a harder domain epistemically for human beings than physics or chemistry (which wouldn’t be surprising if there were religious truths). I think this is probably your best avenue for drawing a distinction. But it’s hard to make the argument clear and convincing partly because it’s harder to say what could count as a correction or evidence of a correction.

I disagree. The differences are huge. Religions systematically make false claims about superhuman agents – complete, detailed, made-up stories – and in nearly all cases they do not correct them.
Science makes errors much less frequently than it gets things right in its domain, and the reverses and failures, while “lots”, are infrequent.

    I don’t agree that religious claims about the existence of superhuman agents are false.

They’re at least almost always false – Thor, Odin, Zeus, etc. – and so are the claims of interventions by superhuman agents. But we’ve been over this already. We clearly disagree, and I think we’re beginning to go in circles on this.
I think discussing the specific intuitions you’re talking about might be a more promising avenue.
