On Freedom, Excellence, Liberals and Minorities

Freedom is a tricky thing to understand, for there are many different kinds. Here’s a short list:

1. Freedom from external restraint. For example, I am free in the sense that I am not imprisoned by steel bars, shackled, or bound by rope. Those guys at Guantanamo Bay? Not so much.

2. Freedom from internal necessity—our choices are not determined by any internal necessity.

3. Freedom from moral bind. For example, being married, I am not morally free to sleep with another woman. If I were later widowed and remarried, then I could, but not now. I am not morally free in this respect. I have many other moral freedoms, just not this one.

4. Freedom from positive law, rules and regulations. For example, if no positive law, rule or regulation prohibits some action, then, relative to those laws, rules and regulations, I am free to do that action.

These freedoms are all freedoms from some particular law, restraint or bind, though there is a fifth kind:

5. Freedom for excellence. There are certain traits, skills, habits and virtues that free a person for excellence. For example, an undisciplined and untrained person on a guitar makes noise. But a disciplined and trained person on that same guitar makes beautiful music. So those who are undisciplined and untrained are not free for that musical excellence—they are bound by ignorance. What is true for musical excellence is also true for moral excellence: if we are untrained and undisciplined, subject and given to ignorance and human vice, say, greed, lust, jealousy, anger, and so forth, then we are bound by ignorance, vice or sin. We are unfree in this respect.

The first four freedoms are freedoms from certain binds, laws and restraints, though the fifth freedom is freedom for a particular end. What’s interesting about the fifth freedom is that it requires certain binds, laws and restraints, particularly over our vices and undisciplined will. Without those binds, there is no such freedom for excellence—there is no self-realization. Hence, binds or laws, if they are good, will always curtail some freedoms, but only for a greater freedom for excellence.

People nowadays don’t recognize and properly value freedom for excellence. We see liberty as sheer licence, which is really problematic (please read Shain’s article Liberty and License in the American and Western tradition). Moreover, our freedom to choose is seen as a good in and of itself, as if it were an end in itself, turning freedom of choice into a kind of god, but this is all wrong. Liberty is not licence, and our freedom to choose is not a good in and of itself. Instead, liberty is self-mastery or self-realization (as Socrates held). In fact, our freedom to choose is a good only so that we are able to choose to pursue our excellence; hence, freedom of choice divorced from the pursuit of excellence is not good, and choices known to be contrary to excellence are, in fact, misuses of our freedom (Gal 5:13-26 and 1 Peter 2:16).

Deep down inside, I think most of us know that what I am saying is true, but many people have just lost touch with it, for we live in a really shitty world. To better see my point, consider two heroes of the West: Socrates and Jesus Christ.

Socrates pursued wisdom, unconcerned with wealth and prestige. He refused to stop this pursuit, even if it cost him his life. And when convicted of a crime and sentenced to death (unjustly, I might add), he chose to face his execution rather than to commit an injustice by escaping (see the Crito). Hence, Socrates chose death over injustice. Now, do you think that Socrates was unfree because he didn’t have much wealth, was imprisoned and sentenced to death? Certainly not in the sense that matters most. In that sense, he was freer than most people today, for the lower inclinations and vices common to man, say, the inclination to preserve one’s own life over justice, wrath, and so forth, were controlled and conquered. He was thus not enslaved by fears, vice and worldly desire, but freely committed to justice without compromise (See: Apology, Crito and Phaedo).

Take another example: Jesus. He was a poor carpenter who was promised all the riches, power and prestige of the world if he chose to worship Satan. In response, Jesus denied himself these worldly goods and rebuked Satan, aligning himself with the good and God’s will, choosing to remain in poverty and to walk toward certain death. Now, do you think that Jesus was unfree, shackled by the moral tyranny or heteronomy of God? Not a chance. Jesus chose to pursue the good and God even to the contempt of himself, just as he later did on the cross. His choices were thus exemplars of liberty, not denials of it.

That sense of awe we feel when learning about these men is not felt because they were rich, powerful and socially prestigious, for they weren’t. Instead, we admire them because they chose the good despite every lower human inclination and vice pulling them toward the contrary. What we’re admiring is their self-discipline, strength and genuine liberty, much of which we have lost sense of in these crazy times.

What else can I say about freedom?  Well, we seem to really value the freedom of access in the material world—the possession, control or power over resources. Because of this value, we idolize money, wealth and social prestige. We are slaves to it, even to the contempt of the good. In some ways, our devotion is understandable, because we need resources to live. But modern liberals are so captivated by the freedom of access that they often see the poor as characteristically unfree and without happiness or excellence, shackled and victimized by their poverty. In fact, for modern liberals, just about any inequality pertaining to class, race and sex is explained in terms of some injustice regarding freedom of access, usually at the hands of those with more access. Hence, liberal solutions are often demands to redistribute the freedom of access: Other people are told that they must surrender money, power and wealth. The happiness, excellence and liberty of the poor are thus treated as if they were stolen by elitists but made available by means of government handouts.

So what’s a problem with this? Well, because the problem (lack of freedom of access) and the solutions are, for liberals, fixated on social dynamics, their solution precludes the need for self-examination and self-realization, for one does not need to consider his own character, choices and responsibility if his problem is just that he’s being wrongly “held down” by rich, white men, or whomever else. What develops, then, is a culture of grievance, entitlement and discontentment rather than a culture of happiness, excellence and liberty. As Justice Clarence Thomas said, “Sadly, today it seems as though grievances rather than personal conduct are the means of elevation.” Does that sound familiar to anyone? I suspect so. I present Exhibit A.

But let me make something clear: You won’t find your excellence, liberty and happiness in my wallet. You could plunder it dry, but you won’t find it there. And when you fail to find it there, the modern liberals, not knowing any other solution, will simply manufacture a new line of social oppression that has “held you down”. And on and on we will go, just like the liberals are now doing with women, black Americans and other people of colour. It’s a wild goose chase.

My proposal? It’s simple and anti-climactic: Quit blaming other people and quit trying to find your excellence in debt and grievance; turn your glare inward. Seek solitude, or prayer, but certainly seek the good. Change yourself. Elevate yourself. It’s hard work, but so is anything worthwhile.

P.S. Reading from the Bible, Plato and Aristotle wouldn’t hurt either. Just saying.

58 Comments

  1. Something else lots of folks miss is that much of the seeming “unfairness” that’s seen outwardly can be traced back to immoral (sinful) decisions and practices, which result in negative consequences.

    “Do not be deceived: God is not mocked, for whatever one sows, that will he also reap. For the one who sows to his own flesh will from the flesh reap corruption, but the one who sows to the Spirit will from the Spirit reap eternal life.” (Galatians 6:7-8)

    Of course this is a Biblical principle and not an absolute truism, as we can very well see that some people who live debauched and even criminal lives seem to get along quite well, whereas some who live moral and upright lives may suffer terribly. But there are also cases where sinning can result in judgment in the “here and now” in the form of negative consequences in this life.

    A sexually promiscuous person may contract myriad STDs. A drunk may careen off the road and lose his legs in a car crash. A woman who aborts her children may suffer from enduring guilt and psychological trauma, etc.

    Sin has consequences, and no amount of government intervention can abate that.

  2. I quite enjoyed the Aristotelian/Thomistic perspective of this post! It’s funny, we just discussed one of your points here in my seminar on Aquinas this past week. The question was raised as to what extent a person has free will if developing/acquiring the virtues limits the possibilities of how a person can act. Presumably, if I discipline myself and become a temperate person, I will thereby act according to this virtue; my actions will be confined to a narrower subset of possibilities than was previously available to me as an intemperate person. But this does not entail a limit on my freedom. As I think you rightly pointed out, becoming a temperate person gives me the freedom to achieve a certain good that was not available to me as an intemperate person, just as disciplining myself to play an instrument places restrictions on me, yet allows me to achieve the wonderful good of playing music, which was not available to me prior. I also liked how you tied these ideas into the politics of liberalism. Very interesting!

    • Thank you.

      Regarding the first part of what you said, our earthly life is very finite and our ability to acquire skills is limited; hence, ANY choice we make opens some doors and closes others. In my own life, say, I devoted myself to family and study, so I missed the opportunity to become an Olympic runner. Not that I ever wanted to run, of course. I dislike cardio work, but the point remains that the opportunity is lost because of the choices I made. That’s the nature of human choice.

      • Very true. Another aspect of this which I think is quite interesting is your point about how liberals cannot conceive of a natural inequality. If there is inequality in the world, it must be due to some kind of discrimination or social oppression. I think this is a great point, and one that I have heard elsewhere as well. This is perhaps going a bit off topic, but I think this point can shed light on an issue liberal-minded atheist types often bring up about the New Testament: why didn’t Jesus directly address the problem of slavery? Doesn’t Jesus therefore condone slavery?

        From what you’ve said here (and this is something I’ve thought about as well), Jesus’ message was not concerned with politics or social issues. In this case, yes, people were in slavery because of a social/political force beyond their control. Yet my point is that liberals seem to be able to view this issue only in social terms: what good was Jesus if he wasn’t a social-justice warrior, liberating all the slaves of the ancient world? Christ’s message was one of a spiritual nature; it concerned man’s spiritual state and relationship with God, and therefore cut far deeper than social-political classes and systems. In a sense, the slave who knows Christ is more free than the free man who does not know Christ. I fear the liberal mindset would have a hard time grasping this point though.

  3. Tom,

    I disagree with your point: I’d say the slave who falsely believes he knows Christ is still a slave, and far less free than the free man who at least doesn’t mistakenly believe he knows Christ. But I’m not a liberal, and my assessments about Jesus, Christianity, etc., have nothing to do with liberalism.
    Granted, you didn’t suggest that non-liberals would agree with you, but rather, that liberals would have a hard time “grasping” your point. On that note:

    I fear the liberal mindset would have a hard time grasping this point though.

    I suggest you google “pope Francis redistribution”.
    More generally, Latin America is a predominantly Christian region (maybe more than any other), and also a predominantly leftist region, in economic terms. Christianity looks very different in different parts of the world, just as it looks very different if you look at different centuries, etc.

    • Talking about someone mistakenly knowing Christ is irrelevant to my point. My point was that Christ was not concerned with political and social matters; he was concerned with the spiritual state of man’s soul. To that extent, someone can physically be a slave, yet spiritually be free if they know Christ, and this is the kind of freedom that really matters (spiritual freedom). If someone who is not a slave but a free person does not know Christ, they are less free, spiritually speaking, than the slave who knows Christ. For they are then in slavery to their sin.

      You see this a lot in prison testimonies, for example. Inmates who come to know Christ may live the rest of their lives in prison, yet they consider themselves truly free because they have come to know Christ. I would rather be confined in prison and know Christ than be free and not know him.

      My point about the liberal mindset is that because they most often deal with inequality in social terms (as Catholic Hulk said), it will be difficult for them to grasp this point. All they would think about is how wrong it was for Christ not to say anything about slavery. This misses the whole point of the gospel and what Jesus came to accomplish. I completely agree that Christianity will look different in different places. That’s irrelevant to my point though.

      • Tom,

        I’m not talking about “someone mistakenly knowing Christ”, but about a slave who falsely believes he knows Christ. It’s relevant to one of your points – i.e., about the slave. I don’t think that anyone actually knows Christ, because there is no Christ (which is not to say that Jesus didn’t exist).
        Now, you say “If someone who is not a slave but a free person does not know Christ, they are less free, spiritually speaking, than the slave who knows Christ. For they are then in slavery to their sin.”
        Well, I disagree with that. People who don’t have the false belief that Christianity is true aren’t (in general) for that reason slaves to their immoral behavior.

        You see this a lot in prison testimonies, for example. Inmates who come to know Christ may live the rest of their lives in prison, yet they consider themselves truly free because they have come to know Christ. I would rather be confined in prison and know Christ than be free and not know him.

        But even if those inmates consider themselves free, they’re not.

        My point about the liberal mindset is that because they most often deal with inequality in social terms (as Catholic Hulk said), it will be difficult for them to grasp this point. All they would think about is how wrong it was for Christ not to say anything about slavery. This misses the whole point of the gospel and what Jesus came to accomplish. I completely agree that Christianity will look different in different places. That’s irrelevant to my point though.

        My point in that context is relevant to your point. I’m saying that your claim is false, as evidenced by the fact that millions of Christians are leftists (probably they fall into the category “liberals” as you use the word). They don’t say Jesus did anything wrong. Rather, they disagree with your position about part of Jesus’s message. That’s why I mentioned Latin America and pope Francis in this context, as examples against your claim about liberals.

  4. Hi Catholic Hulk,

    I’m not sure what your intended audience is, but I disagree with several of your points, such as:

    Deep down inside, I think most of us know that what I am saying is true, but many people have just lost touch with it, for we live in a really shitty world. To better see my point, consider two heroes of the West: Socrates and Jesus Christ.

    I don’t think they do, because I don’t think what you’re saying is entirely true (though part of it is). But that aside, why do you think most people agree with you, deep down?

    I don’t know about most people, but I definitely do not agree with you about Socrates, let alone Jesus.

    In my assessment, Socrates was mistaken (assuming the story as told): escaping would not have been unjust, or immoral. Still, staying there to die wasn’t immoral, either.

    As for Jesus, assuming the relevant part of the biblical descriptions of his actions (not the biblical moral assessments, of course!), I reckon he was/is not a hero, but a supervillain.

    In that sense, he was freer than most people today, for the lower inclinations and vices common to man, say, the inclination to preserve one’s own life over justice, wrath, and so forth, were controlled and conquered. He was thus not enslaved by fears, vice and worldly desire, but freely committed to justice without compromise (See: Apology, Crito and Phaedo)

    Assuming that Socrates was right and he had a moral obligation not to escape, he wasn’t free from moral bind (I think he was mistaken, but your OP seems to hold he was right).

    He was a poor carpenter who was promised all the riches, power and prestige of the world if he chose to worship Satan. In response, Jesus denied himself these worldly goods and rebuked Satan, aligning himself with the good and God’s will, choosing to remain in poverty and to walk toward certain death.

    In that story, Jesus was an enormously powerful person (far more powerful than any human, and even than Satan), and was offered riches and all in exchange for worshiping Satan. Now, worshiping a far less powerful person (i.e., Satan) in exchange for something he could easily get by himself – using his own powers – isn’t at all tempting, at least if Jesus’s psychology resembles anything recognizable by us (else, all bets are off; he’s just completely alien).
    At most, he’d have been tempted to use his own power – far greater than Satan’s – and get those riches, or whatever he wanted (but why would he want riches? To a powerful person like that, they probably wouldn’t mean anything).

    And of course, assuming the relevant part of the biblical description, it’s obvious that Jesus was free from external restraint, because he had the power to do as he pleased. He wasn’t free from moral bind or positive law, and it’s unclear to me whether there was some internal necessity (it’s not entirely clear what you mean by that).
    As for certain death: actually, in the story he was certain that death as a human would not make him cease to exist or send him to a bad place; rather, he would remain an enormously powerful person, so death (death, not suffering) didn’t seem like a bad thing in that context.

    That sense of awe we feel when learning about these men is not felt because they were rich, powerful and socially prestigious, for they weren’t. Instead, we admire them because they chose the good despite every lower human inclination and vice pulling them toward the contrary. What we’re admiring is their self-discipline, strength and genuine liberty, much of which we have lost sense of in these crazy times.

    Some (many) of us do not feel any sense of awe when learning about those men.

    I think that, assuming all of the Platonic stories about Socrates are true (which isn’t actually the case, but let’s say so), Socrates apparently was very brave, but misguided when it came to his sentence and death (though that wasn’t immoral on his part, it was a mistake). Also, apparently he thought the brutal Spartan regime was good, and claimed to have received knowledge from some sort of superhuman agent, claimed to be able to read signs, etc. But he was a smart philosopher, and sometimes he was right – some other times, wrong. He doesn’t strike me as a hero, or a villain.

    As for Jesus, he was enormously powerful assuming the biblical stories, but in any case, there is nothing I admire about him.

    But let me make something clear: You won’t find your excellence, liberty and happiness in my wallet.

    I won’t, but why do you think leftist wealth-distribution programs will not increase the happiness and overall wellbeing of other people?
    It seems apparent to me that usually *some* people benefit from them. They might not find excellence (though some might too, if they get to go to a college they otherwise couldn’t), but they may well still be better off.

    P.S. Reading from the Bible, Plato and Aristotle wouldn’t hurt either. Just saying.

    I don’t think that’s an effective way to argue against leftist positions, because different people tend to make *vastly* different assessments after reading them (most saliently the Bible, but also Plato’s and Aristotle’s works), as the distance between our respective assessments on the relevant matters exemplifies.
    In particular, some of us are not leftists but disagree about the morality of the main characters described in the bible, aren’t Aristotelians, etc., whereas a lot of people endorse Christianity and are strongly for redistribution programs, generally big government, etc.

    • The point isn’t so much that Socrates is correct in thinking that escaping would be an injustice. The point is more that he chose death over injustice, which shows self-mastery and freedom.

      My point about Jesus doesn’t presume his divinity. Hence, we don’t need to see him as more powerful than Satan. But even if we did, on most Christian theologies, Jesus is truly man and truly God; and as such he felt temptation, just the same as we often do. Some theologies limit Jesus in this capacity, denying him, say, omniscient knowledge, power, etc. Others do not, but find other ways to deal with the problems raised. It’s a complex debate, one that I don’t need to get bogged down in to make my point.

      Regarding redistribution, that depends on what you mean by “happiness” and “excellence”. You couch this in terms of “benefit” and being “better off”. I’m unsure what this is supposed to mean. The way I see it, what we’re talking about here is self-realization and self-mastery, which is not found in my wallet. You could say that some people might take redistributed funds, go to college, learn from Plato and Socrates, and then reach or pursue self-mastery. To that, I’d say, sure. That could happen. It’s unclear how that challenges what I said.

  5. Catholic Hulk,

    Socrates chose death over what he thought was an injustice, but in my assessment, he was mistaken.
    Regarding Jesus, I wasn’t assuming he was divine, but very powerful, as indicated in the biblical description. But I didn’t assume the moral claims or implications in the Bible, and a claim of divinity would be (under usual philosophical definitions of “God”) in part a moral claim.
    I know that Christian theologies claim Jesus felt the temptation. I was challenging that, arguing that it would not make sense for him to feel tempted, and it’s not “just as we often do”, because we would not feel tempted by riches to worship another person if we know we have the power to get the riches without worshiping anyone, let alone if, on top of that, we know we’re far more powerful than the other person.

    By “happiness”, I mean what the word usually means in English, because I thought that’s what you meant in the OP. By “excellence”, I mean what you seem to mean. If you think definitions are needed in this context, please provide definitions, given it’s your OP.

    I don’t see anything unclear about “benefit”, and “better off”. I’m using words in their usual sense in English, and context seems clear. No definition is required. They benefit and they’re better off because among other things, that may help them achieve happiness and sometimes excellence.

    As for how what I said challenged what you said, you were arguing against redistribution – among other things – on the basis that redistribution apparently wouldn’t help people achieve happiness, or excellence, etc.; my point is that that depends on the case. If you think some people do benefit in those senses from redistribution, then I’m not sure what you intend to accomplish by saying it won’t be found in your wallet. Whom are you trying to persuade, and of what?

    • Respectfully, my point regarding Socrates is unaffected by whether you disagree with him on whether it was an injustice to escape.

      Regarding Jesus, I could argue the details, but I think we’d miss my larger point. Even if we presume that Jesus could have gotten himself those riches without worshipping Satan, the point is just that he didn’t. When tempted to depart from the will of the father for a life of riches and prestige, he chose otherwise. That’s the self-mastery point I’m making.

      You seem more interested in rebuttal than hearing the larger points, which have gone completely unaddressed.

      I don’t have much patience for your insistence on relying on what those words usually mean in English, since it doesn’t tell me what you think they mean. And to repeat my earlier post, I don’t see how your concerns challenge what I said about redistribution.

  6. Respectfully, the claim about Socrates is affected by whether he was correct. It’s not always admirable to act bravely on a mistaken moral belief.

    As for Jesus, the point is that either he wasn’t tempted by Satan – the offer was not tempting -, or he is psychologically alien to us. In any event, there does not seem to be anything admirable about such a powerful person who chooses for a while and for no good reason not to use his vast power, only to go back to use it later.

    I’m interested in challenging the points I mostly disagree with. You raised those points in the OP.

    Your impatience is out of place, but there is nothing I can do about that. Could you at least explain who your intended audience was, and what you intended to convince them of?

    • I’m unsure why it would be affected in this case, nor did you present an argument as to why Socrates was wrong. We just learned that you disagree, and about that fact, respectfully, I just don’t care.

      Regarding Jesus, I see no reason to think that he was not tempted by Satan, even granting that Jesus had the power to achieve those riches and prestige himself. Perhaps he didn’t find the worship component particularly attractive, but the suggestion of prestige and riches instead of fulfilling the will of the father is tempting.

      Of course, we could construe the idea of worshipping Satan in far broader terms than its literal sense, understanding it to be the rejection of God’s will and embrace of sin and worldliness. So in other words: deny God’s will and embrace sin and worldliness, and you’ll have it all. That makes good sense, since Jesus could not “have it all” unless he chose to deny God’s will and embrace sin, etc.

      In both cases, the remaining point is just that even if we presume that Jesus could have received those riches and prestige on his own, Jesus still affirms God’s will despite his lower, human inclinations pulling him away.

      That’s all I will say about this.

      • Catholic Hulk,

        You didn’t present an argument as to why Socrates was right, either.
        Yet, the matter of whether he was right or wrong is relevant to whether his sacrifice is admirable. The chances that sacrificing oneself to prevent an injustice is admirable are much greater than the chances that sacrificing oneself to prevent something that one falsely believes is an injustice is admirable (though it depends on the circumstances).

        That aside, the insistence on arguments in moral matters has the following difficulty: if you’re talking about a deductive argument, then in order for it to be valid, it has to start with moral premises if the conclusion is a moral one. But then, there is the problem of how to show the premises are right. It seems if your standard is that an argument is required to support moral claims, you would need arguments for the premises too, and so on.

        You don’t care that I disagree about Socrates – obviously. I don’t care that you disagree, either. But again, I would ask who your target audience is, and what you’re trying to persuade them of. Surely, the fact that you don’t care that I disagree with you about Socrates is not going to persuade me of anything, and neither will your claim that Socrates and Jesus were heroes.

        As for Jesus, I already gave a reason why he wouldn’t feel tempted, at least under the assumption that his psychology resembled human psychology. It’s because someone with a human-like psychology wouldn’t feel tempted by an offer of riches whose price is to worship somebody, if they know they can just as easily get those riches on their own, and furthermore, the being who makes the offer is far less powerful than they are.

        When you say that “the suggestion of prestige and riches instead of fulfilling the will of the father is tempting”, that’s not the right contrast to make. The right contrast is this: the offer of prestige and riches at the cost of worshiping a lesser person, instead of prestige and riches at no such cost, is not remotely tempting.
        If Jesus was tempted to use his power to get riches and prestige, that wouldn’t be connected to Satan’s offer (on a realistic human psychology).

        With regard to the alternative you present, okay, that avoids that particular objection, at the cost of an unlikely interpretation of the text. In that case, I would say that his actions were still not admirable, since he continued to cooperate in Yahweh’s evil plans. If he had a will other than that of his father, he ought at least to have stopped cooperating (and given his previous cooperation, I’d say he had an obligation to rebel and fight). Of course, you will not agree with that, since we disagree on whether Yahweh (i.e., the person or entity described in the Bible) is morally good, etc., but that brings me back to the issue of your intended audience.

        Is the OP meant for people who already agree with you that Yahweh is good? (generally, perhaps it would be useful to add to the posts a brief disclaimer explaining who the intended audience is).

  7. The point, I think, of the OP revolves around self-mastery and self-sacrifice versus self-catering and self-indulgence at the expense of others.

    Generally (to most normal, non-sociopathic people) these characteristics are virtues on the one hand, and vices on the other.

    Admirable versus contemptible. The leftists tend toward the enablement and cultivation of the latter rather than the former. That’s abusive and enslaving.

    • CRD,

      I think that’s part of the OP but not all.
      With regard to the points you mention, I would agree that self-indulgence at the expense of others is a vice, but whether self-sacrifice is a virtue depends on the case. There are plenty of people who engage in self-sacrifice for an evil cause they consider good. In their case, self-sacrifice does not seem to be a virtue.
      I’m not sure what you mean by “self-catering”, so I’m not entirely sure about that one. As for self-mastery, I also think it depends on the case. In a psychopath, self-mastery doesn’t look like a virtue (at least, not in a moral sense of “virtue”, which is the relevant sense in this context).

      Regarding leftist views, it seems to me they do demand sacrifice and perhaps self-mastery of many people – e.g., the people whose actions they criticize.

      • “There are plenty of people who engage in self-sacrifice for an evil cause they consider good. In their case, self-sacrifice does not seem to be a virtue.”

        If you sacrifice yourself for a cause that you consider to be good then surely there _seems_ to be _something_ intrinsically noble and virtuous about your action, regardless of whether in fact the cause is good or evil.

        When we judge character we’re concerned with an ‘internalist’ value scheme, it seems to me. We’re concerned to appraise the motivations and dispositions of the person from ‘inside’ that person’s own subjective perspective, and not concerned with the objective situation. The exception might be cases where it’s clear that the person should or could (fairly easily) have recognized that her perspective is at odds with objective reality. In the typical case, though, it seems reasonable to regard this kind of action or choice as virtuous even if, in some objective sense, the person did the wrong thing or acted non-virtuously.

        I guess this might not seem so on a certain strong conception of virtue, such that a virtuous person is by definition always capable of properly discerning good and evil, for example. (Maybe this is what Aristotle would think?) I haven’t read this whole exchange so perhaps you made it clear you’re working with such a conception–in that case, please excuse my half-assed comments…

        If Satan truly believes that he’s fighting for the Good, and gives up his power or privilege for that purpose, that seems noble to me. If Hitler truly believes he’s fighting to save his people from extermination and so he renounces a normal human life to devote himself fully to the cause, that also seems noble to me. It doesn’t even seem that way to you?

  8. Jacques,

    When I mentioned an evil cause, I was thinking about cases in which the cause is considered good not because of non-culpable non-moral error (more below). Otherwise, in the sense I had in mind the cause would be good, even if the intended effect would not be achieved. But I should have been more clear on that, so let me try to explain this with an example:
    Let’s say that Bob is deceived by Adolf (through no fault of his own; Adolf has manufactured apparent evidence, etc.), and comes to believe that A, B, and C are mass torturers and murderers who are planning a massive attack against civilians, just to impose their religion (for example). Bob sacrifices his life to kill them, in order to save their intended victims. But in reality, Adolf was using Bob for his evil cause.
    In a sense, Bob sacrificed himself for a noble cause (namely, to save many people who didn’t deserve to be killed; we may add that Bob rationally believes the people he’s trying to save don’t deserve to be killed), but in a sense, he sacrificed himself for an evil cause – Adolf’s.
    In that context, I wouldn’t count Bob as sacrificing himself for an evil cause, because I am taking into consideration the internal perspective as you say. But the internal perspective does include that the moral assessments (and the non-moral ones on which the moral ones are based) be epistemically rational.

    As for the conception of virtue, I wasn’t assuming a specific definition of virtue, but making an assessment on the basis of the usual concept, in the moral context (assuming there is one such concept, but questioning that might be problematic for realism; in any case, this isn’t a problem). So, that’s not a problem for your reply.

    Regarding your assessment about Satan or Hitler, etc., I would distinguish at least 3 different cases:

    a. Error about non-moral matters not resulting from epistemic irrationality.
    b. Error about non-moral matters resulting from epistemic irrationality.
    c. Error about moral matters which is not the result of error about non-moral matters.

    In case a., I would say that the cause would not be evil in the sense I had in mind, so my point wouldn’t apply.
    But on the other hand, it would apply to b. and c. (though to make the point I was getting at, I just need that there are some cases in which self-sacrifice is not virtuous, so we may limit the matter to c., which I think is the most clear case.)

    Hitler didn’t make much of a sacrifice as far as I can tell, but it seems many terrorists do.
    So, for example, if a man decides to sacrifice his life in order to kill as many infidels as possible because he believes infidels deserve it for being infidels, and/or because he believes that’s the will of a morally perfect creator, that doesn’t seem noble to me. He ought not to have such beliefs, and his self-sacrifice is not virtuous. It would be different if, for example, he believed (because he was deceived, but through no fault of his own) that his targets are actually enemy combatants who have tortured and killed many people for not converting to Christianity, and who will keep doing that unless someone kills them.

    • Even if the terrorist thinks these noncombatants are willfully ignoring the obvious religious truth out of sheer moral/epistemic perversity? Cause I think that’s what many of them do think.

      • Yes, I think even in that case. In fact, that’s the sort of case I had in mind.
        I would say that the terrorist who thinks these noncombatants are willfully ignoring the obvious religious truth out of sheer moral/epistemic perversity should not believe that, and instead should realize that those moral beliefs are false. In any event, he should realize that killing them for the reasons he’s killing them is immoral.
        It seems we disagree on the matter.
        How about the following argument?

        When the terrorist engages in his terrorist acts, he is behaving immorally. But immoral behavior is never virtuous – i.e., an instance of immoral behavior is not an instance of an exercise of a moral virtue (of course, you might not agree with one or more of my statements on that, either, so we may have to agree to disagree).

  9. Hi Catholic Hulk,

    Isn’t freedom from positive law, rules, and regulation just a way to be free from external restraint? Are all these different kinds of freedom natural kinds? I don’t see any reason to recognize freedom for excellence any more than I see reason to recognize freedom for ice cream. 🙂 I think there is freedom for excellence, but that’s just freedom directed at something, namely, excellence; and my freedom can be directed at any number of things, like ice cream. What do you think?

  10. Hi Angra,
    I don’t agree with the premise that the terrorist is just “acting immorally”, because I think his actions have at least two distinct moral aspects, one of which is entirely internalist. And that’s the aspect that I take to be relevant to judgments of virtue and vice. Whether you acted viciously or even wrongly in doing X normally depends on what you thought you were doing, what you intended, what you could reasonably have known or expected to be the effects of your actions, etc. Otherwise I’d prefer to say that the person who acts “wrongly” in some objective sense despite his action seeming to him on serious reflection to be (for example) morally obligatory is just doing something that is regrettable or bad but not really _wrong_ at all. Alternatively, I could say that “immoral behavior” can be virtuous behavior if “immoral behavior” includes behavior that is just objectively morally bad somehow regardless of how it appears to the agent. (The exception to this might be cases in which the person’s ‘values’ or ‘principles’ are so perverse that we question whether he really has a _moral_ point of view in the first place. But that’s a pretty rare and weird case and I doubt anyone has a good account of it.)

    You say the terrorist shouldn’t hold the beliefs we’re imagining, or should realize that certain moral beliefs of his are false. Would you agree that “shouldn’t X” implies “can fail to X” and “should Y” implies “can Y”? I think that in many cases terrorists and other people doing things that are objectively morally bad (or wrong in some objective or externalist sense) really couldn’t have had a point of view like the one you claim they should have or should have had–they “couldn’t have” in some appropriate sense, of course, not to say it’s logically or metaphysically impossible or whatever. Would you disagree with that? Because if not it’s going to be hard for you to claim that nonetheless they’re behaving immorally in any relevant sense.

    I think many terrorists have been virtuous–noble high-minded idealists–and many people acting within the law or conventions of so-called ‘just war’ are vicious and ignoble. Naturally that does depend on a controversial view about virtue, right and wrong, etc. I agree that often terrorists are doing really horrific things and should be stopped by whatever means necessary (e.g., blow their heads off). But even that’s not always true either. So I have various reasons for doubting that terrorists in general are always acting immorally when they engage in terrorism.

    • Hi Jacques,

      I do agree that whether you acted viciously or even wrongly in doing X normally depends on what you thought you were doing, what you intended, what you could reasonably have known or expected to be the effects of your actions, etc., but that’s not about your moral beliefs, but about the consequences, etc., described in non-moral terms. Let me try a few examples, including my moral assessments. Let’s assume Bob has a realistic human mind, and lives in the present time, has access to the internet, knows about the different moral views, etc.

      1. Even if Bob believes that it’s morally praiseworthy or obligatory to throw men who have homosexual sex from tall buildings as a punishment for having homosexual sex, it’s not morally acceptable for Bob to throw men who have homosexual sex from tall buildings as a punishment for having homosexual sex.
      2. Even if Bob believes it’s morally praiseworthy or obligatory to decapitate people who convert from Islam to Christianity as a punishment for converting from Islam to Christianity, it’s not morally acceptable for Bob to decapitate people who convert from Islam to Christianity as a punishment for converting from Islam to Christianity.
      3. Even if Bob believes it’s morally acceptable or praiseworthy to seize Yazidi women by force, sell them as domestic and/or sexual slaves, keep them as domestic and/or sexual slaves, etc. (raping them as he wants), it’s not morally acceptable for Bob to do any of that.

      Granted, those are not cases of self-sacrifice, but my point is that if he behaves in ways he believes are morally acceptable, praiseworthy or obligatory, he is still behaving immorally, in ways he morally ought not to. So, I would like to ask you about those cases, in light of your latest reply. In particular, I would like to ask:
      a. Do you think realistically Bob is behaving immorally in those cases?
      b. Do you think Bob deserves to be blamed for his actions?
      c. Do you think Bob can believe other than what he believes, on those matters?

      With regard to your question, I’m not sure epistemic “should” entails “can”. It’s a complicated matter. I suspect the use of “can” in this context is problematic, due to the multiple meanings of the word “can”. I discussed this matter in greater detail in another blog. I can’t post links here, but if you’re interested in the discussion, I would recommend reading the post “On ‘Ought Implies Can’ in Ethics and Epistemology”, in the blog “aphilosopherstake.com”.
      I think in any case that terrorists are being epistemically irrational in some of their beliefs, so they epistemically ought not to have them. I also think they morally ought not to behave in accordance with them.
      Now, I think terrorists, etc., who behave immorally clearly can behave in different ways. For example, Bob definitely can refrain from raping Yazidi women, enslaving them, etc., he can refrain from decapitating Christian converts, etc.
      Can he actually believe that it’s not morally acceptable to enslave and rape Yazidi women, etc.?
      I think he very probably can: he would have to reflect on the matter, instead of accepting the claims others make. Now, if it turns out he can’t, I’m inclined to believe that that is evidence against ought-implies-can in epistemology. Moreover, even under the assumption that ought implies can in epistemology too, and assuming he can’t believe it, then it’s not the case that he epistemically ought to believe it; but I would say he’s still behaving immorally by doing those things, so he morally ought not to do them (though those are assumptions I do not share).

      • “I do agree that whether you acted viciously or even wrongly in doing X normally depends on what you thought you were doing, what you intended, what you could reasonably have known or expected to be the effects of your actions, etc., but that’s not about your moral beliefs, but about the consequences, etc., described in non-moral terms.”

        Very puzzled by this comment. If we say that you acted wrongly because (for example) you knew that doing X would cause serious undeserved harm to Smith, and you knew that causing serious undeserved harm to people is wrong, how is this “not about your moral beliefs, but about the consequences…described in non-moral terms”? I’d say that the terrorist who does some awful thing because he sincerely believes it’s obligatory may be doing something that isn’t wrong because of how things seem to him. Again, the exculpating or justifying facts seem to be facts about what he believes–or how things seem to him, or something like that–and not facts about consequences. You lost me here. Are you just saying that _some_ of these considerations are about consequences, i.e., “what you could reasonably have known … to be the effects of your actions”? Because “known” is factive so those must be consequences? Okay then I agree. But I can just substitute “reasonably have believed” or something non-factive in order to capture what I think is most important here. Sorry if I’m missing your point.

        About Bob: What he’s doing is disgusting and must be stopped if possible. I don’t know whether I’d _blame_ him for doing any of this if his epistemic situation is the way I think it can be. For example I just don’t agree that “reflecting” or just being more critical and not accepting things others tell him could be enough for him to (rationally) revise his whole culturally grounded moral scheme. It would be like me saying to you that you should consider the possibility that Germans and Japanese were the good guys in WWII. At least if you’re like most people in our society this is simply not a ‘live option’ for you no matter how much reflection or philosophizing you carry out. You’re almost certainly not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten. Not capable epistemically or psychologically or socially. Well again that might not be true of you but you get my point. So if Bob is in this kind of situation that I think is fairly common, your epistemic or moral expectations of him are just way too high. (And besides, what exactly is it that he’s supposed to discover in his reflections? I’m truly uncertain how to answer that.) If Bob is as you describe him, he’s a savage and a menace and we should probably blow his head off. But I don’t feel comfortable _blaming_ him for his actions or choices. He’s a savage!

        Bob does act immorally in some sense, to be sure. He’s violating objectively correct moral principles that we know about. When we talk about what he ought to do, I think that’s also equivocal or ambiguous. Are we talking about what he qua arbitrary agent under no other description ought to do? In that case he ought to do whatever it is that a person in his situation ought to do, e.g., not throw innocent people off buildings. Are we talking about what he ought to do under a more detailed description where we take into account all his limitations and other such facts? Then I think there’s some sense in which he ought to do what he actually is doing, however horrible that may be. Or at least, there’s a sense in which it’s _not_ the case that he ought _not_ to do that horrible stuff. Do you allow two kinds of ought-ness here? And would you agree with me that when we talk about virtuous and vicious behavior we’re really talking about the second kind?

  11. Jacques,

    “Very puzzled by this comment. If we say that you acted wrongly because (for example) you knew that doing X would cause serious undeserved harm to Smith, and you knew that causing serious undeserved harm to people is wrong, how is this “not about your moral beliefs, but about the consequences…described in non-moral terms”? ”
    Let’s say you knew exactly what doing X would do to Smith, and what Smith had done, etc., but you still believed that Smith deserved the harm (even though he didn’t). I think you still acted wrongly.

    Generally speaking, we don’t say that in order for an action to be wrong, you need to believe it’s wrong (well, maybe some people do say that, but they’re mistaken).

    “I’d say that the terrorist who does some awful thing because he sincerely believes it’s obligatory may be doing something that isn’t wrong because of how things seem to him.”
    Do you apply that only to actions the terrorist (or someone else) believes to be obligatory, or also permissible?
    If you make a distinction, then why do you make a distinction?
    After all, the basis for your assessment – i.e., a person’s own beliefs – seem to apply in the case of actions believed to be permissible even if not obligatory (or praiseworthy).

    On that note, there is the case of the enslavement and rape of Yazidi women. Don’t you think that ISIS members deserve blame even if they believe that it’s morally acceptable to behave in that way, or even morally praiseworthy?

    “About Bob: What he’s doing is disgusting and must be stopped if possible. I don’t know whether I’d _blame_ him for doing any of this if his epistemic situation is the way I think it can be. For example I just don’t agree that “reflecting” or just being more critical and not accepting things others tell him could be enough for him to (rationally) revise his whole culturally grounded moral scheme. It would be like me saying to you that you should consider the possibility that Germans and Japanese were the good guys in WWII. At least if you’re like most people in our society this is simply not a ‘live option’ for you no matter how much reflection or philosophizing you carry out. ”
    I’m almost certainly not in your society, but that aside, it seems clear to me that there were plenty of bad guys on both sides, but generally Germany and Japan were much worse than the Allies, who also engaged in immoral targeting and killing of civilians, rounding people up on ethnic grounds, etc. So there was a lot of evil stuff on the other side too.

    “You’re almost certainly not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten.”
    That’s false. I am capable of that, since I did that. The whole moral-historical-cultural world view in question was Catholic. I reflected on the biblical description of Yahweh’s behavior, and reckoned that Yahweh (i.e., the powerful person, being, substance or whatever) described there is morally evil – not that I believe he exists, but one can properly assess the moral character of hypothetical persons too, as we do when we do philosophy or when we watch movies with villains.

    “Not capable epistemically or psychologically or socially. Well again that might not be true of you but you get my point. ”
    It’s not true of me, and I’m not sure why it would be true of so many people.

    “So if Bob is in this kind of situation that I think is fairly common, your epistemic or moral expectations of him are just way too high. (And besides, what exactly is it that he’s supposed to discover in his reflections? I’m truly uncertain how to answer that.) If Bob is as you describe him, he’s a savage and a menace and we should probably blow his head off. But I don’t feel comfortable _blaming_ him for his actions or choices. He’s a savage!”
    He’s supposed to discover that it’s immoral to enslave and rape Yazidi women, that it’s immoral to throw gay men off tall buildings, that it’s immoral to decapitate Christian converts, etc. In this case, he probably knew that once; he was convinced otherwise by ISIS propagandists who used the Quran or hadith to back up their claims; it’s not the point, though. We may consider other cases if you like, as there are plenty available. For example, people who live in Saudi Arabia and were raised under Wahhabi indoctrination to believe that Muslims who convert to Christianity deserve to be killed, etc.

    But let’s take a step back here. You say he’s a menace, and that justifies killing him, but you don’t feel comfortable blaming him. On the other hand, he would likely say that you’re the menace, the savage, etc. (though he surely would blame you), and would believe it’s justified to kill you (or me, for that matter).

    But if it’s true that Bob is not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten, and also you are not capable of doing that, then why do you think you’re more likely to be right about morality than Bob?

    Let me go a step further: let’s say that it’s generally the case that people are not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that they’ve had drummed into them from kindergarten, except for unusual cases. When people see that two or more of these world views collide on moral matters, how do they go about figuring out which one is right? If you’re incapable of making an assessment that goes beyond the view drummed into you, and Bob is incapable also, and the same for other people, how would it be possible to correct errors in such world views? And if it’s not possible, how is it that you can tell that you’re right, and he’s wrong, and so on?

    “Bob does act immorally in some sense, to be sure. He’s violating objectively correct moral principles that we know about. ”
    But how do you know about them?
    If he has his worldview drummed into him, and you have yours, how do you know which one is correct?

    “Are we talking about what he ought to do under a more detailed description where we take into account all his limitations and other such facts? ”
    Yes, we’re talking about his specific situation, but that doesn’t mean that all of his limitations are morally relevant. Some are, and some aren’t.

    “Do you allow two kinds of ought-ness here?”
    Regarding actions, I’m talking about moral ought-ness, so I don’t think there are two kinds of ought-ness relevant here. Regarding beliefs, there is epistemic and moral ought-ness.
    But I think this might be problematic, so I’ll go with blameworthiness. I would say he’s to blame.

    “And would you agree with me that when we talk about virtuous and vicious behavior we’re really talking about the second kind?”
    I’m inclined to say not, because I’m using the words in their moral sense, and I don’t see two kinds of moral ought-ness in this context.

    • “Let’s say you knew exactly what doing X would do to Smith, and what Smith had done, etc., but you still believed that Smith deserved the harm (even though he didn’t). I think you still acted wrongly.”

      Well, sure in some sense you may have acted wrongly. If Smith was not deserving of being harmed then it would be wrong (in an externalist/objectivist sense) to harm him. But would you say that my doing X to Smith in this case shows that I’m not “virtuous” or not acting “virtuously” toward Smith? Would you say my X-ing was wrong in that (internalist) sense? I think that’s the issue here.

      “Generally speaking, we don’t say that in order for an action to be wrong, you need to believe it’s wrong (well, maybe some people do say that, but they’re mistaken).”

      You seem to be begging the question here since I do say that (and say I’m not mistaken :)) In fact lots of people will have the intuition that in order for your action to be wrong in the sense that involves vicious or non-virtuous behavior, the agent must in some sense believe or know that it’s wrong. My earlier examples were meant to elicit that intuition. For instance, again, if Hitler truly believed he was acting for the best, or fulfilling some vital moral obligation, many of us will intuit that he was not acting _wrongly_ even though what he did may have been very bad and even morally bad and even contrary to some moral principle (unbeknownst to him). It’s one thing to say you don’t share that intuition, but you can’t just assert that it’s a mistake to hold this view. Think of young children. If a 3-year-old shoots his sister, he did something that is wrong in some sense–intentionally harming an innocent person for no good reason, say. But he didn’t do wrong in the present sense, I claim, because he didn’t really understand or appreciate what he was doing. I wouldn’t blame him for it, wouldn’t say he was vicious or acting non-virtuously, etc. You think this is simply a mistake?

“When people see that two or more of these world views collide on moral matters, how do they go about figuring out which one is right? If you’re incapable of making an assessment that goes beyond the view drummed into you, and Bob is incapable also, and the same for other people, how would it be possible to correct errors in such world views? And if it’s not possible, how is it that you can tell that you’re right, and he’s wrong, and so on?”

      These are excellent questions. I wish I knew the answers. Do you? My sense is that there may be no way to rationally resolve certain conflicts between world-views or moral schemes. I’m sure that when they conflict at most one is correct, but I doubt that we can always figure out which one that is. You’re suggesting I should be skeptical here–how can I tell Bob is the savage not me? Well, at some point I have to go on what just seems to be true. For example, if Bob is a Muslim his religion seems pretty irrational and crude and very very very implausible to me. Sure, he might have the same impression about mine. But I have the impressions and intuitions and phenomenal ‘appearances’ that I actually do have, and–long story short–I don’t think any possible thinker can avoid having fundamental beliefs based on these kinds of things. So Bob has his, and I have mine, and maybe we can’t rationally resolve our differences. That’s why he’ll try to kill me, maybe, and I’ll try to kill him. But, being me, I naturally still think I’m right and he’s wrong, and I naturally regard his intuitive faculties as unreliable or whatnot. Maybe that’s not a proper answer to the problem of skepticism. I don’t know. But skepticism and arguments for skepticism are also based ultimately on unargued intuitions. I don’t find those ones particularly compelling.

      Anyway the real point is that even if I had no idea how to answer your philosophical questions it wouldn’t make any difference to my fairly plausible hypothesis that lots of people really are in an epistemic situation such that they’re not capable of changing basic beliefs or intuitions in the way that you seem to have in mind. So even if the upshot were that I had reason to be skeptical or think that I’m just as likely as Bob to be the savage, and so on, that would seem to be irrelevant to the truth or plausibility of my claim about the kind of situation that _he_ could be in. Am I missing something?

      • Jacques,

        “Well, sure in some sense you may have acted wrongly. If Smith was not deserving of being harmed then it would be wrong (in an externalist/objectivist sense) to harm him. But would you say that my doing X to Smith in this case shows that I’m not “virtuous” or not acting “virtuously” toward Smith? Would you say my X-ing was wrong in that (internalist) sense? I think that’s the issue here.”
        I would say that in my scenario, you didn’t act in a virtuous way, in the moral sense of “virtuous”.
I’m not sure how you distinguish an internalist vs. an externalist sense, but I’m not saying that it was wrong to harm Smith because he wasn’t deserving of being harmed: if you thought he was deserving of being harmed through no epistemic fault on your part, and your sense of right and wrong was functioning properly, then it’s not wrong.

        “You seem to be begging the question here since I do say that (and say I’m not mistaken :))”
        Well, then you were begging the question before!
        Okay, so I’m making the assessment that you’re mistaken.

“In fact lots of people will have the intuition that in order for your action to be wrong in the sense that involves vicious or non-virtuous behavior, the agent must in some sense believe or know that it’s wrong. My earlier examples were meant to elicit that intuition.”
        I’m inclined not to believe that. I think it’s unusual. It’s an empirical matter that would need some experiment I guess, but in any case, even if many people have that preliminary intuition, I think it will probably go away after considering a few examples (though maybe not in an internet exchange, which might be too contentious to result in change).
        My examples were meant to elicit the contrary intuition.

“For instance, again, if Hitler truly believed he was acting for the best, or fulfilling some vital moral obligation, many of us will intuit that he was not acting _wrongly_ even though what he did may have been very bad and even morally bad and even contrary to some moral principle (unbeknownst to him).”
I don’t agree, and I’m not sure many people believe that. In fact, most people (nearly everyone) believe that Hitler acted wrongly, but I don’t know whether many people believe he didn’t think he was doing the right thing. The examples of terrorists are better, I think, because there are plenty of them, and it’s clearer that many believe that their actions are good.

“It’s one thing to say you don’t share that intuition but you can’t just assert that it’s a mistake to hold this view.”
Sure I can. But if you mean I shouldn’t, I disagree. A Young Earth Creationist (YEC) might hold consistently that the Earth is less than 10,000 years old, and give an account compatible with all observations. It’s proper for me to say that his epistemic assessment is mistaken, intuitive or otherwise.
        Still, I haven’t “just” asserted that it’s a mistake to hold that view. I have asserted that, but I’ve also argued against it, both by means of examples intended to elicit the opposite intuition (which have not worked on you, but I tried), and by means of arguing what the consequences of some of your assessments would be – those are intended to see if there is inconsistency in your position; it seems that hasn’t worked, either, but I’ve tried.

“Think of young children. If a 3 year old shoots his sister he did something that is wrong in some sense–intentionally harming an innocent person for no good reason, say.”
No, he didn’t act immorally. What he did was not morally wrong. Intentionally harming an innocent person for no good reason is immoral for some agents, but not for all. If a lion does it, it’s not immoral. If a 3 year old does it, it’s not immoral (plus, the 3 year old probably doesn’t even realize he’s hurting his sister).

        “But he didn’t do wrong in the present sense, I claim, because he didn’t really understand or appreciate what he was doing. I wouldn’t blame him for it, wouldn’t say he was vicious or acting non-virtuously, etc. You think this is simply a mistake?”
No, the assessment that you didn’t do wrong in the present sense is not a mistake. It’s correct. Now, if a 30 year old shoots his sister because she “shamed” his family by having consensual sex with a Westerner, he acted viciously, immorally, is blameworthy, etc., even if he believed he had a moral obligation to kill her.

        “These are excellent questions. I wish I knew the answers. Do you?”
        Those questions are meant to be an internal problem for your claims. It’s not for mine, since I’m not saying one can’t in a real sense step outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten.
Personally, I would say humans have a species-wide moral sense, which is flawed, but generally reliable, and we can usually choose to use it and reflect on different situations, to see what our sense of right and wrong says. Additionally, we can look for contradictions, look for potential sources of bias, etc.

        “My sense is that there may be no way to rationally resolve certain conflicts between world-views or moral schemes. I’m sure that when they conflict at most one is correct, but I doubt that we can always figure out which one that is. You’re suggesting I should be skeptical here–how can I tell Bob is the savage not me? Well, at some point I have to go on what just seems to be true. For example, if Bob is a Muslim his religion seems pretty irrational and crude and very very very implausible to me. Sure, he might have the same impression about mine. But I have the impressions and intuitions and phenomenal ‘appearances’ that I actually do have, and–long story short–I don’t think any possible thinker can avoid having fundamental beliefs based on these kinds of things. So Bob has his, and I have mine, and maybe we can’t rationally resolve our differences. That’s why he’ll try to kill me, maybe, and I’ll try to kill him. But, being me, I naturally still think I’m right and he’s wrong, and I naturally regard his intuitive faculties as unreliable or whatnot. Maybe that’s not a proper answer to the problem of skepticism. I don’t know. But skepticism and arguments for skepticism are also based ultimately on unargued intuitions. I don’t find those ones particularly compelling.”
But you seem committed to the view that he can’t make assessments outside what he’s been indoctrinated in, that neither can you, etc., and so the issue is this: you do have some seemings, appearances, etc., but – by your own criterion, apparently – they come from something that you were indoctrinated in, and you can’t step out of it. So, you have no means of assessing whether what you were indoctrinated in is correct.
        It seems to me that answering internally (i.e., from your own view) would be viciously circular in this case, since:

        a. Indoctrination is a generally *unreliable* means of getting to know moral truth (at least, in case of disagreement between doctrines, and precisely given such disagreements).
b. From your own set of beliefs, it follows that your beliefs come from indoctrination (or from seemings, etc., that in turn come from indoctrination).
        c. You don’t have any *other* means to make moral assessments (i.e., you can’t step out of that).

        As far as I can tell, that’s a decisive internal problem for your view – i.e., if I believed what you do, I’d reckon an epistemic moral error theory is probably the way to go.

        All that said, let me say the following: I also have my seemings, etc., which may or may not be the result of indoctrination, but still are different from yours. If there is no way to resolve the matter rationally, perhaps it’s time to agree to disagree and leave it at that? 🙂

        “Anyway the real point is that even if I had no idea how to answer your philosophical questions it wouldn’t make any difference to my fairly plausible hypothesis that lots of people really are in an epistemic situation such that they’re not capable of changing basic beliefs or intuitions in the way that you seem to have in mind. So even if the upshot were that I had reason to be skeptical or think that I’m just as likely as Bob to be the savage, and so on, that would seem to be irrelevant to the truth or plausibility of my claim about the kind of situation that _he_ could be in. Am I missing something?”
Well, actually I was trying to convince you that the claim was false by presenting a conflict between it and a claim that I thought you would find even more probable, namely the claim that an epistemic moral error theory is not true; or, failing that, to get you to reflect on your views, if they turn out to be internally inconsistent and I can convince you that they are.

        That said, I mentioned before that even if they’re not capable of changing their moral beliefs, I blame them for their actions. They can definitely refrain from acting as they do, and even if they are wrong about morality, they still should care about the suffering of their victims, and should for that reason avoid those actions despite their false beliefs – and they can care about the suffering of their victims, even if they choose not to.
        For that matter, I also blame psychopaths for their actions, even if they don’t have a way of making proper moral assessments.

  12. Ok let’s just disagree then. I find some of your replies pretty uncharitable. For example, I never said that all beliefs come from indoctrination, and I think that’s false. (Some come from reliable appearances that are _not_ due to indoctrination or any other cultural factor in my view.) You tell me most people intuit that Hitler acted wrongly or whatever, when the relevant question is whether they would still have that intuition if we specify that we’re asking about the _kind_ of wrongness involving vicious character _and_ get them to suppose H was purely well intentioned, etc.

    • Jacques,

      I apologize for misunderstanding your position.
Given that this is a matter where we don’t just have to agree to disagree (I just correct my beliefs about your position according to your clarification), here’s an updated argument. Please let me know if I didn’t get your position right.
      You said the following:

      You’re almost certainly not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten.

      And:

      My sense is that there may be no way to rationally resolve certain conflicts between world-views or moral schemes. I’m sure that when they conflict at most one is correct, but I doubt that we can always figure out which one that is. You’re suggesting I should be skeptical here–how can I tell Bob is the savage not me? Well, at some point I have to go on what just seems to be true. For example, if Bob is a Muslim his religion seems pretty irrational and crude and very very very implausible to me. Sure, he might have the same impression about mine. But I have the impressions and intuitions and phenomenal ‘appearances’ that I actually do have, and–long story short–I don’t think any possible thinker can avoid having fundamental beliefs based on these kinds of things. So Bob has his, and I have mine, and maybe we can’t rationally resolve our differences. That’s why he’ll try to kill me, maybe, and I’ll try to kill him. But, being me, I naturally still think I’m right and he’s wrong, and I naturally regard his intuitive faculties as unreliable or whatnot.

      While you did not use the word “indoctrination”, it seems clear that you’re committed to the view that different people are indoctrinated from kindergarten into different world-views, and at least most people cannot get out of them.
      Moreover, moral beliefs – if not all, at least in many cases – do come from that particular indoctrination. Even if some of them may have other sources as well (e.g., sometimes, the indoctrination might happen to agree with reliable appearances), the indoctrination trumps the other source, in the sense that beliefs will almost certainly follow indoctrination in case of conflict.

      a. Indoctrination is a generally *unreliable* means of getting to know moral truth (at least, in case of disagreement between doctrines, and precisely given such disagreements).
b. From your own set of beliefs, it follows you were indoctrinated in a significant number of moral beliefs, and in those cases, indoctrination trumps other sources (i.e., the indoctrination will be followed in case of conflict).
c. You don’t have any other effective means to make moral assessments (i.e., you can’t step out of that) in cases where you were indoctrinated – i.e., you cannot beat the indoctrination, almost certainly.

As far as I can tell, that’s a decisive internal problem for your view – i.e., if I believed what you do, I’d reckon an epistemic moral error theory *limited to the beliefs you were indoctrinated about* is probably the way to go.
      That’s not as far-reaching a problem as a general epistemic moral error theory, but it’s still pretty strong as far as I can tell. If there is something I misunderstood about your theory, please clarify.

      You tell me most people intuit that Hitler acted wrongly or whatever, when the relevant question is whether they would still have that intuition if we specify that we’re asking about the _kind_ of wrongness involving vicious character _and_ get them to suppose H was purely well intentioned, etc.

But that’s not the relevant part. In order to make my case in this context, I only need the claim that sometimes people do what they believe is morally acceptable (or maybe obligatory; you didn’t explain that), but behave immorally and are blameworthy.

Imagine Bob is an ISIS member and has the intention of enslaving and raping Ceylan, a Yazidi woman – and he does that. He also believes that enslaving and raping Ceylan is morally permissible, and even morally praiseworthy (because he believes God likes that), but not obligatory. His actions remain immoral. It would be improper to get people to suppose that Bob was well intentioned. The intention was to enslave and rape, and it would be improper in this context to get people to suppose that’s a good intention.

      Let’s consider another case, in case a belief in moral obligation is needed by your standards: Ahmed believes he has a moral obligation to kill his sister Aisha because she had sex consensually with a man she was not married to, and Ahmed believes that in doing so, Aisha was bringing dishonor to his family and deserved the death penalty for that. He kills her for that reason, and is proud of what he did.
      Now, I hold that he behaved immorally, and is blameworthy, and I think most people would agree (well, in the West at least). But it would not be proper to get people to believe Ahmed was purely well intentioned, since it would not be proper to get people to believe that Ahmed’s intention to kill his sister was good – only that he *believed* that it was good and obligatory, not that it was.

Quick comment. Call any acculturation “indoctrination”. Then I deny that indoctrination is unreliable. Maybe moral codes of some cultures are based on reliable intuitions of moral reality and others are not. In fact I think that’s the case, roughly speaking. In that case some forms of indoctrination provide people with moral knowledge and are reliable, and others don’t. This is consistent with the psychological claim that most people can’t really step outside their indoctrination. And I think that’s also true.

    • Jacques,

      A quick reply too:

I would say that indoctrination would still be unreliable because it’s a method that, in the vast majority of cases, results in many important false beliefs (as evidenced by disagreements between different indoctrinated world views); it does not allow for error correction (at least most of the time, assuming your take on that is correct); and beliefs resulting from it are not sensitive to whether or not the intuitions on which it was based happened to be reliable (i.e., people come to believe what they’re indoctrinated in, regardless of whether the doctrine was originally based on reliable intuitions).

Granted, you might reckon that’s not a significant internal challenge to your position, in which case we also have different intuitions on this matter.
      Do you think there is an issue (among the ones we’ve been discussing) in which we have room for further discussion beyond stating our conflicting intuitive assessments?

      For example, there would be room for that if we had further empirical info about what most people would say if presented with cases like Ahmed’s or Bob’s, but I don’t know of any studies that would help settle the matter, and it seems we have diverging anecdotal evidence.

I don’t think moral disagreement (for example) is evidence that indoctrination is unreliable in general. If some forms were reliable but not others, then, given my psychological claim, there would still be that kind of disagreement. That’s surely possible.

    And it seems indoctrination often does actually result in knowledge. Why do most people around here think it’s wrong to murder Jews because they’re Jews for example? Lots of indoctrination, i.e., acculturation. But they’re right about that and many other things, unlike others indoctrinated otherwise. So it seems to me their indoctrination produced reliable moral faculties. Do you think they acquired these faculties and beliefs in some other way?

    • Jacques,

My position is that we do have a generally reliable moral sense that can get us out of indoctrination, so doctrines will also be informed by it. So, it’s unsurprising that doctrines often get it right.
      But my challenge to your take on this is internal. When you say that indoctrination often does actually result in knowledge, you’re assuming that it’s knowledge. But that’s what I was questioning, at least in cases in which there is deep disagreement, and under the assumption that your view on the non-stepping out principle was correct.
Now, given agreement between doctrines in many cases (most cases), it would still be the case that indoctrination often results in true beliefs (whether they amount to knowledge is another matter, but even if they do), but the difficulty would arise in cases of disagreement between the doctrines.

      The crux of the matter is that doctrines/world views often get it wrong when it comes to those beliefs in which there is a lot of disagreement, and you would have no reliable means of assessing whether what you have been indoctrinated in is true. But it seems to me you don’t consider that problematic, so there seems not to be much room to move forward on that.

      I’ve got a question: How do you think moral persuasion works?
      If different people have different world views, and they disagree, you said there may be no way to rationally resolve certain conflicts between world-views or moral schemes. But people sometimes are convinced by arguments, and change their beliefs. How do you think that happens? Do you think that they have a means to realize that they were mistaken (if they were), or something else?

  15. “My position is that we do have a generally reliable moral sense that can get us out of indoctrination, so also doctrines will also be informed by it. So, it’s unsurprising that doctrines often get it right.”

    I agree that we have a generally reliable moral sense. But I think that under certain cultural conditions (or some kinds of “indoctrination”) this moral sense is suppressed or stunted, doesn’t get developed properly, etc. I assume this is what happens in cases when people end up thinking it’s perfectly fine to throw acid in a woman’s face if she’s dishonored the family. They probably have, or once had, some generally reliable moral sense but they’ve undergone the wrong kind of acculturation. Now I just add that such acculturation or “indoctrination” is a very powerful thing and _very_ few people are psychologically capable of overcoming its influence and getting back in touch with their more reliable moral faculties or intuitions. I doubt that the average person who believes in honor killing is psychologically capable of just reflecting rationally (or whatever) and coming to realize that his culturally grounded world-view is fundamentally wrong.

    So far this story seems coherent, I hope. You think there’s some kind of internal problem for me here. What exactly is the problem? Yes, I believe that some cases of acculturation or “indoctrination” result in people having generally true beliefs and generally reliable faculties. Is the problem that, given that last paragraph, I have reason to think that all forms of “indoctrination”, including my own, are likely to result in false or unreasonable beliefs? If so I don’t understand the argument. It seems to me that my own acculturation went pretty well, that I have lots of true beliefs and some reliable cognitive faculties as a result of it. As far as I can tell, my own “indoctrination” was often grounded in a properly functioning moral sense: the adults who socialized me were part of one of those better cultures, one that tends to enable the normal development of the moral sense we both believe in. Sure, in order to justify that belief I have to rely on my own beliefs and faculties, which I recognize to be grounded in a certain kind of acculturation. But unless I’m meant to _assume_ that acculturation or “indoctrination” is generally unreliable, why can’t it be rational for me to rely on those things? I think what I’m saying is coherent, at least, though maybe some people won’t agree with it. Maybe you can explain the problem in more detail?

    “I’ve got a question: How do you think moral persuasion works?
    If different people have different world views, and they disagree, you said there may be no way to rationally resolve certain conflicts between world-views or moral schemes. But people sometimes are convinced by arguments, and change their beliefs. How do you think that happens? Do you think that they have a means to realize that they were mistaken (if they were), or something else?”

I’m sure moral persuasion works in many different ways. Often people have world-views that are somewhat different but overlapping. Then we may appeal to the shared part. More often, I suspect, we work within a shared world-view. And probably there are some rare people who do ‘step outside’ their familiar world-view in order to seriously question it; but I think that is quite rare and impossible for most people. So people do have ways to realize they’re mistaken, often enough. Mistaken relative to certain basic beliefs or norms they already accept at least. I doubt that people can realize a ‘mistake’ in some radically objective sense–setting aside any particular moral scheme and then realizing that one of them is the more rational, for example.

    • Jacques,

      I agree that we have a generally reliable moral sense. But I think that under certain cultural conditions (or some kinds of “indoctrination”) this moral sense is suppressed or stunted, doesn’t get developed properly, etc. I assume this is what happens in cases when people end up thinking it’s perfectly fine to throw acid in a woman’s face if she’s dishonored the family. They probably have, or once had, some generally reliable moral sense but they’ve undergone the wrong kind of acculturation. Now I just add that such acculturation or “indoctrination” is a very powerful thing and _very_ few people are psychologically capable of overcoming its influence and getting back in touch with their more reliable moral faculties or intuitions. I doubt that the average person who believes in honor killing is psychologically capable of just reflecting rationally (or whatever) and coming to realize that his culturally grounded world-view is fundamentally wrong.

      It seems to me that they would have the capability to, say, ask themselves the question: “Is it immoral to kill my sister because she had consensual sex while she’s single? Or is it praiseworthy, or obligatory?”. If they’re human, they do seem to have that capability. And it’s unclear to me why they would not be able to use their moral sense to make the right assessment.
      But I’ll leave that aside because that’s another issue, and focus on your question about the internal problem:

      So far this story seems coherent, I hope. You think there’s some kind of internal problem for me here. What exactly is the problem? Yes, I believe that some cases of acculturation or “indoctrination” result in people having generally true beliefs and generally reliable faculties. Is the problem that, given that last paragraph, I have reason to think that all forms of “indoctrination”, including my own, are likely to result in false or unreasonable beliefs? If so I don’t understand the argument. It seems to me that my own acculturation went pretty well, that I have lots of true beliefs and some reliable cognitive faculties as a result of it. As far as I can tell, my own “indoctrination” was often grounded in a properly functioning moral sense: the adults who socialized me were part of one of those better cultures, one that tends to enable the normal development of the moral sense we both believe in. Sure, in order to justify that belief I have to rely on my own beliefs and faculties, which I recognize to be grounded in a certain kind of acculturation. But unless I’m meant to _assume_ that acculturation or “indoctrination” is generally unreliable, why can’t it be rational for me to rely on those things? I think what I’m saying is coherent, at least, though maybe some people won’t agree with it. Maybe you can explain the problem in more detail?

Your earlier description was not limited to cases of “certain cultural conditions (or some kinds of “indoctrination”)”, but was much broader; for example, you claimed:

      You’re almost certainly not capable in a real sense of stepping outside the whole moral-historical-cultural world-view that we’ve had drummed into us from kindergarten.

      There are at least two significant features of that claim.

1. You made the claim that *I* was almost certainly not capable, etc., even though you did not have any particular knowledge about the kind of indoctrination I might have been subject to. It seems the claim was a general one: people are almost never capable of stepping outside the whole moral-historical-cultural world-view that has been drummed into them from kindergarten.
2. You apparently *included yourself* in the claim, when you said that “we” have had the moral, etc., world-view drummed into “us” from kindergarten.

That entails that you generally have no means of realizing that your particular indoctrination gave you false beliefs, if and when it did.
      Now, there is the question of the general reliability of indoctrination. But even if – say – we grant indoctrination tends to produce true beliefs in the cases in which the different doctrines agree, it seems clear that indoctrination is unreliable in cases of considerable disagreement.
      Let me give you an example. Let us say that doctrines (i.e., that which is indoctrinated) are divided in the following manner:

      C1,X: 30% believe that doing X is immoral, and it’s good for the state/government to criminalize X.
      C2,X: 30% believe that doing X is immoral, but it would be bad for the state/government to criminalize X.
      C3,X: 40% believe that doing X is not immoral, and it would be bad for the state/government to criminalize X.

      On the issue of whether doing X is immoral, either 40% of doctrines are mistaken, or 60% of them are mistaken. Either way, indoctrination is a generally unreliable process – you get at best a 60% chance of getting it right, depending on your luck getting the right or wrong indoctrination.
      On the issue of whether it’s good or bad for the state/government to criminalize X, either 30% of doctrines are mistaken, or 70% are mistaken. Again, that makes the indoctrination process unreliable.

But moreover, when you consider both the morality of doing X and the morality of criminalizing it, at most 40% get both right; and if the maximum percentage – i.e., 40% – do get both right, then 60% get the morality of doing X wrong.
Yet the people indoctrinated in categories C1,X, C2,X, or C3,X almost never have a way of stepping out of their indoctrination and correcting their beliefs.
      That shows a high degree of unreliability in our ability to make assessments about the morality of X and its criminalization.
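To make the arithmetic explicit (just a sketch, using only the even-frequency stipulation above, with the two claims being “doing X is immoral” and “criminalizing X is good”):

Right about X’s morality: 60% (C1 + C2) if X is in fact immoral; 40% (C3) if it is not.
Right about criminalization: 30% (C1) if criminalizing is in fact good; 70% (C2 + C3) if it is bad.
Right about both: 30% (C1), 30% (C2), or 40% (C3), depending on which combination actually holds – never more than 40%, and in that best case the remaining 60% are wrong about X’s morality.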
      In addition to that, even when we’re limited only to X and its criminalization – i.e., before we get to other beliefs -, there is the problem that different doctrines give different verdicts about how wrong it is to do X, or how bad it is to criminalize it, etc.; in other words, in real life conditions, there are many more than 3 positions.
      In fact, even when it comes to the morality of doing X alone, four different options are:

      O1: X is morally obligatory.
      O2: X is morally praiseworthy, but not obligatory.
      O3: X is morally permissible, but not obligatory or praiseworthy.
      O4: X is morally impermissible.

      Different doctrines may hold that different such options are true. Moreover, even within each option, degrees can – and sometimes do – vary widely.

So, on this argument, you’re not meant to *assume* that indoctrination is generally unreliable. Rather, you’re meant to *reckon* that indoctrination is generally unreliable *when it comes to cases in which there are such disagreements between different doctrines*, and on the basis of the reasons I’ve been giving. That is not an internal problem for your position when it comes to moral assessments in which no such disagreement occurs, but it is in my assessment a decisive problem in cases in which such vast disagreements do occur. I don’t see a way around that for your position, but I take it you just have a different intuition on the matter and do not see this as a problem. I don’t think I have further arguments to give, but I hope at least you *understand* what the objection is, even if you don’t agree with it (else, I’m thinking maybe we can’t understand each other on this? I hope that’s not so).

      Mistaken relative to certain basic beliefs or norms they already accept at least. I doubt that people can realize a ‘mistake’ in some radically objective sense–setting aside any particular moral scheme and then realizing that one of them is the more rational, for example.

      I wouldn’t call it a radically objective sense, but rather a species-wide sense, which is enough. But “mistaken relative to certain basic beliefs or norms they accept at least”, if those beliefs or norms come from indoctrination, seems to have the problem I’ve been explaining.

      • “It seems to me that they would have the capability to, say, ask themselves the question: “Is it immoral to kill my sister because she had consensual sex while she’s single? Or is it praiseworthy, or obligatory?”. If they’re human, they do seem to have that capability. And it’s unclear to me why they would not be able to use their moral sense to make the right assessment.”

        VERY quick comment! Yes, of course they’re capable of asking the question. I haven’t said they can’t do that. But I’ve said pretty clearly why I think they’d very often be unable to “make the right assessment”. They have been _acculturated_ from early childhood onwards, and acculturation is a very powerful thing that most people can’t ignore or rationally reassess, etc. I mean, you can disagree but is this not at least a _clear_ account of why it’s supposed to be impossible for many people?

    • Jacques,

      This is a quick clarification point:

      Let me give you an example. Let us say that doctrines (i.e., that which is indoctrinated) are divided in the following manner:

      C1,X: 30% believe that doing X is immoral, and it’s good for the state/government to criminalize X.
      C2,X: 30% believe that doing X is immoral, but it would be bad for the state/government to criminalize X.
      C3,X: 40% believe that doing X is not immoral, and it would be bad for the state/government to criminalize X.

      On the issue of whether doing X is immoral, either 40% of doctrines are mistaken, or 60% of them are mistaken. Either way, indoctrination is a generally unreliable process – you get at best a 60% chance of getting it right, depending on your luck getting the right or wrong indoctrination.
      On the issue of whether it’s good or bad for the state/government to criminalize X, either 30% of doctrines are mistaken, or 70% are mistaken. Again, that makes the indoctrination process unreliable.

In this case, I’m simplifying by setting aside the issue of which doctrines are more frequent, and assuming they’re all equally frequent in whatever the relevant sense is; otherwise, that leads to issues such as whether what’s relevant is the percentage of people indoctrinated in a certain manner at a given time, or the percentage of people overall (i.e., considering all times), etc. But even if you count the frequency in any way you want, there will be cases of disagreement like that (e.g., 30% of people are indoctrinated to believe doing X is immoral, or 30% of people in a certain period, etc.), so the crux of the argument doesn’t depend on that. (I thought I’d simplify like that to avoid such issues, but now I think you might object precisely on that front, so this is the clarification.)

This is an interesting thing to disagree about. I never accept these kinds of arguments. I think the entire approach is crazy. (No offense.) When I set out to decide whether my acculturation (“indoctrination”) was a good or bad way to come to my beliefs, it’s simply bizarre for me to pretend that I don’t already have and rely upon the very same beliefs and values produced by that acculturation. In fact, I’m relying on them just in order to frame the kind of argument you propose or take it seriously or perhaps even accept the skeptical conclusion you want to get from it. So even if I might have some reason for thinking that there’s only a low probability that a given set of doctrines is true, in some situation where I myself am not relying on some set of doctrines in assessing such probabilities, I’m not in that situation and couldn’t be (and neither could you).

        Philosophy seems like an unreliable method if we apply similar reasoning. Science too. Or think of it this way: whatever your views about these things you’re going to find that the vast majority of theories were false by your standards. So is that a reason not to believe your own theory? I don’t think so. First, because we psychologically can’t do this, and second because there’s no reason we should take up this weird third-person attitude to our own beliefs.

        Anyway it doesn’t really matter whether indoctrination in general is a good method. It might be that theorizing in general is not reliable. Because we have to count numerological theorizing and feminist theorizing and all kinds of epistemic garbage. But that’s no reason for scientists to conclude that what they’re doing is unreliable. They can just say that they have an epistemically superior and special kind of theorizing. I don’t much care whether there may be forms of indoctrination that are not very reliable–primitive cultures, for example, or the culture of the modern degenerate west. As long as I can reasonably believe that certain kinds of indoctrination are reliable or morally and epistemically good that’s all I need to make my position reasonable.

  16. Jacques,

I think a difficulty is how to construe “impossible” in this context (or, for that matter, “can”). Those words have more than one meaning. But what you seem to be saying is that if they were to ask those questions, their moral sense would deliver the wrong verdict. Perhaps that would be so if one considers preliminary intuitions, though I’m not sure how frequent that is. What may be very common is that they feel threatened when someone else raises those questions, challenges their beliefs, etc.

But if they were to think about why they think it’s immoral (e.g., if they were given religious reasons, etc.), and they did so without being angry or defensive (which I think they can also choose to do), do you think they would still fail to realize that they made a mistake?
    Another quick question: do you think that they’re being epistemically rational or irrational in their false moral assessments?

You’re saying that if they suppress strong feelings with deep cultural roots and don’t succumb to peer pressure or self-interested rationalizations, etc. etc. etc., THEN they may “make the right assessment”. The things you’re saying they can overcome are largely effects of acculturation. My impression is that trained philosophers specializing in ethics very rarely do all of that with much skill or seriousness. Why are so many now PC liberals? Because it’s the most rational view? There is reason to doubt that the average Punjabi peasant does or can surmount all of that.

“Can’t” is understood psychologically. E.g., I just can’t become a Jehovah’s Witness. Not a live option for me. Kind of vague maybe, but psychology is like that. You really don’t have a general idea what I mean?

    I think they may be epistemically rational, sure. I think their moral beliefs could satisfy a (reasonable) credulity principle, for example.

  18. Jacques,

You’re saying that if they suppress strong feelings with deep cultural roots and don’t succumb to peer pressure or self-interested rationalizations, etc. etc. etc., THEN they may “make the right assessment”. The things you’re saying they can overcome are largely effects of acculturation. My impression is that trained philosophers specializing in ethics very rarely do all of that with much skill or seriousness. Why are so many now PC liberals? Because it’s the most rational view? There is reason to doubt that the average Punjabi peasant does or can surmount all of that.

    But the question is not whether they do it, but whether they can do it, right?
    At any rate, many philosophers and others have actually changed their minds, leaving aside what they were indoctrinated in (sometimes, they did so and got things right, sometimes wrong). How do you explain those cases?

“Can’t” is understood psychologically. E.g., I just can’t become a Jehovah’s Witness. Not a live option for me. Kind of vague maybe, but psychology is like that. You really don’t have a general idea what I mean?

Yes, in that sense, I understand it. But in that sense, there is the matter of what one can do preliminarily vs. what one can do after reflection. Do you not think that most Young Earth Creationists can realize that YEC is false, if they decide to consider the matter (which they can)?

    I think they may be epistemically rational, sure. I think their moral beliefs could satisfy a (reasonable) credulity principle, for example.

    In that case, I think the problem would be that they wouldn’t be mistaken. Rather, there would seem to be miscommunication under that assumption. Generally, the meaning of the words is given by how people use the words.
If people in a society learn to use the words “morally obligatory” (or something usually translated as that in some other language), and, even under conditions of epistemic rationality, no matter what sort of evidence they’re given, they’re going to reckon that it’s morally obligatory for a man to kill his sister because she had consensual sex while not married and thereby brought shame to his family, then the conclusion I’m drawn to is that (very probably) the people in question are not mistaken, but rather, they’re not talking about moral obligation at all, but some society-relative-moral obligation; the usual translation is a mistranslation, and some sort of metaethical relativism is true.
    I don’t think that’s so, but I’m not sure how you reach a different conclusion, given your assessment (unless I got some of your views wrong? If so, please clarify)

  19. “But the question is not whether they do it, but whether they can do it, right?”

Yeah, that’s the question. I think what people do (and don’t do) can be evidence regarding what they can do. Only very rarely do people change their minds about fundamental moral principles, especially when those are built into their culture and world-view and identity. It could be that they so rarely do this even though most or all of them can, but this fact also suggests that most people can’t do it. (Why am I justified in believing that most people can’t come up with philosophical or scientific innovations as great as Plato’s or Einstein’s or music as great as Bach’s or Miles Davis’s? The main evidence is just that almost no one ever does these things.)

    “At any rate, many philosophers and others have actually changed their minds, leaving aside what they were indoctrinated in (sometimes, they did so and got things right, sometimes wrong). How do you explain those cases?”

I already offered various explanations of how people might rationally change their minds in some ordinary ways so I won’t repeat. I guess you’re claiming that sometimes philosophers change their minds in some more radical ways that would be counter-examples to my psychological claim–i.e., they come to reject their whole prior world-view, or some absolutely central fundamental principle accepted by every normal person in their group. First of all, I never claimed that _no one_ ever does this. (In fact I’ve probably done pretty much that myself. In terms of morals, at least, I seem to disbelieve almost everything I was taught.) So even if some philosophers do that, it could still be true (as I claim) that most people can’t. Second, I think _that_ kind of change is very rare even among philosophers. What’s an example? I very much doubt that you can find many examples of philosophers doing that kind of thing. But, in the end, who cares? Philosophers are a very small minority of humans with very specific interests and abilities that most people don’t have. So even if you were right about this it would still do very little to establish that many or most people are capable of what you’re describing.

    “Do you not think that most Young Earth Creationists can realize that YEC is false, if they decide to consider the matter (which they can)?”

    If we’re thinking of a society where YEC is a crucial part of identity and tradition and a basis for shared meaning and family, etc. then yes I certainly do doubt that the average member of that community can realize that YEC is false. For that would involve tearing up his entire sense of self and community and value, etc. And, in addition, there might well be serious purely epistemic problems. But if you’re just imagining someone who accepts YEC as some kind of theory with no particularly deep cultural and personal roots, okay sure–but that’s not relevant to my psychological claim.

    “the conclusion I’m drawn to is that (very probably) the people in question are not mistaken, but rather, they’re not talking about moral obligation at all, but some society-relative-moral obligation”

    This view seems very bizarre to me. I think it’s clear (for various reasons) that jihadists and people who believe in honor killing and Nazis all do have _moral_ concepts and beliefs. And I don’t think metaethical relativism makes any sense. But anyway, if they’re not even talking about moral obligation on this view relativism wouldn’t follow; we’d just have different groups making various objectively true and compatible claims about entirely different topics, i.e., what is obligatory-relative-to-society-1 as opposed to what is obligatory-relative-to-society-2.

    I’m not claiming that there are no objective moral rules, or that there are such rules but they’re somehow relative to what people think; instead I’m just saying that there is _a_ concept of wrongness, associated with blameworthiness and judgments of virtue and vice, such that _that_ concept has to do with how things seem to the agent. And I honestly don’t know why this seems so strange or confusing. I think we use this concept all the time, e.g., when we excuse a schizophrenic because we learn that he honestly believed he was saving someone’s life by doing some terrible thing. (Lots of other examples are possible.)

    • Jacques,

Yeah, that’s the question. I think what people do (and don’t do) can be evidence regarding what they can do. Only very rarely do people change their minds about fundamental moral principles, especially when those are built into their culture and world-view and identity. It could be that they so rarely do this even though most or all of them can, but this fact also suggests that most people can’t do it. (Why am I justified in believing that most people can’t come up with philosophical or scientific innovations as great as Plato’s or Einstein’s or music as great as Bach’s or Miles Davis’s? The main evidence is just that almost no one ever does these things.)

I think, after reflection, that the matter is moot, given that they would be blameworthy regardless. But that aside, I don’t think this is good evidence that they can’t change their minds *after reflection, without getting emotional*, because they usually do not reflect, let alone reflect without getting emotional (i.e., angry).

Still, even if it’s not a live option for them, epistemically they ought to believe the right thing (leaving aside your assumption that they’re being epistemically rational; if that assumption is included, my analysis is different, but I still reach the conclusion that they’re blameworthy).

      Let’s consider a psychopath, Jack. As a psychopath, he does not have a properly functioning moral sense. He lives in a society in which the vast majority of people believe that it’s morally obligatory to kill one’s sister if she has consensual sex while not being married. Given that he doesn’t have a functional moral sense, he has no way of realizing the vast majority is mistaken, and he’s making no epistemic mistake in thinking he has a moral obligation to kill his sister. Now, Jack doesn’t care at all what his sister did, but he doesn’t want to act against his moral obligation because he properly reckons it’s not in his interest, as it would damage his reputation. So, he kills her. I have no problem saying he’s blameworthy and deserves punishment, so even the condition of epistemic rationality is not enough.

Perhaps here one should distinguish between morality de re and de dicto, and say that even if there is no epistemic mistake in his beliefs about morality de dicto (in which he copies social beliefs), he still *can* care about his sister, empathize with her, and choose not to harm her (some research indicates psychopaths can switch empathy on or off, even if it’s off by default, so to speak), and his failures to do any of that are moral failures, regardless of his moral beliefs *de dicto* (that’s not a “can” of epistemic rationality, but of power to do stuff).
      If he actually were incapable of empathizing with her, the case might be more complicated, but I’m inclined to think he’d still be to blame.

      First of all, I never claimed that _no one_ ever does this.

      True, but I was trying to show *many* cases, to get you to reconsider. Still, given that we disagree about blameworthiness even if they can’t change their minds, this empirical matter has become somewhat moot in this context.

      In fact I’ve probably done pretty much that myself. In terms of morals, at least, I seem to disbelieve almost everything I was taught.

      I don’t go that far, but in the cases in which there is considerable disagreement between world views, in most cases I disagree with what I was taught.

      Second, I think _that_ kind of change is very rare even among philosophers. What’s an example?

      Most American philosophers are non-theists, and a significant number (probably most) would find Yahweh (i.e., the biblical creator) evil, and even find the argument from suffering or from moral evil persuasive, even if most were raised as Christians or Jews.

      But, in the end, who cares? Philosophers are a very small minority of humans with very specific interests and abilities that most people don’t have.

      I don’t think that’s so. It would be like saying that most YECs can’t believe evolution is true even if most YECs who later become biologists stop being YECs and come to believe evolution is true, because the percentage of YECs who study biology is very low, and allegedly biologists have specific interests and abilities most people don’t have.
I would say that in the sense of “can” you have in mind, most YECs can’t immediately come to believe that evolution has happened, but on the other hand, they can (in the sense of “have the power to”) study biology, and if they do study biology (which they can), then they can (in the psychological sense of “can” you have in mind) believe that evolution has happened (and in fact, they can no longer believe in YEC).
      A similar account can be given for philosophers, and non-philosophers who can study philosophy (not necessarily formally).
      Granted, plenty of people do not have time to study biology, or philosophy, and in the sense of “can” you have in mind, maybe they can’t have the right beliefs on the relevant issues. As I mentioned above, after reflecting on your use of “can”, I think they’re blameworthy regardless – and I don’t think my take on this is unusual, but that’s an empirical matter we seem to have no studies about.

      This view seems very bizarre to me. I think it’s clear (for various reasons) that jihadists and people who believe in honor killing and Nazis all do have _moral_ concepts and beliefs.

This is clear to me as well (though not for the reason it’s clear to you, probably). I never said that they do not. I said that, *under the assumption* that they’re being epistemically rational in their beliefs and that they will not change them after reflection, argumentation, etc., while remaining epistemically rational, they wouldn’t have moral beliefs, but Nazi-moral, jihadi-moral, etc., beliefs, which would be psychologically akin in some sense (e.g., in how they feel to them) to moral beliefs, but tracking different properties.
      In a way, it’s what I could expect from advanced extraterrestrials, but not from humans.
      But this is an assumption I’m making for the sake of the argument. It’s your belief that they are being epistemically rational and yet cannot change their beliefs after reflection, argument, etc., not mine.

      And I don’t think metaethical relativism makes any sense.

      If you’re saying it’s false, I agree. If you’re saying it’s incoherent, I disagree, since it’s like what one might call species-relativism, which makes perfect sense.
      For example, let’s say there are aliens who evolved from something like, say, squid. They have something akin to color terms, but associated with different frequencies of light (i.e., the referent is very different), so instead of color, they have squid-color terms. Similarly, they have something like moral terms, but associated with different entities, behaviors, etc.; so, instead of morality, they have squid-morality.
      I believe that’s probably true if there are advanced civilizations (long story short, I believe it’s likely there would be considerable overlap between the referent of moral and squid-moral terms, due to similar evolutionary problems, but it’s extremely unlikely there would be a match, though *if* the universe happens to be infinite, there are matches), but relativism among humans is false.
      However, a similar story for human societies makes sense (i.e., it’s coherent), even though for a number of reasons I believe it’s false.

      But anyway, if they’re not even talking about moral obligation on this view relativism wouldn’t follow; we’d just have different groups making various objectively true and compatible claims about entirely different topics, i.e., what is obligatory-relative-to-society-1 as opposed to what is obligatory-relative-to-society-2.

      I guess we have different concepts of “relativism”, then, because that sounds like relativism to me. But I hope my alien example clarifies it. If not, I tried. 🙂

      I’m not claiming that there are no objective moral rules, or that there are such rules but they’re somehow relative to what people think; instead I’m just saying that there is _a_ concept of wrongness, associated with blameworthiness and judgments of virtue and vice, such that _that_ concept has to do with how things seem to the agent.

Yes, and as I have already explained, I actually agree with you, because I do believe that _that_ concept has to do with how things seem to the agent. I just don’t believe it has to do *only* with how things seem to the agent, though seemings often are morally relevant.
      For example, if it looks to the agent that his sister is coming at him with killer intent because his brain is malfunctioning in a way that shows that image in his head, and he kills her in what looks like self-defense to me, he is not blameworthy; but if it looks to him that his sister deserves to die because she had consensual sex while being unmarried, and he kills her, he’s blameworthy (assuming he is human; I don’t know about alien squid).

      And I honestly don’t know why this seems so strange or confusing. I think we use this concept all the time, e.g., when we excuse a schizophrenic because we learn that he honestly believed he was saving someone’s life by doing some terrible thing. (Lots of other examples are possible.)

      You are now misconstruing my position. Of course, I agree that the concept *has to do with* how things seem to the agent. I just don’t agree that the concept has to do *only* with how things look to the agent.

  20. Jacques,

    On the matter of whether epistemic ought implies can, after considering what you mean by “can” in the psychological sense, and on the issue of psychological impossibility, I don’t think that the “ought” of epistemic rationality implies “can”.
As I see it, that P is not a live option for A seems equivalent (necessarily, if not analytically) to A’s assigning an extremely low probability to P – so low that A doesn’t consider P seriously – but as far as I can tell, that does not in any way preclude that A ought to believe P.
    In fact, in that sense of “can’t”, people who are being epistemically irrational ought to change their beliefs, but often “can’t” in the sense that it’s not a live option.
Purely for example, it may well be that a YEC, after reading the arguments for evolution, still assigns it so low a probability that it’s not a live option for him. This is not to say that in another sense of “can”, he can’t believe that evolution has happened. I think he can. He would just have to reason without allowing his emotional commitment to his religion to get in the way of a proper epistemic assessment. But he can’t in the sense you state, as it’s not a live option for him. Still, he ought to believe that evolution has happened, in the sense of “ought” of epistemic rationality, in my assessment.
    If you prefer another example, you can pick a Moon Landing conspiracy theorist, or a 9/11 conspiracy theorist, etc.
    By the way, even if P is a live option, one often can’t (in the sense of power) believe P. For example, it’s a live option for me that there is life on Europa. It’s also a live option that there is not. But I don’t have the ability to choose to believe either. I remain undecided, given insufficient info.

So, in this context, they are failing to believe what they (epistemically) ought to believe, so it seems rather clear to me that they can be, and often are, morally blameworthy. A more difficult case would be one in which it’s not the case that they epistemically ought to have the right belief. But I think that even then, they would be morally blameworthy (I’ll address some of your points, then come back to this, with an example of a psychopath).

    All that said, whether epistemic ought implies can in some other sense of “can” remains an open matter for me.

  21. Jacques,

    On the issue of the internal challenge, you say:

This is an interesting thing to disagree about. I never accept these kinds of arguments. I think the entire approach is crazy. (No offense.) When I set out to decide whether my acculturation (“indoctrination”) was a good or bad way to come to my beliefs, it’s simply bizarre for me to pretend that I don’t already have and rely upon the very same beliefs and values produced by that acculturation. In fact, I’m relying on them just in order to frame the kind of argument you propose or take it seriously or perhaps even accept the skeptical conclusion you want to get from it. So even if I might have some reason for thinking that there’s only a low probability that a given set of doctrines is true, in some situation where I myself am not relying on some set of doctrines in assessing such probabilities, I’m not in that situation and couldn’t be (and neither could you).

But I don’t have that problem myself, because I believe we have a species-wide moral sense, and we can use it to get around the false indoctrinations. My argument is not meant to conclude that we ought to be skeptical about moral beliefs, but rather, that *if* your claims about indoctrination (or, as you call it, “acculturation”) are true, then we ought to be skeptical in the cases in which there is such disagreement – then again, if we add your idea of epistemic rationality on the part of those who disagree, then instead of skepticism, under that assumption I’m leaning to relativism, as explained.

    Philosophy seems like an unreliable method if we apply similar reasoning. Science too. Or think of it this way: whatever your views about these things you’re going to find that the vast majority of theories were false by your standards. So is that a reason not to believe your own theory? I don’t think so. First, because we psychologically can’t do this, and second because there’s no reason we should take up this weird third-person attitude to our own beliefs.

    No, not at all, because my standards do not entail an unreliable method from which we cannot step out. My whole argument for skepticism is based on two main points:
    a. Your claim that there is (generally, not always) a way around the indoctrination.
    b. The assessment of the general reliability of indoctrination *in cases in which doctrines disagree, as explained*.

    My own standards reject a., so my views are not threatened by my argument. I’m not *actually* advancing an argument for skepticism, but one *conditional* to a claim of yours I don’t agree with, and further, if assume that the argument fails (which is entailed by another claim of yours, namely that the people disagreeing due to different world views are often being epistemically rational), then my conclusion is *probably* relativism (in the sense I explained in another post). But that’s always under assumptions I don’t believe in the first place.

    Moreover, I’m not suggesting a “third-person attitude”; the problem comes in my assessment because the unreliability can be properly established *from your own beliefs*.

    • Hi Angra,
      This has been an interesting exchange. I like that you’re pressing me on some of these points. But I think we’re just not really connecting. Let me try once more to explain how I see things, at least.

      “But I don’t have that problem myself, because I believe we have a species-wide moral sense, and we can use it to get around the false indoctrinations.”

      I didn’t mean to suggest that you have this problem. Rather I’m saying that I don’t think anyone has a (real) problem of this kind. Not you, but not me either. I also think we have a species-wide moral sense, and I allow that on occasion we can use that to “get around false indoctrinations”. For example, I like to think I did that myself with respect to various leftist-liberal falsities banged into my head during my childhood and “education”. What I’m saying is just that, as a matter of typical human psychology, this is very hard to do and practically impossible for most people.

“My argument is not meant to conclude that we ought to be skeptical about moral beliefs, but rather, that *if* your claims about indoctrination (or, as you call it, “acculturation”) are true, then we ought to be skeptical in the cases in which there is such disagreement”

      I understand that, and I take myself to be responding to the relevant charge–i.e., that given certain claims of mine I should be skeptical at least where there is disagreement. My response is to deny the views about rationality or epistemology that seem to encourage skepticism. So for instance I deny that when I myself _rationally_ assess the probability or plausibility of my own moral-political beliefs I need to think in abstract third-person terms about the distribution of truth-values over various incompatible doctrines, or the abstract likelihood that arbitrary doctrine set D is true (or likely to rest on reliable intuitions or faculties or whatever). I say that this is a “third-person” point of view because it seems to exclude facts about what I myself, as the person who is supposed to be carrying out the reasoning you describe, take to be true or plausible. Instead I’m supposed to just consider the abstract likelihood that an arbitrary doctrine set D is true, without relying on my first-person perspective.

      My view is instead that I may _rationally_ make use of the beliefs under discussion in the process of trying to figure out their probability or plausibility. Is that circular, or circular in an unacceptable way?

      For various reasons I think it isn’t. One reason is that there is no other way for anyone to rationally assess this kind of thing. Any such assessment proceeds on the basis of all kinds of taken-for-granted beliefs and values, and it just is true that many of those are products of what you’re calling “indoctrination”. No way to be rationally skeptical without (rational) credulity as a basis. If my indoctrination was the right kind of indoctrination, it served to activate or develop or augment the species-wide moral sense you describe. Notice this does not preclude the possibility that they are _also_ based on reliable intuitions or a species-wide moral sense. Nor is it irrational or question-begging for me to believe or assume that this is possible–and that my own indoctrination might have been the right kind. Unless we are already assuming from the outset that all forms of indoctrination are unreliable, which surely would beg the question here, it’s rational for me to believe that this is possible. Now I apply a principle of credulity, roughly like this: “If it seems to me that X and I have no defeaters for believing that X, it’s rational for me to believe that X on the basis of the seeming truth of X”. Lo and behold, it _does_ seem to me that many of those prior indoctrinated beliefs are true and defeater-less, and that, therefore, I hold those beliefs rationally–e.g., the belief that honor killing is wrong. If there is circularity here it’s not the bad kind, I claim. I’m a coherentist of some kind, and I think coherentism is itself coherent, and I think that my coherentism coheres with other things I believe, and I think my views about all of this are adequately justified under my own epistemic principles. Naturally one could (reasonably) reject some or all of those but I don’t think there is a problem of _internal_ coherence or consistency. At least I remain unsure how exactly you think that problem arises for me. (Maybe the problem here is that I just haven’t said enough about my own views for you to frame your objection in a way that I’d find more troubling…)

      “then again, if we add your idea of epistemic rationality on the part of those who disagree, then instead of skepticism, under that assumption I’m leaning to relativism, as explained.”

Relativism about what? I’m a relativist, I guess, about rationality and epistemic justification and (therefore) moral justification. I’m not a relativist about truth. So I claim that the guy who thinks honor killing is morally acceptable or obligatory may well be just as rational as I am, in a certain sense, and his choices and actions may also be just as morally justifiable as mine, in a certain sense–the relevant internalist sense, I claim. But I also think he’s totally mistaken, that his moral beliefs and values are backwards and primitive. And if it turns out that he’s a typical human being who can’t step outside of his false indoctrination and wants to act on that basis we may have good reason to blow his head off. Or, at least, we should do what we can to quarantine him and all others like him–keep them out of our countries and communities. So I accept some kind of ‘relativism’ but only a very weak and defensible kind: what is reasonable to believe or internalistically wrong depends very strongly on the subject and his subjective point of view or situation. But I also think there are objective facts about morality, e.g., the fact that honor killing is just wrong. And I think my own moral code, based to a large degree on the indoctrination I received, corresponds much better to some of these facts than some other codes based on what I take to be false indoctrinations. That’s the gist of it, anyway. Do you still think this is incoherent or leads to an objectionable kind of skepticism or relativism?

      “No, not at all, because my standards do not entail an unreliable method from which we cannot step out. My whole argument for skepticism is based on two main points:
      a. Your claim that there is (generally, not always) a way around the indoctrination.
      b. The assessment of the general reliability of indoctrination *in cases in which doctrines disagree, as explained*.”

      Okay, but I don’t think my own standards have this entailment. I don’t see that I’ve said anything that would entail that my own epistemic method or indoctrination is unreliable (or even that I myself can’t step out of it). I’m a bit puzzled by (a). Did I say that there’s a way around indoctrination for most people, or for everyone? Because if so I misspoke. I think this is psychologically impossible for many people, probably most. In any case I just don’t accept (b). This is not something I’ve said and it’s not something I take to be entailed by anything I’ve said (for the reasons sketched above).

      In any case, there are excellent empirical grounds for thinking that philosophy in general is not a particularly reliable method. Again, whatever philosophical views one holds, it will turn out that those views are basically correct only if far more such views are incorrect. Induction over the history of philosophy might seem to warrant skepticism. If your views or standards don’t incorporate this third-person data they _should_ incorporate it, if your views or standards are rational given the facts we all know. But I mention this only to point out that it would be weird to try to doubt one’s own philosophical convictions on this kind of basis. For example, if I think very carefully about some problem such as mind-body interaction and it just does seem to me that arguments for dualism are super-compelling and objections to it can be dismissed, I should probably just be a dualist. Third-person facts about the general method or topic or whatnot don’t seem to be enough for reasonable skepticism here. And anyway such skepticism is itself psychologically impossible for most people. If I just do find certain dualist arguments compelling on reflection and all things considered–all things except third-person facts, induction over history of philosophy–then I’m probably not able to doubt their conclusions. That’s how it is for me, anyway. So on this point we’re back to ought/can. I think that in such a situation I can’t have an epistemic obligation to doubt because I can’t psychologically entertain any real enduring doubt.

      Thanks for an intelligent and very fun debate! No doubt you’ll set me straight in the next round 🙂

  22. Hi Jacques,

This has been an interesting exchange. I like that you’re pressing me on some of these points. But I think we’re just not really connecting. Let me try once more to explain how I see things, at least.

    Yes, it has, and thanks for pressing me as well. I will try to do better at understanding your views this time.

    I didn’t mean to suggest that you have this problem. Rather I’m saying that I don’t think anyone has a (real) problem of this kind. Not you, but not me either. I also think we have a species-wide moral sense, and I allow that on occasion we can use that to “get around false indoctrinations”. For example, I like to think I did that myself with respect to various leftist-liberal falsities banged into my head during my childhood and “education”. What I’m saying is just that, as a matter of typical human psychology, this is very hard to do and practically impossible for most people.

It seems to me I need to clarify some points too, since I didn’t mean to say that you actually should be skeptical, but rather that you should be skeptical only under certain assumptions that I was trying to convince you to drop.
I will try once more, adjusting my argument to that reply (I gather you already got my stance on most of this, but just in case some points are unclear):

    1. Indoctrination is generally unreliable *in the cases in which there are significant disagreements between different doctrines*, at least as long as the disagreement involves doctrines that are drummed into a significant percentage of the population (at a certain time, or perhaps over all times; however you slice it, one can make the case). This seems clearly true, and on the basis of your beliefs, you also should agree with that, as you can also observe the disagreement. I gave a more detailed analysis earlier (though still oversimplifying and considering that doctrines are equally common just for the sake of shortness; I clarified later, though).
    2. Point 1. does not entail that indoctrination is generally unreliable when you consider all moral beliefs. In fact, arguably it’s usually reliable. But I don’t need to claim it’s usually unreliable to make my case.
    3. *Assuming for the sake of the argument that we generally do not have a means of revising beliefs that were inculcated in us from kindergarten*, then we should be skeptical about moral assessments *in the cases involved in 1*. But this is not an assumption I believe in.
4. I did *not* mean to argue or claim that you should be skeptical about morality in the cases involved in 1. My claim – on the basis of my arguments – was that *either* you should be skeptical about moral assessments in the cases involved in 1, or you should drop the belief that we generally do not have a means of revising false beliefs that were inculcated in us from kindergarten. My aim was to persuade you to drop that belief, not to persuade you to be [partially] skeptical.
    5. None of the above is problematic for my beliefs, as I believe we generally do have a means of revising beliefs that were inculcated in us from kindergarten, even if many (most) people fail to exercise said means. It’s my experience that I can revise them, and I’ve seen that others *who try* sometimes also revise them. You revised them too. Whether the belief changes were for the better or not overall, the fact remains that we can revise them. I see no good reason to think most of the population is incapable of it.
    6. Given your point that *you* can revise those beliefs yourself, I will drop that line of argument. However, I do think skeptical arguments do work sometimes, but only within certain assumptions, and the arguments ultimately work as arguments not for skepticism, but against those assumptions. By the way, I use one such argument in a discussion with Richard Chappell on his post entitled “Self-Undermining Skepticisms”, in the blog “Philosophy, et cetera”. In the thread after that post, I raise a challenge of that sort to some kinds of moral realism (or, in the terminology he prefers, to moral realism; but I disagree with that terminology). The debate with Brandon is of no philosophical value, but the discussion with Richard Chappell is interesting I think. Just saying.
    7. So, we disagree about the value of skeptical arguments – under some circumstances -, but given your point that *you* can revise your beliefs, I withdraw my objection in that case. I just disagree with you about the capabilities of other people.
8. Leaving aside that particular argument, *if we further assume for the sake of the argument that people who [apparently] disagree on the basis of different worldviews are being epistemically rational, and would remain so no matter what info they’re given, even while being unable to change their minds* (but I don’t believe this assumption is true), then I reckon that your beliefs are reliable, but so are those of the people who apparently disagree with you on the basis of different world views, and in fact you and they (and many others) are not really disagreeing despite appearances, but talking past each other. On this account, there is a Nazi morality, a Jihadi morality (or several, depending on the variant), and so on. I do not believe that any of this is true. Rather, that would be my assessment if I made those assumptions for the sake of the argument.

    My response is to deny the views about rationality or epistemology that seem to encourage skepticism. So for instance I deny that when I myself _rationally_ assess the probability or plausibility of my own moral-political beliefs I need to think in abstract third-person terms about the distribution of truth-values over various incompatible doctrines, or the abstract likelihood that arbitrary doctrine set D is true (or likely to rest on reliable intuitions or faculties or whatever). I say that this is a “third-person” point of view because it seems to exclude facts about what I myself, as the person who is supposed to be carrying out the reasoning you describe, take to be true or plausible. Instead I’m supposed to just consider the abstract likelihood that an arbitrary doctrine set D is true, without relying on my first-person perspective.

I don’t go as far as to say you should take a third-person perspective, but rather, when assuming that there is an unreliable process by which we have come to have beliefs in domain D and we have no way around it (at least, not from within D), then at least usually it’s improper to resort to beliefs in domain D to conclude that the process happened to be reliable in our case, and that we are the exceptional case (my discussion with Richard Chappell involving the demon tossing the gazillion-sided dice might be of interest, though I would make the argument in a stronger way if I were to make it today).

    For various reasons I think it isn’t. One reason is that there is no other way for anyone to rationally assess this kind of thing. Any such assessment proceeds on the basis of all kinds of taken-for-granted beliefs and values, and it just is true that many of those are products of what you’re calling “indoctrination”.

    But that would appear to be a problem only if we were talking about an argument for *general* skepticism. It’s not a problem for an argument for skepticism *on a specific domain* that doesn’t spread to our cognitive faculties in general. For example, an argument for skepticism about moral assessments *in the cases involved in 1*, or even for moral skepticism in general, doesn’t have that problem.

    No way to be rationally skeptical without (rational) credulity as a basis. If my indoctrination was the right kind of indoctrination, it served to activate or develop or augment the species-wide moral sense you describe. Notice this does not preclude the possibility that they are _also_ based on reliable intuitions or a species-wide moral sense. Nor is it irrational or question-begging for me to believe or assume that this is possible–and that my own indoctrination might have been the right kind. Unless we are already assuming from the outset that all forms of indoctrination are unreliable, which surely would beg the question here, it’s rational for me to believe that this is possible.

    I argue – not assume – that indoctrination is generally unreliable *in the cases in which there are significant disagreements between different doctrines*, at least as long as the disagreement involves doctrines that are drummed into a significant percentage of the population. That much seems clear, precisely given such disagreement. But it’s not a problem if you have a means of revising them.

    Now I apply a principle of credulity, roughly like this: “If it seems to me that X and I have no defeaters for believing that X, it’s rational for me to believe that X on the basis of the seeming truth of X”. Lo and behold, it _does_ seem to me that many of those prior indoctrinated beliefs are true and defeater-less, and that, therefore, I hold those beliefs rationally–e.g., the belief that honor killing is wrong. If there is circularity here it’s not the bad kind, I claim. I’m a coherentist of some kind, and I think coherentism is itself coherent, and I think that my coherentism coheres with other things I believe, and I think my views about all of this are adequately justified under my own epistemic principles. Naturally one could (reasonably) reject some or all of those but I don’t think there is a problem of _internal_ coherence or consistency. At least I remain unsure how exactly you think that problem arises for me. (Maybe the problem here is that I just haven’t said enough about my own views for you to frame your objection in a way that I’d find more troubling…)

    But even if you apply that principle, the issue here is whether the fact that your belief comes from an unreliable method of belief formation is a defeater. It often would be, in my assessment.
    Let me give you an example:
Let’s assume for the sake of the argument that the chances that evolution (when I don’t say otherwise, I mean unguided evolution) would give a species that makes moral assessments a generally reliable sense of right and wrong are very low, say less than 1 in 10 (I believe this to be false, but I believe it would be true even with a smaller number if we assume disagreement between aliens rather than miscommunication), even if evolution would likely give intelligent species generally reliable faculties (i.e., so that in other domains, the faculties would generally be reliable).
Moreover, let’s assume that in the other cases (i.e., at least 9/10 of the cases, if there are many species), the agents with the unreliable sense will have no means of getting around the unreliability.
    Let’s further assume that we’re the products of evolution (this one I do believe).
    Now, under those assumptions, it seems to me we *should* be skeptical about our sense of right and wrong. Don’t you agree?

    My argument in this context is similar to that one except that the unreliability is argued for, not assumed. So, the argument goes:
    a. Your moral beliefs are the product of indoctrination.
    b. Indoctrination is generally unreliable *in the cases involved in 1 above.*
    c. There is generally no way around indoctrination.

    In that context, I would say you ought to be skeptical about your beliefs in the specific subdomain of the moral domain that I described in 1. above.
    But I already dropped that argument, because you just told me you think *you* can actually revise the beliefs you were indoctrinated in (but if it were not for that, I would still reckon it’s a good argument).

Now, you might think that this amounts to relying on a third-person perspective; I don’t think it does, since one is using plenty of other beliefs one has – only the specific domain is excluded, and I think for good reasons, given how the arguments are constructed.
I don’t have a general theory about skeptical arguments, but I reckon the arguments I gave above (both the evolutionary argument and the indoctrination argument) are good ones. Granted, not everyone finds them persuasive, so we might just have different epistemic intuitions on this subject. Still, I’d like to stress that the arguments do not require rejecting your credulity principle, but rather only disagreeing about what constitutes a defeater in certain instances.

    Relativism about what? I’m a relativist, I guess, about rationality and epistemic justification and (therefore) moral justification. I’m not a relativist about truth. So I claim that the guy who thinks honor killing is morally acceptable or obligatory may well be just as rational as I am, in a certain sense, and his choices and actions may also be just as morally justifiable as mine, in a certain sense–the relevant internalist sense, I claim. But I also think he’s totally mistaken, that his moral beliefs and values are backwards and primitive.

    Relativism about morality. It would not be relativism about truth, it seems, given the connotations I’m guessing from your post. Rather, let’s call this M-relativism, for “Miscommunication relativism”.
    The other guy and you would be talking past each other.
    My assessment is on the basis that the meaning of words (and as a result, the referent) is picked by usage. If Bob says “X has P”, and Alice says “X does not have P”, and both Bob and Alice are being epistemically rational, and no matter how much information they are given (relevant to their assessment of whether X has P), they remain epistemically rational, and keep saying “X has P” and “X does not have P” respectively, it seems pretty likely to me that Bob and Alice do not mean the same by “P”, and instead of disagreement, this is a case of miscommunication.
I do know that this is probably a minority position (I similarly hold that there is miscommunication on Moral Twin Earth, rather than disagreement, at least in the usual description of MTE. There is “disagreement” in the sense of a dispute or conflict, but the two groups of agents are talking past each other. Still, on MTE it’s not stipulated that both groups are being epistemically rational, so that further complicates matters; I think the case we’re considering is more clearly a case of M-relativism, due to said stipulation – if I got your position right, that is).

    But I also think there are objective facts about morality, e.g., the fact that honor killing is just wrong.

Actually, M-relativism is compatible with there being an objective fact of the matter as to whether honor killing is morally wrong (if that’s the sense of “objective” you have in mind). It’s just that there would also be an objective fact of the matter as to whether it’s Bzzzr, where “Bzzzr” is a word in another language that is usually mistranslated as “morally wrong” (though M-relativism is also compatible with there being different variants of English in which “morally wrong” has different meanings).
Now, I think M-relativism is extremely improbable (we’re talking about humans; if by A-M-relativism we mean “Alien Miscommunication relativism”, and it applies to different species who hypothetically appear to disagree, I think it’s very plausible), but I would reckon otherwise if I were to assume that people who [apparently] disagree on the basis of different worldviews are being epistemically rational and would remain so no matter what info they’re given, even while being unable to change their minds (if you don’t believe that, please clarify, but your earlier points about there not being rational ways to resolve the disagreement, and others, led me to the assessment that that is your position, at least when it comes to most people).

    And I think my own moral code, based to a large degree on the indoctrination I received, corresponds much better to some of these facts than some other codes based on what I take to be false indoctrinations. That’s the gist of it, anyway. Do you still think this is incoherent or leads to an objectionable kind of skepticism or relativism?

    I hope my stance is clear by now, but in case it’s not:
    A. I don’t think that that is incoherent.
    B. I think it would lead to an objectionable kind of skepticism if it were not for the fact that you believe that *you* actually can do what most people can’t. But you do believe that, so you avoid that one.
C. I think on the basis of what you said about their epistemic rationality, what they can and can’t do, etc. (and fixing that part), if I understand you correctly, you ought to reckon that M-relativism is (probably) true. But I don’t think you actually ought to reckon that. Rather, I think you ought to change your belief about what they can do and/or their epistemic rationality.

    I’m a bit puzzled by (a).

    Sorry, I misspoke. What I meant to say is that “a. Your claim that there is (generally, not always) *no* way around the indoctrination.”
    I ended up saying just the opposite!

    In any case, there are excellent empirical grounds for thinking that philosophy in general is not a particularly reliable method. Again, whatever philosophical views one holds, it will turn out that those views are basically correct only if far more such views are incorrect. Induction over the history of philosophy might seem to warrant skepticism. If your views or standards don’t incorporate this third-person data they _should_ incorporate it, if your views or standards are rational given the facts we all know. But I mention this only to point out that it would be weird to try to doubt one’s own philosophical convictions on this kind of basis.

That’s interesting. I’m not sure there is a “philosophy” domain that can be isolated from logic and from general epistemic probabilistic assessments on the basis of available info.
That aside, I would factor in the fact of disagreement between intelligent people when thinking about a matter in which there is such disagreement, and on that basis I would (probably) lower my probabilistic assessment on some matter *at least until I have considered it more carefully, including some of the main objections*.
    But I agree that *this kind* of skeptical argument would fail.
Granting for the sake of the argument that there is a specifically philosophical method and domain, in my view there is still at least one relevant difference between the “philosophical skepticism” argument and the arguments I considered above, namely that the “philosophical skepticism” argument contains no “no mechanism to get around the unreliable method” premise (or something to that effect), which is in my view a crucial premise.

    Assuming that this fails for some reason (though I think the difference is in fact key), I would reply by saying that just as it would be weird to doubt (ultimately, not at first) one’s own philosophical convictions on this kind of basis (the “weirdness” is an intuitive assessment you make, but nothing wrong with that), my intuitions are that it would be at least just as weird – if not weirder – to *fail* to doubt one’s moral assessments (or one’s moral assessments in the relevant category) under the assumptions of the previous skeptical arguments I sketched, so even if I have failed to put my finger on a relevant difference (but I don’t think I have! I’m just covering my bases :D), if I go by my intuitions, there seems to be at least one.

    Do you still think (and/or find it intuitive) the other skeptical arguments fail too?

    I think that in such a situation I can’t have an epistemic obligation to doubt because I can’t psychologically entertain any real enduring doubt.

But let’s say that Alice believes she’s Napoleon – due to some malfunctioning of her brain – and she can’t psychologically entertain any real enduring doubt. I would still say she’s clearly being epistemically irrational.
    *If* it is the case that necessarily, A epistemically ought to believe P iff it would be epistemically irrational of A not to believe P, then it seems to me that this shows epistemic “ought” does not imply “can”, in the sense of “can” you’re entertaining.
On the other hand, *if* the above equivalence does not hold, it’s more plausible to me that epistemic “ought” implies “can” in some sense, though I’m not sure it does in the live-option sense of “can”.

In any event, there is still the issue of the psychopath example. I do not believe he epistemically ought to believe it’s not obligatory to kill his sister for having consensual sex out of wedlock, and I don’t think he is being epistemically irrational in believing it’s morally obligatory to do that. But I still think he morally ought not to kill his sister (and surely, he *can* refrain from killing her). I’m not sure what your intuition on that is, but I don’t think mine is unusual in this case.

    Thanks for an intelligent and very fun debate! No doubt you’ll set me straight in the next round.

    Thank you for that as well, and I’m trying ;), but you’re a very smart interlocutor! 🙂

  23. Sorry but I’m still not sure why I should believe this:

    “1. Indoctrination is generally unreliable *in the cases in which there are significant disagreements between different doctrines*, at least as long as the disagreement involves doctrines that are drummed into a significant percentage of the population (at a certain time, or perhaps over all times; however you slice it, one can make the case).”

    Consider theistic and atheistic indoctrination. For example, A was raised in medieval France and B was raised in the USSR. This is a significant disagreement of the kind you’re describing, right? So you’re saying that _both_ forms of indoctrination are likely to be unreliable, or just are unreliable, because both are cases where a significant percentage of the population has been indoctrinated into believing something and some significant percentage of the other population has been indoctrinated into believing its negation? I just find that to be intuitively obvious or even especially plausible. Maybe the development of medieval French culture was shaped by divine providence and the culture of the USSR is the result of Satanic meddling. Maybe the one kind of indoctrination reflects the proper operation of natural rational faculties that are highly reliable and the other doesn’t. So then A’s indoctrination could well be highly reliable with respect to theology and B’s might be highly unreliable. I may be missing something but I don’t understand why you think I should accept this premise.

“But let’s say that Alice believes she’s Napoleon – due to some malfunctioning of her brain – and she can’t psychologically entertain any real enduring doubt. I would still say she’s clearly being epistemically irrational.”

    I wouldn’t say that’s clear at all! I have the opposite intuition. I think being epistemically rational is entirely a matter of how things appear to the subject. Well, with one qualification. If the subject is so weird that she doesn’t even have relevant rational concepts or abilities, or she has them but can’t competently apply them, that non-internal fact about her means she’s not epistemically rational (and not epistemically irrational either, but rather just a-rational). If Alice thinks things over as carefully as she can, and it just does seem to her that there’s overwhelming evidence that she’s Napoleon, and it seems to her that there’s no reason whatsoever to doubt that she’s Napoleon, or that the evidence is overwhelming, etc., then I think she _must_ believe that she’s Napoleon if she’s rational. You think she’s being irrational just because of the brute external fact that she isn’t, or the brute external fact that her evidence is not as strong as it vividly appears to her on careful reflection? The fact that some ideal thinker, whose experiences and impressions and intuitions and beliefs would no doubt be radically different from Alice’s, wouldn’t accept her belief or her reasoning? I can’t really offer much of an argument for this but I just have no inclination to think that any of these things are relevant to assessing the epistemic rationality of Alice, in her actual situation and given her actual experiences, etc. What _makes_ her irrational in your view?

  24. Consider theistic and atheistic indoctrination. For example, A was raised in medieval France and B was raised in the USSR. This is a significant disagreement of the kind you’re describing, right? So you’re saying that _both_ forms of indoctrination are likely to be unreliable, or just are unreliable, because both are cases where a significant percentage of the population has been indoctrinated into believing something and some significant percentage of the other population has been indoctrinated into believing its negation? I just find that to be intuitively obvious or even especially plausible.

    You mean that you just *don’t* find that intuitively obvious or particularly plausible?
    Anyway, I’m saying that *indoctrination* as a method is unreliable.
    Perhaps, the following examples will explain what I mean better:

    EX1: Let’s say that C was indoctrinated on a matter on which at most 60% of those indoctrinated got the right belief about whether P is true, and at least 40% got it wrong. Then, on the basis of that info, we ought to assess that the chances that C got the right belief are no greater than 0.6. Granted, *if* we have a further method to assess whether P is true, then that’s not a problem: our probabilistic assessment can be raised from 0.6, or lowered from 0.6. But let’s say that we do *not* have such a method. Then, we don’t get above the 0.6.

    EX2: Let’s say that Alice reckons that:

    1. She does not have a means to get around false indoctrination, if she ever got false beliefs from indoctrination. To give it a number, let’s say the chances she’s one of the people who can beat false indoctrination are less than 1/100.
2. At least 60% of people got the wrong indoctrination on Q1, or the wrong indoctrination on Q2 [here, Q1 and Q2 are two moral statements].

    Let us now say that Alice was indoctrinated to believe that P is true, but she now intends to assess whether her belief is correct. On the basis of 2., she reckons that there is a 0.4 chance that P is false, prior to making further assessments. Alas, due to 1., she reckons that making further assessments would have no more than a 1/100 chance of correcting a mistake in that piece of indoctrination (or any other), if there is one. It seems clear to me she ought to be skeptical about whether P is true, but let’s do the math.
    Let’s say Alice wants to keep going, so she decides to use her own reason, what seems true to her, her sense of right and wrong, etc., and applies all of that to P, and P – unsurprisingly – still “looks” or “feels” true to her…but she decides to do the math too, and introduces the following definitions:

    E(1): Alice is one of the people who has the means to correct indoctrination errors.
    E(2): Alice got the right indoctrination on P.
    E(3): After reflection, P still appears true to Alice.

    So, Alice says:

Pr(E(2)│E(3))=Pr(E(2)&E(3))/Pr(E(3))=Pr(E(2)&E(3)&E(1))/Pr(E(3))+Pr(E(2)&E(3)&~E(1))/Pr(E(3))

    Consider theistic and atheistic indoctrination. For example, A was raised in medieval France and B was raised in the USSR. This is a significant disagreement of the kind you’re describing, right? So you’re saying that _both_ forms of indoctrination are likely to be unreliable, or just are unreliable, because both are cases where a significant percentage of the population has been indoctrinated into believing something and some significant percentage of the other population has been indoctrinated into believing its negation? I just find that to be intuitively obvious or even especially plausible.

    You mean that you just *don’t* find that intuitively obvious or particularly plausible?
    Anyway, I’m saying that *indoctrination* as a method is unreliable.
    Perhaps, the following example will explain what I mean better:

    EX1: Let’s say that C was indoctrinated on a matter on which at most 60% of those indoctrinated got the right belief about whether P is true, and at least 40% got it wrong. Then, on the basis of that info, we ought to assess that the chances that C got the right belief are no greater than 0.6. Granted, *if* we have a further method to assess whether P is true, then that’s not a problem: our probabilistic assessment can be raised from 0.6, or lowered from 0.6. But let’s say that we do *not* have such a method. Then, we don’t get above the 0.6.
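To make the arithmetic in EX1 concrete, here is a minimal sketch in Python. Only the 0.6 base rate comes from the example; the 0.9/0.2 likelihoods below are hypothetical numbers I’m adding purely for illustration. It shows how a further method of assessing P would move the credence up or down from 0.6 via Bayes’ rule, whereas with no such method, 0.6 is a ceiling:

```python
# Toy Bayes update for EX1. Only the 0.6 base rate is from the example;
# the likelihoods below are hypothetical, just to illustrate the point.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of P given one piece of evidence, by Bayes' rule."""
    num = prior * p_evidence_if_true
    return num / (num + (1.0 - prior) * p_evidence_if_false)

prior = 0.6  # chance that C's indoctrinated belief about P is right

# With no further method, the credence stays at the 0.6 ceiling.
# With a further method, it can be raised or lowered:
print(update(prior, 0.9, 0.2))  # evidence favoring P: ~0.87
print(update(prior, 0.2, 0.9))  # evidence against P:  ~0.25
```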

    In case you don’t understand what I mean, or you do but find my assessment implausible, here’s a much more detailed example. I will actually do the math, and maybe we can figure more precisely where the disagreement comes from.

EX2: Let’s say that Maria reckons that:

    1. She does not have a means to change indoctrinated beliefs. To give it a number, let’s say the chances she’s one of the people who can beat false indoctrination are less than 1/10.
2. At least 60% of people got the wrong indoctrination on Q1, or the wrong indoctrination on Q2 [here, Q1 and Q2 are two moral statements].

Let us now say that Maria was indoctrinated to believe that Q1 and Q2 are true, but she now intends to assess whether her belief that Q1&Q2 is true is correct. On the basis of 2., she reckons that the prior probability (i.e., prior to any further reflection on the matter, and considering just the indoctrination odds) that Q1&Q2 is true is no greater than 0.4. Now, Maria goes on to reflect on the matter, reason, etc., and her moral seemings remain unchanged, so it still looks to her – by her own moral sense – that Q1&Q2 are true.
    So, she decides to check doing the math, and defines the following events:

    E1: Maria is one of the people who can change indoctrinated beliefs.
    E2: Maria got the right indoctrination about Q1 and Q2.
E3: After reflection, Maria’s moral seemings on the matter of Q1 and Q2 have not changed.

P(E1)≤1/10.
P(E2)≤0.4=4/10.
P(E2│E1)=P(E2│~E1)=P(E2) [assuming P(E2&E1)=P(E2)P(E1); that seems to match your description, but if you think otherwise, please let me know].
P(E3│~E1)=1 [if Maria cannot change indoctrinated beliefs, her seemings are guaranteed not to change after reflection], so P(E3)≥P(E3│~E1)P(~E1)=P(~E1)≥9/10.

P(E2│E3)=P(E2&E3)/P(E3)≤(10/9)P(E2&E3)=(10/9)(P(E2&E3│E1)P(E1)+P(E2&E3│~E1)P(~E1))≤(10/9)(P(E1)+P(E2&E3│~E1))≤(10/9)(1/10+P(E2│~E1))≤(10/9)(1/10+4/10)=1/9+4/9=5/9.

In other words, after all of her reflection, Maria still ought to conclude P(E2│E3)≤5/9, so she shouldn’t assign a probability greater than 5/9 to the hypothesis that she got both Q1 and Q2 right.
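For anyone who wants to check the bound numerically, here is a small Python sketch. It mirrors E1–E3 as defined above, on my reading of the assumptions: E1 and E2 independent, P(E3│~E1)=1 (if Maria cannot change indoctrinated beliefs, her seemings are guaranteed not to change), and the conditionals P(E3│E1,E2) and P(E3│E1,~E2) left unconstrained. Random joint distributions satisfying those constraints never push P(E2│E3) above 5/9; in fact, since the chain of inequalities above is deliberately loose, the sampled maximum comes out around 0.43:

```python
# Numerical check of Maria's bound P(E2|E3) <= 5/9, under the assumptions
# stated above: P(E1) <= 1/10, P(E2) <= 0.4, E1 independent of E2, and
# P(E3 | ~E1) = 1 (seemings cannot change if she cannot revise beliefs).
import random

def posterior(p1, p2, t_e2, t_not_e2):
    """P(E2 | E3), where t_e2 = P(E3 | E1, E2) and t_not_e2 = P(E3 | E1, ~E2)."""
    # P(E2 & E3): E3 is certain on ~E1; on E1 it occurs with probability t_e2.
    p_e2_and_e3 = p2 * ((1 - p1) + p1 * t_e2)
    # P(E3): total probability over E1/~E1 (and, within E1, over E2/~E2).
    p_e3 = (1 - p1) + p1 * (p2 * t_e2 + (1 - p2) * t_not_e2)
    return p_e2_and_e3 / p_e3

worst = 0.0
for _ in range(100_000):
    p1 = random.uniform(0.0, 0.1)   # P(E1) <= 1/10
    p2 = random.uniform(0.0, 0.4)   # P(E2) <= 0.4
    worst = max(worst, posterior(p1, p2, random.random(), random.random()))

print(f"max sampled P(E2|E3) = {worst:.4f}; bound 5/9 = {5/9:.4f}")
```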

    I wouldn’t say that’s clear at all! I have the opposite intuition. I think being epistemically rational is entirely a matter of how things appear to the subject. Well, with one qualification. If the subject is so weird that she doesn’t even have relevant rational concepts or abilities, or she has them but can’t competently apply them, that non-internal fact about her means she’s not epistemically rational (and not epistemically irrational either, but rather just a-rational). If Alice thinks things over as carefully as she can, and it just does seem to her that there’s overwhelming evidence that she’s Napoleon, and it seems to her that there’s no reason whatsoever to doubt that she’s Napoleon, or that the evidence is overwhelming, etc., then I think she _must_ believe that she’s Napoleon if she’s rational. You think she’s being irrational just because of the brute external fact that she isn’t, or the brute external fact that her evidence is not as strong as it vividly appears to her on careful reflection?

That’s an interesting disagreement. I would have expected that maybe you would say that “A epistemically ought to believe P iff it would be epistemically irrational of A not to believe P” is not true, but I wasn’t expecting that you would question whether she was being epistemically irrational!

    I am making the assessment that she’s being epistemically irrational intuitively. It seems obvious to me. If I had to speculate on the causes of my intuitive assessment, I would probably go with the brute fact that she/her brain is almost certainly making non-Bayesian updates to reach that assessment on the basis of the available evidence. But that’s a guess, even if perhaps an educated one, about what might be driving my intuitive assessment that she’s being irrational. If that guess is mistaken, the intuitive assessment remains; I don’t think I need a theory of what *makes* her irrational in order to properly assess that she’s being so.

    But let me try another example. Let’s say a jury has to decide whether a defendant is guilty of a heinous murder.
    All other jurors reckon the defendant is guilty, given DNA evidence, fingerprints, video footage, motive, and 12 witnesses.
Alice – who believes she’s Napoleon – comes to believe that the defendant was framed by Lucifer, who used his powers to plant the evidence. She cannot believe otherwise. So, she votes “not guilty”. Do you think that’s a *reasonable* doubt? Or do you think she’s being epistemically rational despite the fact that the doubt is not reasonable? (Let’s say no evidence was presented in the case that would suggest the involvement of a superhuman power; we may stipulate that, other than the presence of Alice, the trial was normal, so there was no other suggestion of involvement of a superhuman entity.)

    Or maybe you think that in that context, she qualifies as “a-rational”?

At any rate, in light of my assessment of the psychopath example (and he is being epistemically rational), I think the issue – while interesting – is not the source of our disagreement about blameworthiness.

  25. Jacques,

    Sorry, the first part of the previous reply has a copy/paste error. I didn’t mean to post the part up to “So, Alice says:

Pr(E(2)│E(3))=Pr(E(2)&E(3))/Pr(E(3))=Pr(E(2)&E(3)&E(1))/Pr(E(3))+Pr(E(2)&E(3)&~E(1))/Pr(E(3))”. The rest is okay.

  26. Jacques,

One more point about Maria: after she does the math and finds P(E2│E3)≤5/9, maybe she still continues to assign a very high probability to Q1&Q2, because E1 is false. On the other hand, if it turns out that she lowers the probability of Q1&Q2 and stops believing it, then she later reckons she *can* change indoctrinated beliefs, and so, after that, she reconsiders and increases her assignment again. But as I mentioned, I don’t think that most people are incapable of changing their indoctrinated beliefs (if you think this only holds for some of the most important indoctrinated beliefs, then we may stipulate that Q1 and Q2 are among those).

  27. “Let’s say that C was indoctrinated on a matter on which at most 60% of those indoctrinated got the right belief about whether P is true, and at least 40% got it wrong.”

    Why should we say this? I mean, in the cases that interest me people hold some world-view as a result of their “indoctrination” or they hold moral beliefs as a result. Why should I agree that this is one of those matters where “at most 60%” end up with true beliefs, or at most 43% or whatever? I just don’t know how to assess this. But there have to be numbers in order for your argument to work so it seems important. I don’t think we can just stipulate that people’s resulting beliefs about morality or relevant matters are only true n percent of the time.

    Another problem for me here is that, given my internalism, none of this abstruse reasoning would make any difference to a person’s rational attitudes unless he’s able to follow the reasoning and it seems to him that such considerations are relevant.

    You begin with Alice thinking “She does not have a means to get around false indoctrination, if she ever got false beliefs from indoctrination”. Well, okay I guess she might think that. I suspect it’s more likely that she would have some means of getting around it, and would think she did, if she’s actually able to reason about these issues in the way you have in mind. Would the argument not work in that case?

    Alice on the jury: I say her doubts are entirely reasonable _for her_ though of course it would be very unreasonable for many other people in different mental situations to accept her reasoning. This seems to be a point about doxastic versus propositional justification or something. We can reasonably say her doubts aren’t reasonable, sure. But that’s not the kind of judgment under discussion.

  28. Jacques,

    Why should we say this? I mean, in the cases that interest me people hold some world-view as a result of their “indoctrination” or they hold moral beliefs as a result. Why should I agree that this is one of those matters where “at most 60%” end up with true beliefs, or at most 43% or whatever? I just don’t know how to assess this. But there have to be numbers in order for your argument to work so it seems important. I don’t think we can just stipulate that people’s resulting beliefs about morality or relevant matters are only true n percent of the time.

    As I mentioned, that part of my post was a copy and paste problem. It wasn’t supposed to be posted. But the other example is similar in that regard.
One way to make such estimates is to take a look at beliefs across the world, when we have data, and make a conservative estimate of the minimum number of people who get it wrong.

    Another problem for me here is that, given my internalism, none of this abstruse reasoning would make any difference to a person’s rational attitudes unless he’s able to follow the reasoning and it seems to him that such considerations are relevant.

    You begin with Alice thinking “She does not have a means to get around false indoctrination, if she ever got false beliefs from indoctrination”. Well, okay I guess she might think that. I suspect it’s more likely that she would have some means of getting around it, and would think she did, if she’s actually able to reason about these issues in the way you have in mind. Would the argument not work in that case?

    No, it wouldn’t work if she had some means of getting around it.
    In this part of my post, I’m discussing a skeptical argument in the context in which the person has no way of getting around her indoctrination. When you told me you think *you* actually *can* get around it, I withdrew the argument, and replied to one of your questions “I think it would lead to an objectionable kind of skepticism if it were not for the fact that you believe that *you* actually can do what most people can’t. But you do believe that, so you avoid that one.”
    In your reply, you continued discussing the skeptical argument in question, so I’m still explaining it, and making it more detailed. But I’m not saying that that argument is a problem for your position, given that you said you do have a way around indoctrination.
I was still making the other argument, i.e., the one using M-relativism, but in your reply, you only addressed the skepticism argument, even though the one I applied to your position was the M-relativism argument. I reckoned you were interested in discussing the skepticism argument as a matter of philosophical interest and not in relation to your specific position, so I’m addressing your points, but I’m not suggesting the argument would work if she *does* have a way around the indoctrination.

    Alice on the jury: I say her doubts are entirely reasonable _for her_ though of course it would be very unreasonable for many other people in different mental situations to accept her reasoning. This seems to be a point about doxastic versus propositional justification or something. We can reasonably say her doubts aren’t reasonable, sure. But that’s not the kind of judgment under discussion.

But in which sense can we say her doubts aren’t reasonable, yet are “reasonable for her”?
At any rate, here we have a disagreement, since I think she’s being epistemically irrational, whereas you think she isn’t. But this is a matter on which I don’t see any way to make further arguments, so it seems to me we’ll just disagree.
As I mentioned, given my psychopath example, the issue of whether she’s being epistemically rational is not the source of our disagreement about blameworthiness.

  29. Maybe I need to understand the psycho example better. Seems to me that we can say Alice is reasonable in a straightforward way: internalism about justification. She is reasonable because all facts about justification supervene on facts about how things appear to the subject. This is something we can argue about I think. E.g., if internalism were false she could rationally think “It seems to me that my belief is produced by reliable means, and I am not aware of any evidence that it wasn’t, but still it just isn’t rational for me to hold the belief”. Or at any rate we could say that about her rationally; but that seems really weird to me. Not to you?

    • Seems to me that we can say Alice is reasonable in a straightforward way: internalism about justification. She is reasonable because all facts about justification supervene on facts about how things appear to the subject. This is something we can argue about I think. E.g., if internalism were false she could rationally think “It seems to me that my belief is produced by reliable means, and I am not aware of any evidence that it wasn’t, but still it just isn’t rational for me to hold the belief”. Or at any rate we could say that about her rationally; but that seems really weird to me. Not to you?

      Are you talking about propositional or doxastic justification?
According to the IEP article on “Internalism and Externalism in Epistemology”, “internalism should be understood as a view about propositional justification,” for the reasons it gives.
In that context (i.e., propositional justification), whether all facts about justification supervene on facts about how things appear to the subject does not seem to follow from internalism simpliciter, but from some versions of internalism (e.g., section 1.c, “The Meaning of Internal”, sketches some of the variants).

      But let me make the matter a bit more concrete: when we make intuitive probabilistic assessments, as we do all the time, some of our brain/mental processes are not transparent to us. For instance, we can consciously think about, say, the evidence presented against a defendant – the video, the witnesses, etc. – but we don’t have access to the process by which, on that basis, we assign a high (or low) probability to the hypothesis that he’s guilty. We just do.
      Similarly (and perhaps equivalently, but this is debatable, so I’ll add it), we do not have conscious access to our belief-formation mechanism, in the sense that while on the basis of some data we come to have such-and-such belief, we do not have conscious access to the mechanism that “connects” the data to the belief. After all, theory (and belief, and probabilistic assessment) is underdetermined by observation. And we do not know how our brain (or our mind; we needn’t assume it’s the brain, though I hold it is) works so as to yield the beliefs, probabilistic assessments, etc., that we have, rather than others.
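      (To make that concrete with an illustration of my own – the symbols here are mine, and nothing in the argument depends on them: if such an assessment were fully transparent, it would amount to something like an explicit Bayesian update,

      $$P(\mathrm{guilty} \mid \mathrm{evidence}) \;=\; \frac{P(\mathrm{evidence} \mid \mathrm{guilty})\,P(\mathrm{guilty})}{P(\mathrm{evidence})},$$

      with each term consciously assigned and combined. Of course, we do nothing of the sort; the resulting degree of confidence just shows up.)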

      Yet all of those processes are internal to the subject in some sense of “internal”, but not in others. I guess you count all of that as “external”?

      As to your example, I’m not sure why she could *rationally* think that (by “think” do you mean “believe”? Please clarify), but I would say she could *coherently* entertain the notion. I don’t think it would be *rational* to believe it, though. Are you saying it would be rational of her to believe that?
      Maybe you meant (because of the “could”) that it’s *possibly* rational to believe that. But I don’t think so: if she rationally comes to believe that it *would be* irrational to keep holding the belief, then it seems to me she ought to stop holding it right away.

      Maybe your “a-rational” example would help here.
      Could she (in the sense of “could” you are using here) think that her mind/brain is so screwed up that she’s a-rational when it comes to set theory because – say – a demon/matrix overlord is messing with her mind, preventing her from reasoning properly in the domain of set theory?

      Could she think, for example: “It seems to me that my belief that ZL and AC are not equivalent on the basis of ZF is produced by reliable means, and I am not aware of any evidence that it wasn’t, but still it just isn’t rational for me to hold the belief, because I’m a-rational in the domain of set theory, given that a matrix overlord is messing with my mind and won’t allow me to do logic properly in that domain”?

      That would be *weird*, but it seems *coherent* to entertain it, though I don’t know whether it’s possibly *rational* for her to believe it. Still, whether it’s rational for *her* to believe it isn’t the point, it seems to me; *I* would be inclined to think she would be holding her belief irrationally if the matrix overlord is in fact messing with her mind in that way. I guess you would disagree. But would you allow that she’s a-rational in that particular domain?
      More generally, do you allow for a-rationality with respect to specific domains, or even single propositions?
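      (A side note on the set theory, in case it helps – this is just the standard background fact, stated in my own notation: Zorn’s Lemma and the Axiom of Choice are provably equivalent over ZF,

      $$\mathrm{ZF} \vdash \mathrm{AC} \leftrightarrow \mathrm{ZL},$$

      so her belief that they are *not* equivalent would be not merely unjustified but false – which is what makes the matrix-overlord scenario apt.)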

      With regard to the psychopath Jack: he has no properly functioning moral sense, but just as he can learn what is legally obligatory, forbidden, etc., he can also learn what others believe is morally obligatory, forbidden, etc. And since – generally – the human moral sense is reliable (which he can learn on grounds of the general reliability of human faculties), he can in that way come to know some of the moral facts in question. Alas, in his society everyone believes it’s obligatory to kill one’s sister if she consensually has sex out of wedlock. So he comes to believe it’s morally obligatory. Since he cares about his social standing, he kills her so that others don’t think he failed to do his moral duty. Jack believed his action was morally obligatory, and he was justified in believing so. There was no irrationality on his part. I think he behaved immorally, though – which I think implies he’s blameworthy.
