People are sometimes frustrated by their inability to place me in existing communities, especially intellectual ones. This makes sense: if they can place me, it is much easier to engage. They can model the parts of my thinking that I haven’t made explicit as a copy of the general thinking of the community hivemind, and that will be a good enough approximation. Unfortunately, this cannot happen with my thinking.
The fact that it doesn’t happen leads to a second reason for frustration: the realization that I have read all the same arguments, and agreed with all the same premises they have agreed with, and yet am not acting the way they are. I’m not pledging allegiance to the same causes. Why?
Is it that I lack the ability to take ideas seriously? Certainly that is partially the case. My mind drifts from idea to idea without any sort of “reasonability” barrier, and actually believing those ideas (which would imply acting on them) would be problematic.
Moreover, it certainly is not the case that I’m outside the scope of what Robin Hanson calls Homo hypocritus. And it certainly is the case that I have a standing problem that blocks me from feeling like a group member in general. But I do not think these fully explain what is happening.
I believe that my expressing epistemic agreement and then not acting the way others who express agreement act is caused by the fact that I’m thinking from fundamentally orthogonal epistemological presumptions. Being placed in an existing (intellectual) community, and acting in a way that reflects how people who have read the arguments shared within that community act, requires a certain hedgehoginess (1) of thinking that I lack. This hedgehoginess is the ability to believe what you conclude.
I lack this piece.
I will attempt to explain why I believe I am justified in not believing what I conclude. This whole essay serves as an attempt to help others be charitable toward my shortcoming, in the same way Paul Graham tried to make managers charitable toward makers’ time. If I succeed, I create a bridge across this shortcoming despite the different mental configurations.
In what follows, I first present my guess at the origin of the problem. I then present a formal depiction of what I think I lack, and why I’m justified in lacking it. I end with musings about how to overcome this difference.
I believe the frustration arises due to an inference that goes something like this:
- If someone takes the ideas A ^ B ^ C seriously, then they will do D.
- You don’t do D.
- Therefore you don’t take the ideas A ^ B ^ C seriously. [1,2]
I will attempt to argue that the correct inference is:
- If someone takes the ideas A ^ B ^ C seriously and they have hedgehog-piece 1, then they will do D.
- You don’t do D.
- Therefore you don’t take the ideas A ^ B ^ C seriously, or you do not have hedgehog-piece 1. [1,2]
And of course, I claim that the second disjunct is the correct one.
Hedgehog-piece 1, formally
I proceed to present (2) a very intuitive principle of reasoning that I believe others possess and that I lack, which explains the differences we get frustrated over. I then present a paradox that follows from accepting the principle. I end with an explanation of why the paradox arises.
The Principle of Closure
The principle of closure is defined as follows:
“Necessarily, if S has justified beliefs in some propositions and comes to believe that q solely on the basis of competently deducing it from those propositions, while retaining justified beliefs in the propositions throughout the deduction, then S has a justified belief that q.”
My hypothesis is that hedgehog-piece 1 is the principle of closure. I also hypothesize that others have not reasoned themselves to it, but that it is a natural piece of their mental configuration the same way it is a naturally missing piece of my mental configuration. (I’m deliberately leaving these terms fuzzy. A case of choosing roughly correct over precisely wrong.)
In the next section I replicate a paradox that I believe undermines the principle of closure.
The Preface paradox
“It is customary for authors of academic books to include in the preface of their books statements such as “any errors that remain are my sole responsibility.” Occasionally they go further and actually claim there are errors in the books, with statements such as “the errors that are found herein are mine alone.”
(1) Such an author has written a book that contains many assertions, and has factually checked each one carefully, submitted it to reviewers for comment, etc. Thus, he has reason to believe that each assertion he has made is true.
(2) However, he knows, having learned from experience, that, despite his best efforts, there are very likely undetected errors in his book. So he also has good reason to believe that there is at least one assertion in his book that is not true.
Thus, he has good reason, from (1), to rationally believe that each statement in his book is true, while at the same time he has good reason, from (2), to rationally believe that the book contains at least one error. Thus he can rationally believe that the book both does and does not contain at least one error.”
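The tension in the paradox can be made quantitative: even when each individual assertion is very probably true, the conjunction of many such assertions is very probably false. A minimal sketch, assuming an illustrative per-claim confidence of 0.99 and 500 independent claims (both numbers are assumptions for illustration, not from the source):

```python
# Preface paradox, numerically: each claim is very probably true,
# yet the book as a whole very probably contains at least one error.
per_claim_confidence = 0.99   # assumed probability each assertion is true
n_claims = 500                # assumed number of assertions in the book

# Probability that *every* claim is true (treating claims as independent)
p_all_true = per_claim_confidence ** n_claims

# Probability that at least one claim is false
p_some_error = 1 - p_all_true

print(f"P(a given claim is true)  = {per_claim_confidence}")
print(f"P(all claims are true)    = {p_all_true:.4f}")
print(f"P(at least one error)     = {p_some_error:.4f}")
```

With these numbers, the author is about 99% confident in any given sentence yet over 99% confident that the book contains an error somewhere, which is exactly the pair of beliefs the preface expresses.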
Diagnosing the Preface Paradox
In this section I replicate a diagnosis of why the Preface Paradox holds and what it means for the Principle of Closure.
“Consider a very long sequence of competently performed simple single-premise deductions, where the conclusion of one deduction is the premise of the next. Suppose that I am justified in believing the initial premise (to a very high degree), but have no other evidence about the intermediate or final conclusions. Suppose that I come to believe the conclusion (to a very high degree) solely on the basis of going through the long deduction. I should think it likely that I’ve made a mistake somewhere in my reasoning. So it is epistemically irresponsible for me to believe the conclusion. My belief in the conclusion is unjustified.”
“Diagnosis of the preface paradox: Having a justified belief is compatible with there being a small risk that the belief is false. Having a justified belief is incompatible with there being a large risk that the belief is false. Risk can aggregate over deductive inferences. In particular, risk can aggregate over conjunction introduction.”
“(T)here is a natural diagnosis of what’s going on: A thinker’s rational degree of belief drops ever so slightly with each deductive step. Given enough steps, the thinker’s rational degree of belief drops significantly. To put the point more generally, the core insight is simply this: If deduction is a way of extending belief – as the Williamsonian line of thought suggests – then there is some risk in performing any deduction. This risk can aggregate, too.“
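The geometric character of this aggregation is easy to see with numbers. A minimal sketch, where the 0.999 figures for the initial premise and per-step reliability are illustrative assumptions, not from the source:

```python
# Risk aggregation over a chain of single-premise deductions:
# if each competently performed step preserves truth with probability r,
# rational degree of belief in the conclusion decays geometrically.
def degree_of_belief(initial: float, per_step_reliability: float, n_steps: int) -> float:
    """Degree of belief in the conclusion of an n-step deduction chain."""
    return initial * per_step_reliability ** n_steps

initial = 0.999      # assumed near-certain belief in the initial premise
reliability = 0.999  # assumed chance of making no mistake in a single step

for n in (1, 10, 100, 1000):
    print(f"after {n:4d} steps: belief = {degree_of_belief(initial, reliability, n):.3f}")
```

Each step shaves off almost nothing, but after a thousand steps the rational degree of belief has fallen from near certainty to roughly one third: a “valid and sound at every step” argument whose conclusion it would be irresponsible to believe.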
Accepting the preface paradox as a counter-argument to the principle of closure makes it so that I can say of an argument that concludes A: “Yes, I think that argument is valid and sound; but I don’t believe that A.” Understandably, this frustrates the people arguing for A.
Alas, I didn’t reason myself into this acceptance. It is a very formal description of what has been characteristic of my reasoning for the longest time. And that is as far as I can say without falling into a narrative fallacy. I give this formal description in an attempt to make my reasoning less opaque, and hopefully less frustrating to others.
This hypothesis explains why consilience is my automatic go-to principle for figuring stuff out, and why I’m attracted to many weak arguments over one strong argument. It also explains why I frustrate hedgehogs and vice-versa, which I explore in more detail below. Further, it predicts that posts on this blog will be very hit-or-miss, as I talk to one specific community at a time. So if so far you have had no luck, don’t despair, dear reader!
Why I frustrate hedgehogs and vice-versa
“It is tedious to undermine even though it is lightly held. A strong view requires an opponent to first expertly analyze the entire belief complex and identify its most fundamental elements, and then figure out a falsification that operates within the justification model accepted by the believer. This second point is complex. You cannot undermine a belief except by operating within the justification model the believer uses to interpret it. A strong view can only be undermined by hanging it by its own petard, through local expertise.”
“To get a fox to change his or her mind on the other hand, you have to undermine an individual belief in multiple ways and in multiple places, since chances are, any idea a fox holds is anchored by multiple instances in multiple domains, connected via a web of metaphors, analogies and narratives. To get a fox to change his or her mind in extensive ways, you have to painstakingly undermine every fragmentary belief he or she holds, in multiple domains. There is no core you can attack and undermine. There is not much coherence you can exploit, and few axioms that you can undermine to collapse an entire edifice of beliefs efficiently. Any such collapses you can trigger will tend to be shallow, localized and contained. The fox’s beliefs are strongly held because there is no center, little reliance on foundational beliefs and many anchors. Their thinking is hard to pin down to any one set of axioms, and therefore hard to undermine.”
In his depiction of Foxes and Hedgehogs, Rao misses the one dimension that matters to me in the context of this essay: Explicitness. This is because I believe that explicitness is a sine qua non for communication and that reasons matter only insofar as they are communicable.
I divide the challenges of communication by species: the fox faces one challenge, the hedgehog another. Below, I make these challenges explicit.
Challenges to communicable reasons
The challenge for the fox is learning how to introspect into the reasons it uses to decide, and to communicate those.
This is a challenge because introspection is difficult:
“This study tested the prediction that introspecting about the reasons for one’s preferences would reduce satisfaction with a consumer choice. Subjects evaluated two types of posters and then chose one to take home. Those instructed to think about their reasons chose a different type of poster than control subjects and, when contacted 3 weeks later, were less satisfied with their choice. When people think about reasons, they appear to focus on attributes of the stimulus that are easy to verbalize and seem like plausible reasons but may not be important causes of their initial evaluations. When these attributes imply a new evaluation of the stimulus, people change their attitudes and base their choices on these new attitudes. Over time, however, people’s initial evaluation of the stimulus seems to return, and they come to regret choices based on the new attitudes.” (3)
But there is some evidence that introspection can be trained. (4) (5) Further, the cost of leaving reasons implicit is being wrong: sometimes (frequently?) you will simply believe the wrong things for the wrong reasons. Making reasons explicit helps overcome this.
The challenge for the hedgehog is to make the fundamental beliefs and justification model accepted explicit, and communicate those.
There is not much I can say about this. Hopefully some hedgehog friend can take up the challenge and report back. I understand this is asking someone to see the unseen, or the background of whatever they are looking at. I understand this is non-trivial.
If I have been successful, this essay will dispel some of the frustration of my epistemic interlocutors. It will have done so by making my thinking explicit, which is what I concluded foxes ought to do in order to improve communication. (Yes, going meta, very LW.)
Hopefully I managed to explain this fundamental cog in how I think and why it might seem that I don’t take ideas seriously, when I do.
- Hedgehogs and foxes
- This whole section is composed of quotes from Schechter, J. (2013). Rational self-doubt and the failure of closure. Philosophical Studies, 163(2), 429-452; except for the preface paradox, which comes from here
- Quoted from here; citation is Wilson, Timothy D., Douglas J. Lisle, Jonathan W. Schooler, Sara D. Hodges, Kristen J. Klaaren, and Suzanne J. LaFleur. “Introspecting about reasons can reduce post-choice satisfaction.” Personality and Social Psychology Bulletin 19 (1993): 331-339.
- Fox, M. C., Ericsson, K. A., & Best, R. (2011). Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods. Psychological bulletin, 137(2), 316.
- Gendlin, E. T. (2012). Focusing-oriented psychotherapy: A manual of the experiential method. Guilford Press.
- This is why it is difficult for me to talk to hedgehogs and to be convinced by them, and vice-versa (they need to hack away at my beliefs in many places; I need a ton of specialized knowledge).
- How to fix this?
- Making things explicit (understanding why you value many perspectives, them being as clear as possible about the local knowledge needed to go for the neck)
- How to fix this?
- To what extent does this discussion overlap with the cluster vs sequence thinking?
- Hedgehog, fox; fragility, robustness and anti-fragility
- Can I give a strong argument for “many weak arguments” (of the form: “if many weak arguments can be generated for one side that cannot be generated for the other, and there is no strong argument either way, this provides evidence for that side”) that is acceptable to hedgehogs?
- Drill deeper into hedgehoginess/foxiness being a collection of pieces
- “Why not have two types of reasoning and have them communicate over localizing the wrong pieces and removing them?”
- Reason works better in a community with each member arguing for a different side, not in an individual.
- Why I believe in epistemic communities over epistemic individuals as the place where reason thrives (Arguments for one side vs the other; not one person painstakingly trying to get better). [Reason is social, specialisation, etc.]