The Great Encrypted Broken Telephone Beauty Contest

A favorite quote from here:


“Ok, my question might’ve been easy to misunderstand. My point was that it seems to me that you’re not familiar with the general culture in which MacIntyre writes, and so you don’t even get what he’s saying and what narratives he’s responding to. It’s like reading Nietzsche when you don’t know what Christianity is.

So your confusions aren’t about what MacIntyre is in fact saying (some of which I think has merit, some doesn’t), but it just fails to connect at all.

And while I overall like MacIntyre, I’m not enough of a fan to try to bridge that gap for him, and unless I did this full-time for a year or so, I don’t think I could come up with something better than “well, read these dozens of old books that might not seem relevant to you now, and some of which are bad but you won’t understand the later reactions otherwise, and also learn these languages because you can’t translate this stuff”. Which is a horrible answer.

Worse, it doesn’t even tell you why you should care to begin with. I think part of that is that, besides the meta-point that MacIntyre makes about narratives in general, it seems to me that the concrete construction and discourse he uses is deeply *European* and unless you are reasonably familiar with that, it will seem like one theologian advocating Calvinism instead of Lutheranism when you’re Shinto and wonder why you should care about Jesus at all. (This is a general problem for non-continental readings of continental philosophy, I think – it’s deeply rooted in European drama. One reason Aristotle is so attractive is that all European drama theory derives from him and even someone as clever as Brecht couldn’t break it, so he’s an obvious attractor. I, and I suspect many continentals, came to philosophy essentially through drama, and that makes communication with outsiders difficult. Not enough shared language and sometimes very different goals.)

So I’ll save that goodwill for some later (and more fruitful) topic, if you don’t mind.

As to MacIntyre’s meta-point of “use the community-negotiated tools and narratives you already have” instead of “look for elegant theories no one actually uses anyway”, well, I *wanted* to write a different explanation of that, but then Vladimir did it already in his comment below, and I couldn’t do a *better* job right now, but he still failed, so…”

It’s not so much eavesdropping on the great conversation as it is playing broken telephone with multiple messages going on at once. Each author has a cultural backdrop and a specific understanding of what came before, and you need to model these to understand what they are saying. It’s like an encrypted Keynesian beauty contest turned up to 11.

Given these, you would expect misunderstandings to abound, people to be hopelessly confused, and debates over what was meant in the first place to drown out the original messages.




Angst and Lovecraftian monsters

Sandpapering against reality

Lovecraftian horror feels to me very similar to the existential despair that comes after removing a significant belief (Santa, God, Moral realism). I think that despair comes from a quirk in the way belief systems work.

For example: at t0, I adopt proposition A and corollaries A1 to A4. At t1, I remove A. At t2, A is not present, but the corollaries stay, even though they no longer connect to anything.

This leads to weird interactions of the sort of me still wanting to be “good” although “good” doesn’t connect to anything anymore. And those interactions (or their lack) generate despair and Angst: my deeply felt sense of what is needed and my understanding of what is possible for me in reality do not match.
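The t0/t1/t2 quirk can be rendered as a toy sketch (all names here are illustrative, a cartoon of the point rather than a model of real belief revision): removing the root proposition does not remove what depended on it.

```python
# Toy model of the quirk: removing a root belief leaves its
# corollaries dangling. Each entry maps a belief to the belief
# that justifies it (None for a root).
beliefs = {
    "A": None,                          # root proposition, adopted at t0
    "A1": "A", "A2": "A",               # corollaries of A
    "A3": "A", "A4": "A",
}

def remove(belief_system, proposition):
    """t1: remove a proposition but, crucially, not what depended on it."""
    belief_system.pop(proposition, None)
    return belief_system

def dangling(belief_system):
    """t2: corollaries whose justification is no longer in the system."""
    return sorted(b for b, root in belief_system.items()
                  if root is not None and root not in belief_system)

remove(beliefs, "A")
print(dangling(beliefs))  # ['A1', 'A2', 'A3', 'A4'] — still held, unconnected
```

The despair, on this picture, lives in the output of `dangling`: beliefs that are still felt as needed but no longer connect to anything.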

And it hurts.

But there is the option of (paradoxically) surrendering to it, to the doom, and being meta-ok with it. “Yes it feels terrible and like everything is crashing, and that is my experience now”, and that is all there is to say. And slowly, or abruptly, one morning the corollaries are gone and so is the despair.

Why is Lovecraftian horror similar? Because it is going from “Humans are important and valuable” and “It is good and important and necessary and needed that humans are important and valuable” to just having the second proposition and having “Humans are not important and not valuable”. And this leads to grinding and sandpapering against reality. And that is painful and despairing and existentially terrifying. At least the first few times.


Flowing with the horrors

The best answer to all this (1) that I have found yet is the radical acceptance expressed by David Chapman here:

“As long as you are resentful about suffering, as long as you think the world should be different, then you are stuck obsessing over how unfair it is, and scheming about how to escape. And that just makes you angry and miserable all over again.

Charnel ground practice means giving up on that cycle. You simply lose all interest in how life ought to be.

As soon as you forget about “ought to be,” you are left with life just as it is: chaos, horror, death and all.

In that, there is absolutely no hope. But there is opportunity.

Garden of horrors

When you accept that it extends to infinity, you realize that the claustrophobic charnel ground—exactly because there is nowhere else—is a land of total openness and freedom.

You can set off in any direction to explore the scenery. The geography is endlessly varied. There are lakes of fire, rivers of poison, and oceans of blood. There are forests of cannibal trees, and of course the Nameless Lurking Evils at the Mountains of Madness.

So the charnel ground is also a horrifying amusement park. There’s lots to see and do—always something new, in fact.

Instead of trying to escape:

  • You could have fun compiling an atlas.
  • You could throw a party. You could invite the zombies. (Just make sure to collect lots of brains first. You wouldn’t want to be a stingy host.)
  • You could write a geeky identification guide to the many species of demons.
  • You could grow a garden of poisonous flowers. You could learn alchemy and refine poisonous herbal extracts into magical potions.
  • You could go talk to the cannibal witches. They’re unusual company. They might eat you, but something else could happen. Romance is possible.

Sooner or later, you’ll die horribly. But you might as well do something interesting in the meantime, not just cower in a corner. Reality is a splatter movie, but it is also an adventure story and a romantic comedy—all at the same time.

Tantra is given to flights of fantasy, because reality is fantastical. Confronted with over-the-top horror in real life, you might as well laugh at the outrageousness of it.”


(1) – I make no claim to understanding “all this”.







1. The theme of being forsaken by gods keeps reappearing. There might be more to it than I anticipated.
2. for more on a specific type of monsters
3. tantra is a possible answer to monsters (read chapman)

Heuristics for map-making in adversarial settings

In a previous post I argued that one is well advised to expect some entities to have a vested interest in strategically deceiving one’s map-creation efforts. Samo Burja has expressed a similar sentiment here. In this post I suggest 3 classes of heuristics aimed at counteracting these deceptive efforts.

These are heuristics in Simon’s sense (1): they will lead to better results with regard to internal criteria (in this case, map-making) by virtue of being applicable to the structure of the environment. If I was correct in describing the structure of the environment in the previous post, then these heuristics can be expected to be helpful.

I don’t claim these heuristics to be original – Hell, everything written thus far reads like a collage. They are already in place to some extent, being used by some. What is new is uniting them under this particular framework of “Map-making in adversarial settings”. Naming things seems to be powerful; having a community (like LW) reinforce things’ names seems to be powerful; being able to point people to things and treat them explicitly as objects is powerful. I don’t yet understand exactly what is going on there.


The tools

Heuristics for question dismissal

The first heuristic is to ask “What will I do with the answer to this question?”. Attention is finite, and the fact that a question has insinuated itself into your attention is a necessary but not sufficient condition for thinking about it. It is a heuristic for dealing with privileged questions.

These come especially from the media, or from topics that the media is addressing, and are what I referred to in the previous post when I said that “There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.” The fact that this heuristic is not in place explains the power of agenda-setting.

As Qiaochu made clear “[Y]ou can apply all of the epistemic rationality in the world to answering a question like “should Congress pass stricter gun control laws?” and never once ask yourself where that question came from and whether there are better questions you could be answering instead.”

There is a second topic in this constellation, which is about the truthfulness of what the media transmits. I don’t want to open that particular can of worms now, but I do want to bring to awareness that if there are 3 sides to a story, and at most one is truthful, the prior is against the possible world in which your particular side is the truthful one.


Heuristics for not engaging

Genetic Heuristic

There is an amazing post on this by Stefan Schubert here.

The key innovation is to overturn the idea that arguments should be addressed as such because this disregards information especially about the argument’s origin. “As mentioned in the first paragraph, those who only use direct arguments against P disregard some information – i.e. the information that Betty has uttered P. It’s a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.”

As Stefan points out, you can imagine that Betty is not reliable with regards to P because a) she is 3 years old, b) we have knowledge of Betty being biased, c) we know that Betty overestimates her knowledge about the topic of P, d) Betty gets money by making people believe P; or, conversely, that Betty is reliable because she is an expert at P. I investigate the last two cases in what follows.

Deserved Persuasiveness Heuristic

Whether an argument convinces you is a function of 1) how well it meshes with your other beliefs, 2) whether it is true (conditional on your ability to assess truth), 3) your ignorance about the field, and 4) the persuasive ability of the arguer.

If these obtain then we can draw a “Deserved Persuasiveness” heuristic that goes as follows: (if searching for the truth, then) if your interlocutor is an expert in the topic at hand, and so are you, engage. If neither is an expert, don’t engage; the most persuasive one will just input their bad ideas into the mind of the other. If the interlocutor is an expert and you are not, then just adopt their ideas, since they are very likely much better than yours. (2)

(The third result – just accepting expert opinions because they come from an expert – sounds terrible in an emotional sense to a lot of people. I wonder if this is because people see their ideas, and their idea-generation processes, as part of themselves, and thus would prefer to have “Wrong and mine” ideas over “Right and theirs” ideas.

Having said this, you are doing it all the time: physics knowledge doesn’t live in a vacuum but in experts’ minds. They pour it into books and you buy it as the truth coming from the book of truths. The Last Psychiatrist says “No one thinks a 7th grade textbook is wrong. The results of a study may be questioned, but the Introduction section isn’t. What makes a statement in the Introduction true is that it is in the Introduction”. If anything, this position is already unknowingly adopted. Better to do it knowingly.)
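The decision rule above can be sketched in a few lines (a toy rendering: the function name and return strings are mine, and the case where you are the expert and your interlocutor is not isn’t spelled out in the text, so I flag it as my own extrapolation):

```python
def deserved_persuasiveness(you_expert: bool, they_expert: bool) -> str:
    """Toy sketch of the Deserved Persuasiveness heuristic.

    Truth-seeking mode only: decide whether to engage with an
    interlocutor's argument based on who is an expert in the topic.
    """
    if you_expert and they_expert:
        return "engage"            # both experts: arguing can track truth
    if they_expert and not you_expert:
        return "adopt their view"  # their ideas likely beat yours
    # Neither an expert, or only you are. The "only you" case is not
    # covered by the text; I assume "don't engage" applies, since
    # persuasion, not truth, would decide the outcome either way.
    return "don't engage"

print(deserved_persuasiveness(False, True))  # -> adopt their view
```

The point of writing it out is to make the asymmetry visible: the heuristic never asks how persuasive the argument felt, only where it came from.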


Heuristics for seeing beyond words

Incentives heuristic

“Words deceive. Actions speak louder. So study the actions. And also, I would add, study the incentives that produce the actions.” Actions speak louder than words: they reveal aliefs instead of official positions. Incentives show the process by which the aliefs came to be in the first place.

The Incentives heuristic encourages one to ask: “What is incentivising A to utter P?”. Its simplicity hides its power. I maintain that sane use of this heuristic will systematically produce more reliable beliefs about the likelihood of P being the case. Like Fermat, I have truly marvellous ideas about the applicability of this heuristic, which the existing inferential distance doesn’t allow me to convey now.


(1) – Simon, H. A. (1990). Invariants of human behavior. Annual review of psychology, 41(1), 1-20.
(2) – I think my treatment of this here is superficial, compared to how emotionally painful it is to accept. In a further post I’ll argue for it more extensively.

  • Understand the power of naming things
  • The “marketplace of ideas” is a rationale for freedom of expression based on an analogy to the economic concept of a free market. The “marketplace of ideas” belief holds that the truth will emerge from the competition of ideas in free, transparent public discourse (
  • PUAs being banned, the Inquisition burning people, Pinker’s model of societal change, conspiracy theory, defending these people. -> I’m possibly going to have ideas that defend or associate with these people; this is problematic.
  • Is most people default epistemology a consensus theory of truth?
  • Aristotle on rhetorics “There are three bases of persuasion by the spoken word: the character of the speaker, the mood of the audience, and the argument (sound or spurious) of the speech itself. So the student of rhetoric must be able to reason logically, to evaluate character, and to understand the emotions.”
  •  The state engaging in various actions to create an image of itself as “a thing” (
  • “Adversarial strategy seems to be in the same category of information security. It is something you want to have before you need it. Ideally you would never need it, but you likely will. I think “enemy” is the wrong framing, and that “non-aligned strategic players” are a better one. If you believe that (1) there exist players that have power, (2) these players are strategic, (3) these players are misaligned; and (4) that you want to have an understanding of adversarial strategy before you need it; then it follows that you would desire to install adversarial strategy pieces.”

Reason eating itself

This is the first of a series of essays on reason (1).  I will open with some preliminaries about Darwinism and Evolutionary Psychology. Then, I will discuss evolutionary psychology approaches to the function of reasoning by introducing the Justificatory and Argumentative theories of reasoning. The essay closes with some considerations about how to move forward from what has been suggested.



Darwinism as an acid burning through everything 

In Darwin’s Dangerous Idea, Dennett talks about the idea of a Universal Acid. “Dennett writes about the fantasy of a “universal acid” as a liquid that is so corrosive that it would eat through anything that it came into contact with, even a potential container. Such a powerful substance would transform everything it was applied to; leaving something very different in its wake. This is where Dennett draws parallels from the universal acid to Darwin’s idea: “it eats through just about every traditional concept, and leaves in its wake a revolutionized world-view, with most of the old landmarks still recognizable, but transformed in fundamental ways.””

Part of the goal of this blog – insofar as the goals of this blog can be stated this early – is to turn everything into a universal acid.

Part of the power of the idea of evolution is that it does act like a universal acid, and I don’t wish to trivialise that. Despite that, it seems to me that when humans are faced with a quirk of humanness (status hierarchies, any of the H&B biases, and so on) one of two broad reactions happens: either refusal, dismissal, or acceptance-and-forgetfulness on one side, or plain acceptance on the other.

Plain acceptance can in turn have two tiers. One is that in which, after encountering the quirk, the epistemic agent goes forward in such a fashion: “Oh, humans are really into status hierarchies!” and then, every time they notice a potential status-hierarchy situation, aims to see with new eyes, to see through the status hierarchy.

The other type of acceptance, deep acceptance, to-the-bones acceptance is when the reaction is something like “Oh my god…  I have had my status hierarchy glasses on my whole life… Everything has been distorted. This changes everything”.

Deep acceptance is why there are several universal acids. Deep acceptance is why there are Lovecraftian Monsters.


Evolutionary Psychology

Here are Tooby and Cosmides on what Evolutionary Psychology is: “The goal of research in evolutionary psychology is to discover and understand the design of the human mind. Evolutionary psychology is an approach to psychology, in which knowledge and principles from evolutionary biology are put to use in research on the structure of the human mind. It is not an area of study, like vision, reasoning, or social behavior. It is a way of thinking about psychology that can be applied to any topic within it.

In this view, the mind is a set of information-processing machines that were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors. This way of thinking about the brain, mind, and behavior is changing how scientists approach old topics, and opening up new ones. This chapter is a primer on the concepts and arguments that animate it.”

Evolutionary psychology connects to the rest of the essay via the notion of adaptive problem: “Adaptive problems have two defining characteristics. First, they are ones that cropped up again and again during the evolutionary history of a species. Second, they are problems whose solution affected the reproduction of individual organisms — however indirect the causal chain may be, and however small the effect on number of offspring produced. This is because differential reproduction (and not survival per se) is the engine that drives natural selection. Consider the fate of a circuit that had the effect, on average, of enhancing the reproductive rate of the organisms that sported it, but shortened their average lifespan in so doing (one that causes mothers to risk death to save their children, for example). If this effect persisted over many generations, then its frequency in the population would increase. In contrast, any circuit whose average effect was to decrease the reproductive rate of the organisms that had it would eventually disappear from the population. Most adaptive problems have to do with how an organism makes its living: what it eats, what eats it, who it mates with, who it socializes with, how it communicates, and so on. The only kind of problems that natural selection can design circuits for solving are adaptive problems.”

Burning through our conception of Reason

The follow-up question is “What function is usually ascribed to reasoning?” “The classical modern view on the topic of reasoning is still deeply Cartesian. It is fundamentally individualistic and internal: through a careful, analytical examination of our beliefs, we are supposed to achieve epistemic improvement and make sounder decisions.” (2)

It is reasonable to expect that function and mechanism are connected.

Argument for Learning about function leads to knowledge about mechanisms

  1. Mechanisms are adjusted to their function
  2. If mechanisms are adjusted to their function, then learning about the function of X helps to learn about the mechanisms of X
  3. Therefore, learning about the function of X helps to learn about the mechanisms of X [1,2]
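The argument is a plain modus ponens; a minimal Lean sketch (identifiers are mine, not the essay’s) makes its shape explicit:

```lean
-- Premise 1 supplies `Adjusted` (mechanisms are adjusted to their
-- function); premise 2 supplies the implication (if so, learning
-- about function helps learn about mechanism); the conclusion
-- follows by modus ponens.
theorem function_to_mechanism (Adjusted Helps : Prop)
    (p1 : Adjusted) (p2 : Adjusted → Helps) : Helps :=
  p2 p1
```

The formal shape also makes the load-bearing premise obvious: everything rests on premise 1, the empirical claim that selection actually adjusts mechanisms to functions.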

In this section I present two theories that take an Evolutionary Psychology perspective to explain the function of reasoning.


Theory 1: Reasons are for social justification

The first theory proposes the general idea that language is unique to humans and created unique adaptive problems. Suddenly one’s mind could be scrutinized by others. This led to the adaptive problem of social justification: the reasons given for our actions have consequences.

(Imagine someone hitting you in the arm and you asking why they did that and their answer being “For fun”, “I dislike your face”, “I tripped and was trying to hold on”, “This is a social experiment”.)

The fact that there was now a problem of social justification, and of human interests diverging, led to incentives that shaped the human psyche into a public self, a private self, and an experiential self. (Similar to Freud’s Superego, Ego, and Id.)


“[T]he biological postulate, which is the idea that the evolution of language created a new and unique adaptive problem for our hominid ancestors, namely the problem of social justification. The problem of social justification is the problem of explaining why you do what you do. To consider why this is a ‘problem’, ask yourself the following question: Would you want everyone to be completely aware of all your thoughts? Or, to put it another way, do you always tell everyone who asks exactly what you are thinking? If your answer is “no” (which is basically everyone’s answer), you have a sense that it is often important to filter your thoughts and offer a socially justifiable narrative that explains your actions.” (3)

There is a reason that Liar Liar (a film on the premise that the main character – a lawyer – has lost his ability to lie) is a comedy. Inability to lie is hilarious because it would destroy your life, and schadenfreude is as big as ever.


Theory 2: Reasoning is for argumentation

Language also led to the problem of how to protect ourselves from liars. (Which is related to the problem above.)

A summary of this line of argument: “Human reasoning is one mechanism of inference among others (for instance, the unconscious inference involved in perception). It is distinct in being a) conscious, b) cross-domain, c) used prominently in human communication. Mercier and Sperber make much of this last aspect, taking it as a huge hint to seek an adaptive explanation in the fashion of evolutionary psychology, which may provide better answers than previous attempts at explanations of the evolution of reasoning.

The paper defends reasoning as serving argumentation, in line with evolutionary theories of communication and signaling. In rich human communication there is little opportunity for “costly signaling”, that is, signals that are taken as honest because too expensive to fake. In other words, it’s easy to lie.

To defend ourselves against liars, we practice “epistemic vigilance“; we check the communications we receive for attributes such as a trustworthy or authoritative source; we also evaluate the coherence of the content. If the message contains symbols that match our existing beliefs, and packages its conclusions as an inference from these beliefs, we are more likely to accept it, and thus our interlocutors have an interest in constructing good arguments. Epistemic vigilance and argumentative reasoning are thus involved in an arms race, which we should expect to result in good argumentative skills.”


What the theories explain

The beauty of these theories is in how much research they can tie together under one theoretical framework. The justification theory unites research (4) on the interpreter function of the left hemisphere, cognitive dissonance, attribution and the self-serving bias, implicit and explicit attitudes, and reason-giving.

It also possibly explains other interesting empirical facts. I’d maintain that when people are intellectualising in therapy and “not really doing therapy” they are sharing their intellectual and official (or S2, in another language) positions and beliefs. And that when they are actually doing therapy, actually focusing (5), they share their S1, ego, aliefs. This makes sense of the fact that so much of what comes through focusing is shameful and disgraceful and guilt- and remorse-ridden. (The key point being that these are all social emotions.) It further explains that so much of the stuff that comes out is surprising to the person verbalising it, and so often met with a reluctant acceptance (“I said it, and I believe it, but it can’t be true…”).

It hints as well at why causal-mechanical explanations are so deeply unsatisfying. Humans reason in a teleological manner (6) and reductive explanations appealing to efficient causes (like causal mechanics) don’t fit. (“Why did you do THAT!?” “Past events made it so that it could not be not done.”)

With regards to the argumentative theory of reasoning it gives an alternative explanation for various of the H&B results:

“Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments. M&S also plead for the “rehabilitation” of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.

If reasoning is a skill evolved for social use, group settings should be particularly conducive to skilled arguing. Research findings in fact show that “truth wins”: once a group participant has a correct solution they will convince others. A group in a debate setting can do better than its best member.

The argumentative theory, Mercier and Sperber argue, accounts nicely for motivated reasoning, on the model that “reasoning anticipates argument”. Such anticipation colors our evaluative attitudes, leading for instance to “polarization” whereby a counter-argument makes us even more strongly believe the original position, or “bolstering” whereby we defend a position more strongly after we have committed to it.

What of all the research suggesting that humans are in fact very poor at logical reasoning? Well, if in fact “we reason in order to argue”, when the subjects are studied in non-argumentative situations this is precisely what we should expect.”

As a further data-point, there is the beautiful finding that people don’t suck at the Wason selection task once you add some content from social relations to the logical setting. (I also seem to recall humans being good at deontic logic to a remarkable level without any sort of training, but can’t find the reference.)


And now, an interval for wild speculation

This is the biggest leap I’m going to take in the whole essay. If the rest already pushed your suspension of disbelief, then I encourage you to skip this section.

“According to the original relational dialectic model, there were many core tensions (opposing values) in any relationship. These were: Autonomy and connectedness: The desire to have ties and connections with others versus the need to separate yourself as a unique individual.” (7)

Now imagine you can’t plug in the culture. You are fundamentally different from the people you interact with, and worse, they are extremely similar to each other. On the one hand you just cannot sacrifice your autonomy of thought and being. On the other you envy and long for what they have: a group, and connections.

Your peculiarity drives you out of the social world and into the intellectual world. And you come up with a beautiful solution: normative systems. Normative systems give you a form of explaining yourself and your actions and why they can’t be any other way, and since the explanation is in place, it allows you to reach out to something that you can use to connect to others over, whilst keeping your autonomy.

This is immensely speculative, but it fits. It answers the confusion that Vladimir expresses at all the discussions of deontology and utilitarianism when no one uses them, ever. This is a really confusing fact. Why are people spending their lives discussing these systems of how to act when they are never used, ever? The fact is that they are both great normative justification systems. (I was surprised to find there is no post on Overcoming Bias about how “Moral Philosophising is not about Morality”. There is, however, a post on what the standard to alief a moral theory is.)

The strategy suggested is ridden with problems. One class of problems is specific to it as a strategy of connection; the others are inherent to normativism. The latter are those that NVC refers to through the idea of jackal language, and what therapies that mark the superego as the major psychological force try to deal with as well (8). I think that this category of problem is what is being reflected in this comment by Scott Alexander about people who come up with perfect plans but just can’t make themselves execute. (9)


What now?

Some intermediate conclusions can be drawn from what has been roamed.

First, a clarification. Despite all that has been said above, I want to emphasise that spandrels and exaptations are both things. Yes, it is likely that reasoning evolved for argumentation and justification; still, it happens to lead to truths. (Which is actually a remarkable fact in need of explanation. Why is it that the logical propositions we come to believe are by and large true? Why do your logical beliefs match the logical facts? Why is logic reliable? Here is an evolutionary explanation, which coheres and fits the spirit of the essay.)

The fact that reason leads to truths makes it immensely useful, and makes it so that it would be a mistake to let go of it. In some way, in eating itself – using it to figure out where it comes from and its limits – reason validates itself. Just don’t take it at its word. (This point is slightly related to one that Eliezer has made, except I think he is wrong: a brain is not reflective, and the burden of proof is on him to exhibit a brain that is self-correcting to any meaningful level, or even an isolated human. What is reflective is the whole of human culture [which is illustrated by how many pieces by others this essay builds on]. I maintain that he made this mistake because he is still speaking from the individualistic, Cartesian worldview that informs H&B and modern reasoning about reasoning and Rational Choice Theory – the focus is always on one individual acting epistemic agent. Having said that, he is meritorious in having, in my opinion by necessity, reached the right answer – creating a community. As further evidence, notice how the “twelve virtues of rationality” are for an epistemic agent to become a perfect Cartesian reasoner, and they don’t mention community in the least.)

Secondly, almost no one is actually reasoning. They are rationalising and using ad-hoc heuristics (evolutionarily adaptive heuristics, according to Gigerenzer). This is not necessarily bad, because of bounded rationality, but at some level it is terrible. I know a very large portion of the readership has just gone “Yeah, of course.”. But integrating this knowledge at a deep level is terrible. Lovecraft-monster-like terrible.

You-should-not-fear-Hitler-but-the-fact-that-he-convinced-all-those-people-and-they-don’t-differ-that-much-from-the-people-around-you-now-like terrible. Our-current-civilizational-equilibrium-is-maybe-not-that-sturdy-and-might-change-at-any-time-like terrible.

Thirdly, and more encouragingly, some practical upshots can be derived. Since reasoning is the best tool available to figure anything out, it is important to understand its limits of applicability, in which situations it ought to work and in which it won’t; and how to engineer those situations: how to craft epistemic communities, as opposed to individuals. (If this point seems obvious, the reader is encouraged to tally how much research there is on individual versus community debiasing, for example.)

I shall explore these practical upshots in future essays.

  1. The way I predict this blog going, these essays will be heavily edited and cross-referenced as my understanding deepens. It starts with a breadth-first search; then, as I go deeper, I upgrade what had been exposed before, or let it hang as a Wittgenstein ladder.
  2. Mercier, H., & Landemore, H. (2012). Reasoning is for arguing: Understanding the successes and failures of deliberation. Political Psychology, 33(2), 243-258.
  3. Here
  4. Here
  5. Here
  6. Kelemen, D., & Rosset, E. (2009). The human function compunction: Teleological explanation in adults. Cognition, 111(1), 138-143; Sehon, S. (2010). Teleological explanation. A Companion to the Philosophy of Action, 121-128.
  7. Here
  8. Wile, D. B. (2002). Collaborative couple therapy. In A. S. Gurman & N. S. Jacobson (Eds.), Clinical handbook of couple therapy (pp. 281–307). New York: Guilford Press.
  9. So apparently CFAR has picked up on something similar and shifted their workshops to be more about S1 and S2 dialogue and debugging. I applaud this effort, but if the rest of the essay is right their efforts will still fail, because they are not focused on creating communities but on acting at an individual level. If Mark is right then CFAR has to become something like the Zen koan school of rationality. They cannot do it for you, but they can poke you in the correct direction.

  • Explain how logic and biology relate; explain how beautifully this all mixes with embodied cognition
  • I think it might be impossible not to lie. Lies seem to be useful. I know Sam Harris has thought on this. On the other hand, my stint with radical honesty was one of the most powerful things I have done, and NVC seems to be very keen on it as well. I think there might be various reasons for lying, one terrible one being superego infiltration, and that is why radical honesty is powerful. Not sure. Need to think more.
    • And of course, even thinking about defending lying comes with huge social costs. The ongoing strategy is to create enough value to absorb those losses, but I might have to think about that as well.
  • I eventually want to zoom in on my criteria for considering theories (right now it is something like “coherence” with existing belief-space, simplicity, ability to explain facts, recurrence across places, lines of evidence, expert support)
  • It might not be terrible that most people are not reasoning, not sure. (societal mechanisms and bottom-up might win out)
  • talk about second and third postulates of Justification Hypothesis
  • Exposing and removing other cartesian views as necessary and possible
  • Reason works because reflexive (you can use reason to reason about itself)

On why speaking to Hedgehogs doesn’t come naturally to me

People are sometimes frustrated by their inability to place me in existing communities. Especially if they are intellectual communities. This makes sense. If they can place me then it is much easier to engage, they can model the parts of my thinking that I haven’t made explicit to be a copy of the general thinking of the community hivemind and it will be a good enough approximation. Unfortunately this cannot happen with my thinking.

The fact that it doesn’t happen leads to a second reason for frustration. The realization that I have read all the same arguments, and agreed with all the same premises as they have agreed with, and yet am not acting in the way that they are. I’m not pledging allegiance to the same causes. Why?

Is it that I lack the ability to take ideas seriously? Certainly that is partially the case. My mind drifts from idea to idea without any sort of “reasonability” barrier, and actually believing those ideas (which would imply acting on them) would be problematic.

Moreover, it certainly is not the case that I’m outside the scope of what Robin Hanson calls Homo Hypocritus. And it certainly is the case that I have a standing problem that blocks me from feeling like a group-member in general. But I do not think these fully explain what is happening.

I believe that me expressing epistemic agreement and then not acting in the way that others that express agreement act is caused by the fact that I’m thinking from fundamentally orthogonal epistemological presumptions. Being placed in an existing (intellectual) community and acting in a way that reflects how people who have read the arguments shared between that community act requires a certain hedgehoginess (1) of thinking that I lack. This hedgehoginess is the ability to believe what you conclude.

I lack this piece.

I will attempt to explain why I believe I am justified in not believing what I conclude. This whole essay serves as an attempt to facilitate others being charitable to my shortcoming, in the same way Paul Graham tried to make managers charitable to makers’ time. If I succeed I create a bridge across this shortcoming despite the different mental configurations.

In what follows I first present my guess at the problem’s origin. I then present a formal depiction of what I think I lack, and why I’m justified in lacking it. I end with musings about how to overcome this difference.


Problem origin

I believe the frustration arises due to an inference that goes something like this:

  1. If someone takes the ideas A ^ B ^ C seriously, then they will do D.
  2. You don’t do D.
  3. Therefore you don’t take the ideas A ^ B ^ C seriously. [1,2]

I will attempt to argue that the correct inference is:

  1. If someone takes the ideas A ^ B ^ C seriously and they have hedgehog-piece 1, then they will do D.
  2. You don’t do D.
  3. Therefore you don’t take the ideas A ^ B ^ C seriously, or you do not have hedgehog-piece 1. [1,2]

And, of course, I claim that the second disjunct is the true one.
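For the skeptical reader, the amended inference can be checked mechanically. The sketch below (mine, purely illustrative) uses `s`, `p1`, and `d` to stand in for “takes the ideas seriously”, “has hedgehog-piece 1”, and “does D”, and brute-forces all truth assignments:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: a -> b."""
    return (not a) or b

def valid(premises, conclusion) -> bool:
    """An inference is valid iff the conclusion holds in every
    truth assignment that satisfies all the premises."""
    return all(
        conclusion(s, p1, d)
        for s, p1, d in product([True, False], repeat=3)
        if all(p(s, p1, d) for p in premises)
    )

# Premise 1: if (serious and hedgehog-piece 1) then D.  Premise 2: not D.
# Conclusion: not serious, or no hedgehog-piece 1.
amended_is_valid = valid(
    premises=[lambda s, p1, d: implies(s and p1, d),
              lambda s, p1, d: not d],
    conclusion=lambda s, p1, d: (not s) or (not p1),
)
print(amended_is_valid)  # True
```

Note that the stronger conclusion “you don’t take the ideas seriously” alone would not be valid under the amended premise 1, which is the whole point: the observation “you don’t do D” leaves the disjunct open.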


Hedgehog-piece 1, formally

I proceed to present (2) a very intuitive principle of reasoning that I believe others possess and that I lack, which explains the differences we get frustrated over. I then present a paradox that follows from accepting the principle. I end with an explanation of why the paradox arises.

The Principle of Closure

The principle of closure is defined as follows:

“Necessarily, if S has justified beliefs in some propositions and comes to believe that q solely on the basis of competently deducing it from those propositions, while retaining justified beliefs in the propositions throughout the deduction, then S has a justified belief that q.”

My hypothesis is that hedgehog-piece 1 is the principle of closure. I also hypothesize that others have not reasoned themselves to it, but that it is a natural piece of their mental configuration the same way it is a naturally missing piece of my mental configuration. (I’m deliberately leaving these terms fuzzy. A case of choosing roughly correct over precisely wrong.)

In the next section I replicate a paradox that I believe undermines the principle of closure.

The Preface paradox

“It is customary for authors of academic books to include in the preface of their books statements such as “any errors that remain are my sole responsibility.” Occasionally they go further and actually claim there are errors in the books, with statements such as “the errors that are found herein are mine alone.”

(1) Such an author has written a book that contains many assertions, and has factually checked each one carefully, submitted it to reviewers for comment, etc. Thus, he has reason to believe that each assertion he has made is true.

(2) However, he knows, having learned from experience, that, despite his best efforts, there are very likely undetected errors in his book. So he also has good reason to believe that there is at least one assertion in his book that is not true.

Thus, he has good reason, from (1), to rationally believe that each statement in his book is true, while at the same time he has good reason, from (2), to rationally believe that the book contains at least one error. Thus he can rationally believe that the book both does and does not contain at least one error.”
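A back-of-the-envelope calculation (mine, not the quoted author’s) shows how the two beliefs coexist. Assuming, for illustration, that each assertion is independently true with the same probability:

```python
def p_all_true(per_claim_confidence: float, n_claims: int) -> float:
    """Probability that every one of n independent claims is true."""
    return per_claim_confidence ** n_claims

# An author 99% confident in each of 1000 carefully checked assertions
# should still be nearly certain the book contains at least one error:
p_at_least_one_error = 1 - p_all_true(0.99, 1000)
print(round(p_at_least_one_error, 5))  # about 0.99996
```

High confidence in each conjunct, near certainty that the conjunction fails: both attitudes are rational at once.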

Diagnosing the Preface Paradox

In this section I replicate a diagnosis of why the Preface Paradox holds and what it means for the Principle of Closure.

“Consider a very long sequence of competently performed simple single-premise deductions, where the conclusion of one deduction is the premise of the next. Suppose that I am justified in believing the initial premise (to a very high degree), but have no other evidence about the intermediate or final conclusions. Suppose that I come to believe the conclusion (to a very high degree) solely on the basis of going through the long deduction. I should think it likely that I’ve made a mistake somewhere in my reasoning. So it is epistemically irresponsible for me to believe the conclusion. My belief in the conclusion is unjustified.”

“Diagnosis of the preface paradox: Having a justified belief is compatible with there being a small risk that the belief is false. Having a justified belief is incompatible with there being a large risk that the belief is false. Risk can aggregate over deductive inferences. In particular, risk can aggregate over conjunction introduction.”

“(T)here is a natural diagnosis of what’s going on: A thinker’s rational degree of belief drops ever so slightly with each deductive step. Given enough steps, the thinker’s rational degree of belief drops significantly. To put the point more generally, the core insight is simply this: If deduction is a way of extending belief – as the Williamsonian line of thought suggests – then there is some risk in performing any deduction. This risk can aggregate, too.“
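The quoted point about aggregating risk can be made concrete with a toy model (the numbers are mine, purely illustrative): if each deductive step is performed correctly with some probability short of 1, rational confidence in the final conclusion decays geometrically with the length of the chain.

```python
def confidence_after_chain(initial: float, per_step_reliability: float, steps: int) -> float:
    """Rational degree of belief in the conclusion of a chain of
    single-premise deductions, each reliable with the given probability."""
    return initial * per_step_reliability ** steps

# Even a 99.9%-reliable deductive step erodes near-certainty over a long chain:
print(confidence_after_chain(0.99, 0.999, 1))     # barely moved
print(confidence_after_chain(0.99, 0.999, 2000))  # below 0.14
```

This is the Williamsonian line made arithmetic: deduction extends belief, but at a small per-step cost that compounds.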



The acceptance of the preface paradox as a counter-argument to the principle of closure makes it so that I can say of an argument that concludes A: “Yes, I think that argument is valid and sound; but I don’t believe that A”. Understandably this frustrates the people arguing for A.

Alas, I didn’t reason myself into this acceptance. It is a very formal description of what has been characteristic of my reasoning for the longest time. And that is as far as I can go without falling into a narrative fallacy. I give this formal description in an attempt to make my reasoning less opaque, and hopefully less frustrating to others.

This hypothesis explains why consilience is my automatic go-to principle to figure stuff out, and why I’m attracted to many weak arguments over one strong argument. It also explains why I frustrate hedgehogs and vice-versa, which I explore in more detail below. Further, it predicts that posts in this blog will be very hit-or-miss, as I talk to a specific community at a time. So if so far you have had no luck, don’t despair, dear reader!


Why I frustrate hedgehogs and vice-versa

Here is Venkatesh Rao on the upsetting process through which to change Hedgehogs’ and Foxes’ beliefs:

“It is tedious to undermine even though it is lightly held. A strong view requires an opponent to first expertly analyze the entire belief complex and identify its most fundamental elements, and then figure out a falsification that operates within the justification model accepted by the believer. This second point is complex. You cannot undermine a belief except by operating within the justification model the believer uses to interpret it. A strong view can only be undermined by hanging it by its own petard, through local expertise.”

And conversely:

“To get a fox to change his or her mind on the other hand, you have to undermine an individual belief in multiple ways and in multiple places, since chances are, any idea a fox holds is anchored by multiple instances in multiple domains, connected via a web of metaphors, analogies and narratives. To get a fox to change his or her mind in extensive ways, you have to painstakingly undermine every fragmentary belief he or she holds, in multiple domains. There is no core you can attack and undermine. There is not much coherence you can exploit, and few axioms that you can undermine to collapse an entire edifice of beliefs efficiently. Any such collapses you can trigger will tend to be shallow, localized and contained. The fox’s beliefs are strongly held because there is no center, little reliance on foundational beliefs and many anchors. Their thinking is hard to pin down to any one set of axioms, and therefore hard to undermine.”


Interspecies communication

In his depiction of Foxes and Hedgehogs, Rao misses the one dimension that matters to me in the context of this essay: Explicitness. This is because I believe that explicitness is a sine qua non for communication and that reasons matter only insofar as they are communicable.

I divide the challenges of communication by species. The fox faces one challenge, the hedgehog another. Below, I make these challenges explicit.

Challenges to communicable reasons

The challenge for the fox is learning how to introspect into the reasons it is using to decide, and communicate those.

This is a challenge because introspection is difficult:

“This study tested the prediction that introspecting about the reasons for one’s preferences would reduce satisfaction with a consumer choice. Subjects evaluated two types of posters and then chose one to take home. Those instructed to think about their reasons chose a different type of poster than control subjects and, when contacted 3 weeks later, were less satisfied with their choice. When people think about reasons, they appear to focus on attributes of the stimulus that are easy to verbalize and seem like plausible reasons but may not be important causes of their initial evaluations. When these attributes imply a new evaluation of the stimulus, people change their attitudes and base their choices on these new attitudes. Over time, however, people’s initial evaluation of the stimulus seems to return, and they come to regret choices based on the new attitudes.” (3)

But there is some evidence that introspection can be trained. (4) (5) Further, the cost of not introspecting is being wrong. Sometimes (frequently?) you will just believe the wrong things for the wrong reasons. Making reasons explicit helps overcome this.


The challenge for the hedgehog is to make the fundamental beliefs and justification model accepted explicit, and communicate those.

There is not much I can say about this. Hopefully some hedgehog friend can take up the challenge and report back. I understand this is asking someone to see the unseen, the background of whatever they are looking at. I understand this is not trivial.



If I have been successful, this essay will dispel some of the frustration of my epistemic interlocutors. It will have done so by making my thinking explicit, which is what I concluded foxes ought to do in order to improve communication. (Yes, going meta, very LW.)

Hopefully I managed to explain this fundamental cog in how I think and why it might seem that I don’t take ideas seriously, when I do.


  1. Hedgehogs and foxes
  2. This whole section is composed of quotes from Schechter, J. (2013). Rational self-doubt and the failure of closure. Philosophical Studies, 163(2), 429-452; except for the preface paradox, which comes from here
  3. Quoted from here; citation is Wilson, Timothy D., Douglas J. Lisle, Jonathan W. Schooler, Sara D. Hodges, Kristen J. Klaaren, and Suzanne J. LaFleur. “Introspecting about reasons can reduce post-choice satisfaction.” Personality and Social Psychology Bulletin 19 (1993): 331-331
  4. Fox, M. C., Ericsson, K. A., & Best, R. (2011). Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods. Psychological bulletin, 137(2), 316.
  5. Gendlin, E. T. (2012). Focusing-oriented psychotherapy: A manual of the experiential method. Guilford Press.


  • This is why it is difficult for me to talk to hedgehogs and to be convinced by them, and vice-versa (they need to hack away at my beliefs in many places; I need a ton of specialized knowledge).
    • How to fix this?
      • Making things explicit (understanding why you value many perspectives, them being as clear as possible about the local knowledge needed to go for the neck)
  • To what extent does this discussion overlap with the cluster vs sequence thinking?
  • Hedgehog, fox; fragility, robustness and anti-fragility
  • Can I give a strong argument for “many weak arguments” of the form of “If many weak arguments can be generated for one side that cannot be generated from the other, and there is no strong argument either way this provides evidence for this one side” that is acceptable to Hedgehogs?
  • Drill deeper into hedgehoginess/foxiness being a collection of pieces
  • “Why not have two types of reasoning and have them communicate, localizing the wrong pieces and removing them?”
    • Reason works better in a community with each arguing for different sides, not in an individual.
    • Why I believe in epistemic communities over epistemic individuals as the place where reason thrives (Arguments for one side vs the other; not one person painstakingly trying to get better). [Reason is social, specialisation, etc.]

On creating a map amidst strategic deception attempts

Others are incentivized to change our beliefs to the degree they are affected by them. This has always been the case and has led, through a Darwinian mechanism, to several strategies for doing so. Facing this, the only reasonable response is to evolve counter-strategies.

In what follows I argue for the first two points: why there is deception going on, and how it came about. In a further essay I suggest possible counter-strategies.


Beliefs, Power, and Deception

There are ontologically subjective facts which are epistemically objective (1). Allow me to unpack. The fact that Elizabeth Alexandra Mary is the Queen of England is epistemically objective (she is indeed the Queen of England). It is also ontologically subjective in the sense that it is made true solely by a standing agreement amongst epistemic agents. This agreement consists of various propositions which are made true in virtue of the agreement (one of them being “Elizabeth Alexandra Mary is the Queen of England”).

The fact that it is made true in virtue of agreement is where I want to focus. This fact makes it so, dear reader, that if everyone were to wake up tomorrow believing you to be the Queen or King of England, then in fact, you would be.

Now one might have a preference for being or not being the Queen of England, President of the U.S.A., CEO of Apple, owner of the convenience store next to your place, the smartest man in the world, and so on. This means that shared beliefs (of which your beliefs are, in part, a component) affect others.

If your beliefs affect others and their preferences, then this gives them an incentive to alter your beliefs, especially those that concern them or their position or interests, insofar as they can alter them. It is unlikely that the owner of the convenience store next to your place can do much.

You can imagine that utter disapproval of the President could lead to a revolution or impeachment. Utter disbelief in a regime’s legitimacy would change the regime. Disbelief in the power of the CEO to call the shots would leave him paralysed, and so on.

Now this is a very fine and amusing story. But what would you actually expect to see in a world where the story obtains? Where facts are made true by agreement? Where power derives from these agreements?

You would expect to see groups with enough power systematically trying to instill, in those whose beliefs they depend on, the particular opinions and beliefs that keep them in power. You would expect systematic efforts to alter beliefs in a certain direction. You would expect to see propaganda, agenda-setting, disinformation, crowd manipulation, media manipulation, delegitimisation, wars of ideas. These on the “evil” side. (2)

But you would also expect “good” issues (3). Modern social justice (4), appeals to “rights” (5), and various movements trying to influence you (feminism, Masculism, A, etc.). The Watson controversy. Inequality talk (6). And in older times, just war.

You would also expect the fact that some “facts are made true by agreement” to be as hidden as possible, since widespread knowledge of this would make these facts stand on much shakier ground. Better to stand on some firm ground (7). You would further expect systematic deception by those in power. (8) And a lot of people telling you what to do.

These efforts need not be explained by a big conspiracy of the powerful. In fact, no intelligent design is needed. Just as with the origin of man, Darwin explains how very complex mechanisms come into place.


Mechanism’s Origin

Gaining power is a very strong incentive. Since power is the ability to influence or control the behavior of people, it can help one reach all of one’s instrumental and terminal goals.

The claim is that, for those who wanted to come to or stay in power, there was an active Darwinian mechanism at work involving variation, selection and retention. Variation was provided by the various techniques used to attempt to alter the belief landscape. The owners of the shared belief landscape provided the selection mechanism: they would either come to agree about the power of those in power and the necessity of them staying in power, or not. And finally, retention: successful strategies would stay and keep being used.

As the landscape evolves, so do the strategies. Upworthy was selected for virality in a landscape of social media dominance that was not in place fifteen years ago. Trying to convince the populace that the chief of state ought to be the chief of state because he is a God would work in North Korea, and in Egypt some 2000 years ago, but wouldn’t work anywhere in Europe in the present.
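The variation-selection-retention story can be sketched as a toy simulation. Everything below (the strategy names, their effectiveness scores, the update rule) is made up for illustration: each generation, the audience stops responding to the weakest strategy, and a slightly mutated copy of the strongest one enters the pool and sticks around.

```python
import random

def evolve_strategies(effectiveness, generations=50, mutation=0.05, seed=0):
    """Toy variation-selection-retention loop over persuasion strategies.

    effectiveness: dict mapping strategy name -> probability of persuading.
    """
    rng = random.Random(seed)
    pool = dict(effectiveness)
    for gen in range(generations):
        # Selection: the least persuasive strategy dies out.
        worst = min(pool, key=pool.get)
        del pool[worst]
        # Variation + retention: a mutated copy of the most persuasive
        # strategy is added and kept for the next round.
        best = max(pool, key=pool.get)
        drift = rng.uniform(-mutation, mutation)
        pool[f"{best}_v{gen}"] = min(1.0, max(0.0, pool[best] + drift))
    return pool

pool = evolve_strategies({"divine_right": 0.1, "propaganda": 0.5, "virality": 0.7})
# "divine_right" is culled early; descendants of the stronger strategies remain.
```

No designer anywhere in the loop, yet the surviving strategies track whatever the current belief landscape rewards, which is the whole point of the Darwinian parallel.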



The setting, then, presents itself as such. On the one hand, we want to have an accurate map. On the other, there are entities that wish us to have an inaccurate map. In such a situation one might suggest creating tools to defend ourselves from these pernicious influences.

The contestation can be heard: “Are we not children of the Enlightenment? Was it not one of historicism’s mistakes, which Popper demolished (9), to analyse the origin of an argument instead of the argument itself? Ought we not to engage with arguments by themselves?”

I answer that we mustn’t. Attention is limited, and if one can dismiss a source quickly, then all the better. “There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.” And this is precisely why we cannot engage with every source with the same seriousness. Engaging with any one source has opportunity costs, and thus the engagement needs to be established as useful or necessary beforehand. In the next post I will describe some heuristics for belief adoption in adversarial settings.

Open Questions
  • How much distortion can we expect on the various entities telling us what to do?
  • Can the convenience store owner do anything to influence our belief? (Maybe acting as a prototypical convenience store owner.)
  • How far can the Darwinian selection mechanism hypothesis apply and what does it predict?
  • Social reality is pretty openly discussed within some academic circles. It might be a case of hiding in plain sight. It might be that it just can’t be hidden. It is notable that all major world religions (except Buddhism?) mesh really well with objectivity and an objective, God-given world, and not with facts made true by human agreement.
  • Plato in the Republic emphasised maintaining social order (classes) by spreading a myth, the Noble Lie: that different people have different metals in their souls.

(1) Searle, J. R. (1995). The construction of social reality. Simon and Schuster.
(2) I.e.: the various sides that have not been branded by themselves.
(3) I.e.: various movements that have branded themselves (i.e.: Pro-Choice versus Pro-Life [no one is anti-anything; see here]).
(4) The current world is unjust and we aim to make it just. “Just” is of course not about human agreement, but beyond humans.
(5) Rights are normative principles, and in some cases given by nature (whatever that means). A society that doesn’t assert rights is just wrong.
(6) Inequality being like social justice. The current world is unequal, equal is good, let’s make it equal!
(7) Religion is as firm as you can get. Who would dare defy a God?
(8) And, of course, you would expect this to happen in all places where there are power relations. So not only there and in the past, but here and in the present.
(9) Popper, K. R., Havel, V., & Gombrich, E. H. J. (2011). The open society and its enemies. Routledge.

Why I don’t want to make my models explicit

This essay is written in a stream-of-consciousness way. It is me trying to understand a contradiction in my thinking and acting.

The contradiction

  1. I want to improve my models.
  2. I believe it is easier to improve my models if they are made explicit.
  3. Therefore, I want to make my models explicit. [1,2]
  4. I don’t make my models explicit.

So why 4? Why don’t I make my models explicit? Part of the answer that comes up to “Why don’t I want to make my models explicit?” is a reflection of what Mark wrote here:
“Modeling is tricky. Verbalizing is tricky. Reality doesn’t come prepackaged, carved up to correspond perfectly to simple sentences.
When you write things down you can distort the underlying sense of what you meant. When you write things down you can kill the underlying sense of what you meant. Writing things down can be counterproductive. Being “rational” can be counterproductive.”
That may all be true, but that is not the true reason I don’t want to make models explicit.

A possible justification

  1. Intellectual beliefs correspond to the official position (the tribe-sanctioned position) (belief in belief).
  2. Emotional beliefs correspond to the true beliefs (aliefs).

Reason is for rationalisation, arguing, and justification to other tribe members. Thus your official positions had better be socially acceptable. (Else you will be kicked out of the tribe, which means death.)

This model explains why belief reporting (which is the reporting of aliefs, or emotional beliefs) leads to many “ridiculous”, “shameful”, “guilt-ridden”, “wrong” reports. (1)

Thus the reason I don’t want to make models explicit is that I think doing so will get me kicked out of the tribe.

But there is no tribe to get kicked out of. My brain might still feel that we are a group of 150 people and that if I lose them I die alone. But we are not hunter-gatherers anymore. If I want better models I need to make them explicit.

Conclusion, sort of

My goal is to have true beliefs that allow me to push the world where I want it. A lot of my belief updating is bottlenecked on psych issues. This “making my models explicit will get me kicked out of the tribe” is one of those issues. It follows that I need to solve psych issues in order to be able to belief-update.
A step back
But also, there are things that I feel will be killed by being made explicit prematurely, and that I cannot capture. And it certainly is the case that I DO NOT WISH TO BE normative before having good descriptive models. (That leads head first into the valley of bad rationality.) These models I would like to alter whilst not making them explicit. I don’t yet understand how to navigate these considerations.
  • Find beautiful video where people are given a political questionnaire and their answers are swapped around and then they proceed to defend the (false) answers
  • Explore the desire to be descriptive before being normative.
  • Why do I believe most people have not reasoned themselves into the positions they hold?
(1) In Freud’s language, belief reporting allows us to access the id that the societal superego keeps hidden from the ego. In more modern language, it allows us to access the private and experiential self, and not the public self.