Aliefology and Beliefology

We have talked about weird effects that occur in what I called Social Descriptive Epistemology. I want to get a more precise description of what it is that we are studying. To that purpose, in this essay I introduce and explain aliefology and beliefology, and speculate about future avenues of investigation and how to use that information.

In particular, I go into detail about Individual Descriptive Aliefology, suggest a standpoint and various heuristics, and show how to take advantage of knowing that these heuristics exist to create a better map.

 

Aliefology and Beliefology

Phenomenon of Study                                                                  

              Actual Beliefs                        Claimed Beliefs
Social        Social Descriptive Aliefology         Social Descriptive Beliefology
Individual    Individual Descriptive Aliefology     Individual Descriptive Beliefology

 

Aliefology is the study of how an agent or set of agents actually come to believe X.

Beliefology is the study of how an agent or set of agents claim to have come to the belief that X.  Individual and Social refer to the level of analysis.

A lot of the focus on this blog has been on how to build an appropriate map. I think that the study of aliefology and beliefology, at the individual and societal levels, is a crucial lever for creating a map fast.

I think this lever is what both Thiel and Graham found in the context of pushing the lever to make money.  Thiel talks about secrets: “Back in class one, we identified a very key question that you should continually ask yourself: what important truth do very few people agree with you on? To a first approximation, the correct answer is going to be a secret. Secrets are unpopular or unconventional truths. So if you come up with a good answer, that’s your secret.” and Graham about “What can’t you say?”

They are clearly circling the same topic here, although they did not divide it as I do.

What is gained by dividing it as I did is that you can start talking about various categories of mismatches more precisely. Things that are aliefed at the societal level, and claimed to not be believed. Things that are claimed to be believed and are not aliefed, at the societal level (See the whole of Overcoming Bias). Things that are aliefed, and claimed to be believed for a reason, but in fact are aliefed for a totally different reason.

And this enhanced precision is possible even before starting to classify the beliefs and aliefs as true or false.
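To make the taxonomy concrete, here is a minimal sketch (a toy encoding of my own; all names are illustrative, not part of the framework itself) that classifies the three mismatch categories just listed:

```python
from dataclasses import dataclass

# Illustrative encoding (names are mine): a belief-state tracks what is
# actually aliefed vs. what is claimed, and the reason given vs. the real one.
@dataclass
class BeliefState:
    aliefed: bool            # actually believed (alief)
    claimed: bool            # claimed to be believed
    claimed_reason: str = ""
    actual_reason: str = ""

def mismatch(s: BeliefState) -> str:
    if s.aliefed and not s.claimed:
        return "aliefed but disavowed"
    if s.claimed and not s.aliefed:
        return "professed but not aliefed (cf. Overcoming Bias)"
    if s.aliefed and s.claimed and s.claimed_reason != s.actual_reason:
        return "aliefed and claimed, but for a different reason than stated"
    return "no mismatch"

print(mismatch(BeliefState(aliefed=True, claimed=True,
                           claimed_reason="evidence", actual_reason="group loyalty")))
```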

To demonstrate what this study may look like, in the next section I speculate about Individual Descriptive Aliefology.

 

Individual Descriptive Aliefology

I sense that naive realism is the default human epistemological stance – the folk epistemology, if you will. I suspect this because you need to go up in levels to figure out that naive realism doesn’t obtain, and most of the population is, at best, achieving the formal-operational level at which this becomes a possibility (one that may or may not be pursued).

The other reason to sense this is the case is that it took nine Eliezer-essays to explain that the map is not the territory. (That, of course, is evidence I can easily share; the stronger reason for this idea is the many hours, over years, of observing people talking and writing.)

Besides the default epistemological stance, I think we can reverse-engineer which heuristics people are using by looking at several areas:

  • Rhetorics
  • Persuasion
  • Fallacies
  • Bias

I consider these to be descriptions of what works, with the job of individual aliefology being to systematize them, and understand why they work.

Rhetorics and persuasion techniques are codifications of what has historically worked to convince people. Fallacies are patterns of thought that lead to incorrect conclusions and that occur so frequently that they got codified as such; they are a special case of bias, as they relate to arguments.

In what follows I describe several possible heuristics at work in individual aliefology.

  • Futuristic heuristic

“Discount things that sound futuristic”

  • Movie-like heuristic

Generalizing from fictional evidence

  • Conspiracy theory heuristic

“Discount things that can be called conspiracies.” (That this heuristic exists is shown by the fact that “conspiracy theory” is used as a term of ridicule and works as a semantic stopsign.)

  • Authority Heuristic (newspaper, tv, internet)

“Trust authoritative figures/institutions.” (That this is a heuristic is shown by the fact that the social sciences have needed to coin credentialism: “reliance upon formal credentials conferred by educational institutions, professional organizations, and other associations as a principal means to determine the qualifications of individuals to perform a range of particular occupational tasks or to make authoritative statements as ‘experts’ in specific subject areas”.)

  • Status quo bias

“Prefer what is the case.”

  • Politicized Heuristic

“Follow the group line.”

  • Sacredness Heuristic

“Do not question what is sacred.”

 

The power of descriptive aliefology

Each of these heuristics can be reversed to tell you where to go looking for wrong beliefs. As Haidt has said (in the case of the Sacredness heuristic): “The fundamental rule of political analysis from the point of psychology is, follow the sacredness, and around it is a ring of motivated ignorance.”

Reversing this heuristic tells you where to look to build your map – as in this analysis. Reversing the other ones should lead to the same effect.
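As a toy illustration of what “reversing” could look like in practice (a sketch under my own assumptions; the feature tags are invented for illustration), treat each discount-heuristic as a predicate over a claim’s surface features, and rank claims by how many heuristics would dismiss them:

```python
# Minimal sketch: each heuristic is a predicate over a claim's surface
# features. Reversing a heuristic means treating the features that trigger
# dismissal as markers of possibly under-explored territory.
HEURISTICS = {
    "futuristic": lambda c: "sounds_futuristic" in c["features"],
    "movie-like": lambda c: "resembles_fiction" in c["features"],
    "conspiracy": lambda c: "labeled_conspiracy" in c["features"],
    "sacredness": lambda c: "questions_sacred_value" in c["features"],
}

def dismissed_by(claim):
    """Which heuristics would make a naive reasoner dismiss this claim?"""
    return [name for name, fires in HEURISTICS.items() if fires(claim)]

def worth_investigating(claims):
    """Reversed use: claims dismissed by many heuristics are exactly the
    regions where cheap, systematic map errors may hide."""
    return sorted(claims, key=lambda c: len(dismissed_by(c)), reverse=True)

claim = {"text": "...", "features": {"sounds_futuristic", "labeled_conspiracy"}}
print(dismissed_by(claim))  # ['futuristic', 'conspiracy']
```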


Future:

  • folk psychology, folk physics, experimental philosophy, human intuitive ontology (paper on evolutionary psychology on this)
  • how these heuristics are sound in a certain environment but have been abused and don’t work in our current environment (EEA and Gigerenzer)
  • knowledge goes pop
  • “broscience”
  • social aliefology: http://en.wikipedia.org/wiki/Fad
  • Coolized: “Prefer what is cool”; crossfit
  • authority heuristics and http://en.wikipedia.org/wiki/Credentialism
  • (Unclear if politicized heuristic fits into individual beliefology or aliefology.)
  • need better names

Societal Map Corruption

In this essay I criticize part of Pinker’s model of societal change. I leave my criticism implicit, since making it explicit – if it is correct – would be potentially (very) problematic.

Prerequisites

We have spoken before about how definitions matter, and how disputes about definitions are about power-hungry strategic actors wanting to influence your map because their power depends on it.

We have also talked about Pinker’s model of societal change:

“Norm cascade” Argument of societal change

  1. The elites favor the position for which there are rational arguments.
  2. The position with rational arguments for it is position Y.
  3. Therefore, the elites favor position Y.
  4. If there is an intense controversy between two opposed sides of a socially fractious issue (drug legalization, abortion, capital punishment, same-sex marriage), what the elite favors becomes legal norm.
  5. There is an intense controversy between two opposed sides of socially fractious issue X.
  6. Therefore, position Y will become legal norm [3,4,5]
  7. If nothing terrible happens, then people and press get bored.
  8. Nothing terrible happens.
  9. Therefore, people and press get bored [7,8]
  10. If people and press get bored, then politicians realize issue is no longer a vote-getter
  11. Therefore, politicians realize issue is no longer a vote-getter. [9, 10]
  12. If politicians realize issue is no longer a vote-getter, then politicians will not reopen the issue.
  13. Therefore, politicians will not reopen the issue. [11, 12]
  14. If politicians don’t reopen the issue, no one will.
  15. Therefore, the issue is not reopened.  [13, 14]

Argument for “People accept the status quo as correct.”

  1. People accept the status quo as correct.
  2. Y is the status quo.
  3. Therefore, people accept Y as correct.

Argument for “Extremists cement the majority consensus.”

  1. “Norm cascade” Argument of societal political change
  2. Argument for “People accept the status quo as correct.”
  3. If a group goes against the majority consensus and isn’t composed of elites, it will be seen as extremist/radical.
  4. Any group proclaiming ~Y goes against the majority consensus.
  5. Therefore, any group proclaiming ~Y will be seen as extremist/radical.
  6. The majority cements its consensus by opposition to extremist group positions.
  7. Therefore, the group proclaiming ~Y being seen as extremist/radical will further cement the majority consensus.
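To make the deductive skeleton of these arguments explicit, here is a toy forward-chaining check (the proposition labels are my own shorthand, not Pinker’s). It only verifies that the conclusion follows from the premises; whether the premises actually hold is a separate question, which the Doubts below are about.

```python
# Toy forward-chaining over the norm-cascade argument. Premises and rules are
# hand-labeled propositions; this checks validity given the premises, not the
# truth of the premises themselves.
facts = {"elites_favor_rational", "rational_position_is_Y",
         "intense_controversy", "nothing_terrible_happens"}

rules = [  # (antecedents, consequent)
    ({"elites_favor_rational", "rational_position_is_Y"}, "elites_favor_Y"),
    ({"elites_favor_Y", "intense_controversy"}, "Y_becomes_legal_norm"),
    ({"Y_becomes_legal_norm", "nothing_terrible_happens"}, "people_get_bored"),
    ({"people_get_bored"}, "not_a_vote_getter"),
    ({"not_a_vote_getter"}, "issue_not_reopened"),
]

changed = True
while changed:  # naive forward chaining to a fixed point
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print("issue_not_reopened" in facts)  # True: valid, given the premises
```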

Doubts

I’m afraid of this model being mostly (instead of entirely) correct. It is not obvious to me that the elites favor positions for which there are rational arguments, and thereby cause those positions to become the legal norm.

I just think that this is what Pinker – being smart, and with the status quo argument being correct – would rationalize himself into. It is just too clean: if the argument obtains, we do, in some sense, live in the best of all possible worlds.

I fear it is much more complicated than that.

What I do see is pressure groups, societies for the advancement of, “rights” movements, pride days, manifestos, for-freedom groups, lobbying, academic “fields”, and censorship. I don’t see any reason to believe that membership in these groups is explained by rational argument consideration and debate; rather, it seems to be a contingent aspect of pre-existing membership.

I also find it a bit too convenient and unrealistic to suppose that all of these groups have had, in the past, their positions rationally assessed by a group of elites who then enforce whatever is rational.

I do see some groups winning over others, and I see it reflected in the general populace pretty fast – things that were ok, normal, cool, not a “thing”, become uncool and are met with a learned moral disgust response.

And I see word wars to promote beliefs that matter to some actors. (This is not a syndrome of our time, though. In the Republic, Plato proposed a Noble Lie that would maintain the stratification of his idealized society: god had made the souls of rulers with gold, those of helpers with silver, and farmers and craftsmen had iron and brass in their souls. This is why rulers were born to rule, helpers to help, and craftsmen to craft; and a desire to change classes would not even be a possible idea.)

It seems to me that what is in fact happening is that some groups unite under the idea that “X is right/correct/moral” (or the converse) because they are part of X or benefit from it in some form. They then overpower groups that don’t and spread their belief through society. If these groups believe this due to membership and not rational argument, then you would expect your various society-given beliefs to be corrupted. (Descartes realized some form of this around 1650. One can only expect that the corruptors have gotten better since then. For a recent example – hopefully harmless to my audience – see the different media portrayals of the 2014 events in Ukraine and realize that, in the best case scenario, you got the right picture in one of three possible worlds.)


 

 

Future:

  1. astroturfing, crowd manipulation, disinformation, frame building and frame setting, infoganda, media bias, media manipulation, misinformation, perception management, political warfare, psychological manipulation, psychological warfare, mudslinging, sanctioned name-calling

Theories as cameras, theories as engines

At some point all of science was together in something called “natural philosophy”. Francis Bacon was the first to attempt to partition the sciences. Nowadays disciplines are partitioned more or less neatly. (Cognitive Science, being new, is still pretty haphazard.) And there is one big demarcation I wish to touch upon: between soft and hard sciences, or social and natural sciences.

I have frequently observed the confusing fact that whether one is more connected to the social or the natural sciences is quite predictive of one’s epistemological beliefs: people who participate or do research in hard-science fields (physics, chemistry, biology) seem to be drawn to objectivity, whilst those in soft-science fields (economics, sociology) are more drawn to non-objectivity.

In what follows I attempt to describe the epistemology of the fields and to make a first pass at the axes along which the underlying epistemological assumptions differ.

 

Descriptive epistemology of the 2 scientific cultures

In Designerly Ways of Knowing (1), Nigel Cross talks about three scientific cultures (sciences, humanities, design) and characterises them along the axes of phenomenon of study, appropriate methods, and values. I’m going to reproduce his characterizations, skipping design:

“The phenomenon of study in each culture is

  • in the sciences: the natural world
  • in the humanities: human experience

The appropriate methods in each culture are

  • in the sciences: controlled experiment, classification, analysis
  • in the humanities: analogy, metaphor, evaluation

The values of each culture are

  • in the sciences: objectivity, rationality, neutrality, and a concern for ‘truth’
  • in the humanities: subjectivity, imagination, commitment, and a concern for ‘justice’”

Now, researchers are trained in and develop methodologies, which entail epistemologies and ontologies. I think I finally understood what this difference is about.

The basic metaphor of knowledge is vision. When you see, you don’t change the object. This metaphor obtains for physics, chemistry, and biology for the most part, and thus you can model these domains as observer-independent, as systems that are separate from the one modeling them.

In the soft sciences (that is, sciences that study human-made systems, or systems made up of humans) the correct metaphor is touch: to understand, you must manipulate the object. You cannot be independent of the systems that you are observing – there is theoretical performativity, meaning the observer and what the observer is doing (theorising) affect the phenomenon.

Your observations (and their being published and spread out) change the system you set out to observe. Your theories in this domain are not a camera, but an engine.

This suggestion is not wholly original. Cybernetics was the study of systems, and second-order cybernetics the study of systems that includes the systems doing the constructing and studying. In what follows I tease out the generativity of this particular viewpoint to understand the curious fact pointed at in the introduction.

MAP-MAKING WITH CAMERAS

MODERN WORLDVIEW

Naive objectivism. The received worldview, the natural stance (my map is the territory).

3RD PERSON

Observer sits outside of the system he is observing. God’s-eye view.

OBJECTIVITY IS POSSIBLE

What is being described is independent and not affected by the description or the descriptor.

MAP AND TERRITORY: SIMPLE RELATIONSHIP
SEPARATED

Representationalism/Realism: One level up, my map is not the territory and the territory informs my map but there is no bidirectional causality.

METAPHOR: VISION

“The KNOWING IS SEEING conceptual metaphor allows us to understand the abstract domain of knowledge by means of the concrete domain of sight. This is a metaphor with a clear experiential basis grounded in the fact that in early childhood human beings normally receive cognitive input by seeing. Nevertheless, whereas in the first years of one’s life perception and cognition are conceived as together (or conflated in terms of JOHNSON 1997), due to the fact that there is a deep basic correlation between the intellectual input and vision, afterwards these two domains separate from each other («deconflation» in JOHNSON’s words 1997). This is the reason why we are able to use the metaphor KNOWING IS SEEING meaning just «awareness» and not being linked to vision at all, which may be seen in everyday language expressions like the following ones:

(a) I see what you’re getting at.

(b) His claims aren’t clear.

(c) The passage is opaque.” (2)

TRUTH: POSSIBLE

A great intro.

MAP-MAKING WITH ENGINES

POSTMODERN WORLDVIEW

Objectivity is impossible. Everything that is said is said from someone to someone, from a specific viewpoint, culture, assumptions and so on that cannot be transcended.

2ND PERSON

Participant observation, ethnography. If you want to study the object you must interact.

INTERSUBJECTIVITY IS POSSIBLE

Claims can be made and triangulated from authors in various different positions.

MAP AND TERRITORY: COMPLEX RELATIONSHIP
MAP(S) ARE THE TERRITORY BEING MAPPED

I write about this particular relationship at some length in modelling map aggregation.

2ND AND N-ORDER EFFECTS OF MAPPING

The mapping alters the territory.

http://mitpress.mit.edu/books/engine-not-camera

MAPPING CREATES TERRITORY

“what we conventionally think of as ‘subject’ and ‘object’ are co-arising. Because the mind is embodied and arises out of “an active handling and coping with the world”, then “whatever you call an object … is entirely dependent on this constant sensory motor handling”. As a result an object is not independently ‘out there’, but “arises because of your activity, so, in fact, you and the object are co-emerging, co-arising” (Varela, 1999: 71-72).” http://en.wikipedia.org/wiki/Enactivism

METAPHOR: TOUCH

Core metaphor is “knowing is touching”. To know the object you must interact with it, and your interactions change it.

TRUTH: DISREGARDED

Habermas: “By linking meaning with the acceptability of speech acts, Habermas moves the analysis beyond a narrow focus on the truth-conditional semantics of representation to the social intelligibility of interaction. The complexity of social interaction then allows him to find three basic validity claims potentially at stake in any speech act used for cooperative purposes (i.e., in strong communicative action). His argument relies on three “world relations” that are potentially involved in strongly communicative acts in which a speaker intends to say something to someone about something (TCA1: 275ff). For example, a constative (fact-stating) speech act (a) expresses an inner world (an intention to communicate a belief); (b) establishes a communicative relation with a hearer (and thus relates to a social world, specifically one in which both persons share a piece of information, and know they do); and (c) attempts to represent the external world. This triadic structure suggests that many speech acts, including non-constatives, involve a set of tacit validity claims: the claim that the speech act is sincere (non-deceptive), is socially appropriate or right, and is factually true (or more broadly: representationally adequate). Conversely, speech acts can be criticized for failing on one or more of these scores. Thus fully successful speech acts, insofar as they involve these three world relations, must satisfy the demands connected with these three basic validity claims (sincerity, rightness, and truth) in order to be acceptable.”

  1. Cross, Nigel. “Designerly Ways of Knowing.” Design Studies 3.4 (1982): 221-27.
  2. Ruiz, J. H. (2005). The authority is vision and the knowledge is a bounded region metaphors in fairy tales. Interlingüística, (16), 569-578.

Future:

Considerations on heuristics for Map-making: Your naive reasoning mechanisms suck   

In this essay I argue for the Deserved Persuasiveness heuristic (or “why it’s a bad idea to form your own opinions”). I begin by suggesting a model through which to think about beliefs and belief-structures. I then go over the Naturalistic Decision Making Data/Frame theory of sensemaking. I end by putting the two models together in arguing for the Deserved Persuasiveness heuristic.

Beliefs

You can imagine that your beliefs are like islands set in the sea. Islands and archipelagos, some more connected, some less. How deep each land mass goes is how strongly justified those particular beliefs are, and their altitude is how precise they are. Their extension, of course, is the scope of what they explain.

Ideally you want to have a Pangaea-like structure: a model of the world that is a set of true, connected, encompassing, justified beliefs.

But keep in mind that it’s more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course the ultimate truth is both correct and consistent; however, it’s perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that do a decent job of modeling separate aspects of reality, it’s probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job modeling (parts of) reality despite being inconsistent, and we want to keep both of them. Now think about what happens when a similar situation arises in a field, e.g., biology, psychology, your personal life, where evidence is messier than it is in physics.

Given the above, it seems like the correct tradeoff is to go for correctness – or modelling a specific part of reality – before going for propagation and connection. How does one aim at having correct beliefs?

 

The missing piece: Data/frame theory

Naturalistic Decision Making is a field of decision-making analysis that studies expert decision-making in naturalistic settings. Part of what it studies is sensemaking – how people make sense of what they experience.

The data/frame theory is the macrocognitive model of sensemaking that is used in the field. It claims that: “When people try to make sense of events, they begin with some perspective, viewpoint, or framework—however minimal. For now, let’s use a metaphor and call this a frame. We can express frames in various meaningful forms, including stories, maps, organizational diagrams, or scripts, and can use them in subsequent and parallel processes. Even though frames define what count as data, they themselves actually shape the data (for example, a house fire will be perceived differently by the homeowner, the firefighters, and the arson investigators). Furthermore, frames change as we acquire data. In other words, this is a two way street: Frames shape and define the relevant data, and data mandate that frames change in nontrivial ways.” (1)

The most interesting part is this: “Decision makers are sometimes advised that they can reduce the likelihood of a fixation error by avoiding early consideration of a hypothesis. But the Data/Frame Theory regards early consideration to a hypothesis as advantageous and inevitable. Early consideration—the rapid recognition of a frame—permits more efficient information gathering and more specific expectancies that can be violated by anomalies, permitting adjustment and reframing. Jenny Rudolph (2) found that decision makers must be sufficiently committed to a frame in order to be able to test it effectively and learn from its inadequacies—something that’s missing from open-minded and open-ended diagnostic vagabonding.

These observations would suggest that efforts to train decision makers to keep an open mind can be counterproductive (…). We hypothesize that methods designed to prevent premature consideration to a frame will degrade performance under conditions where active attention management is needed (using frames) and where people have difficulty finding useful frames.”

This matches my experience, and I believe it explains hindsight bias. Being open-minded allows your brain to rationalize itself into having known the outcome all along. This is why (written-down) predictions increase your accuracy. You are shocked when it turns out that you were wrong. Your brain cannot construct a story about how you knew it all along, since that would increase the cognitive dissonance. The minimal-cognitive-dissonance explanation is that you were wrong, and this is just accepted and taken as feedback for existing models. If, on the other hand, you had merely thought about the outcome, it is less painful for your brain to rewrite your autobiography as having actually predicted the outcome that happened.

Taking the data/frame theory seriously entails believing that (a) specific beliefs trump unclear beliefs, and (b) inaccurate beliefs trump no beliefs. This indicates that bad models are better than no models. (3)

Deserved persuasiveness heuristic

But why choose bad models? The second counter-intuitive claim (after “Do not be open-minded”) is “Don’t reason by yourself – your current reasoning mechanisms (probably) suck”.

One of the “reasonable” ways that people are supposed to update their beliefs is by taking arguments into account. This is a terrible method, since the persuasive ability of an argument about a certain field is correlated with 1) the relationship between what the argument claims and the truth (bottlenecked by the listener’s model of reality), 2) the ignorance of the listener about the field, and 3) the persuasive ability of the arguer. Notice how only one of these is related to the actual truth-claims of the argument.

This follows from the reasonable postulates that a) there will exist convincing arguments for the true position, b) the more ignorant the listener about the field, the smaller the barrier for zir to consider an argument convincing, and c) the more persuasive the arguer about the field, the smaller the barrier for zir to make a convincing argument.

If these hold then one should expect a persuasive argument to be true if and only if a) one is knowledgeable about the field, or b) the arguer is not generally persuasive.
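As an illustration of the postulates above, consider this toy simulation (the functional form of “being convinced” is my own assumption, chosen only to respect postulates a-c): for a fully ignorant listener, being persuaded carries essentially no information about truth; for a knowledgeable one, it does.

```python
import random

def convinced(truth, arguer_skill, listener_knowledge):
    """Toy model of the postulates: a knowledgeable listener is mostly moved
    by truth; an ignorant one mostly by the arguer's persuasive skill."""
    signal = (listener_knowledge * (1.0 if truth else 0.0)
              + (1 - listener_knowledge) * arguer_skill)
    return random.random() < signal

def p_true_given_convinced(listener_knowledge, trials=100_000):
    true_and_convinced = convinced_total = 0
    for _ in range(trials):
        truth = random.random() < 0.5   # half the arguments are true
        skill = random.random()         # arguer persuasiveness, uniform
        if convinced(truth, skill, listener_knowledge):
            convinced_total += 1
            true_and_convinced += truth
    return true_and_convinced / convinced_total

for k in (0.0, 0.5, 1.0):
    print(k, round(p_true_given_convinced(k), 2))
# Roughly 0.5 for a fully ignorant listener (persuasion carries no information
# about truth), rising toward 1.0 as knowledge increases.
```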

This has some interesting corollaries. First, if one is not knowledgeable in the field and the arguer is persuasive and a non-expert, then it may be strictly negative to discuss the matter, since one will end up with higher credence in their position, even though the process that took place is weakly correlated with a truth-outputting process. Second, if one is not knowledgeable in the field, and experts exist, it seems strictly dominant to find the expert consensus and just copy all those beliefs, as it is likely that the mechanism that produced those beliefs has a higher correlation with the true position than whatever naive processes one uses.

I think this second corollary is really hard to accept because people conflate their ideas with good ideas, and themselves with the “venerable creators of good ideas”. Just because you had a particular idea doesn’t mean it’s a good idea.

Adopt experts’ beliefs. Then start trying to find faults in those beliefs. (As per the data/frame theory of sensemaking.) Where there are no experts, just go with what resonates, take that as a working hypothesis, and keep going.

(The reason why no one follows this heuristic – as stated – is clear: arguing is not about truth. Then again some do follow it, implicitly, when expert consensus is rebranded as “Introductory Textbook”.)

(To the question of why I’m not following this heuristic and am instead developing my own models, the answer is that I am; it just isn’t shown on the blog yet. Also, my motivation at the moment to put out and stabilize inchoate models is very high, and thus it would be very costly not to do it. It is also a valuable enterprise: looking at existing theories will be devastating for my existing models, and thus they will be grokked at a much deeper, more personal level. It’s a tradeoff; if my motivation to stabilize models were lower, I would just read my brains out on introductory textbooks to develop world models.)

  1. Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. Intelligent Systems, IEEE, 21(5), 88-92.
  2. Rudolph, J. W. (2003). Into the Big Muddy and Out Again. Doctoral dissertation, Boston College. http://escholarship.bc.edu/dissertations/AAI3103269
  3. This is partially what this blog has been about: develop models, look at the data through those models’ frames, with your eyes out for falsification.

 


 

Future
  • Go into why “going with what resonates” is a useful heuristic. Talk about why that is reasonable (Gigerenzer, Gendlin, Taleb, etc.)
  • Address bootstrapping problem: You need to accept not forming your own ideas as an idea by yourself first
  • Gigerenzer heuristic creation (that is in fact what I’m doing with “heuristics for map-making”)
 

Heuristics for map-making in adversarial settings

In a previous post I argued that one is well advised to expect some entities to have a vested interest in strategically deceiving one’s map-creation efforts. Samo Burja has expressed a similar sentiment here. In this post I suggest three classes of heuristics aimed at counteracting these deceptive efforts.

These are heuristics in Simon’s sense (1): they will lead to better results with regard to internal criteria (in this case, map-making) by virtue of being applicable to the structure of the environment. If I was correct in describing the structure of the environment in the previous post, then these heuristics can be expected to be helpful.

I don’t claim these heuristics to be original – hell, everything written thus far reads like a collage. They are already in place to some extent, being used by some. What is new is uniting them under this particular framework of “map-making in adversarial settings”. Naming things seems to be powerful; having a community (like LW) reinforce things’ names seems to be powerful; being able to point people to things and treat them explicitly as objects is powerful. I don’t yet understand exactly what is going on there.

 

The tools

Heuristics for question dismissal

The first heuristic is to ask “What will I do with the answer to this question?”. Attention is finite, and the fact that a question has insinuated itself into your attention is a necessary but not sufficient condition for thinking about it. It is a heuristic for dealing with privileged questions.

These come especially from the media, or from topics that the media is addressing, and are what I referred to in the previous post when I said that “There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.” The fact that this heuristic is not in place explains the power of agenda-setting.

As Qiaochu made clear “[Y]ou can apply all of the epistemic rationality in the world to answering a question like “should Congress pass stricter gun control laws?” and never once ask yourself where that question came from and whether there are better questions you could be answering instead.”

There is a second topic in this constellation, which is about the truthfulness of what is transmitted within what the media transmits. I don’t want to open that particular can of worms now, but I do want to bring to awareness that if there are three sides to a story, and assuming one is truthful, the prior is against the possible world in which your particular side is the truthful one.

 

Heuristics for not engaging

Genetic Heuristic

There is an amazing post on this by Stefan Schubert here.

The key innovation is to overturn the idea that arguments should be addressed as such, because this disregards information, especially about the argument’s origin. “As mentioned in the first paragraph, those who only use direct arguments against P disregard some information – i.e. the information that Betty has uttered P. It’s a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.”

As Stefan points out, you can imagine that Betty is not reliable with regards to P because a) she is 3 years old, b) we have knowledge of Betty being biased, c) we know that Betty overestimates her knowledge about the topic of P, d) Betty gets money by making people believe P; or, conversely, that Betty is reliable because she is an expert on P. I investigate the last two cases in what follows.
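A minimal Bayesian sketch of the point (the numbers are made up): how much Betty’s asserting P should move you depends on how likely she is to assert P when it is true versus when it is false – exactly the origin-information that the genetic heuristic refuses to throw away.

```python
# Toy Bayes: the update from "Betty asserts P" depends on Betty's
# reliability profile, not only on P's content (all numbers invented).
def posterior(prior_p, p_assert_given_true, p_assert_given_false):
    num = p_assert_given_true * prior_p
    return num / (num + p_assert_given_false * (1 - prior_p))

print(posterior(0.5, 0.8, 0.2))   # reliable-expert Betty: 0.80
print(posterior(0.5, 0.5, 0.5))   # 3-year-old Betty: 0.50 (no update)
print(posterior(0.5, 0.9, 0.85))  # paid-to-say-P Betty: ~0.51 (asserts P regardless)
```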

Deserved Persuasiveness Heuristic

Whether an argument convinces you is a function of 1) how well it meshes with your other beliefs, 2) whether it is true (conditional on your ability to assess truth), 3) your ignorance about the field, and 4) the persuasive ability of the arguer.

If these obtain, then we can draw a “Deserved Persuasiveness” heuristic as follows: (if searching for the truth, then) if your interlocutor is an expert in the topic at hand, and so are you, engage. If neither is, don’t engage; the most persuasive one will just input his bad ideas into the mind of the other. If the interlocutor is an expert and you are not, then just adopt their ideas, since they are very likely much better than yours. (2)
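The rule can be written down as a small decision table (a sketch; note that the case where you are the expert and your interlocutor is not isn’t spelled out above, so I mark that branch as my own guess):

```python
def deserved_persuasiveness_action(you_expert: bool, they_expert: bool) -> str:
    """Toy encoding of the engagement rule above; the labels are mine."""
    if you_expert and they_expert:
        return "engage"             # both can actually evaluate the arguments
    if they_expert:
        return "adopt their ideas"  # their belief-producing process likely beats yours
    if you_expert:
        return "engage cautiously"  # not specified in the text; my own guess
    return "don't engage"           # persuasion would just transfer bad ideas

print(deserved_persuasiveness_action(you_expert=False, they_expert=True))
# -> adopt their ideas
```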

(The third result – just accepting expert opinions because they come from an expert – sounds terrible in an emotional sense for a lot of people. I wonder if this is because people see their ideas as part of themselves, and their idea-generation processes as well, and thus would prefer to have “wrong and mine” ideas over “right and theirs” ideas.

Having said this, you are doing it all the time: physics knowledge doesn’t live in a vacuum but in experts’ minds. They pour it into books and you buy it as the truth coming from the book of truths. The Last Psychiatrist says: “No one thinks a 7th grade textbook is wrong. The results of a study may be questioned, but the Introduction section isn’t. What makes a statement in the Introduction true is that it is in the Introduction.” If anything, this position is already unknowingly adopted. Better to do it knowingly.)

 

Heuristics for seeing beyond words

Incentives heuristic

“Words deceive. Actions speak louder. So study the actions. And also, I would add, study the incentives that produce the actions.” Actions speak louder than words; they reveal aliefs instead of official positions. Incentives show the process by which the aliefs came to be in the first place.

The Incentives heuristic encourages one to ask: “What is incentivising A to utter P?”. Its simplicity hides its power. I maintain that sane use of this heuristic will systematically produce more reliable beliefs about the likelihood of P being the case. Like Fermat, I have truly marvellous ideas about the applicability of this heuristic, which the existing inferential distance doesn’t allow me to convey now.

 

(1) – Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1-20.
(2) – I think my treatment of this here is superficial compared to how emotionally painful it is to accept. In a further post I’ll argue for it more extensively.

Future
  • Understand the power of naming things
  • http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1691952
  • The “marketplace of ideas” is a rationale for freedom of expression based on an analogy to the economic concept of a free market. The “marketplace of ideas” belief holds that the truth will emerge from the competition of ideas in free, transparent public discourse (http://en.wikipedia.org/wiki/Marketplace_of_ideas)
  • puas being banned, inquisition burning people, Pinker’s model of societal change, conspiracy theory, defending these people -> I’m possibly going to have ideas that defend or associate with these people; this is problematic.
  • Is most people default epistemology a consensus theory of truth?
  • Aristotle on rhetorics “There are three bases of persuasion by the spoken word: the character of the speaker, the mood of the audience, and the argument (sound or spurious) of the speech itself. So the student of rhetoric must be able to reason logically, to evaluate character, and to understand the emotions.”
  • The state engaging in various actions to create an image of itself as “a thing” (http://csip.asia/sites/default/files/Li_Tanya_Murray_Beyond_the_state_and_failed_schemes.pdf)
  • “Adversarial strategy seems to be in the same category of information security. It is something you want to have before you need it. Ideally you would never need it, but you likely will. I think “enemy” is the wrong framing, and that “non-aligned strategic players” are a better one. If you believe that (1) there exist players that have power, (2) these players are strategic, (3) these players are misaligned; and (4) that you want to have an understanding of adversarial strategy before you need it; then it follows that you would desire to install adversarial strategy pieces.”

On creating a map amidst strategic deception attempts

Others are incentivized to change our beliefs to the degree that they are affected by them. This has always been the case, and it has led, through a Darwinian mechanism, to several strategies for doing so. Facing this, the only reasonable response is to evolve counter-strategies.

In what follows I argue for the first two points – why there is deception going on, and how it came about. In a further essay I suggest possible counter-strategies.

 

Beliefs, Power, and Deception

There are ontologically subjective facts which are epistemically objective (1). Allow me to unpack. The fact that Elizabeth Alexandra Mary is the Queen of England is epistemically objective (she is indeed the Queen of England). It is also ontologically subjective in the sense that it is made true solely by a standing agreement amongst epistemic agents. This agreement consists of various propositions which are made true in virtue of the agreement (one of them being “Elizabeth Alexandra Mary is the Queen of England”).

The fact that it is made true in virtue of agreement is where I want to focus. This fact makes it so, dear reader, that if everyone were to wake up tomorrow believing you to be the Queen or King of England, then in fact, you would be.

Now, one might have a preference for being or not being the Queen of England, President of the U.S.A., CEO of Apple, owner of the convenience store next to your place, the smartest man in the world, and so on. This means that shared beliefs (of which your beliefs are, in part, a part) affect others.

If your beliefs affect others and their preferences, then this provides them with an incentive to alter your beliefs – especially beliefs that concern them or their position or interests, insofar as they can alter them. It is unlikely that the owner of the convenience store next to your place can do much.

You can imagine that utter disapproval of the President could lead to a revolution or impeachment. Utter disbelief in the regime’s legitimacy would change the regime. Disbelief in the power of the CEO to call the shots would lead him into paralysis, and so on.

Now this is a very fine and amusing story. But what would you actually expect to see in a world where the story obtains? Where facts are made true by agreement? Where power derives from these agreements?

You would expect to see groups with enough power systematically trying to instill, in those whose beliefs they depend on, the particular opinions and beliefs that keep them in power. You would expect systematic efforts to alter beliefs in a certain direction. You would expect to see propaganda, agenda-setting, disinformation, crowd manipulation, media manipulation, delegitimisation, wars of ideas. These on the “evil” side. (2)

But you would also expect “good” issues (3): modern social justice (4), appeals to “rights” (5), and various movements trying to influence you (feminism, Masculism, A, etc.). The Watson controversy. Inequality talk (6). And in older times, just war.

You would also expect the fact that some “facts are made true by agreement” to be hidden as much as possible, since widespread knowledge of this would make these facts stand on much shakier ground. Better to stand on some firm ground (7). You would further expect systematic deception by those in power. (8) And a lot of people telling you what to do.

These efforts need not be explained by a big conspiracy of the powerful. In fact, no intelligent design is needed: in a parallel to the origin of man, Darwin explained how very complex mechanisms come into place.

 

Mechanism’s Origin

Gaining power is a very strong incentive. Since power is the ability to influence or control the behavior of people, it can help one reach all of one’s instrumental and terminal goals.

The claim is that, for those who wanted to come to or stay in power, there was an active Darwinian mechanism going on, involving variation, selection, and retention. Variation was provided by the various techniques used to attempt to alter the belief landscape. The owners of the conjoined belief landscape provided a selection mechanism: they would either come to agree about the power of those in power and the necessity of them staying in power, or not. And finally, retention: successful strategies would stay and be used.

As the landscape evolves, so do the strategies. Upworthy was selected for virality in a landscape of social-media dominance that was not in place fifteen years ago. Trying to convince the populace that the chief of state ought to be the chief of state because he is a God would work in North Korea, and in Egypt some 2000 years ago, and wouldn’t work anywhere in Europe in the present.
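A toy rendering of this variation/selection/retention loop (everything here is illustrative; the “fitness” function stands in for how well a persuasion strategy performs against the belief landscape it currently faces):

```python
import random

# Toy variation/selection/retention loop over persuasion strategies,
# each represented as a single number in [0, 1] for simplicity.
def evolve_strategies(landscape_fitness, generations=50, pop=20):
    strategies = [random.random() for _ in range(pop)]   # initial variation
    for _ in range(generations):
        scored = sorted(strategies, key=landscape_fitness, reverse=True)
        survivors = scored[: pop // 2]                   # selection by the audience
        mutants = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                   for s in survivors]                   # variation on retained winners
        strategies = survivors + mutants                 # retention
    return max(strategies, key=landscape_fitness)

# As the landscape changes (social media vs. divine-right societies), the
# same loop retains different strategies; here the optimum sits at 0.8.
print(evolve_strategies(lambda s: -(s - 0.8) ** 2))  # converges near 0.8
```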

 

Conclusion

The setting presents itself as such: on the one hand, we want to have an accurate map. On the other, there are entities that wish us to have an inaccurate map. In such a situation one might suggest creating tools to defend ourselves from these pernicious influences.

The contestation can be heard: “Are we not children of the Enlightenment? Was it not one of historicism’s mistakes, which Popper destroyed (9), to analyse the origin of an argument instead of the argument itself? Ought we not to engage with arguments by themselves?”

I answer that we mustn’t. Attention is limited, and if one can dismiss a source quickly, then all the better. “There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.” And this is precisely why we cannot engage with every source with the same seriousness. Engaging with any one source has opportunity costs and thus needs to be established as useful or necessary prior to the engagement itself. In the next post I will describe some heuristics for belief adoption in adversarial settings.

Open Questions
  • How much distortion can we expect from the various entities telling us what to do?
  • Can the convenience store owner do anything to influence our belief? (Maybe acting as a prototypical convenience store owner.)
  • How far can the Darwinian selection mechanism hypothesis apply and what does it predict?
  • Social reality is pretty openly discussed within some academic circles. It might be a case of hiding in plain sight. It might be that it just can’t be hidden. It is notable that all major world religions (except Buddhism?) mesh really well with objectivity and an objective (God-given) world, and not with facts made true by human agreement.
  • Plato, in the Republic, emphasised maintaining social order (classes) by spreading a myth, the Noble Falsehood that different people have different metals in their souls.

(1) -Searle, J. R. (1995). The construction of social reality. Simon and Schuster.
(2) – I.e.: the various sides that have not branded themselves.
(3) – I.e.: various movements that have branded themselves (I.e.: Pro-choice, versus, Pro-Life [No one is anti-anything. See here])
(4) – The current world is unjust and we aim to make it just. “Just” is of course not about human agreement, but beyond humans.
(5) – Rights are normative principles, and in some cases given by nature (whatever that means). A society that doesn’t assert rights is just wrong.
(6) – Inequality being like social justice. The current world is unequal, equal is good, let’s make it equal!
(7) – Religion is as firm as you can get. Who would dare defy a God?
(8) – And, of course, you would expect this to happen in all places where there are power relations. So not only there and in the past, but here and in the present.
(9) – Popper, K. R., Havel, V., & Gombrich, E. H. J. (2011). The open society and its enemies. Routledge.