Aliefology and Beliefology

We have talked about weird effects that occur in what I called Social Descriptive Epistemology. I want a more precise description of what it is that we are studying. To that end, in this essay I introduce and explain aliefology and beliefology, and speculate about future avenues for investigation and about how to use that information.

In particular, I go into detail about Individual Descriptive Aliefology, suggest a standpoint and various heuristics, and show how to take advantage of the knowledge that these heuristics exist to create a better map.

 

Aliefology and Beliefology

Phenomenon of Study

             | Actual Beliefs                    | Claimed Beliefs
Social       | Social Descriptive Aliefology     | Social Descriptive Beliefology
Individual   | Individual Descriptive Aliefology | Individual Descriptive Beliefology

 

Aliefology is the study of how an agent or set of agents actually comes to believe X.

Beliefology is the study of how an agent or set of agents claims to have come to the belief that X. Individual and Social refer to the level of analysis.

A lot of the focus on this blog has been on how to build an appropriate map. I think that the study of aliefology and beliefology, at the individual and societal levels, is a crucial lever for creating a map fast.

I think this lever is what both Thiel and Graham found in the context of using it to make money. Thiel talks about secrets: “Back in class one, we identified a very key question that you should continually ask yourself: what important truth do very few people agree with you on? To a first approximation, the correct answer is going to be a secret. Secrets are unpopular or unconventional truths. So if you come up with a good answer, that’s your secret.” And Graham asks “What can’t you say?”

They are clearly circling the same topic here, although they did not divide it as I do.

What is gained by dividing it as I did is that you can start talking about various categories of mismatch more precisely: things that are aliefed at the societal level but claimed not to be believed; things that are claimed to be believed but are not aliefed at the societal level (see the whole of Overcoming Bias); things that are aliefed, and claimed to be believed for a reason, but in fact are aliefed for a totally different reason.

And this enhanced precision is possible even before starting to classify the beliefs and aliefs as true or false.
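
(To make the taxonomy concrete: here is a minimal sketch, in Python, of how one might classify these mismatches once you have, for a given proposition, both the alief and the claimed belief. The field names and classification rules are my own illustrative rendering, not an established formalism.)

    from dataclasses import dataclass

    @dataclass
    class Stance:
        """An agent's (or society's) stance toward a proposition X."""
        aliefs_x: bool             # actually believes X (the alief)
        claims_to_believe_x: bool  # publicly claims to believe X
        actual_reason: str         # what in fact produced the alief
        claimed_reason: str        # the reason given for the claimed belief

    def classify_mismatch(s: Stance) -> str:
        """Return which category of alief/belief mismatch, if any, applies."""
        if s.aliefs_x and not s.claims_to_believe_x:
            return "aliefed, but claimed not to be believed"
        if s.claims_to_believe_x and not s.aliefs_x:
            return "claimed to be believed, but not aliefed"
        if s.aliefs_x and s.claims_to_believe_x and s.actual_reason != s.claimed_reason:
            return "aliefed and claimed, but for a different reason than stated"
        return "no mismatch"

    # A belief held through group membership but justified with evidence-talk:
    print(classify_mismatch(Stance(True, True, "group membership", "rational argument")))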

To demonstrate what this study may look like, in the next section I speculate about Individual Descriptive Aliefology.

 

Individual Descriptive Aliefology

I sense that naive realism is the default human epistemological stance – the folk epistemology, if you will. I suspect this because you need to go up in levels to figure out that naive realism doesn’t obtain, and most of the population is achieving the formal operations level at which this becomes a possibility (and, even then, it may or may not be pursued).

The other reason to sense this is the case is that it took nine Eliezer-essays to explain that the map is not the territory. (That is, of course, the evidence I can easily share; the stronger reason for this idea is the many hours, across years, of observing how people talk.)

Besides the default epistemological stance, I think we can reverse-engineer what heuristics people are using by looking at several areas:

  • Rhetoric
  • Persuasion
  • Fallacies
  • Bias

I consider these to be descriptions of what works, the job of individual aliefology being to systematize them and understand why they work.

Rhetoric and persuasion techniques are codifications of what has historically worked to convince people. Fallacies are patterns of thought leading to incorrect conclusions that occur so frequently that they got codified as such; they are a special case of bias, specific to arguments.

In what follows I describe several possible heuristics that people are plausibly using, as individual aliefology would study them.

  • Futuristic heuristic

“Discount things that sound futuristic”

  • Movie-like heuristic

“Generalize from fictional evidence.”

  • Conspiracy theory heuristic

“Discount things that can be called conspiracies.” (That this heuristic exists is shown by the fact that “conspiracy theory” is used as a term of ridicule and works as a semantic stop-sign.)

  • Authority Heuristic (newspaper, tv, internet)

“Trust authoritative figures/institutions.” (That this is a heuristic is shown by the fact that the social sciences needed to coin credentialism: “reliance upon formal credentials conferred by educational institutions, professional organizations, and other associations as a principal means to determine the qualifications of individuals to perform a range of particular occupational tasks or to make authoritative statements as ‘experts’ in specific subject areas”.)

  • Status quo bias

“Prefer what is the case.”

  • Politicized Heuristic

“Follow the group line.”

  • Sacredness Heuristic

“Do not question what is sacred.”

 

The power of descriptive aliefology

Each of these heuristics can be reversed to tell you where to go looking for wrong beliefs. As Haidt has said (in the case of the sacredness heuristic): “The fundamental rule of political analysis from the point of psychology is, follow the sacredness, and around it is a ring of motivated ignorance.”

Reversing this heuristic tells you where to look to build your map – as in this analysis. Reversing the others should have the same effect.
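
(As a toy illustration – a sketch, with the heuristic phrasings being my own – the reversal can be written down mechanically: each alief-forming heuristic becomes a search directive for where mispriced beliefs may hide.)

    # Each alief-forming heuristic paired with its reversal: a directive
    # for where to go looking for wrong (or neglected) beliefs.
    HEURISTIC_REVERSALS = {
        "Discount things that sound futuristic":
            "Look for true claims dismissed merely for sounding futuristic",
        "Generalize from fictional evidence":
            "Look for beliefs whose main support is movie plots",
        "Discount things that can be called conspiracies":
            "Look for true claims ridiculed as 'conspiracy theories'",
        "Trust authoritative figures/institutions":
            "Look for claims that are wrong despite credentialed backing",
        "Prefer what is the case":
            "Look for alternatives dismissed only for not being the status quo",
        "Follow the group line":
            "Look for claims everyone in a group asserts but no one has checked",
        "Do not question what is sacred":
            "Follow the sacredness; probe the ring of motivated ignorance around it",
    }

    for heuristic, reversal in HEURISTIC_REVERSALS.items():
        print(f"{heuristic}  ->  {reversal}")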


Future:

  • folk psychology, folk physics, experimental philosophy, human intuitive ontology (paper on evolutionary psychology on this)
  • how these heuristics are sound in a certain environment but have been abused and don’t work in our current environment (EEA and Gigerenzer)
  • knowledge goes pop
  • “broscience”
  • social aliefology: http://en.wikipedia.org/wiki/Fad
  • Coolized: “Prefer what is cool”; CrossFit
  • authority heuristics and http://en.wikipedia.org/wiki/Credentialism
  • (Unclear if politicized heuristic fits into individual beliefology or aliefology.)
  • need better names

Societal Map Corruption

In this essay I criticize part of Pinker’s model of societal change. I leave my criticism implicit, since making it explicit – if it is correct – would be potentially (very) problematic.

Prerequisites

We have spoken before about how definitions matter, and how disputes about definitions are about power-hungry strategic actors wanting to influence your map because their power depends on it.

We have also talked about Pinker’s model of societal change:

“Norm cascade” Argument of societal change

  1. The elites favor the position for which there are rational arguments.
  2. The position with rational arguments for it is position Y.
  3. Therefore, the elites favor position Y.
  4. If there is an intense controversy between two opposed sides on a socially fractious issue (drug legalization, abortion, capital punishment, same-sex marriage), what the elites favor becomes the legal norm.
  5. There is an intense controversy between two opposed sides on socially fractious issue X.
  6. Therefore, position Y will become the legal norm. [3,4,5]
  7. If nothing terrible happens, then people and the press get bored.
  8. Nothing terrible happens.
  9. Therefore, people and the press get bored. [7,8]
  10. If people and the press get bored, then politicians realize the issue is no longer a vote-getter.
  11. Therefore, politicians realize the issue is no longer a vote-getter. [9,10]
  12. If politicians realize the issue is no longer a vote-getter, then they will not reopen the issue.
  13. Therefore, politicians will not reopen the issue. [11,12]
  14. If politicians don’t reopen the issue, no one will.
  15. Therefore, the issue is not reopened. [13,14]

Argument for “People accept the status quo as correct.”

  1. People accept the status quo as correct.
  2. Y is the status quo.
  3. Therefore, people accept Y as correct.

Argument for “Extremists cement the majority consensus.”

  1. The “Norm cascade” argument of societal change
  2. Argument for “People accept the status quo as correct.”
  3. If a group goes against the majority consensus and isn’t composed of elites, it will be seen as extremist/radical.
  4. Any group proclaiming ~Y goes against the majority consensus.
  5. Therefore, any group proclaiming ~Y will be seen as extremist/radical.
  6. The majority cements its consensus by opposition to extremist group positions.
  7. Therefore, a group proclaiming ~Y being seen as extremist/radical will further cement the majority consensus.
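
(The chain structure can be made explicit in a small sketch – my own rendering, not Pinker’s. The argument is a sequence of modus ponens steps, so the final conclusion is exactly as strong as its weakest premise; note in particular how everything hangs on premise 1, which I doubt below.)

    def norm_cascade(elites_favor_rational: bool,
                     rational_position_is_y: bool,
                     intense_controversy: bool,
                     nothing_terrible_happens: bool) -> bool:
        """The 'norm cascade' argument as successive modus ponens steps.
        Returns whether the issue ends up settled as Y and never reopened."""
        elites_favor_y = elites_favor_rational and rational_position_is_y  # steps 1-3
        y_becomes_norm = elites_favor_y and intense_controversy            # steps 4-6
        people_get_bored = y_becomes_norm and nothing_terrible_happens     # steps 7-9
        not_a_vote_getter = people_get_bored                               # steps 10-11
        return not_a_vote_getter                                           # steps 12-15

    # Grant every premise except the first, and the cascade never starts:
    print(norm_cascade(False, True, True, True))  # -> False
    print(norm_cascade(True, True, True, True))   # -> True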

Doubts

I’m afraid of this model being mostly (instead of entirely) correct. It is not obvious to me that the elites favor positions for which there are rational arguments and thereby cause them to become the legal norm.

I just think that this is what Pinker – being smart, and with the status quo argument being correct – would rationalize himself into. It is just too clean: if the argument obtains, we do, in some sense, live in the best of all possible worlds.

I fear it is much more complicated than that.

What I do see is pressure groups, societies for the advancement of X, “rights” movements, pride days, manifestos, “for freedom” groups, lobbying, academic “fields”, and censorship. I don’t see any reason to believe that membership in these groups is explained by the consideration of rational arguments and debate; rather, it seems to be a contingent aspect of pre-existing membership.

I also find it a bit too convenient and unrealistic to suppose that all of these groups have had, in the past, their positions rationally assessed by a group of elites which then enforces whatever is rational.

I do see some groups winning over others, and I see it reflected in the general populace pretty fast – things that were okay, normal, cool, not a “thing” become uncool and are met with a learned moral-disgust response.

And I see word wars to promote beliefs that matter to some actors. (This is not a symptom of our time, though. In the Republic, Plato proposed a Noble Lie that would keep the stratification of his idealized society: god had made the souls of rulers with gold, those of helpers with silver, and farmers and craftsmen had iron and brass in their souls. This is why rulers were born to rule, helpers to help, and craftsmen to craft; and a desire to change classes would not even be a possible idea.)

It seems to me that what in fact happens is that some groups unite under the idea that “X is right/correct/moral” (or the converse) because they are part of X or benefit from it in some form. They then overpower groups that don’t and spread their belief through society. If these groups believe this due to membership and not rational argument, then you would expect your various society-given beliefs to be corrupted. (Descartes realized some form of this around 1650. One can only expect that the corruptors have gotten better since then. For a recent example – hopefully harmless to my audience – see the different media portrayals of the 2014 events in Ukraine and realize that, in the best-case scenario, you got the right picture in one of three possible worlds.)


 

 

Future:

  1. astroturfing, crowd manipulation, disinformation, frame building and frame setting, infoganda, media bias, media manipulation, misinformation, perception management, political warfare, psychological manipulation, psychological warfare, mudslinging, sanctioned name-calling

Modelling map aggregation

In Theories as cameras, theories as engines I touched upon the complex relationship between map and territory where “MAP(S) ARE THE TERRITORY BEING MAPPED”.

A trivial version of this mapping is done all the time. It is theory of mind: the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others, and to understand that others have beliefs, desires, and intentions that are different from one’s own. For instance, if you see that someone has hidden a marble in a box, and someone else comes into the room, you don’t expect the newcomer to know that there is a marble in the box. (Except if you are under 4 or autistic, in which case it is totally not trivial.)

A less trivial version, for non-autistic, non-young people, is to model (a) map aggregates and what governs their (b) development and (c) change. In this essay I make a superficial, introductory foray into the topic of mapping domains that are constituted by the aggregation of maps. I consider this foray to fall under what I’ve been calling “Descriptive Epistemology”.

 

Descriptive Epistemology

I talked about social reality in On creating a map amidst strategic deception attempts. Modelling maps is going to be crucial in the context of social reality – that is, in the context where the phenomenon being modelled is the way it is as a result of many people mapping it in a certain way. (Epistemically objective, ontologically subjective facts.)

You can imagine a degenerate case in which the mapping does not alter the territory (like physics). In that case there is no interest in understanding the relationship between mapping and the territory, since there is none; they are as-if independent.

The non-degenerate cases are the interesting ones, and they are what I’ll discuss in what follows.

Descriptive epistemology is, surprisingly, not a term in the literature. (Maybe by another name? Social epistemology might be a proper subset.) I define “descriptive epistemology” as the study of how human agents and human-agent aggregations attempt to gain knowledge, what they take to be knowledge, and how the two previous factors interact with the objects of knowledge. (Whilst social reality is my focus of interest at the moment, inner cognitive reality [phenomenology] is another possible one, and natural reality a third.)

Social descriptive epistemology focuses on the relationship between mapping and the territory being mapped when the territory is affected by the many mappings occurring. (Like a Keynesian beauty contest.)
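
(A minimal simulation of the standard p-beauty contest – a sketch with assumed parameters – shows the signature of such systems: the “correct” answer is itself a function of everyone’s maps of everyone else’s maps, and it moves as players iterate on each other.)

    import random

    def p_beauty_contest(n_players=100, p=2/3, rounds=8):
        """Players guess numbers in [0, 100]; the winner is whoever lands
        closest to p times the average guess. Each round, players adjust
        toward the previous target - which moves the target itself."""
        guesses = [random.uniform(0, 100) for _ in range(n_players)]
        for r in range(rounds):
            target = p * sum(guesses) / len(guesses)
            print(f"round {r}: target = {target:.2f}")
            # Naive best response: move most of the way toward the last target.
            guesses = [g + 0.8 * (target - g) for g in guesses]

    p_beauty_contest()

The quantity being mapped (the winning guess) exists only as an aggregate of the mappings, and mapping it moves it – here, chasing the target drives it toward zero.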

 

Social Descriptive Epistemology

In what follows I’ll analyse some properties of social descriptive epistemological systems. This list is not comprehensive or organized in any particular way. It merely points at how there are weird effects that are the product of interactions that do not occur when doing individual descriptive epistemology.

 

Properties

Social perception spirals

The idea here is that humans base their perception of the value of an object on how others value that object. If everyone is doing this, then you get a positive feedback loop.

I think this partly describes what René Girard was talking about with mimetic theory – that humans imitate each other.

Now, there is an economic equilibrium for the value of things, given by the balance of supply and demand. (Equilibrium being another property of social descriptive epistemological systems.) But in fact, when setting prices, you need to track the valuation that others are giving to the item. This valuation might be off from what the economic equilibrium would be.

This very simple model explains a lot: the tulip mania, and boom-and-bust cycles in general. (It is not animal spirits, although those might play a role; it’s that the valuations at some point stretch the joint credibility of the mappers of maps, and we reach a tipping point that leads to an information cascade / domino effect / snowball effect.)

It explains bank runs and, to some extent, political crises. It also works to explain how concerned you should be about something.
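
(A sketch of the feedback model – the functional form and parameters are illustrative assumptions, not estimates: each period, agents blend a fundamental value with the prevailing imitated valuation, so deviations from fundamentals are multiplied by the social weight every step. They die out when imitation is weak and compound into a bubble when it dominates.)

    def valuation_spiral(fundamental=10.0, social_weight=0.5, shock=1.0, steps=10):
        """Each period, agents blend the fundamental value with the prevailing
        (imitated) valuation. The deviation from fundamentals is multiplied by
        social_weight every step."""
        v = fundamental + shock  # a one-off mispricing
        for t in range(steps):
            v = (1 - social_weight) * fundamental + social_weight * v
            print(f"t={t}: average valuation = {v:.3f}")

    valuation_spiral(social_weight=0.5)   # imitation weak: the shock decays away
    valuation_spiral(social_weight=1.15)  # imitation dominant: the shock compounds into a bubble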

 

Self-fulfilling prophecies 

The self-fulfilling prophecy is “an (initially) false definition of the situation evoking a new behavior which makes the original false conception come true. This specious validity of the self-fulfilling prophecy perpetuates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning.”

I’m quoting from memory, but the original study was something like this: a cohort of children moving up a year was separated into two classes of roughly equal ability. The teachers were told that one particular class had tested as very much above average (it had not). By the end of the year, it did. The other class tested average.

The expectations of the teachers drove behavior. This is why first impressions really matter: whatever is conveyed is going to be taken as who you are, and will from then on shape behavior when you interact with those people.

You can imagine that someone is believed to be an extremely valuable romantic partner, or really good at fundraising. The fact that this is believed will open doors that would not be open otherwise. This in turn will allow them to become very valuable, or really good at fundraising, making the original prophecy true independently of whether it was true at the beginning.

 

Signalling

“Clothes are both ‘functional’ and ‘social’. Functionally, clothes keep us warm and cool and dry, protect us from injury, maintain privacy, and help us carry things. But since they are usually visible to others, clothes also allow us to identify with various groups, to demonstrate our independence and creativity, and to signal our wealth, profession, and social status. The milder the environment, the more we expect the social role of clothes to dominate their functional role. (Of course social roles are also ‘functions’ in a sense; by ‘functional’ I mean serving individual/personal functions.)

Beliefs are also both functional and social. Functionally, beliefs inform us when we choose our actions, given our preferences. But many of our beliefs are also social, in that others see and react to our beliefs. So beliefs can also allow us to identify with groups, to demonstrate our independence and creativity, and to signal our wealth, profession, and social status.”

Scott Alexander has written a beautiful analysis of politicization: “Over the past few days, my friends on Facebook have been making impassioned posts about how it’s obvious there should/shouldn’t be a quarantine, but deluded people on the other side are muddying the issue. The issue has risen to an alarmingly high level of 0.05 #Gamergates, which is my current unit of how much people on social media are concerned about a topic. What’s more, everyone supporting the quarantine has been on the right, and everyone opposing on the left. Weird that so many people suddenly develop strong feelings about a complicated epidemiological issue, which can be exactly predicted by their feelings about everything else.”

Of course, Robin Hanson is probably the go-to source about this.

 

Other, randomly assorted, weird factors 

“In an Abilene paradox a group of people collectively decide on a course of action that is counter to the preferences of many (or all) of the individuals in the group.” People poorly model the preferences of others (that is, their maps of others’ maps are lacking) and thus the group suffers as a whole.

“Groupthink is a psychological phenomenon that occurs within a group of people, in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome.”

Social influence occurs when one’s emotions, opinions, or behaviors are affected by others. Internalization is when people accept a belief or behavior and agree both publicly and privately. (Not necessarily via methods that reliably lead to acceptance of true beliefs.)

Conformity is the act of matching attitudes, beliefs, and behaviors to group norms. (Relevant because group norms will carry – explicit or entailed – epistemological and ontological stances, and beliefs.)

 

Why care?

According to a particular view, the way to control something is to understand it. If this obtains, then it is very likely that the particular things you care about fall within the domain of social reality, as it includes “marriage, property, hiring, firing, war, revolutions, cocktail parties, governments, meetings, unions, parliaments, corporations, laws, restaurants, vacations, lawyers, professors, doctors, medieval knights, and taxes, for example”. Further, even things that don’t fall within that domain and that you care about are probably dependent on stable (or at least dynamically stable) economic and political systems.

Both of those domains fall within the scope of “Maps of map aggregation” and errors here are terribly costly.


 

 

Future

 

 

 

 

Theories as cameras, theories as engines

At some point all of science was together in something called “natural philosophy”. Francis Bacon was the first to attempt to partition the sciences. Nowadays disciplines are partitioned more or less neatly. (Cognitive science, being new, is pretty haphazard.) And there is one big demarcation I wish to touch upon: between soft and hard sciences, or social and natural sciences.

I have frequently observed the curious fact that whether one is more connected to the social or the natural sciences is quite predictive of one’s epistemological beliefs: people who participate or do research in hard-science fields (physics, chemistry, biology) seem to be drawn to objectivity, whilst those in soft-science fields (economics, sociology) seem more drawn to non-objectivity.

In what follows I attempt to describe the epistemology of these fields and to take a first pass at determining along which axes the underlying epistemological assumptions differ.

 

Descriptive epistemology of the two scientific cultures

In Designerly Ways of Knowing (1), Nigel Cross talks about three cultures (sciences, humanities, design) and characterises them along the axes of phenomenon of study, appropriate methods, and values. I’m going to reproduce his characterizations, skipping design:

“The phenomenon of study in each culture is

  • in the sciences: the natural world
  • in the humanities: human experience

The appropriate methods in each culture are

  • in the sciences: controlled experiment, classification, analysis
  • in the humanities: analogy, metaphor, evaluation

The values of each culture are

  • in the sciences: objectivity, rationality, neutrality, and a concern for ‘truth’
  • in the humanities: subjectivity, imagination, commitment, and a concern for ‘justice’”

Now, researchers are trained in, and develop, methodologies which entail epistemologies and ontologies. I think I finally understood what this difference is about.

The basic metaphor of knowledge is vision. When you see, you don’t change the object. This metaphor obtains for physics, chemistry, and biology for the most part, and thus you can model these domains as observer-independent – as systems separate from the one modeling them.

In the soft sciences (that is, sciences that study human-made systems, or systems made up of humans) the correct metaphor is touch: to understand, you must manipulate the object. You cannot be independent of the systems you are observing – there is theoretical performativity. Meaning: the observer, and what the observer is doing (theorising), affect the phenomenon.

Your observations (and their being published and spread out) change the system you set out to observe. Your theories in this domain are not a camera, but an engine.

This suggestion is not wholly original. Cybernetics was the study of systems, and second-order cybernetics the study of systems that includes those constructing and studying systems. In what follows I tease out the generativity of this particular viewpoint to explain the curious fact pointed at in the introduction.

MAP-MAKING WITH CAMERAS

MODERN WORLDVIEW

Naive objectivism. The received worldview, the natural stance (my map is the territory).

3RD PERSON

The observer sits outside of the system he is observing. A God’s-eye view.

OBJECTIVITY IS POSSIBLE

What is being described is independent and not affected by the description or the descriptor.

MAP AND TERRITORY: SIMPLE RELATIONSHIP
SEPARATED

Representationalism/Realism: One level up, my map is not the territory and the territory informs my map but there is no bidirectional causality.

METAPHOR: VISION

“The KNOWING IS SEEING conceptual metaphor allows us to understand the abstract domain of knowledge by means of the concrete domain of sight. This is a metaphor with a clear experiential basis grounded in the fact that in early childhood human beings normally receive cognitive input by seeing. Nevertheless, whereas in the first years of one’s life perception and cognition are conceived as together (or conflated in terms of JOHNSON 1997), due to the fact that there is a deep basic correlation between the intellectual input and vision, afterwards these two domains separate from each other («deconflation» in JOHNSON’s words 1997). This is the reason why we are able to use the metaphor KNOWING IS SEEING meaning just «awareness» and not being linked to vision at all, which may be seen in everyday language expressions like the following ones:

(a) I see what you’re getting at.

(b) His claims aren’t clear.

(c) The passage is opaque.” (2)

TRUTH: POSSIBLE

A great intro.

MAP-MAKING WITH ENGINES

POSTMODERN WORLDVIEW

Objectivity is impossible. Everything that is said is said from someone to someone, from a specific viewpoint, culture, assumptions and so on that cannot be transcended.

2ND PERSON

Participant observation, ethnography. If you want to study the object you must interact.

INTERSUBJECTIVITY IS POSSIBLE

Claims can be made and triangulated from authors in various different positions.

MAP AND TERRITORY: COMPLEX RELATIONSHIP
MAP(S) ARE THE TERRITORY BEING MAPPED

I write about this particular relationship at some length in modelling map aggregation.

2ND AND N-ORDER EFFECTS OF MAPPING

The mapping alters the territory.

http://mitpress.mit.edu/books/engine-not-camera

MAPPING CREATES TERRITORY

“what we conventionally think of a ‘subject’ and ‘object’ are co-arising. Because the mind is embodied and arises out of “an active handling and coping with the world”, then “whatever you call an object … is entirely dependent on this constant sensory motor handling”. As a result an object is not independently ‘out there’, but “arises because of your activity, so, in fact, you and the object are co-emerging, co-arising” (Varela, 1999: 71-72).” http://en.wikipedia.org/wiki/Enactivism

METAPHOR: TOUCH

Core metaphor is “knowing is touching”. To know the object you must interact with it, and your interactions change it.

TRUTH: DISREGARDED

Habermas: “By linking meaning with the acceptability of speech acts, Habermas moves the analysis beyond a narrow focus on the truth-conditional semantics of representation to the social intelligibility of interaction. The complexity of social interaction then allows him to find three basic validity claims potentially at stake in any speech act used for cooperative purposes (i.e., in strong communicative action). His argument relies on three “world relations” that are potentially involved in strongly communicative acts in which a speaker intends to say something to someone about something (TCA1: 275ff). For example, a constative (fact-stating) speech act (a) expresses an inner world (an intention to communicate a belief); (b) establishes a communicative relation with a hearer (and thus relates to a social world, specifically one in which both persons share a piece of information, and know they do); and (c) attempts to represent the external world. This triadic structure suggests that many speech acts, including non-constatives, involve a set of tacit validity claims: the claim that the speech act is sincere (non-deceptive), is socially appropriate or right, and is factually true (or more broadly: representationally adequate). Conversely, speech acts can be criticized for failing on one or more of these scores. Thus fully successful speech acts, insofar as they involve these three world relations, must satisfy the demands connected with these three basic validity claims (sincerity, rightness, and truth) in order to be acceptable.”

  1. Cross, Nigel. “Designerly Ways of Knowing.” Design Studies 3.4 (1982): 221-227.
  2. Ruiz, J. H. (2005). “The Authority is Vision and the Knowledge is a Bounded Region Metaphors in Fairy Tales.” Interlingüística, (16), 569-578.

Future:

Considerations on heuristics for Map-making: Your naive reasoning mechanisms suck   

In this essay I argue for the Deserved Persuasiveness heuristic (or “Why it’s a bad idea to form your own opinions”). I begin by suggesting a model through which to think about beliefs and belief-structures. I then go over the Naturalistic Decision Making data/frame theory of sensemaking. I end by putting the two models together in arguing for the Deserved Persuasiveness heuristic.

Beliefs

You can imagine that your beliefs are like islands set in the sea – islands and archipelagos, some more connected, some less. How deep each land mass goes is how strongly justified those particular beliefs are, and its altitude is how precise they are. Their extension, of course, is the scope of what they explain.

Ideally you want to have a Pangaea-like structure: a model of the world that is a set of true, connected, encompassing, justified beliefs.

But keep in mind that it’s more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course the ultimate truth is both correct and consistent; however, it’s perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that do a decent job of modeling separate aspects of reality, it’s probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job modeling (parts of) reality despite being inconsistent, and we want to keep both of them. Now think about what happens when a similar situation arises in a field – e.g., biology, psychology, your personal life – where evidence is messier than it is in physics.

Given the above, it seems like the correct tradeoff is to go for correctness – for modelling a specific part of reality – before going for propagation and connection. How does one aim at having correct beliefs?

 

The missing piece: Data/frame theory

Naturalistic Decision Making is a field of decision-making analysis that studies expert decision-making in naturalistic settings. Part of what it studies is sensemaking – how people make sense of what they experience.

The data/frame theory is the macrocognitive model of sensemaking that is used in the field. It claims that: “When people try to make sense of events, they begin with some perspective, viewpoint, or framework—however minimal. For now, let’s use a metaphor and call this a frame. We can express frames in various meaningful forms, including stories, maps, organizational diagrams, or scripts, and can use them in subsequent and parallel processes. Even though frames define what count as data, they themselves actually shape the data (for example, a house fire will be perceived differently by the homeowner, the firefighters, and the arson investigators). Furthermore, frames change as we acquire data. In other words, this is a two way street: Frames shape and define the relevant data, and data mandate that frames change in nontrivial ways.” (1)

The most interesting part is this: “Decision makers are sometimes advised that they can reduce the likelihood of a fixation error by avoiding early consideration of a hypothesis. But the Data/Frame Theory regards early consideration to a hypothesis as advantageous and inevitable. Early consideration—the rapid recognition of a frame—permits more efficient information gathering and more specific expectancies that can be violated by anomalies, permitting adjustment and reframing. Jenny Rudolph (2) found that decision makers must be sufficiently committed to a frame in order to be able to test it effectively and learn from its inadequacies—something that’s missing from open-minded and open-ended diagnostic vagabonding.

These observations would suggest that efforts to train decision makers to keep an open mind can be counterproductive (…). We hypothesize that methods designed to prevent premature consideration to a frame will degrade performance under conditions where active attention management is needed (using frames) and where people have difficulty finding useful frames.”

This matches my experience, and I believe it explains hindsight bias. Being open-minded allows your brain to rationalize itself into having known the outcome all along. This is why (written-down) predictions increase your accuracy. You are shocked when it comes out that you were wrong. Your brain cannot construct a story about how you knew it all along, since that would increase the cognitive dissonance. The minimal-cognitive-dissonance explanation is that you were wrong, and this is just accepted and taken as feedback for existing models. On the other hand, if you merely thought about the outcome, it is less painful for your brain to rewrite your autobiography as actually having predicted the outcome that happened.

Taking the data/frame theory seriously entails believing that (a) specific beliefs trump unclear beliefs, and (b) inaccurate beliefs trump no beliefs. This indicates that bad models are better than no models. (3)
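
(A schematic of the data/frame loop as I read Klein et al. – a sketch whose class and function names are mine, not theirs: commit early to a frame, use it to define what counts as an anomaly, and let accumulated violated expectancies – not open-mindedness – force the reframe.)

    class Frame:
        """A minimal frame: a predicted range for incoming observations."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def expects(self, x):
            return self.lo <= x <= self.hi

    def sensemaking(frame, data, anomaly_threshold=3):
        """Data/frame loop: the frame defines which data count as anomalous;
        enough violated expectancies force a nontrivial reframe."""
        anomalies = []
        for x in data:
            if not frame.expects(x):
                anomalies.append(x)
            if len(anomalies) >= anomaly_threshold:
                frame = Frame(min(anomalies), max(anomalies))  # data mandate a new frame
                anomalies = []
        return frame

    # Committing early to a (wrong) frame still lets anomalies correct it:
    final = sensemaking(Frame(0, 10), [2, 5, 40, 43, 47, 45, 44])
    print(final.lo, final.hi)  # the frame has moved to where the data are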

Deserved persuasiveness heuristic

But why choose bad models? The second counter-intuitive claim (after “Do not be open-minded”) is “Don’t reason by yourself – your current reasoning mechanisms (probably) suck”.

One of the “reasonable” ways people are supposed to update their beliefs is by taking arguments into account. This is a terrible method, since the persuasiveness of an argument in a certain field is correlated with 1) the relationship between what the argument claims and the truth (bottlenecked by the listener’s model of reality), 2) the ignorance of the listener about the field, and 3) the persuasive ability of the arguer. Notice how only one of these is related to the actual truth-claims of the argument.

This follows from the reasonable postulates that a) there will exist convincing arguments for the true position, b) the more ignorant the listener about the field, the smaller the barrier for zir to consider an argument convincing, and c) the more persuasive the arguer about the field, the smaller the barrier for zir to make a convincing argument.

If these hold then one should expect a persuasive argument to be true if and only if a) one is knowledgeable about the field, or b) the arguer is not generally persuasive.

This has some interesting corollaries. First, if one is not knowledgeable in the field and the arguer is persuasive and a non-expert, then it may be strictly negative to discuss the matter, since one will end up with higher credence in the arguer’s position even though the process that produced it is only weakly correlated with a truth-outputting process. Second, if one is not knowledgeable in the field, and experts exist, it seems strictly dominant to find the expert consensus and just copy all those beliefs, as the mechanism that produced them likely has a higher correlation with the true position than whatever naive processes one uses.
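
(A toy simulation of the postulates – the functional forms and numbers are assumptions for illustration: the listener is persuaded when perceived argument quality clears a bar, and perceived quality mixes truth, weighted by the listener’s knowledge, with the arguer’s skill, weighted by the listener’s ignorance. As listener knowledge drops, being persuaded stops tracking truth.)

    import random

    def convinced(truth, arguer_skill, listener_knowledge):
        """The listener is persuaded when perceived quality clears a bar.
        Perceived quality mixes truth (weighted by knowledge) with arguer
        skill (weighted by ignorance), plus some noise."""
        quality = (listener_knowledge * truth
                   + (1 - listener_knowledge) * arguer_skill
                   + random.gauss(0, 0.1))
        return quality > 0.5

    def persuasion_tracks_truth(listener_knowledge, trials=20000):
        hits = 0
        for _ in range(trials):
            truth = random.random() < 0.5  # half the claims are true
            skill = random.random()        # arguer skill varies
            if convinced(truth, skill, listener_knowledge) == truth:
                hits += 1
        return hits / trials

    for k in (0.9, 0.5, 0.1):
        print(f"listener knowledge {k}: persuasion tracks truth "
              f"{persuasion_tracks_truth(k):.0%} of the time")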

I think this second corollary is really hard to accept because people conflate their ideas with good ideas, and themselves with “venerable creators of good ideas”. Just because you had a particular idea doesn’t mean it’s a good idea.

Adopt experts’ beliefs. Then start trying to find faults in those beliefs (as per the data/frame theory of sensemaking). Where there are no experts, just go with what resonates, take that as a working hypothesis, and keep going.

(The reason why no one follows this heuristic – as stated – is clear: arguing is not about truth. Then again, some do follow it implicitly, when expert consensus is rebranded as an “Introductory Textbook”.)

(To the question of why I’m not following this heuristic myself and am instead developing my own models, the answer is that I am; it just isn’t shown on the blog yet. Also, my motivation at the moment to put out and stabilize inchoate models is very high, and thus it would be very costly not to do it. It is also a valuable enterprise: looking at existing theories will be devastating for my existing models, which will thus be grokked at a much deeper, more personal level. It’s a tradeoff; if my motivation to stabilize models were lower, I would just read my brains out on introductory textbooks to develop world models.)

  1. Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(5), 88-92.
  2. Rudolph, J. W. (2003). Into the Big Muddy and Out Again. Doctoral dissertation, Boston College. http://escholarship.bc.edu/dissertations/AAI3103269
  3. This is partially what this blog has been about: develop models, then look at the data through those models’ frames with your eyes out for falsification.

 


 

Future
  • Go into why “going with what resonates” is a useful heuristic. Talk about why that is reasonable (Gigerenzer, Gendlin, Taleb, etc.)
  • Address bootstrapping problem: You need to accept not forming your own ideas as an idea by yourself first
  • Gigerenzer heuristic creation (that is in fact what I’m doing with “heuristics for map-making”)
 

The Great Encrypted Broken Telephone Beauty Contest

A favorite quote from here:

 

“Ok, my question might’ve been easy to misunderstand. My point was that it seems to me that you’re not familiar with the general culture in which MacIntyre writes, and so you don’t even get what he’s saying and what narratives he’s responding to. It’s like reading Nietzsche when you don’t know what Christianity is.

So your confusions aren’t about what MacIntyre is in fact saying (some of which I think has merit, some doesn’t), but it just fails to connect at all.

And while I overall like MacIntyre, I’m not enough of a fan to try to bridge that gap for him, and unless I did this full-time for a year or so, I don’t think I could come up with something better than “well, read these dozens of old books that might not seem relevant to you now, and some of which are bad but you won’t understand the later reactions otherwise, and also learn these languages because you can’t translate this stuff”. Which is a horrible answer.

Worse, it doesn’t even tell you why you should care to begin with. I think part of that is that, besides the meta-point that MacIntyre makes about narratives in general, it seems to me that the concrete construction and discourse he uses is deeply *European* and unless you are reasonably familiar with that, it will seem like one theologian advocating Calvinism instead of Lutherism when you’re Shinto and wonder why you should care about Jesus at all. (This is a general problem for non-continental readings of continental philosophy, I think – it’s deeply rooted in European drama. One reason Aristotle is so attractive is that all European drama theory derives from him and even someone as clever as Brecht couldn’t break it, so he’s an obvious attractor. I, and I suspect many continentals, came to philosophy essentially through drama, and that makes communication with outsiders difficult. Not enough shared language and sometimes very different goals.)

So I’ll save that goodwill for some later (and more fruitful) topic, if you don’t mind.

As to MacIntyre’s meta-point of “use the community-negotiated tools and narratives you already have” instead of “look for elegant theories no one actually uses anyway”, well, I *wanted* to write a different explanation of that, but then Vladimir did it already in his comment below, and I couldn’t do a *better* job right now, but he still failed, so…”

It’s not so much eavesdropping on the great conversation as it is playing broken telephone with multiple messages going at once. Each author has a cultural backdrop and a specific understanding of what came before, and you need to model these to understand what they are saying. It’s like an encrypted Keynesian beauty contest turned up to 11.

Given this, you would expect misunderstandings to abound, people to be hopelessly confused, and debates over what was meant in the first place to drown out the original messages.


 

 

Future:

Heuristics for map-making in adversarial settings

In a previous post I argued that one is well advised to expect some entities to have a vested interest in strategically deceiving one’s map-creation efforts. Samo Burja has expressed a similar sentiment here. In this post I suggest 3 classes of heuristics aimed at counteracting these deceptive efforts.

These are heuristics in Simon’s sense (1): they will lead to better results with regard to internal criteria (in this case, map-making) by virtue of being applicable to the structure of the environment. If I was correct in describing the structure of the environment in the previous post, then these heuristics can be expected to be helpful.

I don’t claim these heuristics are original – hell, everything written thus far reads like a collage. They are already in place to some extent, being used by some. What is new is uniting them under this particular framework of “map-making in adversarial settings”. Naming things seems to be powerful; having a community (like LW) reinforce things’ names seems to be powerful; being able to point people to things and treat them explicitly as objects is powerful. I don’t yet understand exactly what is going on there.

 

The tools

Heuristics for question dismissal

The first heuristic is to ask “What will I do with the answer to this question?”. Attention is finite, and the fact that a question has insinuated itself into your attention is a necessary but not a sufficient condition for thinking about it. It is a heuristic for dealing with privileged questions.

Privileged questions come especially from the media, or from topics the media is addressing – what I referred to in the previous post when I said that “There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.” The fact that this heuristic is not in place explains the power of agenda-setting.

As Qiaochu made clear “[Y]ou can apply all of the epistemic rationality in the world to answering a question like “should Congress pass stricter gun control laws?” and never once ask yourself where that question came from and whether there are better questions you could be answering instead.”

There is a second topic in this constellation, concerning the truthfulness of what the media transmits. I don’t want to open that particular can of worms now, but I do want to bring to awareness that if there are 3 sides to a story, and assuming exactly one is truthful, the prior is against the possible world in which your particular side is the truthful one.

 

Heuristics for not engaging

Genetic Heuristic

There is an amazing post on this by Stefan Schubert here.

The key innovation is to overturn the idea that arguments should always be addressed as such, since doing so disregards information – especially information about the argument’s origin. “As mentioned in the first paragraph, those who only use direct arguments against P disregard some information – i.e. the information that Betty has uttered P. It’s a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.”

As Stefan points out, you can imagine that Betty is not reliable with regard to P because a) she is 3 years old, b) we have knowledge of Betty being biased, c) we know that Betty overestimates her knowledge of the topic of P, or d) Betty gets money by making people believe P – or, conversely, that Betty is reliable because she is an expert on P. I investigate the last two cases in what follows.

Deserved Persuasiveness Heuristic

Whether an argument convinces you is a function of 1) how well it meshes with your other beliefs, 2) whether it is true (conditional on your ability to assess truth), 3) your ignorance about the field, and 4) the persuasive ability of the arguer.

If these obtain, then we can draw a “Deserved Persuasiveness” heuristic as follows: (if searching for the truth, then) if your interlocutor is an expert in the topic at hand, and so are you, engage. If neither is, don’t engage; the more persuasive one will just input their bad ideas into the mind of the other. If the interlocutor is an expert and you are not, then just adopt their ideas, since they are very likely much better than yours. (2)
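
(Stated as a decision rule – a sketch; the fourth cell, where you are the expert and they are not, is not spelled out above, so I leave it open.)

    def deserved_persuasiveness(you_expert: bool, them_expert: bool) -> str:
        """The Deserved Persuasiveness heuristic as a decision rule for
        truth-seeking conversations."""
        if you_expert and them_expert:
            return "engage"
        if them_expert:
            return "adopt their ideas: likely much better than yours"
        if not you_expert:
            return "don't engage: the more persuasive one wins, not the truth"
        return "case left open (you are the expert; they are not)"

    print(deserved_persuasiveness(you_expert=False, them_expert=True))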

(The third result – just accepting expert opinions because they come from an expert – sounds terrible in an emotional sense to a lot of people. I wonder if this is because people see their ideas as part of themselves – and their idea-generation processes too – and thus would prefer to have “wrong and mine” ideas over “right and theirs” ideas.

Having said this, you are doing it all the time: physics knowledge doesn’t live in a vacuum but in experts’ minds. They pour it into books and you buy it as the truth coming from the book of truths. The Last Psychiatrist says: “No one thinks a 7th grade textbook is wrong. The results of a study may be questioned, but the Introduction section isn’t. What makes a statement in the Introduction true is that it is in the Introduction.” If anything, this position is already unknowingly adopted. Better to do it knowingly.)

 

Heuristics for seeing beyond words

Incentives heuristic

“Words deceive. Actions speak louder. So study the actions. And also, I would add, study the incentives that produce the actions.” Actions speak louder than words: they reveal aliefs instead of official positions. Incentives show the process by which the aliefs came to be in the first place.

The Incentives heuristic encourages one to ask: “What is incentivising A to utter P?”. Its simplicity hides its power. I maintain that sane use of this heuristic will systematically produce more reliable beliefs about the likelihood of P being the case. Like Fermat, I have truly marvellous ideas about the applicability of this heuristic, which the existing inferential distance does not allow me to convey now.

 

(1) – Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1-20.
(2) – I think my treatment of this here is superficial, compared to how emotionally painful it is to accept. In a further post I’ll argue for it more extensively.

Future
  • Understand the power of naming things
  • http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1691952
  • The “marketplace of ideas” is a rationale for freedom of expression based on an analogy to the economic concept of a free market. The “marketplace of ideas” belief holds that the truth will emerge from the competition of ideas in free, transparent public discourse (http://en.wikipedia.org/wiki/Marketplace_of_ideas)
  • PUAs being banned, the Inquisition burning people, Pinker’s model of societal change, conspiracy theories, defending these people -> I’m possibly going to have ideas that defend or associate with these people; this is problematic.
  • Is most people default epistemology a consensus theory of truth?
  • Aristotle on rhetoric: “There are three bases of persuasion by the spoken word: the character of the speaker, the mood of the audience, and the argument (sound or spurious) of the speech itself. So the student of rhetoric must be able to reason logically, to evaluate character, and to understand the emotions.”
  • The state engaging in various actions to create an image of itself as “a thing” (http://csip.asia/sites/default/files/Li_Tanya_Murray_Beyond_the_state_and_failed_schemes.pdf)
  • “Adversarial strategy seems to be in the same category of information security. It is something you want to have before you need it. Ideally you would never need it, but you likely will. I think “enemy” is the wrong framing, and that “non-aligned strategic players” are a better one. If you believe that (1) there exist players that have power, (2) these players are strategic, (3) these players are misaligned; and (4) that you want to have an understanding of adversarial strategy before you need it; then it follows that you would desire to install adversarial strategy pieces.”