Considerations on heuristics for Map-making: Your naive reasoning mechanisms suck   

In this essay I argue for the Deserved Persuasiveness heuristic (or “Why it’s a bad idea to form your own opinions”). I begin by suggesting a model through which to think about beliefs and belief-structures. I then go over the Naturalistic Decision Making Data/Frame theory of sensemaking. I end by putting the two models together in arguing for the Deserved Persuasiveness heuristic.

Beliefs

You can imagine that your beliefs are like islands set in the sea: islands and archipelagos, some more connected, some less. How deep each land mass goes is how strongly justified those particular beliefs are, and their altitude is how precise they are. Their extension, of course, is the scope of what they explain.

Ideally you want a Pangea-like structure: a model of the world that is a set of true, connected, encompassing, justified beliefs.

But keep in mind that it’s more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course, the ultimate truth is both correct and consistent; however, it’s perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that each do a decent job of modeling separate aspects of reality, it’s probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job of modeling (parts of) reality despite being mutually inconsistent, and we want to keep both of them. Now think about what happens when a similar situation arises in a field where evidence is messier than it is in physics, e.g., biology, psychology, or your personal life.

Given the above, it seems like the correct tradeoff is to go for correctness – for modelling a specific part of reality – before going for propagation and connection. How does one aim at having correct beliefs?

 

The missing piece: Data/frame theory

Naturalistic Decision Making is a field of decision-making analysis that studies expert decision-making in naturalistic settings. Part of what it studies is sensemaking – how people make sense of what they experience.

The data/frame theory is the macrocognitive model of sensemaking that is used in the field. It claims that: “When people try to make sense of events, they begin with some perspective, viewpoint, or framework—however minimal. For now, let’s use a metaphor and call this a frame. We can express frames in various meaningful forms, including stories, maps, organizational diagrams, or scripts, and can use them in subsequent and parallel processes. Even though frames define what count as data, they themselves actually shape the data (for example, a house fire will be perceived differently by the homeowner, the firefighters, and the arson investigators). Furthermore, frames change as we acquire data. In other words, this is a two way street: Frames shape and define the relevant data, and data mandate that frames change in nontrivial ways.” (1)

The most interesting part is this: “Decision makers are sometimes advised that they can reduce the likelihood of a fixation error by avoiding early consideration of a hypothesis. But the Data/Frame Theory regards early consideration to a hypothesis as advantageous and inevitable. Early consideration—the rapid recognition of a frame—permits more efficient information gathering and more specific expectancies that can be violated by anomalies, permitting adjustment and reframing. Jenny Rudolph (2) found that decision makers must be sufficiently committed to a frame in order to be able to test it effectively and learn from its inadequacies—something that’s missing from open-minded and open-ended diagnostic vagabonding.

These observations would suggest that efforts to train decision makers to keep an open mind can be counterproductive (…). We hypothesize that methods designed to prevent premature consideration to a frame will degrade performance under conditions where active attention management is needed (using frames) and where people have difficulty finding useful frames.”

This matches my experience, and I believe it explains hindsight bias. Being open-minded allows your brain to rationalize itself into having known the outcome all along. This is why (written-down) predictions increase your accuracy: you are shocked when it turns out that you were wrong. Your brain cannot construct a story about how you knew it all along, since that would increase the cognitive dissonance. The minimal-dissonance explanation is that you were wrong, and this is just accepted and taken as feedback for existing models. On the other hand, if you merely thought about the outcome, it is less painful for your brain to rewrite your autobiography as having actually predicted the outcome that happened.

Taking the data/frame theory seriously entails believing that (a) specific beliefs trump unclear beliefs, and (b) inaccurate beliefs trump no beliefs. This indicates that bad models are better than no models. (3)

Deserved persuasiveness heuristic

But why choose bad models? The second counter-intuitive claim (after “Do not be open-minded”) is “Don’t reason by yourself – your current reasoning mechanisms (probably) suck”.

One of the “reasonable” ways people are supposed to update their beliefs is by taking arguments into account. This is a terrible method, since the persuasiveness of an argument in a given field is correlated with 1) the relationship between what the argument claims and the truth (bottlenecked by the listener’s model of reality), 2) the ignorance of the listener about the field, and 3) the persuasive ability of the arguer. Notice that only the first of these tracks the actual truth of the argument’s claims.

This follows from the reasonable postulates that a) there will exist convincing arguments for the true position, b) the more ignorant the listener about the field, the smaller the barrier for zir to consider an argument convincing, and c) the more persuasive the arguer about the field, the smaller the barrier for zir to make a convincing argument.

If these hold, then one should expect a persuasive argument to be evidence of truth only when a) one is knowledgeable about the field, or b) the arguer is not generally persuasive.
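To make this concrete, here is a toy Monte Carlo sketch of postulates a)–c). Every number in it (the 50/50 prior over claims, the 0.6 weight on the truth signal, the example knowledge and persuasiveness levels) is invented purely for illustration; the only point is the qualitative pattern.

```python
import random

def p_true_given_persuaded(listener_knowledge, arguer_persuasiveness, n=100_000):
    """Estimate P(claim is true | listener found the argument persuasive)
    in a toy model of postulates a)-c). All parameters are invented."""
    persuaded, persuaded_and_true = 0, 0
    for _ in range(n):
        claim_is_true = random.random() < 0.5  # 50/50 prior over claims
        # Postulates b) and c): rhetoric alone persuades more easily the
        # less the listener knows and the more persuasive the arguer is.
        rhetoric = arguer_persuasiveness * (1 - listener_knowledge)
        # Postulate a): truth adds persuasive force, but only insofar as
        # the listener's model of reality lets zir recognize it.
        truth_signal = 0.6 * listener_knowledge if claim_is_true else 0.0
        if random.random() < min(1.0, rhetoric + truth_signal):
            persuaded += 1
            persuaded_and_true += claim_is_true
    return persuaded_and_true / persuaded

# Ignorant listener, silver-tongued arguer: persuasion is ~uninformative.
print(p_true_given_persuaded(0.1, 0.8))  # ~0.52
# Knowledgeable listener, unremarkable arguer: persuasion tracks truth.
print(p_true_given_persuaded(0.8, 0.2))  # ~0.93
```

In the first case, being persuaded is barely better than a coin flip as evidence; in the second, it is strong evidence.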

This has some interesting corollaries. First, if one is not knowledgeable in the field and the arguer is persuasive and a non-expert, then it may be strictly negative to discuss the matter, since one will end up with higher credence in the arguer’s position even though the process that produced that credence is only weakly correlated with a truth-outputting process. Second, if one is not knowledgeable in the field and experts exist, it seems strictly dominant to find the expert consensus and just copy all of those beliefs, as the mechanism that produced them likely has a higher correlation with the truth than whatever naive processes one would use instead.

I think this second corollary is really hard for people to accept, because they conflate their ideas with good ideas, and themselves with the “venerable creators of good ideas”. Just because you had a particular idea doesn’t mean it’s a good idea.

Adopt experts’ beliefs. Then start trying to find faults in those beliefs (as per the data/frame theory of sensemaking). Where there are no experts, just go with what resonates, take that as a working hypothesis, and keep going.
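As a minimal sketch, the heuristic can be written as a decision procedure. The two lookup functions here are hypothetical stand-ins for processes that are really informal (surveying the expert literature, noticing what resonates):

```python
from typing import Callable, Optional

def form_belief(question: str,
                expert_consensus: Callable[[str], Optional[str]],
                what_resonates: Callable[[str], str]) -> str:
    """One pass of the heuristic. Both callables are hypothetical
    stand-ins; the return value is a working frame, not a settled belief."""
    consensus = expert_consensus(question)
    if consensus is not None:
        frame = consensus                 # experts exist: copy them wholesale
    else:
        frame = what_resonates(question)  # no experts: working hypothesis
    # Per the data/frame theory: now commit to the frame and actively
    # hunt for anomalies that force reframing (not modeled here).
    return frame
```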

(The reason why no one follows this heuristic – as stated – is clear: arguing is not about truth. Then again, some do follow it implicitly, when expert consensus is rebranded as an “Introductory Textbook”.)

(To the question of why I’m not following this heuristic myself and am instead developing my own models: the answer is that I am; it just isn’t visible on the blog yet. Also, my motivation at the moment to put out and stabilize inchoate models is very high, and thus it would be very costly not to do so. It is also a valuable enterprise: checking existing theories against my own models will be devastating for the latter, and thus the theories will be grokked at a much deeper, more personal level. It’s a tradeoff; if my motivation to stabilize models were lower, I would just read my brains out on introductory textbooks to develop world models.)

  1. Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(5), 88–92.
  2. Rudolph, J. W. (2003). Into the Big Muddy and Out Again. Doctoral dissertation, Boston College. http://escholarship.bc.edu/dissertations/AAI3103269
  3. This is partially what this blog has been about: develop models, then look at the data through those models’ frames with your eyes out for falsification.

 


 

Future
  • Go into why “going with what resonates” is a useful heuristic, and why it is reasonable (Gigerenzer, Gendlin, Taleb, etc.)
  • Address the bootstrapping problem: you first need to accept “not forming your own ideas” as an idea you formed by yourself
  • Gigerenzer-style heuristic creation (which is in fact what I’m doing with “heuristics for map-making”)
 