I have heard scores of people talking about their utility functions. This strikes me as ridiculous on its face, as if they were speaking about their gills: the concept doesn't apply; it's a category error.
So why do people do it? In what follows I first expand on the issue, give several reasons why it is problematic in the first place, and end with a discussion of why it is done, and why it can be done.
“My utility function”
The problem I want to treat is first described here: “If I ever say “my utility function”, you could reasonably accuse me of cargo-cult rationality; trying to become more rational by superficially imitating the abstract rationalists we study makes about as much sense as building an air traffic control station out of grass to summon cargo planes.
There are two ways an agent could be said to have a utility function:
- It could behave in accordance with the VNM axioms; always choosing in a sane and consistent manner, such that “there exists a U”. The agent need not have an explicit representation of U.
- It could have an explicit utility function that it tries to expected-maximize. The agent need not perfectly follow the VNM axioms all the time. (Real bounded decision systems will take shortcuts for efficiency and may not achieve perfect rationality, like how real floating point arithmetic isn’t associative).
Neither of these is true of humans. Our behaviour and preferences are not consistent and sane enough to be VNM, and we are generally quite confused about what we even want, never mind having reduced it to a utility function. Nevertheless, you still see the occasional reference to “my utility function”.”
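The two senses above can be made concrete. Below is a minimal sketch (the options and preferences are illustrative, not from the original): for a strict preference relation to be representable by a real-valued U, it must at minimum be transitive, so pairwise preferences that cycle admit no utility function. The same snippet also demonstrates the floating-point aside: real arithmetic on a bounded machine isn't associative.

```python
# Sketch: why cyclic preferences admit no utility function.
# If A > B iff U(A) > U(B) for some real-valued U, preferences must be
# transitive; a cycle A > B > C > A would force U(A) > U(A), a contradiction.

def has_cycle(prefs):
    """Detect a cycle in strict preferences given as (better, worse) pairs."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:  # back edge: preference cycle found
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

# Transitive preferences: representable by some U (e.g. U(A)=2, U(B)=1, U(C)=0).
consistent = [("A", "B"), ("B", "C"), ("A", "C")]
# Cyclic preferences: A > B > C > A, so no U can exist.
cyclic = [("A", "B"), ("B", "C"), ("C", "A")]

print(has_cycle(consistent))  # False
print(has_cycle(cyclic))      # True

# And the parenthetical aside: floating-point addition is not associative.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```

Human pairwise preferences, when actually elicited, routinely contain such cycles, which is one concrete way the "there exists a U" condition fails.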
All of the above is clear to me. What is not clear to me is why, despite that, I've heard scores of people talking about their utility function. In what follows I tease out an incomplete list of the problems of engaging in imprudent formalization and then, given the number of problems, try to understand why it is done. (I'm using "imprudent" to mean both premature and plain wrong.)
Consequences of imprudent formalisation
Formality is awesome. I’m not debating this. Mathematics is insanely awesome. “Sit and think about stuff and sketch it out with a pencil and the results will be infinitely precise and it’s way cheaper than going around trying stuff out.” Yes, the most amazing free-lunch ever. Mathematics is awesome, logic is awesome, formality is awesome.
Except, it isn’t a free lunch. Formality comes at a price: “Formality has a cognitive overhead, it does not deal well with tacit or evolving knowledge, it cannot represent the situated nature of knowledge” (1). Using technical terms in a premature fashion leads you to be precisely wrong instead of roughly correct.
Besides this, formalisation makes communication difficult and can create pain. I discuss these consequences below.
Firstly, premature formalisation creates greater intellectual separation: being formal rather than fuzzy extends your inferential distance. Secondly, it creates greater emotional separation: maybe you have noticed that as you speak about what is more unique to you, more personal, deeper, your ability to connect increases. Your experience resonates at an analogue level. Imagine telling someone that you would get 56 utilons from chocolate, versus telling them that you “really, really want and need chocolate right now, your day was terrible, you are really sad, and you love chocolate because it makes you happier on rainy days”. It’s a no-brainer, really.
Perception is bottom up and top-down. “The “top-down” processing refers to a person’s concept and expectations (knowledge), and selective mechanisms (attention) that influence perception. Perception depends on complex functions of the nervous system, but subjectively seems mostly effortless because this processing happens outside conscious awareness”.
You have to make sense of what you perceive, somehow. (Sensemaking, something I touched upon here.) And you will use the concepts at hand. You want your concepts to cut reality correctly, to package it in a way that makes sense to you. If they don’t, you will end up sandpapering against reality, and that really sucks and hurts, and if I can push you away from it my day has been a success.
Just try to get a sense of how life is experienced by someone going around with the concepts of “monkey tribal evolutionary psychology, status seeking and signaling, freeriding, deception, self-deception, game theory, general awfulness, and everything ever from Overcoming Bias”, versus someone thinking from a place of dreaming “about massively flexible and malleable interaction spaces. What if game breaks were a regular occurrence and could be used as jumping-off points to carve unique paths through interaction, play, collaboration, and financial/resource-independence spaces?”. And yes, a million times yes, both, and it’s a continuum, but some concepts suck and they don’t come with warning labels.
I’ve been talking about utility functions thus far because I’m detailing my analysis of that piece. I want to understand how and why that usage came about and, in the future, generalise the lessons where possible. But bad concepts (which include imprudently formalised ones) go beyond that, and I want to do a proof of concept: akrasia.
Mark says “First, I want to say that akrasia, by itself, is a functionally meaningless concept. I put it in the same category as depression, cancer, epilepsy, etc. Applying the label doesn’t tell you what to *do*. I don’t ever use the label “akrasia” in my own thinking.”
“Akrasia” is insane. You blackbox it and then treat the label as useful? It’s as if your body suddenly started shaking, someone said “Yep, tremors”, and you went “Oh, ok”. NO! You want the fix, the origin, not a synonym that doesn’t point at anything.
Why is it done?
In the previous section we saw that there are a lot of downsides to imprudent formalization. Still, people engage in it. Why? I suspect that there are significant upsides. I describe them below, after explaining exosemantics:
“exosemantic – the part of a word or statement that isn’t its strict entailments, but which are extremely common implicatures– specifically, these shouldn’t be contextual or Gricean implicatures, but socially bound ones, which have been formed by continued use of the word in particular contexts, or by particular speakers. The exosemantics of a word may eventually become incorporated into the defining entailments.
It is commonly known that words carry meaning on two levels: denotation, or strict, dictionary-level meaning, and connotation, or emotional association; but there is a third, exosemantic level. The word “eldritch”, for example, denotes otherworldliness and connotes a feeling of cosmic horror toward its referent; but it also exosemantically implies that its user has read Lovecraft. The word “liberty” is no different from the word “freedom”; the word “praxis” is no different from a certain definition of the word “practice” except in its exosemantic layer: “praxis” is heavy; “praxis” implies familiarity with—association with—the academic tradition that uses the word “praxis”.”
The two most powerful forces in the universe
I have talked about justification and how reasons are for justification. The further step in that theory is to consider cultures as justification systems, justification systems being “the interlocking networks of language-based beliefs and values that function to legitimize a particular worldview.”
The proposal is that formal systems originate from a desire for, a need for, justification. If that is right, then formal means better when it comes to justifying yourself to others, and you would expect to see imprudent formalization. (In science this is called physics envy. In AI it was the GOFAI against which Dreyfus argued, which partially maps onto neats vs. scruffies, but not quite. In economics it is the current debate about the use of mathematics, and the old one.)
And, of course, formal systems and their technical notions carry the ultimate justificatory power. (If this seems difficult to buy, consider that normative systems in fact make claims about how one ought to behave, and ought is more powerful than want. There are only normative and applied ethics – the construction of systems of how one should behave, and the study of how to obtain that.)
With regards to the second force, Robin Hanson has said all there is to be said about signalling. In this regard, imprecise formalisms are being used as shibboleths: they allow one to signal membership in the in-group.
Why can it be done?
Robin Hanson has again a spot-on observation: “For subjects where there is little social monitoring and strong personal penalties for incorrect beliefs, we expect the functional role of beliefs to dominate. Beliefs about military missions or engineering projects come to mind. But for subjects with high social interest and little personal penalty for mistakes, we expect the social role of beliefs to dominate. Consider beliefs about large elections or beliefs addressing abstract philosophical, religious, or scientific questions.”
In this particular case the penalty for being wrong is apparently small, because the belief is largely invisible. For one to be penalized, at the very least, the interlocutor has to understand what a utility function is and why it doesn’t make sense to apply the concept. Compare this to someone going around talking about their gills. Gills are really visible, and the social penalty would be immediate and harsh.
This makes copying the community’s shibboleths very cheap, with the upsides of justification and group membership. That is why it can be done, and why it is.
- (1) Shipman, F. M., & Marshall, C. C. (1993). Formality considered harmful: Experiences, emerging themes, and directions. University of Colorado, Boulder, Department of Computer Science.
- I suggested that there are tons of negative consequences, yet people still engage in it. What is going on? The consequences are long-term and fuzzy, while the benefits are immediate and clear. Go into that.