Developmental views and rationality

I open this essay with a quote about applying a developmental view to political theorizing. Without going into politics itself (something I won’t do before figuring out how to talk about socially dangerous topics), I build off that quote to talk about developmental views, and use those to touch upon ego development levels. The essay ends with speculations about the relationship between ego development, rationality, and postrationality.

 

Politics
Hanfeizi writes about a developmental view of politics:

“I think I know what you’re getting at, but it seems to me the real issue is that we need to break out of the right-left paradigm altogether and start looking at these issues developmentally.

Have you or Scott ever taken a look at the work of Don Beck (based on that of Clare Graves) and Spiral Dynamics? Or Ken Wilber’s Integral Philosophy (despite some of its troubles)? Here’s a good breakdown of the SDi Integral Model:

http://pialogue.info/definitions/spiral_dynamics_aqal_BIG.jpg

Wilber identifies that as clean as this model might look, each of the vMEMEs (levels) is capable of being distorted in various ways. He likes to talk about the “Mean Green Meme”- his name for what we would call SJWs and the politically correct establishment. Neoreaction seems to be a confused reaction against this- it’s groping towards a Yellow vMEME point of view, but tends to throw out everything Green rather than properly integrating it. The difference between the technocommercialists (Moldbug and Land, et al) and the ethnonationalists (Anissimov and Bayne, et al) is that the former are groping towards something higher, if not quite hitting the mark; whereas the latter are really full-on reactionaries who want to regress to the lower memes, embracing selfish power gods views (Red), ethnic tribal conformity (Blue), and to some extent Orange rationality- but wanting nothing to do with Green at all.”

 

Developmental views

“Developmental psychology is the scientific study of changes that occur in human beings over the course of their life. Originally concerned with infants and children, the field has expanded to include adolescence, adult development, aging, and the entire lifespan.”

Jean Piaget was one of the field’s founders, and he studied the cognitive development of children. He broke it into four stages:

  • Sensorimotor stage
    • “From birth to age two. The children experience the world through movement and their five senses. During the sensorimotor stage children are extremely egocentric, meaning they cannot perceive the world from others’ viewpoints.”
  • Preoperational stage
    • “Piaget’s second stage, the pre-operational stage, starts when the child begins to learn to speak at age two and lasts up until the age of seven. During the Pre-operational Stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children’s increase in playing and pretending takes place in this stage. However, the child still has trouble seeing things from different points of view. The children’s play is mainly categorized by symbolic play and manipulating symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. Their observations of symbols exemplifies the idea of play with the absence of the actual objects involved. By observing sequences of play, Piaget was able to demonstrate that, towards the end of the second year, a qualitatively new kind of psychological functioning occurs, known as the Pre-operational Stage.”
  • Concrete operational stage
    • “From ages seven to eleven. Children can now conserve and think logically (they understand reversibility) but are limited to what they can physically manipulate. They are no longer egocentric. During this stage, children become more aware of logic and conservation, topics previously foreign to them. Children also improve drastically with their classification skills.”
  • Formal operational stage
    • “From age eleven to sixteen and onwards (development of abstract reasoning). Children develop abstract thought and can easily conserve and think logically in their mind. Abstract thought is newly present during this stage of development. Children are now able to think abstractly and utilize metacognition. Along with this, the children in the formal operational stage display more skills oriented towards problem solving, often in multiple steps.”

Jean Piaget worked with children. The formal operational stage is “(…) widely considered the adult stage in much of Western culture; and society and institutions support and reward its achievement. A citizenry capable of rational deliberation and choice based on pertinent criteria (not external features, sameness or tradition) would seem to be a necessary precondition for democracy to work. Only such a perspective and rational assessment of choices can safeguard the whole and at the same time allow changes to be reflected in the laws.” (1)

You can imagine that at some point in time we started hitting this level. Certainly it was not achieved 100,000 years ago, and certainly it is today, so at some point there was a transition.

You might wonder whether this is really the last level – maybe development doesn’t stop there. This wondering has led to various theories in the developmental psychology subfield of adult development.

This wondering leaves you open to consider the levels above yours. As Mark said “[A]ctually, I am a Southern Baptist. And so are you. There are many levels above your own. What level makes yours look like a Southern Baptist’s looks to you?“

Loevinger developed the best-studied model of ego development levels. In her model there are ten stages, and stage 5 corresponds to the formal operations stage. Cook-Greuter has since substantially expanded Loevinger’s work.

A possible complication is that one needs to be at a certain level of development to be able to treat their level of development as an object: there is a bit of bootstrapping involved.

Now, I realize all this talk sounds spiritual as hell. Mark, again, comments on how to interact with this stuff, by implicitly operating in the following way:

“‘Based on everything I know about everything, what does the content of this human artifact, and the fact that I’m reading it, tell me about the structure and state of reality, if anything? And, given all that, what do I do next?’

In other words, you look at the methods, you look at statistical power, you look at p-values, you look at effect sizes, and you decide whether or not some of this stuff has maybe nailed down a little patch of reality, a little isolated map that can make some accurate predictions of the territory. You have to do the extra work of finding the signal in the noise, and you have to do the extra work of translating the map into language and concepts that might or might not hook up with the rest of science. But empiricism is empiricism, if you take responsibility for interpreting it, and if you choose to make use of the thousands of hours that well-intentioned people have put in.”

 

After rationality

Hanfeizi continues “Our host seems to have a view somewhere in the Turquoise band, OTOH- he seems to have been able to transcend and include everything worthwhile in both the Green (SJ, et al) and Yellow (NR, et al) vMEMEs and push on to something new- which, as we saw in “Meditations on Moloch”, borders on the spiritual.”

I feel that rationality (by which I mean LW-X-Rationality) can go wrong in several ways. Some are related to meaning, worldview, and eternalism; another way – it seems to me – is that it anchors people in the formal operations stage. They get really good at playing that game and don’t want to stop playing.

Postrationality seems to be a reaction to that game. It might go beyond it. (Although I’m going to have to wait until the sequence comes out to make a judgement.)

I think LW-X-Rationality is an amazing scaffold because it is an in-depth, explicit operationalization of what playing formal operations properly is like.

I think LW is a Wittgenstein’s Ladder. The ladder is the second-to-last proposition in the Tractatus: “My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright.” I don’t think it is by chance that Wittgenstein’s ideas in the Tractatus – logical atomism, logical positivism, being quiet about metaphysics – resemble those of Less Wrong. But Wittgenstein did recognize that you needed to go beyond these. (Which he did, in his second book.)

The question is, what comes after rationality? Is it postrationality? What is post-formal operations like? What does transrationality look like?

  1. http://www.cook-greuter.com/Cook-Greuter%209%20levels%20paper%20new%201.1’14%2097p%5B1%5D.pdf

 

 



14 thoughts on “Developmental views and rationality”

  1. This continues to be extremely interesting! 🙂

    Glad to be introduced to Cook-Greuter; I didn’t know about her stuff. I was heavily influenced by Kegan’s The Evolving Self 25 years ago. I suspect it is the main (but largely unacknowledged) source for Spiral Dynamics—although the Spiral Dynamics people did have the important insight that Kegan’s framework can be applied to whole societies across history.

    You probably know this, since you have linked a lot to my site (thanks! great to have it understood by someone so smart), but: I wrote a bit about Spiral Dynamics (in passing, mainly) here: http://meaningness.com/metablog/ken-wilber-boomeritis-artificial-intelligence


  2. Just wait till I actually edit and structure all the things 😉

    Yep, she’s great. Mark (meditationstuff.wordpress.com) introduced me to her originally. I think there is a lot of value there and in “developmental thinking”.

    Didn’t know about The Evolving Self – added to reading list 🙂

    My pleasure, I took *so* *much* from it. I think some people are just 2 inferential steps away from you in a lot of important things. Eliezer was like this originally, later Hanson; you and Mark have filled this role quite a bit.

    Yeah, I read that post – of course, haha. I think your diagnosis of Wilber’s eternalism is *spot-on*. Despite that, I believe that Wilber has a ton of useful maps. I actually came across that post again, by chance, 2 days ago, and added the Dreyfus stuff and yours on Heideggerian AI to my reading list. Haven’t gone through those, but I know a bit about the history. Really curious about this topic, not least because one *really* smart person I know told me that Heidegger was full of shit (he can read German; I never read Heidegger). I’m really curious about whether he is simply wrong or if it is a case of broken telephone.


  3. Just wait till I actually edit and structure all the things

    That’s what I say about the Meaningness book… I have a vast outline, and a couple hundred partially-written web pages.

    Recently I’ve started posting more of them in fractionally-baked form, because who knows whether I’ll ever get to write it up properly.

    Blogging is hugely easier than writing a large, structured work, unfortunately.

    Re Kegan, he was a revelation 25 years ago, but since you’ve read later work in the field, I’m not sure there will be much of interest in the book other than historically. But it’s a pretty easy read, and maybe there’s some insights that weren’t fully assimilated by subsequent researchers.

    I think some people are just 2 inferential steps away from you in a lot of important things.

    That’s interesting… actually the stuff I want to be writing is another couple of inferential steps removed from the stuff I am writing. Everything I write now seems painfully introductory and boringly old-hat, although it was exciting when I worked it out 10-20 years ago. Also all of it has been said by someone else, now. But it’s necessary as background for explaining what I’m thinking about currently, and no one else has written it up in a coherent way that I could point readers to and say “read that first.”

    Eliezer was like this originally

    That’s also very interesting… I arrived at LessWrong after he had moved on to HPMOR, and I couldn’t understand why people were excited by his stuff. But Kaj Sotala (I think) explained that originally the sequences came out as a daily blog, live-blogging his learning process, and it was really exciting to follow that. That helped me see why it was so appealing.

    told me that Heidegger was full of shit

    Well, like all great philosophers, Heidegger had a couple of important insights, and was confused and wrong about nearly everything else. (Whereas the rest of us are just confused and wrong about nearly everything.)

    Also, I think the main value of philosophy is negative: the breakthroughs are all of the form “we’ve been thinking about this problem wrong; here’s why.” Getting that right can be hugely helpful in breaking up dysfunctional reasoning. Then the philosopher often feels compelled to say “and here’s the right way of thinking about it,” and that part is usually comically bad.

    Heidegger was working in the Romantic anti-rationalist tradition. (I didn’t understand this until quite recently, and I’m not sure most Heidegger scholars really appreciate how within-tradition he actually is.) The Romantic critique of Enlightenment rationalism is roughly: “But POETRY! And art! You can’t explain ART that way! Emotions!! What about EMOTIONS!!! And SOCIETY!” All of which is kinda true. But then rationalists can say “Yah, whatever, but SCIENCE! And engineering. You guys aren’t going to build a self-driving car, and we are, SO THERE!” Which is also kinda true.

    What Heidegger really cared about was NIHILISM and ANGST and DEATH and RESOLUTENESS, which is all extremely German and Romantic, and he was (in my view) mostly confused and wrong about them.

    But along the way to explaining his theory of POETRY and DEATH, he pointed out, in an off-hand way, that it’s not just stuff like that which rationalism can’t explain. It’s also mundane practical activity. His typical example was hammering; mine in 1987 was making breakfast. If you try seriously to make a rational account of making breakfast, you realize it is impossible, and the reasons it’s impossible are pretty much the ones Heidegger pointed to.

    Philosophers ignored this part of Heidegger, and mostly still do. (Because, after all, Heidegger said his important points were about ART and ANGST.) But when AI got far enough along, it called rationalism’s bluff, and exposed the problem Heidegger had waved at off-handedly. This was Dreyfus’s insight; Phil Agre and I were among the first to run up against it hard in practice.


  4. “That’s what I say about the Meaningness book… I have a vast outline, and a couple hundred partially-written web pages.
    Recently I’ve started posting more of them in fractionally-baked form, because who knows whether I’ll ever get to write it up properly.
    Blogging is hugely easier than writing a large, structured work, unfortunately.”
    “That’s interesting… actually the stuff I want to be writing is another couple of inferential steps removed from the stuff I am writing. Everything I write now seems painfully introductory and boringly old-hat, although it was exciting when I worked it out 10-20 years ago. Also all of it has been said by someone else, now. But it’s necessary as background for explaining what I’m thinking about currently, and no one else has written it up in a coherent way that I could point readers to and say “read that first.””

    How is the book coming along? And what about the Buddhism for Vampires one? I really liked the essays as an approach to dealing with monstrosity.
    I agree that blogging is way easier. Not least because my favorite part of writing is the first pass: actually getting the content out so that it resonates with my felt meaning and I can stabilize it enough to think about it; not the editing and reediting to make it transparent to others. Mainly, I believe, because I’ve done the first (way) more.

    Having said that, I find myself thinking thoughts that are many steps away from the people I’m talking with, and I can’t even justify where the thinking is coming from unless I explain a lot of background; this makes it really hard for me to hold (intellectual/cognitive/…) conversations where everyone is getting a lot of value. Hence this blog – a first pass to actually get all the things out; and a second one to make them understandable to others, so that I can reach, and bring others to, what I see as the current adjacent possible of my thinking.
    What you mention is interestingly similar to what Eliezer went through. As I understand it, he wanted to talk about Friendly AI, and to do that he figured out he had to bring people up to speed on a bunch of things. So he curated information about those things and became a very good second source for getting a decent grasp on many things. (Of course, biased by his own views. He called “rationality” what is actually a tiny subset of the field, and took positions on a bunch of ongoing debates and presented them as settled. But I understand that this was a good enough map – otherwise it would have been too complex – kinda like Wittgenstein’s Ladder.)

    One of the LW ideas I like is “Aim high, shoot low”, and I think inferential distance and the curse of knowledge (or something) make it so that this process – of explaining what is boringly obvious before explaining what we are really excited about now – is necessary to some extent. (Also, I’m not placing myself in your or Eliezer’s category – I had these ideas in a very inchoate form and am figuring out how they *may* fit as I go.)


    “Re Kegan, he was a revelation 25 years ago, but since you’ve read later work in the field, I’m not sure there will be much of interest in the book other than historically. But it’s a pretty easy read, and maybe there’s some insights that weren’t fully assimilated by subsequent researchers.”

    Possibly. I like reading the history of fields I care about, and especially to do it by reading works of different time periods.

    “That’s also very interesting… I arrived at LessWrong after he had moved on to HPMOR, and I couldn’t understand why people were excited by his stuff. But Kaj Sotala (I think) explained that originally the sequences came out as a daily blog, live-blogging his learning process, and it was really exciting to follow that. That helped me see why it was so appealing.”

    I came before HPMOR. But I found the sequences and went through some of them, and they helped me make sense of a lot that I had sensed but couldn’t articulate, think about, think with, and so on. It was all very useful, at that time.

    “Well, like all great philosophers, Heidegger had a couple of important insights, and was confused and wrong about nearly everything else. (Whereas the rest of us are just confused and wrong about nearly everything.)
    Also, I think the main value of philosophy is negative: the breakthroughs are all of the form “we’ve been thinking about this problem wrong; here’s why.” Getting that right can be hugely helpful in breaking up dysfunctional reasoning. Then the philosopher often feels compelled to say “and here’s the right way of thinking about it,” and that part is usually comically bad.”

    I admit that is the case, but this particular friend thinks that Heidegger was not-even-wrong about everything, which is really interesting to me.

    With regards to the negative value of philosophy: I find that a very attractive conceptualization and actually hold that opinion with regards to political ideologies. They make for really good critiques, but once they try to go beyond that, it’s just terrible.

    “Heidegger was working in the Romantic anti-rationalist tradition. (I didn’t understand this until quite recently, and I’m not sure most Heidegger scholars really appreciate how within-tradition he actually is.) The Romantic critique of Enlightenment rationalism is roughly: “But POETRY! And art! You can’t explain ART that way! Emotions!! What about EMOTIONS!!! And SOCIETY!” All of which is kinda true. But then rationalists can say “Yah, whatever, but SCIENCE! And engineering. You guys aren’t going to build a self-driving car, and we are, SO THERE!” Which is also kinda true.
    What Heidegger really cared about was NIHILISM and ANGST and DEATH and RESOLUTENESS, which is all extremely German and Romantic, and he was (in my view) mostly confused and wrong about them.
    But along the way to explaining his theory of POETRY and DEATH, he pointed out, in an off-hand way, that it’s not just stuff like that which rationalism can’t explain. It’s also mundane practical activity. His typical example was hammering; mine in 1987 was making breakfast. If you try seriously to make a rational account of making breakfast, you realize it is impossible, and the reasons it’s impossible are pretty much the ones Heidegger pointed to.
    Philosophers ignored this part of Heidegger, and mostly still do. (Because, after all, Heidegger said his important points were about ART and ANGST.) But when AI got far enough along, it called rationalism’s bluff, and exposed the problem Heidegger had waved at off-handedly. This was Dreyfus’s insight; Phil Agre and I were among the first to run up against it hard in practice.”

    How does the romantic/rationalist axis line up with the scruffy/neat axis?
    With regards to Heidegger, Dreyfus and your own connection to it – all very interesting, thank you for the explanation! Have you kept up with research in AI? I seem to faintly recall that you left the field after running up against this problem, and from my understanding Dreyfus does not think AI is impossible in principle. What has been the field’s reaction since you and Phil Agre ran into this? Were the insights accommodated?


  5. How is the [Meaningness] book coming along?

    Painfully slowly. Partly because it’s intellectually difficult and partly because my life has been pretty chaotic for the past few years and I haven’t had much time/energy for writing.

    I did get the introduction done, a few months ago. Since then I’ve been quietly adding pages to the eternalism chapter, without publicizing them. I thought it would be better, before publicizing, to get enough of that chapter written that readers can at least get the shape of the argument, even if most of the details are missing. But a couple of days ago I realized that there are, at minimum, some serious expositional problems, and maybe even big failures of logic, so I might have to throw a lot away and start over. Ugh.

    what about the Buddhism for Vampires one?

    On indefinite hold. That one is emotionally/spiritually difficult, rather than intellectually. I can only work on it when I’m meditating at least two hours a day, and I haven’t had time/space/energy for that in years. I very much want to return to it; the outline looks really exciting, and I want to read the stuff I’m supposed to write!

    Hence this blog – a first pass to actually get all the things out

    Yes, I think this is a really good idea, and now I’m inspired to follow your example. Actually this was the idea behind the “metablog” on meaningness.com, but then offhand posts got taken so seriously that I’ve been reluctant to do more of it. Still, writing offhandly and getting things somewhat wrong and being misunderstood is probably better than not writing at all!

    thinks that Heidegger was not-even-wrong about everything

    Yes, that’s the view of nearly all analytic philosophers. In fact, that’s their view of nearly all Continental philosophy. But this comes of not even recognizing what the subject matter of Continental philosophy is. If you don’t even realize that the topic of discussion is a thing, it’s not surprising that discussion about it will sound like nonsense.

    political ideologies make for really good critiques, but once they try to go beyond that, it’s just terrible.

    Yes; interestingly enough, it was in the case of political philosophy that I first understood this, too. Something something Marx Bakunin Rawls something something. (I was 18 at the time.)

    How does the romantic/rationalist axis line up with the scruffy/neat axis?

    Well I guess there might be some degree of affinity for Romanticism on the part of scruffies (and none on the part of neats). But AI scruffies were still solidly in the rationalist framework.

    Have you kept up with research in AI? I seem to faintly recall that you left the field after running up against this problem, and from my understanding Dreyfus does not think AI is impossible in principle. What has been the field’s reaction since you and Phil Agre ran into this? Were the insights accommodated?

    Long story. Supposedly I’ll be writing a bit about this Real Soon Now.


    1. [Books]

      I hope you can get through those hurdles. I’m really excited to read about those and the things that come a few inferential steps afterwards.

      [Blogging method]

      I have benefited a bunch from anonymity and not editing. This allowed me to keep the delicate balance of having stuff out that is good enough for me to work with and reformat in the future, but not so good that people will care enough to try to understand it and possibly fail and want to argue about it; hence my emotional energy is conserved and my output is high.

      [Analytical vs continental]

      This is something I’m *really* confused about. People seem really comfortable making statements of the sort “All of these thousands of people are really wrong about this topic – even though they spent years and years on it” and not thinking this is a fact that needs to be explained. (Kinda like the LW attitude towards all of philosophy, although there I think it’s just meme contagion and groupthink. I hold analytic philosophers as a class to higher standards. Unclear if I should.)

      [AI]

      I was surprised to see you say that the scruffies sat in the rationalist framework. I’m really really *really* looking forward to seeing you write about the field’s reactions to Dreyfus/Heidegger/you and Phil Agre.


      1. I’m really really *really* looking forward to seeing you write about the field’s reactions to Dreyfus/Heidegger/you and Phil Agre

        Uh… why? I wasn’t going to write so much about that, as such, as about connectionism (a/k/a “neural networks”), and “deep learning,” which is a rebranding of connectionism. Those are popular partly because they are also non-rational approaches to AI. I don’t have a very high opinion of them, though.


      2. Because of the way my mind seems to be set up. (Maybe I should write about that.)

        I recognise a past pattern or tendency (that might still be going on) to dismiss/overlook/not engage with things that disagree with my view – unless I’m forced to in some sense. Coming from a very strong objectivist/rationalist/scientific-realist background, this happened with the Heidegger/Dreyfus stuff. What then usually happens is that I find the same camp saying something about something else that strikes me as totally right and contradicts my previous view. I then start an investigation and figure out that I have to update and integrate many things.

        In some sense you represent this for me – I find your views on stances, systems, meaningness, tantra, and Buddhism – thus far – to completely resonate. If I saw (what for me would be) an apologia of the Heidegger/Dreyfus stuff I could not not interact with it. That’s why I’m interested in it, at an emotional level I recognise the need to interact, at an intellectual/motivational level I can’t bring my whole self to do it *yet*. And I could S2 override it (to use a different language) but I prefer not to. (Another post I guess.)


      3. I see! Well, when/if you are up for the long story, you could read Phil Agre’s book. The short version is the Pengi paper, but that is so condensed that no one understood it even at the time. And of course there’s Dreyfus’s books.

        In terms of reception… One thing that happened was that Pengi was somewhat misunderstood, and that led to a few hundred papers on “reactive architectures,” which basically meant “programs that act sensibly without deriving complete plans in advance.” That was one of the points of Pengi, but the other half-dozen were mostly missed. (Which was partly our fault because we had pretty much already lost interest, and never bothered to explain them properly.)

        A larger effect was that we pretty much put the final stake in the heart of “old-fashioned symbolic AI.” There had not been much progress there since about 1980, but lots of people were still working on it, due to undead momentum. The field was dispirited, though, with a sense that something had gone wrong, for several years before Pengi. By the early 1990s, mostly the consensus was that the research program was in fact dead, and that our work partly explained why. (Of course, most people would still say “but that Heidegger stuff is bizarre, it’s only the technical arguments that are valid, and also Dreyfus must somehow be wrong, because obviously the brain is a computer so AI must be possible.”)

        The other thing that happened was that connectionism got really popular around the same time, and partly for the same reasons. That is, people realized that symbolic AI was a dead end, and were looking for an alternative, and connectionism looked like the strongest one. It turned out to be a dead end, too, but that wasn’t realized until the mid-90s. (And now it’s shambling back from the dead and lurching around eating people’s brains.)


  6. Thanks for the explanations! This is all so interesting!

    I read the paper at the degree of granularity I could – it is condensed and I see how it was written for a symbolic AI audience.

    I might be *totally* wrong here, but does your work relate to Maturana and Varela’s on embodied cognition? I have read The Embodied Mind: Cognitive Science and Human Experience by them and Rosch, and it *felt* similar to reading this paper. (Except different audience, and background, and Rosch talking a bunch about Buddhism.)

    As far as I understand they kicked off the situated/enactive/embodied AI ideas together with Brooks’ Subsumption architecture. Is that right? From your comment about connectionism at the end I take it that situated/enactive/embodied AI is currently a minor stream.


    1. In AI, there were three groups who independently and simultaneously realized that the planning theory of intelligent action was unworkable, and proposed alternatives that were similar in some respects. “Simultaneously” was within a few months of each other, which is kind of weird. It’s especially weird because we all knew each other pretty well, but didn’t realize we were working along the same lines, and the alternatives we developed were actually pretty different, and our explanations for why they were right were extremely different.

      The three groups were Stanley Rosenschein and Leslie Kaelbling; Rod Brooks; and Phil Agre and I. There wasn’t any sense then of arguing about who got there first, partly because initially it seemed like what we were all doing was quite different. Rod’s work (subsumption) is better-remembered because he kept doing it, whereas Phil and Stan went off and did entirely unrelated things. Leslie and I went off into machine learning for a bit; she’s still doing that, but I gave up pretty quickly.

      “Situated” and “embodied” came from Heidegger, via Dreyfus and via ethnomethodology, which Phil and I learned from Lucy Suchman (then at Xerox PARC). Her PhD thesis Plans and Situated Actions was a huge influence on us.

      Maturana and Varela had done some work with partially similar intuitions earlier (Autopoiesis and Cognition, 1980). I read that, but it seemed extremely hand-waving, and in some ways quite wrong. They did another book (The Tree of Knowledge) in 1987 (the same year we published the Pengi paper), which I never read. Rosch and Thompson coauthored The Embodied Mind with Varela in ’92, so it wasn’t an influence. That book seems to have coined the word “enactive” (although maybe they were using it earlier). I still haven’t read that one either! I figure I know what it says without having to look at it, but maybe that’s unfair.


      1. I see. Thanks for the overview. I had heard about Brooks, yourself and Phil Agre, Dreyfus and Heidegger. I did not know this was the timeline, it is indeed weird that you all got to it at around the same time. At least, strike 1 for consilience.

        Never read any ethnomethodology, but I recall Sister Y having posted on it, and now you have as well. I’ll take a look at it.

        Yes, that’s correct – I always bundle Maturana and Varela for some reason. I find what they say interesting, but I can’t even tell whether it is right, wrong, or not even wrong. They seem to be coming from a really different place of thinking. Apparently they inspired a constructivist journal – http://www.univie.ac.at/constructivism/journal/index.html – which is in the same category for me: I can’t place it because the epistemological bases are so distant.

