Sunday, March 13, 2011

"Jørgen Explains"

Ilana has collected all her little Facebook videos of me in a YouTube channel she calls Jørgen Explains. The latest addition to the collection is a shortened version of my talk yesterday, at the Unitarian Universalist Fellowship in Oslo, which she also helped me write.

I'm grateful - and flattered!
:-J

Saturday, March 12, 2011

Limitations on human rationality

I have always been a firm believer in human intelligence. Maybe not in my own, so much as in the principle.  

I used to think that there is no problem
so big or so complicated
that a great mind can’t reason its way out of it,
at least if given enough information and time.

Try to imagine me as a guy who walks around with a hammer, and thinks that every problem in the world looks like a nail. That’s me, except that the tool isn’t a hammer.  It’s more like a flashlight, shining a narrow ray of attention into the chaos of the world around me.  The way to have the best possible life, I thought, was always to learn and understand everything that held me back.

I still think it’s a good program, as far as it goes, but I’ve recently found a valuable counterpoint to it. It’s in a book I’m reading, “What Intelligence Tests Miss: The Psychology of Rational Thought” by Keith E. Stanovich. Among other things, it says that if we gave every human being a pill that added 20 points to their IQ scores instantly, we would still be making most of the same mistakes the next day.  The biggest difference would be that we’d be making them faster and more effectively. Intelligence, in other words, is not all.

When I mentioned the title to a friend of mine who is a psychologist, he said, “That must be a very long book.” However, the most important insight we can gain from it is a very short one.  It’s that

We Are All Cognitive Misers.

as Stanovich calls it.  This is shorthand for the fact that

The brain will always try to use
the simplest and sketchiest model of the world
that it can get away with.

There are many reasons why we’re cognitive misers. One is that when brains and sensory systems first started to evolve, they were very primitive. The operating system that ran on those early brains therefore had to make do with some pretty sketchy ideas of what the world consisted of.  Still, they did their job. They helped keep their owners alive.  Over the next several hundred million years, they kept evolving, and they always kept this as their first priority:

The evolutionary purpose of the brain
is to help keep its owner alive,
and promote his/her reproductive success,
not to take him/her to the moon.

When you look at the brain from that angle, it’s an absolutely amazing device. Normal computer programs need to be complete. They need to be double and triple checked for “bugs”.  If something goes wrong, they crash. The brain, on the other hand, can make do with only the vaguest beginning of an idea.  If something is missing, it simply fills in the blanks with whatever looks most probable, and goes on computing.  If it needs a concept, it forms one on the spot.  If input from one sense is confusing, it consults the other senses.

The brain’s image of the world around it will always look whole and complete, no matter what flaws and approximations it contains. The stuff we use to fill in the blanks usually fits so well that we’re not aware that anything is missing.

This is as it should be. A bigger and more complicated model will require more processing power from the brain. The smaller and simpler we can make it, the more easily we get by. This type of optimization seems to be a general principle behind the way the nervous system is organized.  Brains are optimized for speed, not for accuracy:

Every task that the brain can delegate
to a lower level of consciousness,
will be delegated, and to the lowest possible level.

If there’s enough information that a lower level of your nervous system knows what to do with it, without bothering your conscious mind, then that action will usually be taken without any further notification to higher quarters. The most extreme example of this is the simple “reflexes” that we learn about in school: A knee jerk here or a hand flying back at the touch of something hot.  But it’s actually much more pervasive than that.  It permeates everything:  If a simple and primitive solution is good enough, it will be adopted until we see a reason to do otherwise, and without any resources being wasted on conscious processing.

This process of delegation happens all over our sensory systems. Let me take the eye as an example.  Imagine that you are staring at a field of uniformly gray sky. That means that all the little rods and cones at the back of your eyes will be reporting the exact same level of stimulation back to the brain.

Now imagine that a pinpoint of light suddenly comes on in the sky, causing just one little cell in your retina to start firing back to the brain.  Can’t you hear it? It’s jumping up and down, shouting I see light! while all the others are still only seeing uniform grey.  The thing that blew my mind when I learned about this, is that the neighbours of that little cell are also going to start responding, even though nothing has changed for them. They’ll respond by reporting a false drop in light intensity, as if it had suddenly gotten darker. It’s as if they’ve become jealous: “Now that my neighbour has more, it feels as if I’ve got less.”

You can say that the eye is lying to the brain, but it’s actually an example of delegation.  

*    If the brain had to analyze every single pixel in the picture that’s being projected onto our retinas, it probably wouldn’t be able to make heads or tails of it ... and it certainly wouldn’t have time for anything else.  

*    Instead, by the time the signal leaves the eye, it has already been heavily doctored. Contrasts and movement have been emphasized, and that makes it just simple enough for the brain to take effective action.
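For readers who like to see things in code, here’s a tiny toy model of this kind of “neighbourly jealousy”. The numbers and the inhibition weight are pure invention, chosen only to make the effect visible; real retinal wiring is far more elaborate.

```python
# A toy sketch of lateral inhibition: each cell "reports" its own
# stimulation minus a fraction of what its neighbours are receiving.

def lateral_inhibition(cells, weight=0.5):
    """Return what each cell reports, given raw stimulation levels."""
    reports = []
    for i, level in enumerate(cells):
        # Gather the immediate neighbours (one on each side, where they exist).
        neighbours = cells[max(0, i - 1):i] + cells[i + 1:i + 2]
        inhibition = weight * sum(neighbours) / len(neighbours)
        reports.append(level - inhibition)
    return reports

# A uniformly grey sky: every cell reports the same flat level.
print(lateral_inhibition([10, 10, 10, 10, 10]))

# One pinpoint of light: the bright cell shouts, and its neighbours
# report a false *drop*, even though nothing changed for them.
print(lateral_inhibition([10, 10, 50, 10, 10]))
```

The point of the sketch is that the contrast gets exaggerated before the signal ever leaves the “eye”: the bright cell stands out even more against neighbours that now report less than the grey baseline.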

The eye was only an example.  All sensory input has to travel through a whole hierarchy of nerve cells, on its way towards the top level of full consciousness. Long before it gets there, salient points have been emphasized. Others have been suppressed. Different forms of input are compared and used to emphasize or cancel each other out.

For every level that a signal passes, on its way to full consciousness, details will invariably get lost. That’s how you’re able to filter one thread of conversation out of the background hum during a party.  That’s how you’re able to pick out one little detail in a picture. The implication of this is that

Your subconscious, in the wider sense,
has a lot more information at its disposal
than your conscious mind has,

which is why our semi-conscious or subconscious assessments can be not only much faster, but also much more accurate than our conscious analysis.

It’s time to start summing up the insights that I feel I’ve gained. There are three of them.

- o 0 o -

The first is a new understanding of the cognitive biases.  Every one of these seems to be an expression of how our brains are optimized for speed and computational ease, rather than accuracy.  I’ll mention a few examples:

*    “Confirmation bias” is one of the most pervasive because it’s an expression of how we don’t interpret sensory input from scratch.  Instead, we filter it through other layers of meaning, including our ideas of what the world should look like. When I was looking for an example of how this works, I suddenly remembered the debate around the so-called “Ground Zero Mosque”. The thing people couldn’t agree about then was whether a Sufi Muslim religious centre two blocks away from the World Trade Center site would be a victory for the Terrorists or for our own ideals of Religious Freedom.

   When I studied the issue, I got the impression that the majority of Muslim terrorism is directed against other Muslims: Shias against Sunnis, Sunnis against Shias ... and Everybody against the Sufis. Maybe I’m wrong, but to me it looked as if a Sufi religious center near ground zero would be more likely to be a terrorism target than a terrorism rallying point.

   Another example of confirmation bias is very close to my own heart: The way lawyers and their clients always seem to be more sensitive to information going in their favor, than to information pointing the other way. The result is a fundamental inefficiency in how legal disputes are settled. My personal estimate is that at least 75% of lawyers’ incomes are derived from this factor alone, but then again, I’m biased.

*    “Base rate neglect” or “Base rate fallacy” is the tendency to base judgments on specifics, ignoring general statistical information because that’s more abstract and hard to relate to.  This is why some people anchor their judgements on Muslims in what they see on TV (terrorism, repression of women), and don’t bother to find out how many Muslims are actually sensible, peace-loving, flexible-minded and hard-working.
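If it helps to see the arithmetic, here is a toy base-rate calculation. Every number in it is completely invented, just to show the shape of the fallacy: “what I saw covered on TV” and “what the base rate says about a random person” can be wildly different quantities.

```python
# Invented illustrative numbers: out of a population of 1,000,000,
# suppose 100 people have some alarming trait, and TV covers
# 90 of those 100 -- plus 10 ordinary people, by mistake.
population = 1_000_000
with_trait = 100
covered_with_trait = 90      # vivid, memorable coverage
covered_without_trait = 10

# What memory serves up: of the people you saw covered,
# almost all had the trait.
p_trait_given_covered = covered_with_trait / (covered_with_trait
                                              + covered_without_trait)

# What the base rate actually says about a random member
# of the population.
p_trait = with_trait / population

print(p_trait_given_covered)   # 0.9
print(p_trait)                 # 0.0001
```

Base rate neglect is, in effect, mistaking the first number for the second: the vivid 90% from the screen crowds out the one-in-ten-thousand from the statistics.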

*    The “Bandwagon effect” is another kind of cognitive bias, one that occurs when we don’t bother to form all our ideas from scratch because other people seem to have done the work for us. This is a huge problem in some circumstances - and a huge source of efficiency in others.

*    The “Availability cascade” is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse. "Repeat something long enough and it will become true". What is “available” in memory gets the appearance of being more likely, and as we all know, our memories are biased toward vivid, unusual, or emotionally charged examples.

I have printed a list of cognitive biases in an attachment here, and encourage you to read them at home. It’s an interesting list.

Keith Stanovich sees these biases, which are all part of our default mode of operation, as failures to operate rationally. It’s easy to agree with him on this.  In every instance, it is possible to say that we would do better if we were able to eliminate the bias and the mental shortcuts that lie behind it.

On the other hand, I’m also tempted to see these cognitive biases as examples of how the human brain needs to function, if it is to function at all.  It’s all very well to know exactly what “sand” is, but sometimes we just have to throw it in the eyes of the attacking tiger and get on with our lives.  The reason is that our highest level of rational consciousness, that Stanovich calls “type 2 reasoning”, consumes a lot of resources. If we try to attain the highest possible level of rationality in all aspects of life, we’ll get to be right, but we won’t get to be much else.

There’s also the problem of focus when trying to engage in “type 2” fully conscious and rational reasoning.  Our highest level of attention is very much like a ship’s radar: It can only look in one direction at a time.  With the first naval radar sets, the captain of a ship actually had to steer his ship in a full circle, if he wanted to scan the whole horizon. (That was how British naval forces discovered the battleship “Bismarck” and the heavy cruiser “Prinz Eugen” in the Denmark Strait on May 23, 1941). Later radar sets were set up with revolving antennae, so that the single beam could scan the whole 360 degrees of the horizon every 5 seconds or so, giving equal attention to everything.

If our human system of attention regulation had been set up in this manner, making us pay equal amounts of attention to everything, I doubt that we’d ever have gotten out of the primordial ooze.  The cognitive biases occur because we have no choice:  We’ll be lucky if we can compensate for 1 or 2 of them at a time.

- o 0 o -

I said before that every task that the brain can delegate to a lower level of consciousness, will be delegated, and to the lowest possible level.  My second insight is that this seems to explain why Training is so much more powerful than Learning.

*    “Learning” is all about understanding and memorizing facts and rules.  It’s an incredibly powerful tool.  “Learning” is powerful because it’s about deepening our understanding of why we should or shouldn’t do things, like point a sextant to the sun or eat less sugar. Without learning in this sense, we’d probably still be stuck in the stone age.

*    “Training” is all about creating sub-conscious dispositions to do one thing rather than another.  It’s known as behaviour training, shaping, and a host of other names. Well-known examples are Ivan Pavlov and his drooling dogs, B.F. Skinner and his superstitious pigeons, and smiling dolphins jumping high in the air at our marine parks.

If training and learning contradict each other, the thing you’re trained to do (or think) will usually win. The reason, I believe, is that training operates at a lower level of consciousness, and tends to determine our actions long before we’ve had time to think them over. It makes us want to do things, not just because there might be a bucket of fish (or praise or ice cream) at the end of the game, but because the action has come to feel natural for us.  It feels good.  It’s become part of what we consider “who we are”, rather than just “what we do”. And once we’re there, it’s usually easy to justify, in an apparently rational manner, whatever we want to do.

- o 0 o -

My third insight is a feeling that I can now explain God rationally, and that God doesn’t need to exist, ... but I can’t always replace him with pure rationality.

The question about God’s existence can be made pretty simple.  As I said before, the brain will always try to use the simplest and sketchiest model of the world that it can get away with. Can you imagine a simpler model than “God”, for how the Earth and the Universe came to be? I can’t, and it’s a model that satisfied the needs of all our early ancestors.  Can you imagine a simpler source of ultimate authority than “God”, behind the rules we try to make each other follow?  I can’t.

When I look at God from that angle, it doesn’t seem to matter any more whether “he” exists or not. All that matters is whether the concept makes it easier for us to operate at the level of accuracy that’s needed at the moment.

Finding a better explanation than “God” is easy, when we’re talking about the Creation of the Universe, but difficult - and maybe unwise - when we’re looking at “him” as the ultimate source of moral authority.  Rational analysis is slow and cumbersome. It’s narrow in focus.  It’s at constant risk for irrational cognitive bias. If the Police and our human Rationality were to be our only bulwarks against moral transgressions, I think we’d be worse off than we are today.  It may be irrational to believe in the Wrath of God and Eternal Damnation, but in times of great temptation, rationality is usually playing second fiddle anyway. In fact, research shows that when we have enough stress hormones in our systems, the prefrontal cortex starts to shut down, or get subdued. In a crisis, we're not meant to be rational. We're meant to go on autopilot.

This is not to say that I want to start scaring little children out of their wits again, about how they’ll burn in hell if they don’t do as we say. I’m saying that whatever we bring up to fill the void when God goes out the window, needs to be simple, powerful, and capable of acting directly on an irrational mind.


This posting was written with the help of Ilana Bram.