mandag 18. april 2011

Throughout my grown-up life, I've been at the receiving end of a lot of envy: "How on earth can you manage to eat so much and stay so slim?" Two articles from the NYTIMES over the last week have suddenly helped me come up with answers.

Is Sitting a Lethal Activity?
by James Vlahos suggests that my physical restlessness is part of the explanation. He describes experiments that closely monitor people's activity levels (using underwear with built-in sensors that register movement and posture) and food intake.  Some participants gained weight, while others didn't. The difference between the two groups wasn't some weird metabolic factor, but how much they moved. Both groups were forbidden to exercise, so that wasn't the decisive factor.  Instead, it was how much time they spent on their feet, moving around, or simply fidgeting.

On average, the subjects who gained weight sat two hours more per day than those who didn't. And when they sat, electrical activity in their muscles went down - the way the hum in a theater dies down when the curtains start moving apart. Their calorie-burning rate immediately plunged to about one calorie per minute, a third of what it would have been if they'd been up and walking. Insulin effectiveness dropped, and so did the enzymes responsible for taking fats out of the bloodstream.
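Out of curiosity, I did the arithmetic myself. The figures below are the article's rough numbers, so treat the result as an illustration, not a measurement:

```python
# Back-of-the-envelope arithmetic based on the article's rough figures.
sitting_rate = 1        # calories burned per minute while seated
walking_rate = 3        # calories burned per minute while up and walking
extra_sitting = 2 * 60  # the weight-gainers sat about two hours more per day

daily_gap = (walking_rate - sitting_rate) * extra_sitting
print(daily_gap)        # 240 extra calories burned per day by the non-gainers
print(daily_gap * 365)  # 87600 calories per year - quite a difference
```

Two hundred and forty calories a day is roughly a chocolate bar, every single day, burned off just by not sitting still.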

Is Sugar Toxic? by Gary Taubes strongly suggests that my (relative) lack of interest in sugar is another part of the explanation. The story has two suspects, called "Sugar" (Sucrose) and "HFCS" (High Fructose Corn Syrup). The crime they are suspected of is nothing less than the tremendous increase, over the last 50 years, in obesity, heart disease, diabetes and cancer.

If the suspects are guilty, the article goes, it's probably because they share one property: When they're broken down in the intestine, 50-55% of them get turned into Fructose, which has to be digested by the liver. Unfortunately, the liver has a tendency to turn Fructose into fat if it's given too much. And "too much" might turn out to be a much smaller quantity than most people think.

- o 0 o -

While I was writing this, another vision popped into my head. Those fat cells that we're complaining about, aren't alien invaders. They are parts of us, and they have only one way to defend themselves, when we try to slim them out of existence: They complain, and make us feel miserable. Wouldn't you have done the same, in their place?


:-J

onsdag 13. april 2011

How to save a trillion dollars?

Do you remember the story of the Emperor's New Clothes?  I had a moment like that today, when I read yesterday's column in the NYTIMES by Mark Bittman. His message is that the gigantic deficit in the US economy amounts to a rounding error, compared with the Federal Budget's share of the cost of our lifestyle diseases.

It's a surreal moment. It's like having an elephant in the room. An elephant that's grown big enough to almost choke off the world's richest economy. And what are our politicians doing? They're bringing the Federal Government to the brink of a shutdown in a show over how to shrink the "rounding error" without raising people's taxes.

How big is the elephant going to be next year? And why not raise the taxes? Don't most Americans have more than enough money to eat themselves to death? Isn't it in fact the very convenience of it all, the living standard that we're trying to raise, that's killing us?

As human beings, we have evolved with a very fine balance between appetite for food on the one hand, and our dislike of walking on the other.  If our ancestors wanted to eat dinner, they had to hunt it or gather it, or both. Imagine having to walk five miles for your dinner. Imagine getting only a half portion, and having to walk five more miles for the other half. That's the kind of resistance that your appetite evolved to overcome.

The older you got, and the more your joints ached, the stronger your appetite had to be, in order to get you to feed yourself. In the end, you starved to death. Today, you're more likely to eat yourself to death. It's a big difference.

:-(

tirsdag 5. april 2011

Clicker training

In Karen Pryor's latest book,

Reaching The Animal Mind,

she tells the story of her life as a trainer of animals and humans, and elaborates further on the ideas behind the "clicker training" that she helped develop. The first half of the book held my attention effortlessly, and I ploughed through the pages at a steady pace.

It was in the second half that I began to take notes, when I realized that she was drawing lines between operant conditioning on the one hand, and what Temple Grandin called the "Blue Ribbon Emotions" on the other. This helps explain more of what I called, in an earlier post, the difference between "training" and "learning".


* Operant conditioning ("training") is fast because it bypasses the Cortex. It addresses the primitive parts of the brain directly. The signal hardly has to be modulated or interpreted at all.

* The effects of operant conditioning last for a long time, because operant conditioning establishes an extremely short and simple link between information and its usefulness. The brain seems to be "wired" to keep information in memory longer, the more useful it appears to be.

* The effects of operant conditioning can be hard to undo through "teaching", because the traffic from the primitive parts of the brain (like the Amygdala, which controls fear responses) is largely one-way. The Amygdala knows how to talk to the Neocortex, but the Neocortex has practically no way to talk to the Amygdala. Or so Karen Pryor says.

This is bad news if you've been accidentally conditioned to have an aversive reaction to stuff like homework, and are trying to reason your way out of it. In fact, the only way to undo the damage seems to be through new operant conditioning.

* Mixing sticks and carrots (punishments and rewards) is worse than we tend to think, because the two are handled by different parts of the brain. Fear responses are handled by the Amygdala, while positive reinforcement is handled by the Hypothalamus. Activating both centres of the brain at the same time does not improve the efficiency of the training.

* Operant conditioning can be great fun because it stimulates the SEEKING system in the primitive brain. This is the primary emotion that drives us to go out window-shopping, travelling, exploring, etc. Getting a good chance to explore something can be extremely rewarding ...


... which may be why I have enjoyed this book so much
(along with her previous book "Don't Shoot the Dog", which is an excellent textbook on operant conditioning).




PS: Could it be that the SEEKING emotion is the reason why we get addicted to computer games? Could it be because these games offer us a constant barrage of opportunities to find out "WHAT HAPPENS NEXT?"

søndag 13. mars 2011

"Jørgen Explains"

Ilana has collected all her little Facebook videos of me in a YouTube channel she calls Jørgen Explains. The latest addition to the collection is a shortened version of my talk yesterday, at the Unitarian Universalist Fellowship in Oslo, which she also helped me write.

I'm grateful - and flattered!
:-J

lørdag 12. mars 2011

Limitations on human rationality

I have always been a firm believer in human intelligence. Maybe not in my own, so much as in the principle.  

I used to think that there is no problem
so big or so complicated
that a great mind can’t reason its way out of it,
at least if given enough information and time.

Try to imagine me as a guy who walks around with a hammer, and thinks that every problem in the world looks like a nail. That’s me, except that the tool isn’t a hammer.  It’s more like a flashlight, shining a narrow ray of attention into the chaos of the world around me.  The way to have the best possible life, I thought, was always to learn and understand everything that held me back.

I still think it’s a good program, as far as it goes, but I’ve recently found a valuable counterpoint to it. It’s in a book I’m reading called “What Intelligence Tests Miss: The Psychology of Rational Thought”, by Keith E. Stanovich. Among other things, it says that if we gave every human being a pill that instantly added 20 points to their IQ scores, we would still be making most of the same mistakes the next day.  The biggest difference would be that we’d be making them faster and more effectively. Intelligence, in other words, is not everything.

When I mentioned the title to a friend of mine who is a psychologist, he said, “That must be a very long book.” However, the most important insight we can gain from it is a very short one.  It’s that

We Are All Cognitive Misers,

as Stanovich calls it.  This is shorthand for the fact that

The brain will always try to use
the simplest and sketchiest model of the world
that it can get away with.

There are many reasons why we’re cognitive misers. One is that when brains and sensory systems first started to evolve, they were very primitive. The operating system that ran on those early brains therefore had to make do with some pretty sketchy ideas of what the world consisted of.  Still, they did their job. They helped keep their owners alive.  Over the next several hundred million years, brains kept evolving, and they always kept this as their first priority:

The evolutionary purpose of the brain
is to help keep its owner alive,
and promote his/her reproductive success,
not to take him/her to the moon.

When you look at the brain from that angle, it’s an absolutely amazing device. Normal computer programs need to be complete. They need to be double and triple checked for “bugs”.  If something goes wrong, they crash. The brain, on the other hand, can make do with only the vaguest beginning of an idea.  If something is missing, it simply fills in the blanks with whatever looks most probable, and goes on computing.  If it needs a concept, it forms one on the spot.  If input from one sense is confusing, it consults the other senses.

The brain’s image of the world around it will always look whole and complete, no matter what flaws and approximations it contains. The stuff we use to fill in the blanks usually fits so well, that we’re not aware that anything is missing.

This is as it should be. A bigger and more complicated model will require more processing power from the brain. The smaller and simpler we can make it, the easier we will get by. This type of optimization seems to be a general principle behind the way the nervous system is organized.  Brains are optimized for speed, not for accuracy:

Every task that the brain can delegate
to a lower level of consciousness,
will be delegated, and to the lowest possible level.

If there’s enough information that a lower level of your nervous system knows what to do with it, without bothering your conscious mind, then that action will usually be taken without any further notification to higher quarters. The most extreme example of this is the simple “reflexes” that we learn about in school: A knee jerk here or a hand flying back at the touch of something hot.  But it’s actually much more pervasive than that.  It permeates everything:  If a simple and primitive solution is good enough, it will be adopted until we see a reason to do otherwise, and without any resources being wasted on conscious processing.

This process of delegation happens all over our sensory systems. Let me take the eye as an example.  Imagine that you are staring at a field of uniformly gray sky. That means that all the little rods and cones at the back of your eyes will be reporting the exact same level of stimulation back to the brain.

Now imagine that a pinpoint of light suddenly comes on in the sky, causing just one little cell in your retina to start firing back to the brain.  Can’t you hear it? It’s jumping up and down, shouting I see light! while all the others are still only seeing uniform grey.  The thing that blew my mind when I learned about this, is that the neighbours of that little cell are also going to start responding, even though nothing has changed for them. They’ll respond by reporting a false drop in light intensity, as if it had suddenly gotten darker. It’s as if they’ve become jealous: “Now that my neighbour has more, it feels as if I’ve got less.”

You can say that the eye is lying to the brain, but it’s actually an example of delegation.  

*    If the brain had to analyze every single pixel in the picture that’s being projected onto our retinas, it probably wouldn’t be able to make heads or tails of it ... and it certainly wouldn’t have time for anything else.  

*    Instead, by the time the signal leaves the eye, it has already been heavily doctored. Contrasts and movement have been emphasized, and that makes it just simple enough for the brain to take effective action.
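For readers who like to tinker: this "jealous neighbour" doctoring is known as lateral inhibition, and a toy version of it fits in a few lines of code. This is my own simplified sketch, not real retinal physiology - each cell reports its own input minus half the average of what its two neighbours receive:

```python
# Toy model of lateral inhibition in a one-dimensional row of "retinal cells".
# Each cell reports its own input minus half the average of its neighbours.
def lateral_inhibition(cells, k=0.5):
    out = []
    for i, c in enumerate(cells):
        left = cells[i - 1] if i > 0 else c            # edge cells see themselves
        right = cells[i + 1] if i < len(cells) - 1 else c
        out.append(c - k * (left + right) / 2)
    return out

grey_sky = [1.0] * 7                            # uniform grey sky
pinpoint = [1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0]  # one cell sees a point of light

print(lateral_inhibition(grey_sky))  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(lateral_inhibition(pinpoint))  # [0.5, 0.5, 0.25, 1.5, 0.25, 0.5, 0.5]
```

The cell under the pinpoint shouts (1.5 instead of the 0.5 baseline), and its neighbours report a false drop (0.25) - exactly the "jealousy" described above, and enough to make the contrast stand out before the signal ever leaves the eye.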

The eye was only an example.  All sensory input has to travel through a whole hierarchy of nerve cells, on its way towards the top level of full consciousness. Long before it gets there, salient points have been emphasized. Others have been suppressed. Different forms of input are compared and used to emphasize or cancel each other out.

For every level that a signal passes, on its way to full consciousness, details will invariably get lost. That’s how you’re able to filter one thread of conversation out of the background hum during a party.  That’s how you’re able to pick out one little detail in a picture. The implication of this is that

Your subconscious, in the wider sense,
has a lot more information at its disposal
than your conscious mind has,

which is why our semi-conscious or subconscious assessments can be not only much faster, but also much more accurate than our conscious analysis.

It’s time to start summing up the insights that I feel I’ve gained. There are three of them.

- o 0 o -

The first is a new understanding of the cognitive biases.  Every one of these seems to be an expression of how our brains are optimized for speed and computational ease, rather than accuracy.  I’ll mention a few examples:

*    “Confirmation bias” is one of the most pervasive, because it’s an expression of how we don’t interpret sensory input from scratch.  Instead, we filter it through other layers of meaning, including our ideas of what the world should look like. When I was looking for an example of how this works, I suddenly remembered the debate around the so-called “Ground Zero Mosque”. The thing people couldn’t agree about then, was whether a Sufi Muslim religious centre two blocks away from the World Trade Center site would be a victory for the Terrorists or for our own ideals of Religious Freedom.

   When I studied the issue, I got the impression that the majority of Muslim terrorism is directed against other Muslims: Shias against Sunnis, Sunnis against Shias ... and Everybody against the Sufis. Maybe I’m wrong, but to me it looked as if a Sufi religious center near ground zero would be more likely to be a terrorism target than a terrorism rallying point.

   Another example of confirmation bias is very close to my own heart: The way lawyers and their clients always seem to be more sensitive to information going in their favor, than to information pointing the other way. The result is a fundamental inefficiency in how legal disputes are settled. My personal estimate is that at least 75% of lawyers’ incomes are derived from this factor alone, but then again, I’m biased.

*    “Base rate neglect” or “Base rate fallacy” is the tendency to base judgments on specifics, ignoring general statistical information because that’s more abstract and hard to relate to.  This is why some people anchor their judgements on Muslims in what they see on TV (terrorism, repression of women), and don’t bother to find out how many Muslims are actually sensible, peace-loving, flexible-minded and hard-working.

*    The “Bandwagon effect” is another kind of cognitive bias, which occurs when we don’t bother to form all our ideas from scratch because other people seem to have done the work for us. This is a huge problem in some circumstances - and a huge source of efficiency in others.

*    The “Availability cascade” is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse. "Repeat something long enough and it will become true". What is “available” in memory gets the appearance of being more likely, and as we all know, our memories are biased toward vivid, unusual, or emotionally charged examples.
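Base rate neglect, in particular, is easiest to appreciate with numbers. Here is the classic textbook illustration, with entirely hypothetical figures of my own choosing: a test that is 99% accurate, for a condition that only 1 person in 1,000 actually has.

```python
# Classic base-rate illustration (hypothetical numbers, not real medical data).
population = 1_000_000
base_rate = 1 / 1000        # 1 in 1,000 actually has the condition
sensitivity = 0.99          # the test catches 99% of real cases
false_alarm_rate = 0.01     # ... and wrongly flags 1% of healthy people

sick = population * base_rate                              # 1,000 people
true_positives = sick * sensitivity                        # 990
false_positives = (population - sick) * false_alarm_rate   # 9,990

p_sick_if_positive = true_positives / (true_positives + false_positives)
print(round(p_sick_if_positive, 3))  # 0.09 - only about 9% of positives are real
```

The specific (a frightening positive result, a frightening TV image) feels overwhelming; the base rate (999 out of 1,000 are healthy) is abstract, so we neglect it - and overestimate the danger by a factor of ten.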

I have printed a list of cognitive biases in an attachment here, and encourage you to read it at home. It’s an interesting list.

Keith Stanovich sees these biases, which are all part of our default mode of operation, as failures to operate rationally. It’s easy to agree with him on this.  In every instance, it is possible to say that we would do better if we were able to eliminate the bias and the mental shortcuts that lie behind it.

On the other hand, I’m also tempted to see these cognitive biases as examples of how the human brain needs to function, if it is to function at all.  It’s all very well to know exactly what “sand” is, but sometimes we just have to throw it in the eyes of the attacking tiger and get on with our lives.  The reason is that our highest level of rational consciousness, that Stanovich calls “type 2 reasoning”, consumes a lot of resources. If we try to attain the highest possible level of rationality in all aspects of life, we’ll get to be right, but we won’t get to be much else.

There’s also the problem of focus when trying to engage in “type 2" fully conscious and rational reasoning.  Our highest level of attention is very much like a ship’s radar: It can only look in one direction at a time.  With the first naval radar sets, the captain of a ship actually had to steer his ship in a full circle, if he wanted to scan the whole horizon. (That was how British naval forces discovered the battleship “Bismarck” and the heavy cruiser “Prinz Eugen” in the Denmark Strait on May 24, 1941). Later radar sets were set up with revolving antennae, so that the single beam could scan the whole 360 degrees of the horizon every 5 seconds or so, giving equal attention to everything.

If our human system of attention regulation had been set up in this manner, making us pay equal amounts of attention to everything, I doubt that we’d ever have gotten out of the primordial ooze.  The cognitive biases occur because we have no choice:  We’ll be lucky if we can compensate for 1 or 2 of them at a time.

- o 0 o -

I said before that every task that the brain can delegate to a lower level of consciousness, will be delegated, and to the lowest possible level.  My second insight is that this seems to explain why Training is so much more powerful than Learning.

*    “Learning” is all about understanding and memorizing facts and rules.  It’s an incredibly powerful tool.  “Learning” is powerful because it’s about deepening our understanding of why we should or shouldn’t do things, like point a sextant at the sun or eat less sugar. Without learning in this sense, we’d probably still be stuck in the stone age.

*    “Training” is all about creating sub-conscious dispositions to do one thing rather than another.  It’s known as behaviour training, shaping, and a host of other names. Well-known examples are Ivan Pavlov and his drooling dogs, B.F. Skinner and his superstitious pigeons, and smiling dolphins jumping high in the air at our marine parks.

If training and learning contradict each other, the thing you’re trained to do (or think) will usually win. The reason, I believe, is that training operates at a lower level of consciousness, and tends to determine our actions long before we’ve had time to think them over. It makes us want to do things, not just because there might be a bucket of fish (or praise or ice cream) at the end of the game, but because the action has come to feel natural for us.  It feels good.  It’s become part of what we consider “who we are”, rather than just “what we do”. And once we’re there, it’s usually easy to find apparently rational justifications for what we want to do.

- o 0 o -

My third insight is a feeling that I can now explain God rationally, and that God doesn’t need to exist ... but that I can’t always replace him with pure rationality.

The question about God’s existence can be made pretty simple.  As I said before, the brain will always try to use the simplest and sketchiest model of the world that it can get away with. Can you imagine a simpler model than “God” for how the Earth and the Universe came to be? I can’t, and it’s a model that satisfied the needs of all our early ancestors.  Can you imagine a simpler source of ultimate authority than “God”, behind the rules we try to make each other follow?  I can’t.

When I look at God from that angle, it doesn’t seem to matter any more whether “he” exists or not. All that matters is whether the concept makes it easier for us to operate at the level of accuracy that’s needed at the moment.

Finding a better explanation than “God” is easy when we’re talking about the Creation of the Universe, but difficult - and maybe unwise - when we’re looking at “him” as the ultimate source of moral authority.  Rational analysis is slow and cumbersome. It’s narrow in focus.  It’s at constant risk of irrational cognitive bias. If the Police and our human Rationality were to be our only bulwarks against moral transgressions, I think we’d be worse off than we are today.  It may be irrational to believe in the Wrath of God and Eternal Damnation, but in times of great temptation, rationality is usually playing second fiddle anyway. In fact, research shows that when we have enough stress hormones in our systems, the prefrontal cortex starts to shut down, or gets subdued. In a crisis, we're not meant to be rational. We're meant to go on autopilot.

This is not to say that I want to start scaring little children out of their wits again, about how they’ll burn in hell if they don’t do as we say. I’m saying that whatever we bring up to fill the void when God goes out the window, needs to be simple, powerful, and capable of acting directly on an irrational mind.


This posting was written with the help of Ilana Bram.

mandag 21. februar 2011

Logic

Logic is like a sewer: What you get out of it depends on what you put into it. 
It's great for some tasks, like getting rid of garbage. But when you actually have to wade out into the pool, and start sorting carefully between garbage and non-garbage, it can be worse than useless: a blind that covers the subjective nature of our priorities, and the ambiguity of the words that make up the premises.

onsdag 9. februar 2011

Interesting Correlations

Ilana pointed me to this article from OKCUPID, because a friend of hers had crunched the numbers.

The Best Questions For A First Date


I was fascinated, particularly by some of the correlations towards the end.

What's it most important for potential partners in a couple to agree on?
The top 3 user-rated match questions are

"Is God important in your life?"
"Is sex the most important part of a relationship?"
"Does smoking disgust you?"

However, in 85% of the couples formed with the help of OKCUPID, people gave diverging answers to one or more of these questions. On the other hand, a whopping 32% agree on all of these:

"Wouldn't it be fun to chuck it all and go live on a sailboat?"
"Do you like horror movies?"
"Have you ever traveled around another country alone?"

Are political beliefs correlated with anything else?
On OKCUPID, people have an opportunity to state whether they prefer simplicity over complexity, or vice versa.

On questions like

"Should burning your country's flag be illegal?"
"Should the death penalty be abolished?"
"Should gay marriage be legal?"
"Should evolution and creationism be taught side-by-side in schools?"

it turns out that the people who "prefer simplicity" are twice as likely to answer these questions in a "conservative" way. People who "prefer complexity", on the other hand, are twice as likely to lean in the liberal direction.

I'm wondering whether this has anything to do with the S/N difference in the MBTI system.  Somebody ought to look into this.


And what about religious beliefs?

One of the questions in the OKCUPID questionnaire is "Do spelling and grammar mistakes annoy you?". It turns out that people who answer no to this are twice as likely to be at least moderately religious as people who answered yes. Maybe this is a question of tolerance, i.e. that religious people are more okay with small mistakes than other people.

However, the article also looks into another possibility, expressed by this graph that I let speak for itself: