Monday, April 18, 2011

Throughout my adult life, I've been on the receiving end of a lot of envy: "How on earth do you manage to eat so much and stay so slim?" Two articles in the NYTIMES over the last week have suddenly helped me come up with answers.

Is Sitting a Lethal Activity?
by James Vlahos suggests that my physical restlessness is part of the explanation. He describes experiments that closely monitor people's activity levels (using underwear with built-in sensors that register movement and posture) and food intake.  Some participants gained weight, while others didn't. The difference between the two groups wasn't some weird metabolic factor, but how much they moved. Both groups were forbidden to exercise, so that wasn't the decisive factor.  Instead, it was how much time they spent on their feet, moving around, or simply fidgeting.

On average, the subjects who gained weight sat two hours more per day than those who didn't. And when they sat, electrical activity in their muscles went down - the way the hum in a theater dies down when the curtains start moving apart. Their calorie-burning rate immediately plunged to about one per minute, a third of what it would have been if they'd been up and walking. Insulin effectiveness dropped. The enzymes responsible for taking fats out of the bloodstream plunged.

Is Sugar Toxic? by Gary Taubes strongly suggests that my (relative) lack of interest in sugar is another part of the explanation. The story has two suspects, called "Sugar" (sucrose) and HFCS (high-fructose corn syrup). The crime they are suspected of is nothing less than the tremendous increase, over the last 50 years, in obesity, heart disease, diabetes and cancer.

If the suspects are guilty, the article goes, it's probably because they share one property: when they're broken down in the intestine, 50-55% of them gets turned into fructose, which has to be processed by the liver. Unfortunately, the liver has a tendency to turn fructose into fat if it's given too much. And "too much" might turn out to be a much smaller quantity than most people think.

- o 0 o -

While I was writing this, another vision popped into my head. Those fat cells that we're complaining about aren't alien invaders. They are parts of us, and they have only one way to defend themselves when we try to slim them out of existence: they complain, and make us feel miserable. Wouldn't you have done the same, in their place?


:-J

Wednesday, April 13, 2011

How to save a trillion dollars?

Do you remember the story of the Emperor's New Clothes?  I had a moment like that today, when I read yesterday's column in the NYTIMES by Mark Bittman. His message is that the gigantic deficit in the US economy amounts to a rounding error, compared with the Federal Budget's share of the cost of our lifestyle diseases.

It's a surreal moment. It's like having an elephant in the room - an elephant that's grown big enough to almost choke off the world's richest economy. And what are our politicians doing? They're bringing the Federal Government to the brink of a shutdown in a showdown over how to shrink the "rounding error" without raising people's taxes.

How big is the elephant going to be next year? And why not raise the taxes? Don't most Americans have more than enough money to eat themselves to death? Isn't it in fact the very convenience of it all, the living standard that we're trying to raise, that's killing us?

As human beings, we have evolved with a very fine balance between appetite for food on the one hand, and our dislike of walking on the other.  If our ancestors wanted to eat dinner, they had to hunt it or gather it, or both. Imagine having to walk five miles for your dinner. Imagine getting only a half portion, and having to walk five more miles for the other half. That's the kind of resistance that your appetite evolved to overcome.

The older you got, and the more your joints ached, the stronger your appetite had to be, in order to get you to feed yourself. In the end, you starved to death. Today, you're more likely to eat yourself to death. It's a big difference.

:-(

Tuesday, April 5, 2011

Clicker training

In Karen Pryor's latest book,

Reaching The Animal Mind,

she tells the story of her life as a trainer of animals and humans, and elaborates further on the ideas behind the "clicker training" that she helped develop. The first half of the book held my attention, and I ploughed through the pages at a steady pace.

It was in the second half that I began to take notes, when I realized that she was drawing lines between operant conditioning on the one hand, and what Temple Grandin called the "Blue Ribbon Emotions" on the other. This helps explain more of what I called, in an earlier post, the difference between "training" and "learning".


* Operant conditioning ("training") is fast because it bypasses the Cortex. It addresses the primitive parts of the brain directly. The signal hardly has to be modulated or interpreted at all.

* The effects of operant conditioning last for a long time, because operant conditioning establishes an extremely short and simple link between information and its usefulness. The brain seems to be "wired" to keep information in memory longer, the more useful it appears to be.

* The effects of operant conditioning can be hard to undo through "teaching", because the traffic from the primitive parts of the brain (like the Amygdala, which controls fear responses) is largely one-way. The Amygdala knows how to talk to the Neocortex, but the Neocortex has practically no way to talk to the Amygdala. Or so Karen Pryor says.

This is bad news if you've been accidentally conditioned to have an aversive reaction to stuff like homework, and are trying to reason your way out of it. In fact, the only way to undo the damage seems to be through new operant conditioning.

* Mixing sticks and carrots (punishments and rewards) is worse than we tend to think, because the two are handled by different parts of the brain. Fear responses are handled by the Amygdala, while positive reinforcement is handled by the Hypothalamus. Activating both centres of the brain at the same time does not improve the efficiency of the training.

* Operant conditioning can be great fun because it stimulates the SEEKING system in the primitive brain. This is the primary emotion that drives us to go out window-shopping, travelling, exploring, etc. Getting a good chance to explore something can be extremely rewarding ...


... which may be why I have enjoyed this book so much (along with her previous book "Don't Shoot the Dog", which is an excellent textbook on operant conditioning).




PS: Could it be that the SEEKING emotion is the reason why we get addicted to computer games? Could it be because these games offer us a constant barrage of opportunities to find out "WHAT HAPPENS NEXT?"

Sunday, March 13, 2011

"Jørgen Explains"

Ilana has collected all her little Facebook videos of me in a YouTube channel she calls Jørgen Explains. The latest addition to the collection is a shortened version of my talk yesterday, at the Unitarian Universalist Fellowship in Oslo, which she also helped me write.

I'm grateful - and flattered!
:-J

Saturday, March 12, 2011

Limitations on human rationality

I have always been a firm believer in human intelligence. Maybe not so much in my own as in the principle.

I used to think that there is no problem
so big or so complicated
that a great mind can’t reason its way out of it,
at least if given enough information and time.

Try to imagine me as a guy who walks around with a hammer, and thinks that every problem in the world looks like a nail. That’s me, except that the tool isn’t a hammer.  It’s more like a flashlight, shining a narrow ray of attention into the chaos of the world around me.  The way to have the best possible life, I thought, was always to learn and understand everything that held me back.

I still think it’s a good program, as far as it goes, but I’ve recently found a valuable counterpoint to it. It’s in a book I’m reading called “What Intelligence Tests Miss: The Psychology of Rational Thought”, by Keith E. Stanovich. Among other things, it says that if we gave every human being a pill that instantly added 20 points to their IQ scores, we would still be making most of the same mistakes the next day. The biggest difference would be that we’d be making them faster and more effectively. Intelligence, in other words, isn’t everything.

When I mentioned the title to a friend of mine who is a psychologist, he said, “That must be a very long book”. However, the most important insight we can gain from it is a very short one. It’s that

We Are All Cognitive Misers,

as Stanovich calls it. This is shorthand for the fact that

The brain will always try to use
the simplest and sketchiest model of the world
that it can get away with.

There are many reasons why we’re cognitive misers. One is that when brains and sensory systems first started to evolve, they were very primitive. The operating system that ran on those early brains therefore had to make do with some pretty sketchy ideas of what the world consisted of. Still, they did their job. They helped keep their owners alive. Over the next several hundred million years, they kept evolving, and they always kept this as their first priority:

The evolutionary purpose of the brain
is to help keep its owner alive,
and promote his/her reproductive success,
not to take him/her to the moon.

When you look at the brain from that angle, it’s an absolutely amazing device. Normal computer programs need to be complete. They need to be double and triple checked for “bugs”.  If something goes wrong, they crash. The brain, on the other hand, can make do with only the vaguest beginning of an idea.  If something is missing, it simply fills in the blanks with whatever looks most probable, and goes on computing.  If it needs a concept, it forms one on the spot.  If input from one sense is confusing, it consults the other senses.

The brain’s image of the world around it will always look whole and complete, no matter what flaws and approximations it contains. The stuff we use to fill in the blanks usually fits so well that we’re not aware that anything is missing.

This is as it should be. A bigger and more complicated model would require more processing power from the brain. The smaller and simpler we can make it, the more easily we get by. This type of optimization seems to be a general principle behind the way the nervous system is organized. Brains are optimized for speed, not for accuracy:

Every task that the brain can delegate
to a lower level of consciousness,
will be delegated, and to the lowest possible level.

If a lower level of your nervous system has enough information to know what to do, without bothering your conscious mind, then that action will usually be taken without any further notification to higher quarters. The most extreme examples of this are the simple “reflexes” that we learn about in school: a knee jerk here, or a hand flying back at the touch of something hot. But it’s actually much more pervasive than that. It permeates everything: if a simple and primitive solution is good enough, it will be adopted until we see a reason to do otherwise, and without any resources being wasted on conscious processing.

This process of delegation happens all over our sensory systems. Let me take the eye as an example.  Imagine that you are staring at a field of uniformly gray sky. That means that all the little rods and cones at the back of your eyes will be reporting the exact same level of stimulation back to the brain.

Now imagine that a pinpoint of light suddenly comes on in the sky, causing just one little cell in your retina to start firing back to the brain. Can’t you hear it? It’s jumping up and down, shouting I see light! while all the others are still only seeing uniform grey. The thing that blew my mind when I learned about this, is that the neighbours of that little cell are also going to start responding, even though nothing has changed for them. They’ll respond by reporting a false drop in light intensity, as if it had suddenly gotten darker. It’s as if they’ve become jealous: “Now that my neighbour has more, it feels as if I’ve got less.”

You can say that the eye is lying to the brain, but it’s actually an example of delegation.  

*    If the brain had to analyze every single pixel in the picture that’s being projected onto our retinas, it probably wouldn’t be able to make heads or tails of it ... and it certainly wouldn’t have time for anything else.  

*    Instead, by the time the signal leaves the eye, it has already been heavily doctored. Contrasts and movement have been emphasized, and that makes it just simple enough for the brain to take effective action. (A small sketch of this “lateral inhibition” follows below.)
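To make that doctoring concrete, here is a minimal sketch of the neighbour effect in Python. It is a toy illustration under invented assumptions, not real retinal physiology: the function name, the inhibition factor of 0.5, and the stimulus values are all made up for the example.

```python
# A toy one-dimensional "retina": each cell reports its own input minus
# a fraction of what its neighbours receive. All numbers are invented.

def lateral_inhibition(stimulus, inhibition=0.5):
    """Return each cell's response after subtracting neighbour activity."""
    response = []
    for i, level in enumerate(stimulus):
        left = stimulus[i - 1] if i > 0 else level
        right = stimulus[i + 1] if i < len(stimulus) - 1 else level
        response.append(level - inhibition * (left + right) / 2)
    return response

# A uniformly grey sky, with one pinpoint of light in the middle:
sky = [1.0] * 9
sky[4] = 3.0

print(lateral_inhibition(sky))
# The centre cell fires strongly (2.5), while its immediate neighbours
# report a false drop below the uniform background (0.0 versus 0.5):
# the "jealousy" described above. Contrast has been emphasized before
# the signal ever leaves the eye.
```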

The eye was only an example.  All sensory input has to travel through a whole hierarchy of nerve cells, on its way towards the top level of full consciousness. Long before it gets there, salient points have been emphasized. Others have been suppressed. Different forms of input are compared and used to emphasize or cancel each other out.

For every level that a signal passes, on its way to full consciousness, details will invariably get lost. That’s how you’re able to filter one thread of conversation out of the background hum during a party.  That’s how you’re able to pick out one little detail in a picture. The implication of this is that

Your subconscious, in the wider sense,
has a lot more information at its disposal
than your conscious mind has,

which is why our semi-conscious or subconscious assessments can be not only much faster, but also much more accurate than our conscious analysis.

It’s time to start summing up the insights that I feel I’ve gained. There are three of them.

- o 0 o -

The first is a new understanding of the cognitive biases.  Every one of these seems to be an expression of how our brains are optimized for speed and computational ease, rather than accuracy.  I’ll mention a few examples:

*    “Confirmation bias” is one of the most pervasive, because it’s an expression of how we don’t interpret sensory input from scratch.  Instead, we filter it through other layers of meaning, including our ideas of what the world should look like. When I was looking for an example of how this works, I suddenly remembered the debate around the so-called “Ground Zero Mosque”. The thing people couldn’t agree about then was whether a Sufi Muslim religious centre two blocks away from the World Trade Center site would be a victory for the Terrorists or for our own ideals of Religious Freedom.

   When I studied the issue, I got the impression that the majority of Muslim terrorism is directed against other Muslims: Shias against Sunnis, Sunnis against Shias ... and Everybody against the Sufis. Maybe I’m wrong, but to me it looked as if a Sufi religious center near ground zero would be more likely to be a terrorism target than a terrorism rallying point.

   Another example of confirmation bias is very close to my own heart: the way lawyers and their clients always seem to be more sensitive to information going in their favor than to information pointing the other way. The result is a fundamental inefficiency in how legal disputes are settled. My personal estimate is that at least 75% of lawyers’ incomes are derived from this factor alone, but then again, I’m biased.

*    “Base rate neglect” or the “base rate fallacy” is the tendency to base judgments on specifics, ignoring general statistical information because it’s more abstract and harder to relate to.  This is why some people anchor their judgements of Muslims in what they see on TV (terrorism, repression of women), and don’t bother to find out how many Muslims are actually sensible, peace-loving, flexible-minded and hard-working.

*    The “Bandwagon effect” is another kind of cognitive bias that occurs when we don’t bother to form all our ideas from scratch, because other people seem to have done the work for us. This is a huge problem in some circumstances - and a huge source of efficiency in others.

*    The “Availability cascade” is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse. "Repeat something long enough and it will become true". What is “available” in memory gets the appearance of being more likely, and as we all know, our memories are biased toward vivid, unusual, or emotionally charged examples.

I have printed a list of cognitive biases in an attachment here, and encourage you to read it at home. It’s an interesting list.

Keith Stanovich sees these biases, which are all part of our default mode of operation, as failures to operate rationally. It’s easy to agree with him on this. In every instance, it is possible to say that we would do better if we were able to eliminate the bias and the mental shortcuts that lie behind it.

On the other hand, I’m also tempted to see these cognitive biases as examples of how the human brain needs to function, if it is to function at all.  It’s all very well to know exactly what “sand” is, but sometimes we just have to throw it in the eyes of the attacking tiger and get on with our lives.  The reason is that our highest level of rational consciousness, which Stanovich calls “type 2 reasoning”, consumes a lot of resources. If we try to attain the highest possible level of rationality in all aspects of life, we’ll get to be right, but we won’t get to be much else.

There’s also the problem of focus when trying to engage in “type 2”, fully conscious and rational reasoning.  Our highest level of attention is very much like a ship’s radar: it can only look in one direction at a time.  With the first naval radar sets, the captain of a ship actually had to steer his ship in a full circle if he wanted to scan the whole horizon. (That was how British naval forces discovered the battleship “Bismarck” and the heavy cruiser “Prinz Eugen” in the Denmark Strait on May 24, 1941.) Later radar sets were given revolving antennae, so that the single beam could scan the whole 360 degrees of the horizon every 5 seconds or so, giving equal attention to everything.

If our human system of attention regulation had been set up in this manner, making us pay equal amounts of attention to everything, I doubt that we’d ever have gotten out of the primordial ooze.  The cognitive biases occur because we have no choice:  We’ll be lucky if we can compensate for 1 or 2 of them at a time.

- o 0 o -

I said before that every task that the brain can delegate to a lower level of consciousness, will be delegated, and to the lowest possible level.  My second insight is that this seems to explain why Training is so much more powerful than Learning.

*    “Learning” is all about understanding and memorizing facts and rules.  It’s an incredibly powerful tool.  “Learning” is powerful because it’s about deepening our understanding of why we should or shouldn’t do things, like point a sextant at the sun or eat less sugar. Without learning in this sense, we’d probably still be stuck in the stone age.

*    “Training” is all about creating sub-conscious dispositions to do one thing rather than another.  It’s known as behaviour training, shaping, and a host of other names. Well-known examples are Ivan Pavlov and his drooling dogs, B.F. Skinner and his superstitious pigeons, and the smiling dolphins jumping high in the air at our marine parks.

If training and learning contradict each other, the thing you’re trained to do (or think) will usually win. The reason, I believe, is that training operates at a lower level of consciousness, and tends to determine our actions long before we’ve had time to think them over. It makes us want to do things, not just because there might be a bucket of fish (or praise, or ice cream) at the end of the game, but because the action has come to feel natural to us.  It feels good.  It’s become part of what we consider “who we are”, rather than just “what we do”. And once we’re there, it’s usually easy to justify what we want to do in an apparently rational manner.

- o 0 o -

My third insight is a feeling that I can now explain God rationally, and that God doesn’t need to exist ... but I can’t always replace him with pure rationality.

The question about God’s existence can be made pretty simple.  As I said before, the brain will always try to use the simplest and sketchiest model of the world that it can get away with. Can you imagine a simpler model than “God” for how the Earth and the Universe came to be? I can’t, and it’s a model that satisfied the needs of all our early ancestors.  Can you imagine a simpler source of ultimate authority than “God” behind the rules we try to make each other follow?  I can’t.

When I look at God from that angle, it doesn’t seem to matter any more whether “he” exists or not. All that matters is whether the concept makes it easier for us to operate at the level of accuracy that’s needed at the moment.

Finding a better explanation than “God” is easy when we’re talking about the Creation of the Universe, but difficult - and maybe unwise - when we’re looking at “him” as the ultimate source of moral authority.  Rational analysis is slow and cumbersome. It’s narrow in focus.  It’s at constant risk of irrational cognitive bias. If the Police and our human Rationality were our only bulwarks against moral transgressions, I think we’d be worse off than we are today.  It may be irrational to believe in the Wrath of God and Eternal Damnation, but in times of great temptation, rationality is usually playing second fiddle anyway. In fact, research shows that when we have enough stress hormones in our systems, the prefrontal cortex starts to shut down, or gets subdued. In a crisis, we’re not meant to be rational. We’re meant to go on autopilot.

This is not to say that I want to start scaring little children out of their wits again, about how they’ll burn in hell if they don’t do as we say. I’m saying that whatever we bring up to fill the void when God goes out the window, needs to be simple, powerful, and capable of acting directly on an irrational mind.


This posting was written with the help of Ilana Bram.

Monday, February 21, 2011

Logic

Logic is like a sewer: What you get out of it depends on what you put into it. 
It's great for some tasks, like getting rid of garbage. But when you actually have to wade out into the pool, and start sorting carefully between garbage and non-garbage, it can be worse than useless: a blind that covers the subjective nature of our priorities, and the ambiguity of the words that make up the premises.

Wednesday, February 9, 2011

Interesting Correlations

Ilana pointed me to this article from OKCUPID, because a friend of hers had crunched the numbers.

The Best Questions For A First Date


I was fascinated, particularly by some of the correlations towards the end.

What's it most important for potential partners in a couple to agree on?
The top 3 user-rated match questions are

"Is God imporant in your life?"
"Is sex the most important part of a relationship?"
"Does smoking disgust you?"

However, in 85% of the couples formed with the help of OKCUPID, people had diverging answers to one or more of these questions. On the other hand, a whopping 32% agree on all of these:

"Wouldn't it be fun to chuck it all and go live on a sailboat?"
"Do you like horror movies?"
"Have you ever traveled round another country alone?"

Are political beliefs correlated with anything else?
On OKCUPID, people have an opportunity to state whether they prefer simplicity over complexity, or vice versa.

On questions like

"Should burning your country's flag be illegal?"
"Should the death penalty be abolished?"
"Should the marriage be legal?"
"Should evolution and creationism to be taught side-by-side in schools?"

it turns out that people who "prefer simplicity" are twice as likely to answer in a "conservative" way, while people who "prefer complexity" are twice as likely to lean in the liberal direction.

I'm wondering whether this has anything to do with the S/N difference in the MBTI system.  Somebody ought to look into this.


And what about religious beliefs?

One of the questions in the OKCUPID questionnaire is "Do spelling and grammar mistakes annoy you?". It turns out that people who answer no to this are twice as likely to be at least moderately religious as people who answered yes. Maybe this is a question of tolerance, i.e. that religious people are more okay with small mistakes than other people.

However, the article also looks into another possibility, expressed by a graph that I'll let speak for itself.

Monday, January 24, 2011

More about irrationality

Hi,

Today's post is about a piece of writing that I don't really recommend: A report from the French Food Safety Agency called



This report is a splendid example of irrational "scientific" thinking: intelligent people following what they consider good rules, and ending up with a completely ridiculous result. If you take the logic of this report seriously, it would be wrong of health care personnel to give mouth-to-mouth first aid to victims of drowning today:


a) The method carries with it several risk factors, like blowing air into the patient's stomach, which can cause the patient to vomit and get gastric acid into his airways. The method also carries the risk of over-inflation and lung damage, particularly if the patient is a child with low lung capacity. Chest compressions, which are often advocated along with the blowing, carry with them a severe risk of fractured ribs.

b) No double-blind study has yet been done where drowned patients have been randomly assigned to a treatment group and a placebo group, and where enough care has been taken to keep observers in the dark about which patients are getting mouth-to-mouth treatment and which have been given placebo.


Before I get back to the drowning victims, I'd like to say something about seven distinct types of irrationality that we may be dealing with here.

1) First and foremost, the authors of this report seem to be suffering from a severe case of irrational loss aversion. This factor was first convincingly demonstrated by Amos Tversky and Daniel Kahneman. Several studies have shown that the motivating force of a potential loss (like the social awkwardness of being on a diet) is much stronger than that of a corresponding gain (like recovering from autism). This is sometimes also called the pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but risk-seeking choices to avoid negative outcomes.


* On the one hand, they are taking very seriously a small risk for a modestly negative outcome: "No data are available on growth or nutritional status of autistic children subjected to a gluten-free, casein-free diet. Therefore, it is impossible to contend that such a diet has no harmful effect in the short, medium or long term".

* On the other hand, they are completely disregarding the possibility of a positive outcome.

2) Secondly, it seems safe to guess that the loss aversion in this case is being reinforced by omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

* The alternative they are advising against requires practical action.

* The alternative that they don't mind consists of inaction.

3) Thirdly, the way the authors justify their position makes me suspect that they also suffer from the observer-expectancy effect – they may have unconsciously manipulated the criteria for what they consider valid, in order to find the result they expected.


* On the one hand, they advise that studies should be assigned a credibility of zero unless they fulfill certain formal criteria: a control group (autistic children without dietary intervention), random allocation of treatment or placebo, and a double-blind protocol. These criteria happen to exclude all evidence of an effect of diet on autism.

* On the other hand, this last study is assigned high credibility, in spite of the fact that the study design is such that it didn't actually test the hypothesis that most parents and supporters believe in. It's as if they'd finally gotten around to doing a double-blind placebo-controlled study of mouth-to-mouth resuscitation, ... and were giving the patients 2 inhalations each, without clearing their airways of water first.

4) Fourth, they also seem to have been at risk of falling for the ambiguity effect - the tendency to avoid options for which missing information makes the probability seem "unknown". I see this as a version of the zero-risk bias - the preference for reducing a small risk to zero over a reduction in a larger risk.


* In autism, the total amount of uncertainty will be reduced if nothing is done (although in favour of a truly awful outcome).

5) Fifth, the authors must also have been at risk of hyperbolic discounting - the tendency to have a stronger preference for or against something, the closer to the present the cost or payoff is in time or space.


* In autism, the cost of treatment (financial and in the form of hassle) is certain and immediate, while the benefit is uncertain and several months (years) into the future.

6) Sixth, the authors have also been at risk of the Bandwagon effect – the tendency to do or believe things because many other people do or believe the same. This is related to "groupthink" and "herd behavior", to the Semmelweis reflex – the tendency to reject new evidence that contradicts an established paradigm – and to the availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse ("repeat something long enough and it will become true").


* No medical authorities have yet had the courage to be the first to make a rational cost-risk-benefit analysis of what this treatment has to offer autistic children.

7) Lastly, it's also possible that the authors (or the authority figures who have created today's medical paradigm in this area) are suffering from confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions. This factor will be particularly strong if the preconceptions in question have the power to cause severe and irreparable damage.


* In autism, it can be postulated that the medical authorities will be less and less disposed towards changing their policy, the more children have suffered irreparable brain damage as a result of those policies.

There is probably a strong synergistic interaction between all these kinds of irrationality.

Now, back to the drowning victims. Do I hear voices saying that it's unethical not to treat them with whatever means we have at hand, as long as there's a chance that we might save them? Thank you very much. That's exactly my point. Why shouldn't the same rule be applied to autistic children? Untreated autism leaves the patient to suffer for a lifetime (irreversible brain damage), along with his parents (burned out), his siblings (neglected), and the rest of society (stuck with the bill when the patient grows up).

We still have a chance of saving at least some of these children. Why are these people insisting that we don't even try?



- o 0 o -

Before you can say you have a rational solution to the autism-and-diet question, you need to do the math involved in a structured risk analysis. I recommend that everyone who is interested in this issue start by setting up a risk analysis matrix around at least five risk factors:


1) The patient's physical growth is retarded because of a well-monitored GFCF diet,
2) The patient develops irreversible brain damage from untreated autism,
3) The patient's social life becomes more complicated because of a GFCF diet,
4) The patient's social life becomes more complicated because of autism,
5) The patient's parents incur some expense because of the GFCF diet.

They should then assign a probability of 1-100% to each of these factors, under the following alternatives:


a) The patient does not try the diet.
b) The patient tries the diet, has no effect from it, and discontinues it after 6 months.
c) The patient tries the diet, experiences an average *) reduction in symptoms, and continues.
d) The patient tries the diet and recovers completely.

*) The average outcome of dietary experiments is still unknown. Even if it were known, it would be impossible to extrapolate from that figure to what was going to happen to one specific patient. For that patient, the full range of outcomes must still be reckoned with, from nothing to a full recovery. The actual treatment decision must therefore always be based on possibilities, rather than probabilities. Even so, I've attempted to assign a value to this column, based on my best assessment of the total amount of evidence available.

Next, they should assign a numerical value (for example 1-100) to the importance of each of the problems above, relative to the worst possible outcome in the most serious of the categories, to make them comparable.

Lastly, they should multiply probability by importance. The resulting numbers will say something about how important it is to guard against each of these factors.
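Here is a minimal sketch of that matrix in Python. Every probability and importance weight below is an invented placeholder, chosen only to show the mechanics; they are not my actual figures, and alternatives b) and d) are left out for brevity.

```python
# A toy version of the risk analysis matrix described above.
# All probabilities and importance weights are invented placeholders.

risk_factors = [
    "growth retarded by a well-monitored GFCF diet",
    "irreversible brain damage from untreated autism",
    "social life complicated by the diet",
    "social life complicated by autism",
    "parental expense from the diet",
]

# Probability (0.0 - 1.0) of each factor under two of the alternatives:
probabilities = {
    "a) no diet":              [0.00, 0.80, 0.00, 0.90, 0.00],
    "c) diet, average effect": [0.05, 0.40, 0.60, 0.50, 1.00],
}

# Importance (1 - 100), relative to the worst outcome in the worst category:
importance = [20, 100, 10, 40, 5]

for alternative, probs in probabilities.items():
    weighted = [p * w for p, w in zip(probs, importance)]
    print(f"{alternative}: total weighted risk = {sum(weighted):.1f}")
    for factor, score in zip(risk_factors, weighted):
        print(f"  {score:6.1f}  {factor}")
```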

My conclusion, at the bottom of my personal risk analysis matrix, was that a lifelong diet needed to produce a recovery rate of around 4%, or a reduction in the severity of the autistic outcome of the same order, in order to outweigh the costs. If we take into consideration that patients who don't benefit from the diet can discontinue it after 6 months, the figure falls to under 1%.
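The break-even logic behind those two percentages can be written as a one-line expected-value inequality: the diet is worth trying when the chance of recovery times the benefit of recovery exceeds the cost of the diet. The cost figures below are invented placeholders on the same 1-100 importance scale; only the arithmetic is the point.

```python
# Break-even sketch: try the diet when
#     p_recovery * benefit_of_recovery > cost_of_diet
# i.e. p_recovery > cost_of_diet / benefit_of_recovery.
# The numbers are invented placeholders, not my actual figures.

benefit_of_recovery = 100    # avoiding the worst outcome, on the 1-100 scale
cost_lifelong_diet = 4       # lifelong expense and social hassle
cost_six_month_trial = 0.8   # the same costs, incurred for only 6 months

print(cost_lifelong_diet / benefit_of_recovery)    # 0.04  -> around 4%
print(cost_six_month_trial / benefit_of_recovery)  # 0.008 -> under 1%
```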

This is all based on my personal value system, i.e. how I (for example) weigh a 100% certain loss of social convenience (being able to eat anything at birthday parties) against a somewhat lower chance of irreversible brain damage.


I'm longing for the day when the Government's medical researchers start doing this kind of math. When they come clean about their priorities. When they start looking at ALL the available evidence, including the lab reports that show the opioid peptides right there, physically present in the blood and urine, instead of dismissing that evidence just because it isn't part of a randomized double-blind placebo-controlled crossover study.

And I'm longing for the day when the same people start paying attention to study designs, and start laughing - along with me - at studies that are so badly designed that they can't prove anything, even though they fulfill all those sensible criteria, with randomization and crossover and all.

:-J

Wednesday, January 19, 2011

IQ and rationality

I'm reading about the distinction between intelligence and rationality these days:

What Intelligence Tests Miss: The Psychology of Rational Thought

There's one aspect of the book that I'd like to comment on so far:

Intelligence has to do with the processing power of the brain. Rationality has to do with how we form our beliefs.

In other (my) words: You don't have to be unintelligent to believe that you're safer with a gun in your house, or that lower taxes are always better. You just have to be irrational.

What do I mean by irrational? It covers a lot of things, but an important element is our common tendency to be more impressed by evidence that supports our worldview than by evidence that contradicts it.

Let's start with the pistols and revolvers. Of course, it's easy to create in our minds a scenario where you're safer with a gun in your house than without it. "You hear an intruder. You have time to find your gun and load it before the intruder finds it. You know how to use it effectively and responsibly, even when your adrenaline is sky high. The intruder is either unarmed or unprepared." And at the end of the story, you are the hero, unhurt, and the intruder is either dead, wounded or (hopefully just) in chains.

The problem is that this scenario is far more appealing than it is likely. Most people who are shot, are shot by their own family members, by acquaintances, or by themselves - not by intruders. If you let objective statistics, and not your wishful thinking, be your guide, you'll know that having a gun in your house is far more likely to get you or one of your loved ones shot than to protect them from getting shot.

As for the taxes, of course it's tempting to believe that we get richer the more of "our" earnings we're allowed to keep. If you let objective statistics be your guide, however, you'll see that the easiest way to get richer is to live in a rich society, and that rich societies on average have higher tax rates. This doesn't prove that higher taxes will make you rich, but it does suggest, at least to me, that it's pure wishful thinking to imagine that "my" earnings are "mine", and to forget that the opportunity to make so much wasn't a free lunch.

If you sort the lists of countries by GDP per capita (adjusted for purchasing power) and tax rates (as percentages of GDP), the poorest 25 have an average tax rate of 13.6%, and the richest 25 have 37.2%. Coincidence? Hardly.
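For concreteness, here is a sketch of that averaging in Python, using a handful of rows from the tables at the end of this post. The slices are just for illustration; run the same computation over all 25 countries in each list to reproduce the 37.2% and 13.6% figures.

```python
# Average tax rate (% of GDP) over a small sample from each table below.
# These dictionaries are slices of the full lists, for illustration only.

richest = {"Luxembourg": 36.4, "Norway": 43.6, "Singapore": 13.0,
           "USA": 28.2, "Netherlands": 39.5}
poorest = {"Liberia": 13.2, "Congo, DR": 13.2, "Burundi": 17.4,
           "Niger": 11.0, "Central African R.": 7.7}

def average_tax_rate(countries):
    return sum(countries.values()) / len(countries)

print(f"richest sample: {average_tax_rate(richest):.1f}% of GDP")
print(f"poorest sample: {average_tax_rate(poorest):.1f}% of GDP")
```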

History is awash with idealist politicians: people whose grandiose but irrational beliefs caught on with the masses, and ended up creating major disasters. Marx? Lenin? Hitler? I believe they all belonged to that category.

If you want to comment on this thread, please let the thread be one of dialogue ("yes, and ..."), not debate ("no, because ..."). One question I'd like your opinions on: which three popular politicians or political movements of today stand out most for the irrationality of their beliefs?

:-J



Top 25 countries by GDP per capita (adjusted for purchasing power)

GDP rank | Tax rank | Country | GDP per capita | Tax as % of GDP
1 151 Luxemb. 57.640 36,40
2 172 Norway 56.050 43,60
3 47 Singap. 49.850 13,00
4 126 USA 46.730 28,20
5 163 Netherl. 40.510 39,50
6 177 Sweden 38.560 49,70
7 170 Austria 38.550 43,40
8 133 Australia 38.210 30,50
9 150 Denmark 37.720 50,00
10 139 Canada 37.590 33,40
11 161 UK 37.360 39,00
12 99 Germany 36.960 40,60
13 175 Belgium 36.520 46,80
14 100 Finland 34.430 43,60
15 171 France 33.980 46,10
16 165 Iceland 33.390 40,40
17 144 Ireland 33.280 34,00
17 122 Japan 33.280 27,40
19 158 Spain 31.630 37,30
20 168 Italy 31.330 42,60
21 93 Greece 28.440 33,50
22 117 Korea S 27.310 26,80
23 154 Israel 27.040 36,80
24 162 Slovenia 26.340 39,30
25 124 Trin.&Tob. 25.100 28,00
26 153 Czech R. 23.610 36,30
27 156 Portugal 22.870 37,00
28 129 Slovakia 21.600 29,50


Bottom 25 countries by GDP per capita

GDP rank | Tax rank | Country | GDP per capita | Tax as % of GDP
156 50 Liberia 290 13,20
155 40 Congo,DR 300 13,20
154 75 Burundi 390 17,40
153 35 Niger 660 11,00
152 19 C.African R 750 7,70
151 92 Malawi 760 20,70
150 27 S.Leone 790 10,50
149 68 Togo 850 15,50
148 52 Mozamb. 880 13,40
147 134 Ethiopia 930 11,60
146 56 Rwanda 1.060 14,10
145 36 Burkina F. 1.170 11,50
144 33 Nepal 1.180 10,90
142 63 Mali 1.190 15,30
142 45 Uganda 1.190 12,60
141 8 Chad 1.230 4,20
140 71 Zambia 1.280 16,10
139 106 Comoros 1.300 12,00
138 26 Gambia 1.330 18,90
137 42 Tanzania 1.350 12,00
136 166 Ghana 1.480 20,80
135 65 Benin 1.510 15,40
134 83 Kenya 1.570 18,40
133 22 Banglad. 1.580 8,50
132 54 Côte d'Ivoire 1.640 15,30

Source: Wikipedia
Selection: Countries with data in both columns