I’m having fun reading the New Yorker article linked from BoingBoing, about how smart people are more vulnerable to common thinking errors than dumb people are–or at least, there is a positive correlation between SAT scores and bias errors.
I suspect that Terry Pratchett got there first, since I remember a quote about his character Leonard of Quirm, who (in Lord Vetinari’s estimation), had, in scaling the heights of intelligence, found heretofore undiscovered new plateaus of stupidity. It’s not quite the same thing, but it’s a similar sentiment. Most geeks and nerds don’t end up doing better in life than their dumber peers, despite their measurably greater intelligence.
In a similar vein, I’ve been reading a history of Korea, the most Confucian kingdom in Asia. Even though they had a bureaucracy of demonstrably smart, exam-passing men, even though they invented movable metal type at least two centuries before Gutenberg, 1870s Korea was an agrarian backwater, where a few families owned most of the land and an unfortunate proportion of the population were slaves. For some reason, some of the most brilliant Confucian scholars in the world, steeped in a theory of government that’s certainly no more stupid than most, were quite vulnerable to regulatory capture by the land-owners, and the result was over a century of bad governance. Government by the smart didn’t work for them, and it doesn’t seem to work very well in its modern incarnation of technocracy.
I’m not going to say government by the stupid works any better. Effective government is hard, and all models tried so far have critical shortcomings. Instead, I’d like to stretch out to a rather cynical view of evolution.
Let’s say, for the sake of argument, that this research is correct. Above a certain basic level of intelligence, getting better scores on IQ, SAT, or similar tests does not make you a better decision maker. Rather, it makes you more vulnerable to your own unconscious biases.
What does this mean in evolutionary terms? Apparently, there’s little selection pressure for greater intelligence, for the simple reason that it doesn’t lead (on average) to greater resources or to greater reproductive success. It *might* also mean that the New Agers and Aquarians were right. If we get lucky, we may see evolution favoring increasing consciousness, average people becoming more aware of their own biases. Enlightened, not smarter. Of course, Tibet provides a cautionary model of what government by the enlightened looks like…
Do I believe this proposition, that evolution won’t make us smarter? I’m not totally sold, but I fear it’s true.
Now, before you say “Obviously, we’ll be computer augmented cyborgs soon, and that will solve the problem,” let me point out that increased processing power (as measured by an SAT) may make you more vulnerable to your own unconscious biases, not less. Cyborging won’t help. Unless you can invent a computer that gives you a better unconscious and fewer biases, increasing your processing power isn’t going to save you from doing stupid things. It will just help you get there faster and with greater confidence in your own wrong answers.
What do you think?
6 Comments so far
This all depends on your sample, and on where you set the bar for smart versus stupid.
Over the past million years, it’s clear that intellect has been the dominant force in evolution. Brain size and adaptive capacity drove Homo sapiens to outpace all relatives, at the expense of degeneration in every other trait, such as physical strength, bone mass, and immune function. Even the one trait of endurance running that was apparently favored in humans plays to mental strength–the ability to plan and carry out the plan.
But if you look at an advanced society, excluding those “below average” à la Garrison Keillor, it looks a little different. IQs above perhaps 120 (not sure, haven’t looked at this in a while) at some point show a dropoff in offspring number, and even birth weight. High IQ types have fewer children, and perhaps less healthy ones. Why? Like orangs, they find iPads more interesting. In my books I call these the “mind children,” the future sentient machines that originate from our silicon toys.
What we forget though is that “advanced society” still applies to an appallingly small proportion of humanity. The immigrant effect continually replenishes talent, for countries that are reasonably receptive to immigration. In the rest of the world, intelligence remains a survival trait. Read Nick Kristof’s article about the woman in the Sudan, what she has to do to survive.
Comment by Joan S. June 14, 2012 @ 5:32 pm

Oh, I agree Joan. Intelligence, properly used, has been a survival trait for a very long time. My personal favorite example is the Papuans, who have been praised for their intelligence and problem-solving ability by everyone from Alfred Russel Wallace to Jared Diamond. Of course, they grew up with the educational equivalent of “how many different problems can you solve with a couple of sharp rocks and a stand of bamboo,” which is why they’re so good at that type of problem solving. Ask them to come up with a spreadsheet for the next quarter’s projections, and they’d be lost.
Still, I think the bigger point holds: being smart in the SAT/IQ sense doesn’t free one from cognitive biases, and it might even increase their effect, according to the research mentioned in the articles. That’s fascinating, because it suggests that there are upper limits on human functional intelligence–to put it bluntly, it’s hard to be both a genius and a normally functioning human being.
As for machine intelligence replacing humans…maybe. While I have fun with human biases and shortcuts, there are many, many things that humans do that machines have a horrible time with, and that’s one reason I’m not sure the machines will take over any time soon. For example, take the instruction: “go ask Mom where we keep the towels upstairs. Get three and help me dry the dog.” I don’t think any machine can do that now, because it involves many tasks that machines typically fail at. The first is finding a “mom” (Who is this person’s mom? Will Google provide the answer? How does one define the characteristics of “mom” to find such a person?). Then there’s the problem of negotiating stairs (always tricky for machines), grabbing three towels (mechanically difficult: the robot has to identify three such blobby, complex solids, determine which towels are most appropriate, and then handle them with mechanical grippers), and finally bringing the towels back to dry a dog (another tricky challenge, since dogs tend to squirm, and the robot has to handle both the towel and the dog, and interact with the person who gave the instructions). This type of task is something children do all the time, because in large part, we live in a world we’re comfortable in. Finding either your Mom or someone else’s mom is a trivial challenge for most children, as is climbing stairs or handling fabric, and dogs and people have been getting along for millennia, which makes it easier for a human to dry a dog (imagine towel-drying a wolf for comparison).
A machine has no such advantages. In fact, we’ve had the most success making machines that do things humans are bad at. Exploring outer space is one example, and doing highly repetitive, high speed, high precision tasks is another. Because of that, I’m not sure machines will ever replace humans. Nor do I see a future without machines, for that matter. What I do see is a future where humans don’t get appreciably smarter than they are now, and most of our adaptations to our rapidly changing environment are mediated through culture, in any one of a myriad of ways.
Comment by heteromeles June 14, 2012 @ 6:47 pm

The article’s claim was that intuitive thinking could not be freed of cognitive bias by training, which I’ll accept. And I’ll also accept that people who are considered smarter by some measures fare worse than usual with intuitive thinking.
The obvious fix is to stop relying on intuitive thinking and systematically Do the Work. How tall is the world’s tallest redwood? That’s a question to be answered empirically, not by guessing. Spending 5 seconds checking the bat-and-ball answer would have revealed the common intuitive answer as wrong and prompted an exploration leading to the correct answer. The same goes for the algae problem — though someone familiar with logarithms would get it right without turning to pencil and paper.
The few examples given in the article could all be remedied by someone with a calculation/communication device (like a smartphone) and the humility to discount their intuition and Do the Work. The fact that there is a verified correct answer to use as basis for comparison with intuition is proof enough that people can be trained to discover correct answers, if not to intuit them.
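To make “Do the Work” concrete, here’s a minimal sketch in Python (my own illustration, not anything from the article) that checks the two classic problems with arithmetic instead of trusting intuition:

```python
# Bat-and-ball: together they cost $1.10, and the bat costs $1.00 more
# than the ball. Intuition says the ball costs $0.10.
# Solving bat + ball = 1.10 and bat = ball + 1.00 gives 2*ball = 0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
assert abs((bat + ball) - 1.10) < 1e-9   # totals check out
assert abs((bat - ball) - 1.00) < 1e-9   # difference checks out
print(f"ball costs ${ball:.2f}")          # $0.05, not the intuitive $0.10

# Algae (lily pad) problem: a patch doubles daily and covers the lake
# on day 48. Intuition says it covered half the lake on day 24; since
# the patch doubles each day, half coverage is one day before full.
day_full = 48
day_half = day_full - 1
print(f"half covered on day {day_half}")  # day 47, not day 24
```

Five lines of checking, and both intuitive answers fall apart.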
Comment by Matt June 15, 2012 @ 4:45 am

True, Matt, but when you’ve got a complex problem and limited time to deal with it, what do you do? I’m thinking of things like, say, a committee deciding on budget priorities.
There are any number of situations when it’s logistically impossible to Do The Work, and that’s when people start taking intellectual short cuts. One of the short-cuts here is that we’d normally expect smarter people to be more unbiased when dealing with such complex problems, which is why we try to get policy wonks making policy, rather than, say, Joe the Plumber.
Unfortunately, this research says that policy wonks might be more influenced by their biases than so-called average people. This certainly fits our preconceptions about the behavior of Washington bureaucrats, and it’s interesting that populist biases might actually have a real grounding in human psychology.
To be fair to the policy wonks, there’s a difference between bias and ignorance. A wonk (such as myself) can learn useful shortcuts, such as remembering the doubling times that come with exponential growth. These specialized tools can make a wonk more useful than some putatively less biased ignoramus. What we have to remember is that, even if we’re smart, even if we’re scientists, we’re still likely to screw up in predictable ways.
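For what it’s worth, the doubling-time shortcut I mean can be sketched like this (a hedged example of my own, not from the article):

```python
import math

def doubling_time(rate):
    """Exact doubling time for exponential growth at per-period rate `rate`.

    Solves (1 + r)^t ~ e^(r*t) = 2 for t, i.e. t = ln(2) / r.
    """
    return math.log(2) / rate

# The "rule of 70" shortcut: divide 70 by the percentage growth rate.
exact = doubling_time(0.07)   # ~9.9 periods at 7% growth
shortcut = 70 / 7             # the rule of 70 gives 10
print(exact, shortcut)
```

The shortcut is close enough for committee work; the point is having it in your head at all.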
Comment by heteromeles June 15, 2012 @ 2:30 pm

I’ve always liked Richard Feynman’s caution to scientists:
“The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”
In the example of the budget committee, it depends a great deal on whether the members share a common end and are arguing over means or if they disagree over ends as well. In the first case, tools always let you do more systematic analysis even if there’s not the time to do all conceivable analysis. In the second case — and I’d guess this describes more real world arguments — the members can’t even agree on ends. In that case tools do nothing and not even superior intuition matters.
You can’t reason or calculate your way through a conflict of axioms. If I hold “God forbids the eating of shellfish” as an axiom, no mathematical model or field research about mere nutrition or the economic and environmental costs of clam consumption will persuade me to incorporate them into my diet. It seems to me that most thorny arguments come down to colliding axioms, obscured by participants pretending to themselves and each other that it is reason always bringing them to foreordained conclusions.
Comment by Matt June 17, 2012 @ 3:04 am

I like Feynman’s comment too, although even there, it can be hard to determine whether something is science or ideology. String theory comes to mind.
As for the budget committee, I agree with your comments, but I was thinking of something a bit different. I’ve done a bit of budgeting, and one of the interesting situations is when you have a bunch of choices, insufficient money to fund them all, and insufficient information to determine what the “best choices” are. I know that, when I’m budgeting, I start thrashing around for any “reasonable” way to rate the choices, just to attempt to be rational about the process. This is where unconscious biases come thundering in, I’m afraid.
Some people (and committees) simply spread the money evenly among all the choices, thereby guaranteeing that no one gets as much as they need. Others (and this is my bias) try to find a way to figure out which choices are most “deserving,” and fund those at the expense of rejecting other choices.
Comment by Heteromeles June 18, 2012 @ 4:37 pm