Thursday, June 18, 2009

Dance with Chance

In a nutshell, Dance with Chance is all about knowing what you can and cannot predict and, therefore, what you can and cannot control.

Think about it. Every day human beings make decisions. Some are important: should you invest your life savings in the stock market? Others are trivial: should you take an umbrella today? But in both these cases you have no control. The stock market will go up or down, it will rain or it won’t… and there’s nothing you can do about it.

The problem comes when people seek to gain control by making predictions. By consulting an investment expert or a weather forecast, they think they can control the value of their investments or avoid getting wet.

But this is just an illusion. An illusion that psychologists call ‘the illusion of control’.

In many areas of life – the stock market and the weather are just two examples – accurate prediction simply isn’t possible: uncertainty about the future is unavoidable. Throw in some emotions, such as greed, fear and hope, and human beings’ predictions get even less accurate. So what are we to do?

Fortunately, Dance with Chance comes up with plenty of positive suggestions. Most importantly, it uncovers a ‘paradox of control’ that’s the antidote to the ‘illusion of control’. By knowing when to give up control, we can actually gain more control over many aspects of our lives than we had in the first place.


A brief summary

The first part of the book looks at the limits of predictability in three vitally important areas of life: medicine, investments, and business. It reveals examples of the paradox of control and offers suggestions about how to reap benefits from uncertainty.

The second part of the book elaborates on the examples of the first part to provide a framework for managing uncertainty and making luck work for you.

The third part extends this framework by examining the apparent contradictions in our mental capacities and the pros and cons of different ways of making decisions.

Finally, the last chapter of the book deals with the slippery concept of happiness. Is it any more predictable than other aspects of life? Why are some people and groups of people happier than others? Can we control our own happiness?

One thing is for sure, a happy ending is only possible if you first understand when to Dance with Chance.

Chapters

Preface
Chapter 1
Three Wishes from a Genie
Chapter 2
The Ills of Pills
Chapter 3
Getting the Right Medicine
Chapter 4
The Chatter of Money
Chapter 5
Watering Your Money Plant
Chapter 6
Lessons from Gurus
Chapter 7
Creative Destruction
Chapter 8
Does God Play Dice?
Chapter 9
Past or Future
Chapter 10
Of Subways and Coconuts – Two Types of Uncertainty
Chapter 11
Genius or Fallible?
Chapter 12
The Inevitability of Decisions
Chapter 13
Happiness, Happiness, Happiness

The illusion of control

From Chapter 1: Three Wishes from a Genie

After September 11, 2001, many people feared further terrorist attacks and chose to travel by car instead of flying. The number of airline passengers in the fourth quarter of 2001 fell by 18% compared with the last three months of 2000. In other words, influenced by 9/11, close to one in five travelers decided not to fly. Now let’s look at some other numbers: in 2001, there were 483 deaths among commercial airline passengers in the USA, about half of them on 9/11. Interestingly, in 2002 there wasn’t a single one, and in 2003 and 2004 there were only nineteen and eleven fatalities respectively. This means that during these three years, a total of thirty airline passengers in America were killed in accidents. In the same period, however, 128,525 people died in US car accidents. That’s an estimated 5% more than expected, based on past driving patterns. Statisticians have concluded that as many as 5,000 deaths would probably have been avoided if people had carried on taking the plane as usual. In addition, up to 45,000 people would have been spared serious injuries and up to 325,000 less serious ones.

Why did so many people take their car instead of the plane after 9/11? The simple explanation is that, behind the wheel of your own automobile, it’s natural to feel in control. Try telling drivers that they have no influence over the skills of other road users, the weather, the condition of the road, mechanical problems, or any other common causes of accidents – and they will agree. But they still feel in control of their destiny when they drive. They can’t help it. Put them on a plane, and they think their life is in the hands of the airline pilot or, worse, a bunch of terrorists.

Psychologists call this the “illusion of control.”


Some national puzzles

From Chapter 2: The Ills of Pills

Medical researchers and statisticians have always had problems explaining the data for certain countries. Take Japan, for instance. The Japanese per capita consumption of cigarettes is among the highest in the world, yet Japanese life expectancy is also the highest (at least for larger countries). If the British results, which – if you remember – give non-smokers a ten-year edge over smokers, were applicable, Japanese women, 41% of whom smoke, could expect to live an extra four years on average. That would take them up to an incredible eighty-nine-year life expectancy. Overall life expectancy in Japan would exceed eighty-six, if both men and women didn’t smoke. And if we could rule out all the other risk factors, Japanese life expectancy would rise to well over 100 on the basis of table 2.

In the international smoking league tables, the first place goes to Greece, while Norway has the smallest per capita cigarette consumption in the developed world. Yet life expectancy in Norway is only three and a half months longer than in Greece, a country where physical exercise is famously unpopular and where hospital access is patchy – many islands don’t have hospitals at all.

What about high-fat diets and cholesterol intake? France is perhaps the most well-known paradox in this respect. Life expectancy in Metropolitan France is more than eighty years – the tenth highest in the international rankings – although the French diet is notoriously rich in fat. If we go into further detail, we find that deaths from cardiovascular disease are lower than in other nations (39.8 per 100,000 as opposed to 196.5 per 100,000 in the USA). In particular, Périgord, the region in south-west France famous for producing the high-cholesterol delicacy foie gras, has a particularly fatty diet, with plenty of butter, and duck and goose products. Yet life expectancy is higher than in the rest of France and cardiovascular death rates even lower. Back on the national scale, if we compare France to Norway, we find that per capita cigarette consumption is 2.8 times higher in the former, and fat intake significantly lower in the latter. But – you guessed it – the French live on average about a year longer than Norwegians. The only possible conclusion is that there must be factors other than smoking and cholesterol to explain the difference in life expectancy between France (and in particular Périgord) and the USA, Norway, or many other developed nations, where public health campaigns against these two “vices” have had more effect.

By focusing entirely on the negative aspects of the risk factors – and the worst-case scenarios at that – these public health campaigns tend to raise falsely positive expectations about how we individuals can improve our chances of living longer. The message may be that “doctor knows best.” But the deeper we dig into the evidence, the more the most charitable interpretation becomes that the “doctor is telling us to be on the safe side.” And, strangely, there’s been comparatively little medical research into why the Japanese, Greeks, and French live longer than the risk-factor studies suggest they ought to. If we’re going to make reasoned choices about how to live our lives, we need more objective and accurate cost-benefit analyses of the many different activities that influence longevity.


Mind over medicine

From Chapter 3: Getting the Right Medicine

During the Second World War, Dr Henry Beecher, an American doctor operating in Anzio, Italy, ran out of morphine. He started injecting his patients – some of whom had terrible injuries – with a harmless saline solution. To his surprise, there was little difference in the results. The soldiers thought they’d received morphine, and the salted water, acting as a placebo, seemed capable of suppressing quite excruciating pain even among those recovering from amputations. But of course, it wasn’t really the saline solution doing this. It was the human mind.

Stories of the incredible power of the mind to cure the body are not simply found in the literature of pain-killing and drugs. In the late 1990s, for instance, a Swedish hospital operated on 81 people who had a condition in which the heart muscles thicken abnormally. Typically, some sufferers experience only mild effects, while others become seriously ill and die. A common cure is to insert a pacemaker, which is exactly what happened to the 81 patients. The twist was that for half of them, the pacemakers weren’t turned on! And yet they all experienced the same kind of improvements (though to a slightly lesser extent in the case of those whose pacemakers were switched off): less chest pain, dizziness, shortness of breath and heart palpitation.

Readers of the previous chapter may be suspicious of the small sample size in this example. So here’s another case from the placebo literature. In a large group of men, aged 30 to 64, who had suffered a heart attack during the previous three months, 1,103 were given a potent drug (clofibrate) and 2,789 a placebo. The researchers followed their progress for at least five years and found almost no difference in survival rates between the two groups: 20% of those on the real drug and 20.9% of those taking the placebo died. The scientists also noticed that those who took their pills regularly had better survival rates. Of patients on the active drug who took more than 80% of the prescribed dose only 15.7% had died five years later (compared to 22.5% of those who took less than 80%). However, the same thing happened in the placebo group. Of those who took more than 80% of the fake medicine, only 16.4% had died five years later (compared to 25.8% of those who took less than 80%).


The power of luck

From Chapter 5: Watering Your Money Plant

Back in the real world, it’s time to look at this question of expertise in more detail. Bill Miller, the manager of the Legg Mason fund, beat the S&P 500 for 15 years in a row – from 1991 to 2005. This is a spectacular achievement that makes him a superstar fund manager and a darling of the popular business press. All credit to him. But would it be possible to achieve this by luck alone? Time for a few very simple calculations.

Let’s assume there are 8,192 funds in the USA (actually, that’s not far off the truth). Now suppose that the chance of each fund beating the S&P 500 in a given year is exactly 50%, the same as tossing a coin, and that each fund’s performance is independent of all the others. This means about 4,096 funds can be expected to beat the S&P 500 in a single year. About 2,048 of these will beat it again for a second year, then 1,024 for three years in a row, 512 for four years in a row and so on, dividing by two each time... until you get to one fund that’s made it for a whole 13 years. If Bill Miller is the only “survivor”, then his achievement of getting to 15 years already sounds less impressive.
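The halving argument is easy to check for yourself. Here is a minimal sketch in Python (the fund count and 50% win rate are the book’s round assumptions, not real market data) that counts how many coin-flip “funds” beat the index 13 years running:

```python
import random

def lucky_streaks(n_funds=8192, years=13, seed=0):
    """Count funds that 'beat the index' every single year by pure coin toss."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_funds):
        # A fund survives only if it wins its 50/50 toss in every year.
        if all(rng.random() < 0.5 for _ in range(years)):
            survivors += 1
    return survivors

# The analytical expectation: 8,192 funds halved 13 times leaves exactly one.
print(8192 / 2 ** 13)        # 1.0
print(lucky_streaks(seed=0)) # a single simulated run fluctuates around that average
```

Each run gives a small whole number scattered around one, which is exactly the point: a lone 13-year “winner” is what chance alone predicts.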

Now, it’s important to remember that one fund out of 8,192 beating the S&P for 13 years is just an average outcome of our original assumptions and that in reality the actual number can fluctuate above and below the average. Finally, if we look more closely at the record of Bill Miller’s fund, we see that in 1994 its returns more or less tied with the S&P 500. So it’s perfectly reasonable to claim that he beat the S&P only 11 years in a row, rather than 15 – a performance well within the limits of pure chance. Sorry, Bill!

The same is true for the many other funds that have outperformed the market for several years in a row. Most of them subsequently revert back to an average or even poor performance. At the same time we never hear about the funds that did worse than the average for 15 consecutive years. Who’d want to advertise results like these?

Professor Burton G. Malkiel of Princeton University, author of the classic book, A Random Walk Down Wall Street, is one of several observers who claim that beating the market is due to chance, rather than skill. In one study, he compared the results of the top 20 equity mutual funds of the 1970s with their own performance in the 1980s. In the 1970s their average returns exceeded the average of all equity funds by a margin of 10.4% to 19% per year. In the next decade, they slumped into mediocrity, performing worse than the average fund by 11.1% to 11.7% per year. In a second study, he made the same calculations for the 1980s and 1990s. The star 20 funds of the 1980s, whose collective results had outperformed the average of all equity funds by 14.1% to 18% a year, underperformed the average by a margin of 13.7% to 14.9% over the following ten years.

John C. Bogle, the founder and former chairman of the Vanguard Group and crusader against fund managers, carried out a similar, but shorter-term, study over the two periods 1996 to 1999 and 1999 to 2002. First he looked at the top ten out of a total of 851 USA equity funds (those with assets of more than $100 million). They were paragons of success, earning big bonuses, between 1996 and 1999. But in the following three years, the former number one dropped to position 841. The best performance of any of the ten formerly outstanding funds between 1999 and 2002 was position 790, out of a total of 851 funds. It seems fair to conclude that, as the small print so often tells us, past success is no guarantee of future performance. What’s more, it really does look as if the majority of star fund managers just got lucky.

Professor Eugene F. Fama of the University of Chicago put it this way: “I’d compare stock pickers to astrologers, but I don’t want to bad-mouth astrologers.”


Mediocrity and failure

From Chapter 6: Lessons from Gurus

Companies are like living creatures. They come into the world and, once they survive their teething troubles, they mature and eventually cease to exist. Sometimes, if they’re not killed off by bankruptcy, they even reproduce through the medium of merger or acquisition. Arie de Geus, a retired Shell executive, has conducted research that shows the life expectancy of new firms in Europe or Japan to be less than 13 years – down from 20 in the late 1970s and early 80s. Even if they grow up to be large multinationals, they’re likely to last only 40 to 50 years in total. And Foster and Kaplan estimate that by 2020 the average S&P 500 firm will stay in the index for just ten years – down from 65 in the 1920s, when the list first appeared.

There are, as usual, exceptions. Like giant tortoises, there are some companies that have made it to over 150 years of age. But they tend to move like giant tortoises too. All the evidence points to the conclusion that the financial performance of long-lasting firms is below the market average. As Foster and Kaplan say, “the corporate equivalent of the El Dorado, the golden company that continuously performs better than the markets, has never existed. It is a myth.” Managing for survival doesn’t guarantee strong performance for the entire corporate lifespan – in fact, just the opposite.

The ephemeral nature of success and the natural tendency towards mediocrity and eventual failure is the rule for all systems devised by mankind, whether countries, industries, economic structures or superpowers. Since the beginning of human history, empires have come, seen, conquered... and disintegrated. Persia, Greece and Rome took it in turn to dominate the ancient world. Since then, there’s been a succession of empires, culminating in the presence of a single superpower at the end of the 20th century. But change is already underway. Many people say that the 21st century belongs to China.

Since the industrial revolution, entire economic sectors have risen to preeminence only to lose their appeal as others gained in significance and profits. The canals and railways dominated the 18th and 19th centuries, finally giving way to oil and steel, followed by car manufacturing in the 20th century. In the last one hundred years, electrical appliances, telecoms, banking, pharmaceuticals, consumer electronics and financial services have all risen to prominence. Now, at the beginning of the 21st century, all of these have been eclipsed by the new stars of the information industry (despite a small blip at the end of the millennium). Right now, search engines and social networking sites are all the rage, but who knows what tomorrow will bring?


The creative destruction of copper

From Chapter 7: Creative Destruction

If we raise our sights from the narrow concerns of running a single company to the broader sweep of an entire industry, we do, however, start to learn some lessons from history. Back in 1932, the real price of copper (in constant 2007 dollars) was $1.97. By 1974 it was $7.26 (again in 2007 dollars), an almost four-fold increase. The main reason for this huge increase was the demand from the growing network of copper telephone wires that encircled the globe. But it was also because an industry cartel was controlling the supply.

Economists tell us that high profits are supposed to encourage additional capacity, with new facilities opening to increase production and meet rising demand. The people who run cartels, however, love their rising profits, so why would they reduce them? They work to defy economic principles and to maintain the status quo. That’s exactly what the copper cartel did, restricting the number of mines and factories to constrain supply and maximize the profits of the copper companies – pushing the price up even further.

But high profits attracted competition from outside the copper industry. From the 1950s onwards, scientists started exploring the possibilities of fiber optics. But the theoretical problems weren’t solved until 1970, when three scientists, Robert Maurer, Donald Keck and Peter Schultz found a way of using fused silica (a material of extreme purity with a high melting point and low refractive index) to transmit more than 65,000 times more information than copper wires – and with much better transmission quality.

Fiber optics transformed the telecoms industry and heralded the coming of the information age. Since the 1980s over 35 million kilometers of telecommunications lines have been installed worldwide and nearly all of them involved fused silica. Today, technical improvements mean that a single hair’s-breadth fiber is enough to carry tens of thousands of phone calls, transmitting more than 10 billion digital bits per second. That’s equivalent to 20,000 books the size of this one.

Poor old copper. In less than a decade, the vast superiority of fiber optics made copper wire virtually obsolete. Demand for the metal collapsed, most copper companies went bankrupt and employment in the industry fell by 70% in some countries. To save itself from liquidation, Anaconda, once the fourth largest company in the world, was sold to ARCO in 1977. Prices continued to fall, and ARCO ceased all copper-mining activities in 1983. C. Jay Parkinson, the former president of Anaconda, must have regretted what he said in 1968: “This company will still be going strong 100 and even 500 years from now.” Not that anyone can blame him. At that time, the copper cartel was controlling the market, and prices – not to mention profits – had been on the rise for over 35 years. Parkinson had underestimated the incredible power of the market to drive prices down (more about this soon) and thus wreak revenge on the few companies that previously controlled the market. It’s small consolation that plumbers still like to use copper.

The key to understanding this tragic tale of an industry that got its just deserts lies in the fact that Maurer, Keck and Schultz came from another world entirely. They worked for Corning Glass, a company that had no connections whatsoever with the telecommunications industry, let alone the copper business. It produced ordinary, everyday glass products. In other words, the threat to copper came from outside and, with absolutely no warning, took an entire industry down. Corning Glass didn’t care if the copper industry was destroyed. On the contrary: the faster the destruction, the bigger its own revenues and profits.

Parkinson and his peers in the industry should have known it would happen sooner or later. But they only cared for their short term profits, not to mention their huge salaries and hefty bonuses (which were increasing even faster than their companies’ revenues). Any economics undergraduate could have told them that competition to an industry all too often comes from outside. Over-inflated profits are very attractive, and outsiders don’t worry about oversupply reducing prices. Nor do they give a damn whether an industry collapses. As it happens, neither should we (though we should retain the nugget of economic wisdom that the story of copper offers). No doubt, shareholders and employees suffered, but society as a whole gained immensely. If we’d stuck with copper, there’d be no internet, no e-mail, no free calls from Skype, no Google. One industry collapsed, but others rose phoenix-like in its place, creating new jobs and profits for new shareholders – bringing people together across oceans and time zones at little or no extra cost. Technologically, the world is a better place for the demise of copper wires.




Take a chance on me

From Chapter 8: Does God Play Dice?

Probability theory seems a subject made to order for the Greeks, given their zest for gambling, their skill as mathematicians, their mastery of logic, and their obsession with proofs. Yet, though the most civilized of all the ancients, they never ventured into that fascinating world. (...) Civilization as we know it may have progressed at a much faster pace if the Greeks had anticipated what their intellectual progeny – the men of the Renaissance – were to discover some thousand years later.

Thanks to science, however, we are able to predict many physical phenomena with a high degree of precision. On a simple level, the law of gravity allows us to predict the trajectory of any falling object. Even before the historic apple hit Newton’s head, human beings had a good intuitive grasp of the law that brings us all down to earth. On a more complex level, we still cannot predict the exact timing and location of earthquakes, but engineers are able to build bridges and skyscrapers that will stay up regardless of the Richter scale. Occasionally, the odd bridge falls down, but that’s usually because of human error in applying the science. Most of us are so confident in the engineers’ forecasting models that we entrust our lives to them every time we take a plane, get into an elevator or drive over a bridge.

And yet we all know there are many quite mundane occurrences that we cannot predict. At all. The obvious example is the outcome of tossing a coin. On the other hand, we find it relatively easy to imagine a mental model of a “fair” coin for which coming up heads or tails is equally likely. It’s also simple to take the next step and quantify this assumption, by assigning a probability of 0.5 or 50% to the event of obtaining heads. Another step is a little harder, but still accessible to most of us – if we toss two fair coins, the probability of getting two heads is 0.25 or 25%. That’s because there are four possible outcomes, all equally likely: (a) heads for coin A and heads for coin B, (b) tails for coin A and tails for coin B; (c) heads for coin A and tails for coin B; and (d) tails for coin A and heads for coin B.
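The four-outcome argument can be written out mechanically. A minimal sketch in Python enumerates the sample space for two fair coins and confirms the 25% figure:

```python
from itertools import product

# All equally likely results of tossing two fair coins: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))
print(len(outcomes))  # 4

# Only one of the four outcomes is "two heads".
two_heads = [o for o in outcomes if o == ("H", "H")]
print(len(two_heads) / len(outcomes))  # 0.25
```

Counting favourable outcomes against the full sample space is exactly the reasoning in the paragraph above, just made explicit.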

Despite all this clever calculation, we still can’t tell whether the next coin you toss will come up heads. It’s a totally chance event, unlike the forces on a bridge or a building during an earthquake. What we can predict, using probability theory, is the expected outcome of tossing a large number of coins. What we can’t predict is what will happen on any one individual toss – although we can calculate the uncertainty involved and consequently the risk of winning or losing in games of chance.
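That distinction – an unpredictable single toss but a predictable aggregate – is easy to illustrate. The sketch below (Python, with an arbitrary seed of my own choosing) flips a simulated fair coin in ever larger batches; the proportion of heads settles toward 0.5 even though no individual flip can be called in advance:

```python
import random

rng = random.Random(2009)
for n in (10, 1_000, 100_000):
    # Each rng.random() < 0.5 is one toss of a fair coin.
    heads = sum(rng.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # the proportion drifts toward 0.5 as n grows
```

The small batches wobble noticeably; the large one hugs one half. That is all probability theory promises: the long run, never the next toss.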

Interestingly, in evolutionary terms, human beings’ understanding of probability is comparatively recent. The first attempts to find regular patterns in chance events took place during the Middle Ages, but it was only in the 17th century that Pascal and Fermat, two French mathematicians, formulated the first laws of probability (in their quest to understand gambling!). As for earlier civilizations, the quotation that opens this section is what Peter Bernstein says about the Greeks in his excellent book, Against the Gods: The Remarkable Story of Risk.

In other words, probability theory was a huge and relatively recent intellectual advance for humankind. Indeed, there’s something deeply counter-intuitive about it, particularly if we go beyond simple calculations about games of chance. There are many famous brain teasers, such as the well-known birthday conundrum. What is the probability that in a room of twenty-three randomly selected people, two of them will have the same birthday? The answer is much higher than you’d expect: just over 0.5, or 50%. What about a room of sixty people? Here the probability is very close to 1, or 100%, but when asked, most people say it’s less than 25%.
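The birthday figures can be checked directly. A short Python sketch multiplies out the probability that all birthdays differ (ignoring leap years, as the classic puzzle does) and subtracts it from one:

```python
def shared_birthday_prob(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        # The (k+1)-th person must avoid the k birthdays already taken.
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

print(round(shared_birthday_prob(23), 3))  # 0.507 – just over a half
print(round(shared_birthday_prob(60), 3))  # 0.994 – near certainty
```

The result feels wrong because we instinctively compare each person to ourselves; the calculation compares every pair, and twenty-three people contain 253 pairs.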


The statistician who ate humble pie

From Chapter 9: Past or Future

As an expert in statistics, working in a business school during the 1970s, one of the authors (who also, as it happens, can’t sing a note) couldn’t fail to notice that executives were deeply preoccupied with forecasting. Their main interest lay in various types of business and economic data: the sales of their firm, its profits, exports, exchange rates, house prices, industrial output… and a host of other figures. It bugged the professor greatly that practitioners were making these predictions without recourse to the latest, most theoretically sophisticated methods developed by statisticians like himself. Instead, they preferred simpler techniques which – they said – allowed them to explain their forecasts more easily to senior management. The outraged author decided to teach them a lesson. He embarked on a research project that would demonstrate the superiority of the latest statistical techniques. Even if he couldn’t persuade business people to adopt his methods, at least he’d be able to prove the precise cost of their attempts to please the boss.

Every decent statistician knows the value of a good example, so the professor and his research assistant collected many sets of economic and business data from a wide range of sources. In fact they hunted down 111 different time series, which they analyzed and used to make forecasts – a pretty impressive achievement given the computational requirements of the task back in the days when computers were no faster than today’s calculators. They decided to use their trawl of data to mimic, as far as possible, the real process of forecasting. To do so, each series was split into two parts: earlier data and later data. The researchers pretended that the later part hadn’t happened yet and fitted various statistical techniques, both simple and sophisticated, to the earlier data. Treating this earlier data as “the past”, they then used each of the techniques to predict “the future”, whereupon they sat back and compared their “predictions” with what had actually happened.
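The split-and-pretend procedure can be sketched in miniature. In the Python toy below, the artificial series, the “simple” last-value method and the “sophisticated” trend-line method are all hypothetical stand-ins of my own – not the authors’ 111 series or their actual techniques – but the protocol is the same: fit on the past, score on the held-back future.

```python
import random

def mae(actual, forecast):
    """Mean absolute error between the held-back data and a forecast."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def naive_forecast(train, horizon):
    # "Simple" method: tomorrow looks like today (repeat the last value).
    return [train[-1]] * horizon

def trend_forecast(train, horizon):
    # "Sophisticated" method: fit a least-squares line and extrapolate it.
    n = len(train)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(train) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, train))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + h) for h in range(horizon)]

# A seeded random walk stands in for one business series.
random.seed(42)
series = [100.0]
for _ in range(120):
    series.append(series[-1] + random.gauss(0, 1))

# Pretend the last 21 points haven't happened yet, then score each method.
train, test = series[:100], series[100:]
for name, method in [("naive", naive_forecast), ("trend", trend_forecast)]:
    print(name, round(mae(test, method(train, len(test))), 2))
```

On a meandering series like this, the elaborate extrapolation often does no better than simply repeating the last value – a miniature version of the humbling result the professor was about to obtain.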

Horror of horrors, the practitioners’ simple, boss-pleasing techniques turned out to be more accurate than the statisticians’ clever, statistically sophisticated methods. To be honest, neither was particularly great, but there was no doubt that the statisticians had served themselves a large portion of humble pie.


A black Monday and a black swan

From Chapter 10: Of Subways and Coconuts – Two Types of Uncertainty

Ansgar Schmetzer, a 37-year-old German living in California with his wife, had no children. His passion was investments, and he’d put practically all his money into stocks. He was extremely pleased with himself, as his portfolio had done extremely well during the five years he’d been in the USA.

Ansgar Schmetzer died, or more precisely, committed suicide early in the morning of October 20, 1987, the day after Black Monday. The stock market had fallen by more than 22% in just one single day – and his own investments by more than 39%. His wife said that he was still worth many millions, but for Schmetzer losing more than $4.8 million in a single day was more than he could bear. Even if he’d been able to accept the loss of his money, he couldn’t take the way his beliefs had been shaken. His portfolio consisted mainly of growth stocks and small caps, and had therefore been outperforming the Dow Jones and S&P 500 by more than 6%. Schmetzer had remained confident it would soon double or triple in value until less than a week before his death.

His confidence was not entirely misplaced. US stock markets had been growing nicely since the early 1980s. Then the market suddenly started falling. It fell 2.7% on October 6, 1987 and by smaller percentages over the next four days. On Tuesday, October 13 (the number 13 was always a lucky one for Ansgar Schmetzer and Tuesday was his favorite day of the week) the decline halted, and the market grew by nearly 2%. Schmetzer wasn’t particularly bothered by these events, as they were normal stock-market fluctuations. However, the next three days weren’t at all typical. On Wednesday, October 14 the market fell 3%, on Thursday 2.3% and, worse, on Friday, October 16, an additional 5.2%. But Ansgar Schmetzer hoped that the decline was over and that the next week would see another reversal of the downward trend. The US economy was growing strongly, corporate profits were high and everyone was predicting they were going to increase further in 1988. So, all in all, there were no signs of any trouble and he saw the decline as a way to increase his profits. For these perfectly sound reasons, Schmetzer borrowed money against his existing portfolio to buy more stocks at bargain prices on the Friday afternoon.

Monday, October 19, however, turned out to be the worst day in the history of the stock market. It was even worse than October 28, 1929, which kick-started the Great Depression. By the time he tragically ended his life, Ansgar Schmetzer had lost more than half of his money in just ten trading days. The shock was as great as if he’d been hit by a falling coconut – and his state of mind darker than the blackest Monday.

Although there have been many suggested explanations for Black Monday, none of them seems quite convincing – even with the benefit of hindsight. Today, it’s only when we hear about events like Ansgar Schmetzer’s suicide that we can begin to imagine how the entire financial world was taken by total and utter surprise on that Monday. The stock-market collapse of 1987 was what the former trader Nassim Nicholas Taleb, in his excellent book of the same name, calls a “Black Swan”: a totally unexpected event with mammoth consequences.


Blinking marvelous

From Chapter 11: Genius or Fallible?

In his best-selling book Blink, Malcolm Gladwell tells the story of a Kouros (an ancient Greek statue of a male youth). The Kouros in question was acquired by the Getty Museum in California for some $10 million after an exhaustive 14-month investigation. But doubts about its authenticity lingered. After a single glance – or blink – at the statue, lasting only a few seconds, some experts in Greek art had an immediate sense of “intuitive repulsion”. It just had to be a fake. In a few seconds, says Gladwell, “They were able to understand more about the essence of the statue than the team at the Getty was able to understand after fourteen months.”

“Blinking” isn’t confined to artistic judgments either. During an exhibition tour in 1909, Capablanca, the Cuban world chess champion, played 28 games simultaneously and won them all. How did he do it? How many moves ahead did he consider when he only had a few seconds to look at each game? “I see only one move ahead,” Capablanca is reported to have said, “but it is always the correct one.”

As a well-structured but complex game, chess has provided a wonderful laboratory for studying the capacities of the human mind. It is to the cognitive sciences what the fruit fly is to geneticists. Initially, everyone thought that the amazing intellectual feats of chess grandmasters were due to three factors: photographic memory; high IQ; and an ability to analyze the implications of many different possibilities several moves ahead. However, scientific research has rejected these ideas – and come up with three totally different factors!

The first key requirement for a grandmaster is to focus intuitively on the best move. Researchers have placed cameras under chessboards to record the eye movements of great chess players. They’ve discovered that, once the opponent’s turn is over, three out of four times a grandmaster’s eyes focus on the best move available (as agreed by other grandmasters). Next, he or she examines other possible moves – often of equal quality – only to return (three out of four times) to the first move considered. Thus, grandmasters demonstrate two important abilities. One is to come up with high-quality (and often creative) moves spontaneously; the other is the analytical skill to check this move against other possibilities. It sounds as if Capablanca wasn’t overstating his case after all. Genius blinks – it’s official.

The second differentiating factor is pattern recognition – another blinking ability. When grandmasters are shown, for only five seconds, chess pieces on a board from an ongoing game between other grandmasters, they can reproduce what they have seen with approximately 90% accuracy. The few mistakes they do make are usually in the minor pieces, mostly involving pawns. Expert players, on the other hand, can’t recall more than a few details, while novices are incapable of remembering any positions accurately. Just as importantly, no one, including grandmasters, can remember the positions on a board where the pieces have been placed by chance – that is, when they’re not the result of a real game. The memories of grandmasters are highly dependent on recognizing patterns – not unlike most people’s abilities to recognize a familiar tune after hearing the first few notes.

Third and perhaps most importantly, there’s the “practice factor”. Scientists have found that it takes years of practice to reach the level of the chess grandmaster. Not just any old practice though. Aspiring grandmasters must deliberately target continuous improvement, which means a lot of repetition, and seek constant feedback. Consistent hard graft is also crucial. Great achievers – in chess as in many other fields – have been found to practice, on average, roughly the same number of hours every day, including weekends and holidays. Finally, research indicates that the more they practice in this way, the better their performance. Continuous, painstaking practice facilitates “deeper” or better information processing and helps retain, as well as develop, skills. Curiously, it seems that, to become good at blinking, there’s a lot of painful thinking along the way.

Contrary to popular belief, then, it looks as if talent owes more to hard work than natural gifts. Genius, in chess and elsewhere, comes from long, deliberate, thoughtful practice. Herbert Simon, whose wisdom we’ve drawn on several times already in this book, estimated that, through years of intensive practice, the typical grandmaster develops a long-term working memory amounting to roughly 50,000 to 100,000 “nuggets” of chess information. Because grandmasters can retrieve this information effortlessly, they’re free to concentrate on evaluating the most promising moves that come spontaneously to mind. In contrast, weaker players generate and examine many alternatives without being able to focus on the essential. They lack the 50,000 to 100,000 chunks of chess information stored in the brains of grandmasters.

So it turns out that the critical skill of playing chess is not analytical. Instead, it’s the capacity to focus, instantly and effortlessly, on the best move (or moves) – a capacity that’s developed through long, deliberate practice. The role of analysis, although indispensable, is secondary: first as part of the practicing, then to verify or reject alternative moves during the game.


Predicting marital happiness

From Chapter 12: The Inevitability of Decisions

George and Jill are a nice young couple. He’s 30 and works at the local hospital; she’s 28 and is just finishing her PhD in molecular biology. Are they happily married? Their families think so but then they don’t really see them very often. You could interview them at great length and try to form an impression through a combination of “blinking” and “thinking”. That’s what marriage counselors do – but here’s an alternative method.

Psychologists John Howard and Robyn Dawes trained one partner from each of 27 couples just like George and Jill to monitor their own behavior for 35 consecutive days. The monitors counted two types of behavior and also rated the couples on a seven-point scale of marital happiness. Howard and Dawes used a simple decision rule to predict marital happiness: the difference between the frequencies of the two types of behaviors across the 35 days. The result: the simple decision rule they discovered was a valid albeit imperfect predictor of marital happiness.
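Dawes’s rule is simple enough to sketch in a few lines. The behavior labels below are placeholders — the excerpt doesn’t name the two behavior types Howard and Dawes actually counted — but the mechanics of a unit-weight rule are exactly this: add up one kind of behavior, subtract the other, and compare the difference:

```python
# Sketch of a Dawes-style unit-weight decision rule: predict marital
# happiness from the difference in frequency of two monitored behavior
# types over 35 days. "positive" and "negative" are placeholder labels.

def happiness_score(daily_log):
    """daily_log: list of (positive_count, negative_count) tuples, one per day.
    Returns total positives minus total negatives; higher -> predicted happier."""
    positives = sum(p for p, _ in daily_log)
    negatives = sum(n for _, n in daily_log)
    return positives - negatives

# 35 days of made-up counts for an imaginary couple:
log = [(2, 1)] * 20 + [(1, 2)] * 15
print(happiness_score(log))  # 5 -> mildly positive balance
```

Note that every behavior counts equally (weight of one, plus or minus) — the “knowing how to add” half of the trick. The judgment is all in choosing which two behaviors to count in the first place.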

The point here is that, in a domain as complex as marital happiness, predictions based on elaborate theories typically miss the mark (remember Chapter 9 and the problem of noise) – although, in hindsight, they can make great stories! On the other hand, simple decision rules can have better – even though limited – predictive validity. As Robyn Dawes and another of his research partners, Bernard Corrigan, put it: “The whole trick is to know what variables to look at and then to know how to add.” You do need judgment for the first task, but you can delegate the second to a calculator.


Increasing the sum of human happiness

From Chapter 13: Happiness, Happiness, Happiness

The newish field of positive psychology aims to help people improve their satisfaction with life and develop a more optimistic outlook. Its basic thesis is that Freud was wrong to characterize the human condition by its different neuroses. Instead, positive psychologists cite evidence that smiling people are not only healthier and live longer, but also work harder, are more productive, more socially engaged, and generally more successful in life. Miserable souls, they argue, are self-obsessed and pessimistic, which gives others a low opinion of them too. As a result, things get even worse for them... and they get even unhappier. The most important teaching of the positive psychologists, however, is that absolutely anyone – even the grumpiest of people – can take specific actions to increase their happiness in the long term.

This is where the nuns we promised you come in. Now, nuns provide great opportunities for psychologists – positive or not. That’s because they all follow routine lives with similar activities and comparable diets. What’s more, they don’t get married or have children. In short, nuns constitute a homogeneous population.

One of the positive psychologists’ favorite studies concerns 180 nuns in Milwaukee. Back in 1932, the then novices were asked to write short sketches of their lives. One wrote: “God started my life off well by bestowing upon me grace of inestimable value. The past year has been a very happy one.” She recently died, aged ninety-eight, after a lifetime of extraordinarily good health. By way of contrast, one of her sisters painted a neutral-to-sad picture of her own life, concluding, “With God’s grace, I intend to do my best for our Order.” She died of a stroke at the age of fifty-nine.

OK, so two nuns from Milwaukee don’t prove much. But experienced researchers studied all 180 of the sketches and ranked them according to their net satisfaction with life. Then they looked at how long the nuns lived. It turns out that nearly 90% of the “happiest” quartile made it to eighty-five or more and 54% of them were still alive at ninety-four! By then, there weren’t many of the “saddest” quartile left. Less than a third of them reached the age of eighty-five and only 11% survived to ninety-four. As well as the nuns, Martin Seligman of the University of Pennsylvania, one of the leading lights in positive psychology, cites another survey, this time of 839 patients at the Mayo Clinic in Minnesota. It was found that “optimists” from this sample lived 19% longer than “pessimists”. But that’s all you can conclude. Who’s to say that the main reason for being an optimist wasn’t better health in the first place?
