Wednesday, November 20, 2013

Don’t Blink! The Hazards of Confidence and the Illusion of Validity



It afflicts us all, because confidence in our own judgments is part of being human.

Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.

One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again. 

A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another’s shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over. 

As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself in sharp relief. 

After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers’ leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers’ leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw. 

Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is rather mediocre, but should do O.K.” or “He will be a star.” We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, “But of course anything could happen.” 

We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much. 

We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions. 

I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity. I had discovered my first cognitive illusion.

Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier’s performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, “What you see is all there is.” We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like “He will be a star.” The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events — like who was near the wall — largely determined who became a leader. Other events — some of them also random — would determine later success in training and combat. 

You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt. 

The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true. 

I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word. 

I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. “When you sell a stock,” I asked him, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong. 

Most people in the investment business have read Burton Malkiel’s wonderful book “A Random Walk Down Wall Street.” Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading. 

We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley. 

Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold. 

To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled “Trading Is Hazardous to Your Wealth,” Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, “Boys Will Be Boys,” they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men. 
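To make the comparison concrete, here is a minimal Python sketch of the calculation Odean's method implies; the trade records and return figures below are invented for illustration only and are not taken from his data.

    # Hypothetical sell-then-buy pairs: for each switch, the one-year return of
    # the stock that was sold and of the stock that was bought in its place.
    trades = [
        (0.12, 0.07),
        (0.03, -0.01),
        (-0.05, -0.10),
    ]

    # Average gap between what investors gave up and what they got instead.
    gaps = [sold - bought for sold, bought in trades]
    avg_gap = sum(gaps) / len(gaps)
    print(f"Sold stocks beat bought stocks by {avg_gap:.1%} per year on average")

A positive average gap means the investors would, on average, have been better off keeping the stock they sold, which is exactly the pattern Odean reported, even before trading costs are counted.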

Of course, there is always someone on the other side of a transaction; in general, it’s a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell “winners,” stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of “smart money” that finance professionals apply to themselves.

Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike. 

Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year. 

More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not — and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses. 

Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers’ scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others? 

To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill. 
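As an illustration of the arithmetic (not the firm's actual data, which remain confidential), here is a minimal Python sketch that takes a made-up table of adviser results, computes the rank correlation for every pair of years, and averages the 28 coefficients.

    from itertools import combinations

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(25, 8))  # 25 hypothetical advisers x 8 years of results

    # Spearman rank correlation for every pair of years: C(8, 2) = 28 pairs.
    pair_corrs = []
    for year_a, year_b in combinations(range(scores.shape[1]), 2):
        rho, _ = spearmanr(scores[:, year_a], scores[:, year_b])
        pair_corrs.append(rho)

    print(f"{len(pair_corrs)} pairs, average correlation = {np.mean(pair_corrs):.2f}")

With purely random scores the average hovers near zero, which is what a luck-driven ranking looks like; persistent skill among the advisers would push the year-to-year correlations well above zero.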

No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, “not very high” or “performance certainly fluctuates.” It quickly became clear, however, that no one expected the average correlation to be zero. 

What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience. 

The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it? 

We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness. 

True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.


Daniel Kahneman is emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics. This article is adapted from his book “Thinking, Fast and Slow,” out this month from Farrar, Straus & Giroux.

Editor: Dean Robinson

Sunday, November 10, 2013

How Power Corrupts the Mind

The Atlantic



Pity the despot. 

Sunday, October 27, 2013

Is There Such a Thing as "Human Nature"?




Ethan Watters unravels the Western Mind.

 
 


In the summer of 1995, a young graduate student in anthropology at UCLA named Joe Henrich traveled to Peru to carry out some fieldwork among the Machiguenga, an indigenous people who live north of Machu Picchu in the Amazon basin. The Machiguenga had traditionally been horticulturalists who lived in single-family, thatch-roofed houses in small hamlets composed of clusters of extended families. For sustenance, they relied on local game and produce from small-scale farming. They shared with their kin but rarely traded with outside groups.

While the setting was fairly typical for an anthropologist, Henrich’s research was not. Rather than practice traditional ethnography, he decided to run a behavioral experiment that had been developed not by psychologists or anthropologists, but by economists. Henrich used a “game”  –   along the lines of the famous prisoner’s dilemma –  to see whether relatively isolated cultures shared with the West the same basic instinct for fairness. In doing so, Henrich expected to confirm one of the foundational assumptions underlying such experiments, and indeed underpinning the entire fields of economics and psychology: that humans all share the same cognitive machinery –  the same evolved rational and psychological hardwiring.

The test that Henrich introduced to the Machiguenga was called the ultimatum game. The rules are simple: in each game there are two players who remain anonymous to each other. The first player is given an amount of money, say $100, and told that he has to offer some of the cash, in an amount of his choosing, to the other subject. The second player can accept or refuse the split. But there’s a hitch: players know that if the recipient refuses the offer, both leave empty-handed. North Americans, who are the most common subjects for such experiments, usually offer a 50-50 split when on the giving end. When on the receiving end, they show an eagerness to punish the other player for uneven splits at their own expense. In short, Americans show the tendency to be equitable with strangers – and to punish those who are not.
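For readers unfamiliar with the game, the payoff rule can be written in a few lines of Python; the players and the responder's "fairness minimum" below are hypothetical, used only to show how a rejection costs both sides.

    def ultimatum_round(stake, offer, responder_min):
        """Return (proposer_payoff, responder_payoff) for one round."""
        if offer >= responder_min:          # responder accepts the split
            return stake - offer, offer
        return 0.0, 0.0                     # rejection leaves both empty-handed

    # A responder insisting on a "fair" minimum rejects a lowball offer...
    print(ultimatum_round(100, 15, responder_min=30))  # -> (0.0, 0.0)
    # ...while a responder who sees no point in refusing free money accepts it.
    print(ultimatum_round(100, 15, responder_min=1))   # -> (85, 15)

The puzzle Henrich ran into is precisely about where that minimum sits: North Americans tend to set it high enough to punish stingy proposers, while the Machiguenga saw little reason to set it above zero.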

Among the Machiguenga, word spread quickly of the young, square-jawed visitor from America giving away money. The stakes Henrich used in the game with the Machiguenga were not insubstantial –  roughly equivalent to a few days’ wages they sometimes earned from episodic work with logging or oil companies. So Henrich had no problem finding volunteers. What he had great difficulty with, however, was explaining the rules, as the game struck the Machiguenga as profoundly odd.

When he began to run the game it became immediately clear that Machiguengan behavior was dramatically different from that of the average North American. To begin with, the offers from the first player were much lower. In addition, when on the receiving end of the game, the Machiguenga rarely refused even the lowest possible amount. “It just seemed ridiculous to the Machiguenga that you would reject an offer of free money,” says Henrich. “They just didn’t understand why anyone would sacrifice money to punish someone who had the good luck of getting to play the other role in the game.”
The implications of these unexpected results were quickly apparent to Henrich. He knew that a vast amount of scholarly literature in the social sciences –  particularly in economics and psychology –  relied on the ultimatum game and similar experiments.

At the heart of most of that research was the implicit assumption that the results revealed evolved psychological traits common to all humans, never mind that the test subjects were nearly always from the industrialized West.
 
Henrich realized that if the Machiguenga results stood out, and if similar differences could be measured across other populations, this assumption of universality would have to be challenged.

Initially, Henrich thought he would be merely adding a small branch to an established tree of knowledge. It turned out he was sawing at the very trunk. He began to wonder: What other certainties about “human nature” in social science research would need to be reconsidered when tested across diverse populations?

With the help of a dozen other colleagues he led a study of 14 other small-scale societies, in locales from Tanzania to Indonesia. Differences abounded in the behavior of both players in the ultimatum game. In no society did he find people who were purely selfish (that is, who always offered the lowest amount, and never refused a split). Average offers varied widely from place to place and, in some societies – ones where gift-giving is heavily used to curry favor or gain allegiance – the first player would often make overly generous offers in excess of 60 percent, and the second player would often reject them, behaviors almost never observed among Americans.

Henrich’s work also made him a controversial figure. When he presented his research to the anthropology department at the University of British Columbia during a job interview a year later, he recalls a hostile reception. Anthropology is the social science most interested in cultural differences, but the young scholar’s methods of using games and statistics to test and compare cultures with the West seemed heavy-handed and invasive to some. “Professors from the anthropology department suggested it was a bad thing that I was doing,” Henrich remembers. “The word ‘unethical’ came up.”

So instead of toeing the line, he switched teams. A few well-placed people at the University of British Columbia saw great promise in Henrich’s work and created a position for him, split between the economics department and the psychology department. It was in the psychology department that he found two kindred spirits in Steven Heine and Ara Norenzayan. Together the three set about writing a paper that they hoped would fundamentally challenge the way social scientists thought about human behavior, cognition, and culture.

A modern liberal arts education gives lots of lip service to the idea of cultural diversity. It’s generally agreed that all of us see the world in ways that are sometimes socially and culturally constructed, that pluralism is good, and that ethnocentrism is bad. But beyond that the ideas get muddy.
 
That we should welcome and celebrate people of all backgrounds seems obvious, but the implied corollary – that people from different ethno-cultural origins have particular attributes that add spice to the body politic – becomes more problematic. Because liberal arts students tend to be hyper-vigilant about avoiding stereotyping, it is rarely stated bluntly just what those culturally derived qualities might be. Challenge liberal arts graduates on their appreciation of cultural diversity and you’ll often find them retreating to the anodyne notion that under the skin everyone is really alike.

If you take a broad look at social science curricula of the last few decades, it becomes clear why modern graduates are so unmoored. The last generation or two of undergraduates have largely been taught by a cohort of social scientists busily doing penance for the racism and Eurocentrism of their predecessors, albeit in different ways. Many anthropologists took to the navel gazing of postmodernism and swore off attempts at rationality and science, which were disparaged as weapons of cultural imperialism.

Economists and psychologists skirted the issue with the convenient assumption that their job was to study the human mind stripped of culture. The human brain is genetically comparable around the globe, it was agreed, so human hardwiring for much behavior, perception, and cognition should be similarly universal. No need, in that case, to look beyond the convenient population of undergraduates for test subjects. A 2008 survey of the top six psychology journals dramatically shows that more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners –  with nearly 70 percent from the United States alone. Put another way: 96 percent of human subjects in these studies came from countries that represent only 12 percent of the world’s population.

Henrich’s work with the ultimatum game emerged from a small but growing counter trend in the social sciences, one in which researchers look straight at the question of how deeply culture shapes human cognition. His new colleagues in the psychology department, Heine and Norenzayan, were also part of this trend. Heine focused on the different ways people in Western and Eastern cultures perceived the world, reasoned, and understood themselves in relationship to others. Norenzayan’s research focused on the ways religious belief influenced bonding and behavior. The three began to compile examples of cross-cultural research that challenged long-held assumptions of human psychological universality.

Some of this research went back a generation. It was in the 1960s that researchers discovered that aspects of visual perception varied from place to place. One of the classics of the literature, the Müller-Lyer illusion, showed that where you grew up determined to what degree you would fall prey to the illusion that two lines of equal length – one tipped with arrowheads, the other with its ends feathered outward – are different in length.

Researchers found that Americans perceive the line with the ends feathered outward as being longer than the line with the arrow tips. San foragers of the Kalahari, on the other hand, were more likely to see the lines as they are: equal in length. Subjects from more than a dozen cultures were tested, and Americans were at the far end of the distribution – seeing the illusion more dramatically than all others.

Recently, psychologists began to challenge the universality of research done in the 1950s by pioneering social psychologist Solomon Asch. Asch had discovered that test subjects were often willing to make incorrect judgments on simple perception tests to conform with group pressure. When the test was performed across 17 societies, however, it turned out that group pressure had a range of influence. Americans were again at the far end of the scale, showing the least tendency to conform to group belief.

Heine, Norenzayan, and Henrich’s research continued to find wide cultural differences almost everywhere they looked: in spatial reasoning, the way we infer the motivations of others, categorization, moral reasoning, the boundaries between the self and others, and other arenas. These differences, they believed, were not genetic. The distinct ways Americans and Machiguengans played the ultimatum game weren’t the result of differently evolved brains. Rather, Americans were manifesting psychological tendencies shared with people in other industrialized countries, tendencies that had been refined and passed down through thousands of generations in increasingly complex and competitive market economies.

When people are constantly doing business with strangers, it helps if they are willing to go out of their way (with a lawsuit or a bad Yelp review) to punish those who cheat them. Because Machiguengan culture had a different history, their gut feeling about what was fair was distinctly their own. In small-scale societies with a strong culture of gift-giving, yet another conception of fairness prevailed. There, generous financial offers were turned down because people’s minds had been shaped by a cultural norm that taught them that the acceptance of generous gifts brought burdensome obligations. Our economies hadn’t been shaped by our idea of fairness; it was the other way around.

The growing body of cross-cultural research that the three researchers were compiling suggested that the mind’s capacity to mold itself to cultural and environmental settings was far greater than had been assumed. 
 
The most interesting thing about cultures may not be in the observable things they do – the rituals, eating preferences, codes of behavior, and the like – but in the way they mold our most fundamental conscious and unconscious thinking and perception. The different ways people perceive the Müller-Lyer illusion reflect lifetimes spent in different physical environments. American children, for the most part, grow up in box-shaped rooms of varying dimensions. Surrounded by carpentered corners, their visual perception adapts to this strange new environment (strange and new in terms of human history, that is) by learning to perceive converging lines in three dimensions. When unconsciously translated into three dimensions, the line with the outward-feathered ends appears farther away, and the brain therefore judges it to be longer. The more time one spends in natural environments, where there are no carpentered corners, the less one sees the illusion.

But the three researchers began to notice something else remarkable: again and again there was one group of people that appeared to be remarkably unusual when compared to other populations – with perceptions, behaviors, and motivations that almost always fell at the far end of the human bell curve.

In the end they titled their paper “The Weirdest People in the World?” By “weird” they meant both unusual and Western, Educated, Industrialized, Rich, and Democratic.
It is not just our Western habits and cultural preferences that differ from the rest of the world. The very way we think about ourselves and others – even the way we perceive reality – makes us radically distinct from other humans on the planet today, not to mention from the vast majority of our ancestors. Among Westerners, the data showed that Americans were typically the most unusual, leading the researchers to conclude that “American participants are exceptional even within the unusual population of Westerners – outliers among outliers.”
Given this discovery, they concluded that social scientists could not possibly have picked a worse population from which to draw broad generalizations about universal human traits. Generations of researchers had been doing the equivalent of studying penguins while believing that they were learning insights applicable to all birds.

Not long ago I met Henrich, Heine, and Norenzayan for dinner at a small French restaurant in Vancouver, British Columbia, to hear about the reception of their “weird” paper, which was published in the prestigious journal Behavioral and Brain Sciences in 2010. The trio of researchers are young, as professors go, and good-humored family men. They recalled their nervousness as the publication time approached. They faced the possibility of becoming outcasts in their own fields as their paper boldly suggested that much of what social scientists thought they knew about fundamental aspects of human cognition was likely only true of one small slice of humanity.
“We were scared,” admitted Henrich. “We were warned that a lot of people were going to be upset.” “We were told we were going to get spit on,” interjected Norenzayan.
“Yes,” Henrich said. “That we’d go to conferences and no one was going to sit next to us at lunchtime.”
They seemed much less concerned that they had used the pejorative acronym WEIRD to describe a significant slice of humanity, although they did admit that they could only have done so to describe their own group. “Really,” said Henrich, “the only people we could have called weird are represented right here at this table.”

Still, I felt that describing the Western mind, and the American mind in particular, as “weird” suggested that our cognition is not just different but somehow malformed or twisted.

In their paper the trio pointed out cross-cultural studies that suggest that the “weird” [Western, Educated, Industrial, Rich, Democratic] mind is the most self-aggrandizing and egotistical on the planet: we are more likely to promote ourselves as individuals versus advancing as a group. 
 
WEIRD minds are also more analytic, possessing the tendency to telescope in on an object of interest rather than understanding that object in the context of what is around it. The WEIRD mind also appears to be unique in terms of how it comes to understand and interact with the natural world. Studies show that Western urban children grow up so closed off in man-made environments that their brains never form a deep or complex connection to the natural world. While studying children from the U.S., researchers have suggested a developmental timeline for what is called “folkbiological reasoning.” These studies posit that it is not until children are around 7 years old that they stop projecting human qualities onto animals and begin to understand that humans are one animal among many. Compared to Yucatec Maya communities in Mexico, Western urban children appear to be developmentally delayed in this regard. Children who grow up constantly interacting with the natural world are much less likely to anthropomorphize other living things into late childhood.
Given that people living in WEIRD societies don’t routinely encounter or interact with animals other than humans or pets, it’s not surprising that they end up with a rather cartoonish understanding of the natural world. “Indeed,” the report concluded, “studying the cognitive development of folkbiology in urban children would seem the equivalent of studying ‘normal’ physical growth in malnourished children.”

During our dinner, I admitted to Heine, Henrich, and Norenzayan that the idea that I can only perceive reality through a distorted cultural lens was unnerving. For me, the notion raised all sorts of metaphysical questions: Is my thinking so strange that I have little hope of understanding people from other cultures? Can I mold my own psyche or the psyches of my children to be less WEIRD and more able to think like the rest of the world? If I did, would I be happier?
Henrich said I was taking this research too personally. He had not intended for his work to be read as postmodern self-help advice.

The three insisted that their goal was not to say that one culturally shaped psychology was better or worse than another –  but that we’ll never begin to understand human behavior and cognition until we expand the sample pool beyond its current small slice of humanity. Despite these assurances, I found it hard not to read a message between the lines of their research. When they write, for example, that WEIRD children develop their understanding of the natural world in a “culturally and experientially impoverished environment” and that they are in this way analogous to “malnourished children,” it’s difficult to see this as a good thing.

The turn that Henrich, Heine, and Norenzayan are urging social scientists to make is not an easy one; accounting for the influence of culture on cognition will be a herculean task. Cultures are not monolithic; they can be endlessly parsed. Ethnic backgrounds, religious beliefs, economic status, parenting styles, rural upbringing versus urban or suburban – there are hundreds of cultural differences that individually and in endless combinations influence our conceptions of fairness, how we categorize things, our method of judging and decision making, and our deeply held beliefs about the nature of the self, among other aspects of our psychological makeup.

We are just at the beginning of learning how these fine-grained cultural differences affect our thinking. 
 
Recent research has shown that people in “tight” cultures, those with strong norms and low tolerance for deviant behaviour, develop higher impulse control and more self-monitoring abilities than those from other places. Men raised in the honor culture of the American South have been shown to experience much larger surges of testosterone after insults than do Northerners. Research published late last year suggested psychological differences at the city level too. Compared to San Franciscans, Bostonians’ internal sense of self-worth is more dependent on community status and financial and educational achievement. “A cultural difference doesn’t have to be big to be important,” Norenzayan said. “We’re not just talking about comparing New York yuppies to the Dani tribesmen of Papua New Guinea.”

As Norenzayan sees it, the last few generations of psychologists have suffered from “physics envy,” and they need to get over it. Their job, experimental psychologists often assumed, was to push past the content of people’s thoughts and see the underlying universal hardware at work. “This is a deeply flawed way of studying human nature,” Norenzayan told me, “because the content of our thoughts and their process are intertwined.” In other words, if human cognition is shaped by cultural ideas and behavior, it can’t be studied without taking into account what those ideas and behaviors are and how they are different from place to place.

Henrich, Heine and Norenzayan’s fear of being ostracized after the publication of the WEIRD paper turned out to be misplaced. Response to the paper, both published and otherwise, has been nearly universally positive, with more than a few of their colleagues suggesting that the work will spark fundamental changes.

After reading the paper, academics from other disciplines began to come forward with their own mea culpas. In response to the WEIRD paper, two brain researchers from Northwestern University argued that the nascent field of neuroimaging had made the same mistake as psychologists, noting that 90 percent of neuroimaging studies were performed in Western countries. Researchers in motor development similarly suggested that their discipline’s body of research ignored how different child-rearing practices around the world dramatically influence development. Two psycholinguistics professors suggested that their colleagues also made the same mistake: blithely assuming human homogeneity while focusing their research primarily on one rather small slice of humanity.

The challenge of the WEIRD paper is not simply to the field of experimental human research (do more cross-cultural studies!); it is a challenge to our Western conception of human nature. 

For some time now, the most widely accepted answer to the question of why humans, among all animals, have so successfully adapted to environments across the globe is that we have big brains with the ability to learn, improvise, and problem-solve. Henrich has replaced this “cognitive niche” hypothesis with the “cultural niche” hypothesis. He notes that the amount of knowledge in any culture is far greater than the capacity of individuals to learn or figure it all out on their own. He suggests that individuals tap that cultural storehouse of knowledge simply by mimicking (often unconsciously) the behavior and ways of thinking of those around them. We shape a tool in a certain manner, adhere to a food taboo, or think about fairness in a particular way, not because we individually have figured out that behavior’s adaptive value, but because we instinctively trust our culture to show us the way.

When Henrich asked Fijian women why they avoided certain potentially toxic fish during pregnancy and breastfeeding, he found that many didn’t know. Regardless of their personal understanding, by mimicking this culturally adaptive behavior they were protecting their offspring. The unique trick of human psychology, these researchers suggest, might be this: our big brains are evolved to let local culture lead us in life’s dance.

The applications of this new way of looking at the human mind are still budding. Henrich suggests that his research about fairness might first be applied to anyone working in international relations or development. People are not “plug and play,” as he puts it, and you cannot expect to drop a Western court system or form of government into another culture and expect it to work as it does back home. Those trying to use economic incentives to encourage sustainable land use will similarly need to understand local notions of fairness to have any chance of influencing behavior in predictable ways.

Because of our peculiarly Western way of thinking of ourselves as independent of others, this idea of the culturally shaped mind doesn’t go down very easily.
Perhaps the richest and most established vein of cultural psychology – that which compares Western and Eastern concepts of the self –  goes to the heart of this problem. Heine has spent much of his career following the lead of a seminal paper published in 1991 by Hazel Rose Markus, of Stanford University, and Shinobu Kitayama, who is now at the University of Michigan. Markus and Kitayama suggested that different cultures foster strikingly different views of the self, particularly along one axis: some cultures regard the self as independent from others; others see the self as interdependent. The interdependent self –  which is more the norm in East Asian countries, including Japan and China –  connects itself with others in a social group and favors social harmony over self-expression. The independent self –  which is most prominent in America –  focuses on individual attributes and preferences and thinks of the self as existing apart from the group.

That we in the West develop brains that are wired to see ourselves as separate from others may also be connected to differences in how we reason. Unlike the vast majority of the world, Westerners (Americans in particular) tend to reason analytically as opposed to holistically. The American mind strives to figure out the world by taking it apart and examining its pieces.

Show a Japanese and an American the same cartoon of an aquarium, and the American will remember details mostly about the moving fish while the Japanese observer will likely later be able to describe the seaweed, the bubbles and other objects in the background. In the “rod and frame” task, where one has to judge whether a line is vertical even though the frame around it is skewed, Americans easily see the line apart from the frame, just as they see themselves as apart from the group.

Heine and others suggest that such differences may be the echoes of cultural activities and trends going back thousands of years. Whether you think of yourself as interdependent or independent may depend on whether your distant ancestors farmed rice (which required a great deal of shared labor and group cooperation) or herded animals (which rewarded individualism and aggression). Heine points to the psychologist Richard Nisbett at Michigan, who has argued that the analytic/holistic dichotomy in reasoning styles can be clearly seen, respectively, in Greek and Chinese philosophical writing dating back 2,500 years. These psychological trends and tendencies may echo down generations, hundreds of years after the activity or situation that brought them into existence has disappeared or fundamentally changed.

And here is the rub: the culturally shaped analytic/individualistic mind-sets may partly explain why Western researchers have so dramatically failed to take into account the interplay between culture and cognition. In the end, the goal of boiling down human psychology to hardwiring is not surprising given the type of mind that has been designing the studies. Taking an object (in this case the human mind) out of its context is, after all, what distinguishes the analytic reasoning style prevalent in the West. Similarly, we may have underestimated the impact of culture because the very ideas of being subject to the will of larger historical currents and of unconsciously mimicking the cognition of those around us challenges our Western conception of the self as independent and self-determined. The historical missteps of Western researchers are the predictable consequences of the WEIRD mind doing the thinking.

Ethan Watters, a contributor to This American Life, Mother Jones, and Wired, is the author of Crazy Like Us: The Globalization of the American Psyche. This article originally appeared in Pacific Standard.

Friday, October 4, 2013

The Human Condition: Struggles of Cosmic Insignificance: The Polarized Mind





The polarized mind — a reaction to the human condition and feelings of cosmic insignificance — fixates on one point of view to the utter exclusion of all others.

By Kirk J. Schneider, PhD
October 2013

Polarizing Events Within the Mind
"The Polarized Mind" by Kirk J. Schneider, PhD, draws from the standpoint of existential psychology, and details how the polarized mind has ravaged leaders and cultures throughout history.





In The Polarized Mind (University Professors Press, 2013), Kirk J. Schneider, PhD, argues that when an individual is stricken with one absolute belief to the exclusion, or even demonization, of all others, the result is bigotry, tyranny, and vengefulness. Dr. Schneider draws on his work in the field of humanistic depth psychology to posit that polarization is caused by a sense of cosmic insignificance, heightened in the trials of personal trauma. In this selection from "The Bases of Polarization," the nature of the human condition plays a fundamental role in the formation of polarization in the human mind.


The Bases of Polarization


Throughout human history, people have repeatedly swung between extremes. Arthur Schlesinger Jr. (1986) noted these swings in his classic book The Cycles of American History. In this book, Schlesinger articulated the continuous political swings in U.S. history, particularly those between conservatism and liberalism, rigidity and permissiveness. However, there are many other forms of such swings in many other times and places.

The usual explanations for the swings of history, as well as individuals, are cultural, political, and biological. The founders of the United States swung away from the British motherland because of political and religious oppression. Certain nineteenth century abolitionists resorted to armed struggle because of unrelenting federal support of slavery. Post World War I Germany amassed a titanic arsenal, in part to avenge the humiliation it perceived at the Treaty of Versailles. McCarthyite anti-communists swelled in number following the advent of Soviet expansionism in Eastern Europe. And so on. Schlesinger provides a cogent summation of these various dynamics:
"The roots of…cyclical self-sufficiency doubtless lie deep in the natural life of humanity. There is a cyclical pattern in organic nature — in the tides, in the seasons, in the night and day, in the systole and diastole of the human heart….People can never be fulfilled for long either in the public or in the private sector. We try one, then the other, and frustration compels a change in course. Moreover, however effective a particular course may be in meeting one set of troubles, it generally falters and fails when new troubles arise."

At the individual level, too, the conventional wisdom embraces both cultural and biological explanations. Depression is now frequently considered a biologically based disorder, rooted in an imbalance of serotonin in the brain. Anorexia, too, is often considered a biologically and culturally based condition, stemming from an overemphasis on thinness in Western fashions. Obsessive-compulsiveness, mania, criminality, and many other forms of suffering are also considered combinations of biologically or genetically based chemical imbalances and familial or cultural influences.

However, thanks to the expanding insights of psychological depth research, we now have a clearer picture that what we once took to be biologically or culturally based appears to be rooted in a much thornier problem — the condition of being human. What I mean by this is that polarization in all forms appears to be based not just on a reaction against a particular family, or society, or physiology but on the shocking nature of the human condition itself, which, at its extremes, is the most daunting condition of all.

And what is this human condition (or “la condition humaine,” as André Malraux put it)? It is the relationship of the human being to the groundlessness of space and time, to death, and to the most radical mystery of all, existence itself.

Just consider, for example, what happens in a classic pattern of polarization. A person or persons become injured, and the injury leads to a reaction. This reaction might take the form of a religious decree to rid the world of infidels (e.g., those who had formerly attempted to undermine established religious precepts). It might take the form of a political mobilization (e.g., fanatical nationalism) in the wake of a threatened state (e.g., post World War I Germany). Or it might take the form of a humiliated individual vowing to avenge his abusers. In each of these cases there is a time-tested dynamic at play. Initially there is a sense of helplessness (despair), then there is a reaction against that helplessness (polarization, fanaticism), followed by a destructive outbreak as a result of that reaction.

Hence, if we peel back the layers of this scenario, what do we find? We certainly find physiological (e.g., fight and flight) reactions (as Schlesinger and others have noted), classical psychodynamic issues stemming from childhood (as Freud and others have contended), and behavioral dimensions, such as conditioned reactions to aversive stimuli (as Skinner and others have pointed out). But are these really the essential building blocks of polarized experiences, of stuck and life-encompassing fixations, or the extremes and fanaticisms of theocracies, military-industrial complexes, and hate-driven assassins? The emerging consensus is quite probably “no.”

Although the traditional explanations work to a point, depth research unveils that they are but harbingers or glimpses of a much more encompassing problem. For example, the latest research on the roots of extremism centers on what is aptly termed “terror management theory.” According to this theory, which is built on a growing base of cross-cultural research, the deeper we probe the layers of psychosocial extremes, the closer we come to anxieties about existence itself — coinciding with and extending beyond the physiological, familial, and cultural. In short, according to terror management theory, polarization arises from culturally diversified experiences of death anxiety, and death anxiety is aroused by an extraordinary range of secondary fears. Among these fears is humiliation, or the terror of feeling insignificant. Witness the following conclusion by Arie Kruglanski, one of the leading investigators of global terrorism. Drawing in part on terror management research, and in part on the data that he and his colleagues at the National Consortium for the Study of Terrorism and Responses to Terrorism have amassed, he asserts:

"Personal significance is a motivation that has been recognized by psychological theorists as a major driving force of human behavior. Terrorists feel that through suicide, their lives will achieve tremendous significance. They will become heroes, martyrs. In many cases, their decision is a response to great loss of significance, which can occur through humiliation, discrimination or personal problems…Interesting[ly], research shows that poverty is not the root cause of terrorism. Many terrorists come out of the middle class, and some [like Osama Bin Laden] are quite well-to-do."
As Kruglanski intimates, therefore, terrorism in particular and polarization in general are rooted in a very profound problem of the human situation. It is not a problem that can simply be eliminated through material comforts, physical well-being, or even in some cases loving and well-adjusted families. It is a problem that each individual must confront in varying degrees during their lifetimes, because it is not a problem that will go away. Although physical and psychological vulnerability and ultimately annihilation appear to be at the crux of this problem, I suggest that there is something even more subtle at play, something even more harrowing. This “something” is what I and others call “existential anxiety.” Existential anxiety is not just the fear of physical death but the fear of the implications of physical death. Existing in a universe that has no calculable end and no calculable beginning — that is a radical mystery. It is our terror of our bewildering fragility, our nothingness before the vastness of space and time, and our steady transformation from matter to inexplicable dust.

Trauma, shock, and disruption all alert us to this incomprehensible state of affairs. They jar us out of our comfort zones and peel back the profundities lying just beneath our routines. Virtually everyone who is polarized, I contend, has been a victim of existential panic; and virtually all of us, in varying degrees, have experienced this polarization. The question is: How do we prevent, or at least manage, the most destructive polarizations — the polarizations that wage egregious wars, that initiate relentless hatred, that concentrate obscene accumulations of wealth, and that deplete, every day, the imperative resources of nations?

Before we can address this question, we need to look more thoroughly at the psychological bases of polarization throughout history, the damage that polarization has wreaked, and the fates that have awaited those who fought valiantly to oppose it.


Reprinted with permission from The Polarized Mind: Why It's Killing Us and What We Can Do About It by Kirk J. Schneider, PhD and published by University Professor's Press, 2013.

Wednesday, September 11, 2013

John Gray, David Hawkes, and the Myth of Progress



Dissident Voice: a radical newsletter in the struggle for peace and social justice

John Gray, David Hawkes, and the Myth of Progress

John Gray is a British social philosopher who, in the words of David Hawkes, puts forward an “uncompromising challenge to the myth of progress.” Hawkes (an English professor at Arizona State) has recently published an essay, “Backwards into the future,” in the TLS (8-30-2013), which is a sympathetic presentation of Gray’s views and a review of his latest book, The Silence of Animals: On progress and other modern myths. What is Gray’s challenge all about?

Gray’s new book is an attack on “meliorism” — which Hawkes explains as the view “that the moral and material condition of humanity will improve over time” and that its improvement is, in the long run, inevitable. Defined this way, “meliorism” will be easy to attack. Conjoining “moral” and “material” conditions with “and” rather than “and/or” and adding “inevitability” suggests that meliorism is some form of utopian dream and indeed a myth.

But not all philosophers use this straw-man definition of meliorism. Much more useful is the definition given, for example, in the Cambridge Encyclopedia of Philosophy. Meliorism “is the view that the world is neither completely good nor completely bad, and that incremental progress or regress depend on human actions.” This view holds: “By creative intelligence and education we can improve the environment and social conditions.”

Meliorism, on this view, holds that humans can make some progress towards improving the world, but regress is also possible at times, and there is no guarantee of success, since human actions cannot be predicted with certainty. Under capitalism, for example, human actions are guided by competition and the profit motive and lead to socially destructive behaviors with respect to the environment and toward other people, who are seen as objects to be manipulated for economic gain. Meliorism in such a system would not seem to have much chance of success in the long run, although in some parts of the world progress in scientific understanding and disease control can be discerned.

The Wikipedia article on “Meliorism” points out that this view is the basis on which the values of liberal democracy, human rights, and liberalism as a political philosophy are founded. I should also add that Marxism and other forms of socialism are likewise indebted to meliorism, but they hold that the meliorist project cannot really get underway, or can do so only with great difficulty, under capitalism or in under-developed parts of the world where meliorist social projects, including socialism, are attempted in the face of capitalist hegemony.

Hawkes praises Gray for his “bold effort” to “exorcize” the “spectre of progress.”  This “spectre” presents itself in “the guises of Enlightenment rationalism, Romantic individualism, liberal humanism, nationalism, Marxism, and neo-liberal capitalism.” Only the kitchen sink seems to be missing.

But I think Hawkes indulges in overkill. He attacks the uses to which science has been put in the last century, giving as negative examples the two world wars, the Holocaust, and Hiroshima (all carried out under the aegis of capitalism). He suggests that science is misused perhaps because of a defect in its methods, and thinks “we may well ask whether such uses are not in some way inherent in the scientific method that enables them.”

I don’t know how many science courses English professors are required to take, but Gray’s target is not progress in science but the claim that there has been moral progress. In a talk he gave at an RSA conference in Britain, he stated that there has been progress in scientific understanding of the world since the time of Copernicus, and he is not arguing against that; what he rejects are claims of ethical and moral progress — the United States, for example, has reverted to the use of torture, a practice we had thought was extinct in advanced democracies and outlawed by all sorts of international agreements and conventions.

There is nothing “inherent” in scientific method, any more than in mathematics, that leads to the Holocaust. The failure of morality that led to the Holocaust, or Hiroshima, or the invasion of Iraq was not a failure of science. Science, like mathematics, is neutral on moral questions and only seeks to describe how the world works in terms of natural processes. It is similar to the rules of chess: this is how the pieces move, etc. If you play chess poorly, it is not the fault of the rules.

Hawkes admits that Gray “never renounces belief in scientific truth,” but maintains that serious consequences still result from abandoning belief in moral or ethical progress. The consequences Hawkes reports Gray drawing from his rejection of moral progress are not “profoundly disturbing,” as Hawkes maintains, because they don’t really follow at all. Gray thinks it is worse to lose “faith” in progress than to lose it with respect to “God, reason or even science,” Hawkes writes.

We are told that without the idea of progress we cannot see “meaning in life.” But this is just not true. Humanity makes itself by its choices and gives meaning to life by the commitments it undertakes. Sartre pointed this out before Gray was even born when he said:
Whenever a man chooses his purpose and his commitment in all clearness and in all sincerity, whatever that purpose may be, it is impossible for him to prefer another. It is true in the sense that we do not believe in progress. Progress implies amelioration; but man is always the same, facing a situation which is always changing, and choice remains always a choice in the situation. The moral problem has not changed since the time when it was a choice between slavery and anti-slavery.
That there is no transcendental meaning to life does not mean there is no meaning tout court.

We also have to abandon the idea that “empirical appearances conceal substantial essences.” This is nonsense. Discussions of empirical and substantial essences, or of real and nominal essences, whether in Aristotle’s views or Locke’s for that matter, are quite independent of one’s theory about “progress” one way or another.

Nor is it responsible for our having to give up belief in a “soul” within the body. Materialism is responsible for this view — it goes back at least to Epicurus and is not dependent on Gray’s views about the myth of progress. Ryle’s The Concept of Mind, written when Gray was a toddler, deals with “the ghost in the machine” quite apart from notions of progress.

One can also reject the idea of progress independently of being either a neo-pragmatist or a postmodernist — it does not commit one to rejecting the view that signs refer to external reality.

Finally, we are informed, incorrectly, that not having faith in progress means we “view the world as a depthless simulacrum with no underlying significance.” Wrong again. Not all cultures have produced philosophies based on the idea of progress. The ancient Egyptians, for one, had no concept of progress in our Western sense, yet they did not believe the world was a depthless simulacrum without significance.

Again, Sartre would maintain that we are responsible for creating our own significance in terms of the values we choose to live by. The world presented by science is the backdrop for our experiences and choices — it is up to us to provide the significance. None of the above five so-called “profoundly disturbing” consequences of rejecting the idea of moral progress is a logical consequence of such a rejection.

This very conclusion that I have articulated is the one Hawkes indicates is shared by Gray himself. Hawkes writes that one of the conclusions of The Silence of Animals is: “The world can only have meaning conferred on it, or be deprived of it, by human beings.” But this conclusion does NOT logically follow from Gray’s thinking. He thinks we have arrived at it not because the world has changed but because the mind (i.e., “the twenty-first century mind”) has changed. Yet this conclusion would be consistent with the views of mid-twentieth-century thinkers such as Sartre, among others, so no new and startling “development in human history” is responsible.

Marxists would say that the dominant ideas in a culture are a reflection, in the ideological superstructure, of the social reality that the culture has created around its basic interaction with the natural world in which it finds itself, and especially around its mode of extracting the food and sustenance needed to sustain the living human beings who comprise it.

And while the scientific world view would question the idea of “eternal verities” with respect to the development of ethical and moral systems, if Gray is correct that the world’s meaning, or lack of it, depends on human beings, then the very idea he rejects — that this is “not the discovery of an eternal verity about the world” (as Hawkes puts it) — is itself incorrect. The only way it could be true that the “meaning of the world” is put there by the human mind is if the world, in and of itself, has no transcendent meaning of its own; it never did and presumably never will; it is just atoms and the void. And that is certainly an eternal verity about the world, and a necessary condition for Gray’s views to make whatever sense they do make.

Hawkes questions whether Gray is correct in apparently thinking that life, even for people who think it has meaning, is still meaningless. Gray writes, “symbols are useful tools; but humans have an inveterate tendency to think and act as if the world they have made from those symbols actually exists.”

Hawkes, however, asks if this is really an “inveterate tendency” rather than [as Marxism suggests] the result of historical conditioning. We might think the word “fire” is a symbol for the rapid exothermic oxidation of combustible substances resulting in heat and light, and we would not, I think, be wrong to hold that what the symbol represents “actually exists.” However, we might not have the same opinion as the ancient Greeks about “Zeus.” It is the job of science, and of philosophy, to try to hook up the proper symbolism with the actually existing world.

We can pass over the next section of Hawkes’s essay, where he discusses the problems of symbolism and signs as elaborated by Gray in an earlier work, False Dawn (1998; 2nd edition, 2009). Here the discussion revolves around Gray’s use of economic examples to illustrate his theories, and Hawkes seems to take Gray seriously when he does so. The problem is that Gray’s economic views (and Hawkes’s remarks about them) appear nonsensical. I base this not only on trying to parse this discussion but also on Paul Krugman’s review of the second edition of False Dawn. Krugman, who has a Nobel in economics, thinks that Gray’s writings on the subject are the “garbled” views of an “ignoramus.” Krugman, however, writes that Gray didn’t need to show himself “to be an economic ignoramus, when his core argument does not really depend on economics anyway.” [False Dawn: The Delusions of Global Capitalism (book review, New Statesman)] Let us return to the “myth” of progress and the “core argument” and leave the dismal science to Krugman and his confrères.

Hawkes next deals with a contradiction in Gray’s position (not necessarily a bad thing). Progress may be a myth, but “modernization inexorably occurs” [the spectre of progress under another name], Hawkes writes. We may claim not to find any meaning in history, but history and change still go on. If the myth of progress is overcome and our understanding of the world is no longer perverted by it — is this not progress? Hawkes, I fear, may be a victim of dispirited English-department postmodernism when he writes:
If the Western intelligentsia no longer acknowledges any significance to life, that does not mean that we have discovered a timeless truth that had been hidden from Aristotle, Plato and the prophets of monotheism. It means that we can no longer see meaning where others once did.
Well, I don’t think Hawkes speaks for the whole “Western intelligentsia.” As far as finding significance in history is concerned, the “Western intelligentsia” would do well to ponder the following admonition from Hegel with regard to any scientific study: the categories we use to find significance or meaning in the world are the ones we ourselves bring with us, and a thinker “sees the phenomena presented to his mental vision exclusively through these media.” From which he concludes that to a person “who looks upon the world rationally, the world in its turn, presents a rational aspect.”

And what do we find when we look at the world rationally, i.e., scientifically? We don’t find the world according to Plato or Aristotle or the prophets of monotheism. We find a universe about 13.798 ± 0.037 billion years old; we know of life on only one planet (so far), Earth, which is about 4.6 billion years old, seems to have had life for the last 3.6 billion years, and has had anatomically modern humans for the last 200,000 years. Our species resulted non-providentially from a process of evolution by natural selection. So here we are and we have to make the most of it.

Do we see any significance or meaning in the history of our species? Hawkes seems to agree with Gray that it is irrational to believe in (moral and ethical) progress — he is very unimpressed by the twentieth century — but, he says, that doesn’t mean there is no meaning in history.

Hawkes proposes that the meaning of history is not progress but anti-progress; i.e., not ascent but decline. “History is not progress but regress, not advance but decline, and it leads to destruction rather than to utopia.” Gray would think this just as ridiculous as progress because for him the basic reality is that the animal man is an unchanging essence. In his book Straw Dogs he writes:
Humanism can mean many things, but for us it means belief in progress. To believe in progress is to believe that, by using the new powers given us by growing scientific knowledge, humans can free themselves from the limits that frame the lives of other animals. This is the hope of nearly everybody nowadays, but it is groundless. For though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive species that is also one of the most predatory and destructive.
Hawkes writes that “Belief in historical regression is a far more challenging proposition than Gray’s assertion of insignificance.” It is challenging because it is ridiculous. What is history regressing from — Atlantis? Ancient Egypt? The Stone Age? At least Gray’s warmed-over Schopenhauerian pessimism makes some sense where regress doesn’t.

Hawkes also seems to miss the point about the difference between moral progress and scientific progress. A world without polio or smallpox is a great scientific advancement and shows that we can make progress in disease control and understanding nature. If there are areas where polio still breaks out, mostly in the underdeveloped world, it is a moral crisis not a scientific one. If capitalists demand money and profit for medicines, it is a moral crisis not a scientific one.

When Hawkes writes, “It is relatively easy to admit that what we have seen as scientific advancement and economic enrichment are meaningless” he is missing the whole point of what science is all about. It is not meaningless to fight against malaria, yellow fever and other infectious diseases. Pasteur was not engaged in a meaningless exercise when he discovered how to prevent rabies, nor was Koch when he discovered the cause of tuberculosis.

Hawkes ends his essay by remarking that we may soon have to consider the possibility that scientific advance and economic enrichment (two inherently different activities indiscriminately lumped together) are “actively evil and destructive.” This is like calling cooking evil because some people overeat and get sick. Did cooking make them sick?

I will give the last word to Bertrand Russell, who sums up all that anyone will get out of Hawkes’s essay or Gray’s books as far as positive knowledge is concerned: “Change is one thing, progress is another. ‘Change’ is scientific, ‘progress’ is ethical; change is indubitable, whereas progress is a matter of controversy” (Unpopular Essays, 1951).

Thomas Riggins is currently the associate editor of Political Affairs online. Read other articles by Thomas.


Sunday, September 8, 2013

Keeping Alive The Big Questions


religion
Keeping Alive The Big Questions



Twenty years ago, Evgenia Cherkasova and Elena Kornilov were doctoral students in their mid-20s, living in the same housing complex at Penn State University. As they pursued their degrees -- Cherkasova in philosophy, Kornilov in physics -- both started families, and to take a break from studying they often found themselves meeting for wine or tea, or watching their young children on the playground. As their friendship deepened, their conversations often veered into the Big Questions on their minds: How could they live a "good life" with purpose, happiness and success? What did those words mean?

After graduation, Cherkasova and Kornilov went their separate ways, keeping in touch via letters and weekly phone calls, sharing the details of every aspect of their lives – their kids’ first days of school, their academic research, their relationship hurdles.

On March 4 of this year -- Kornilov’s 48th birthday -- her doctor called to tell her she had breast cancer. Even as she hid the diagnosis from other friends and some family members, Kornilov confided in Cherkasova, and the two went over her treatment options. Some, like chemotherapy, were physically intrusive, but would greatly reduce the chance of recurrence. Others, like hormonal drugs, were easier to handle, but came with a higher risk of a tumor returning.

Suddenly, the conversations and questions that guided their friendship over the years took on a new meaning. They weren’t just idle speculations; they were real, urgent, full of consequences, perhaps now even a matter of life and death.

“We started talking about how you deal with these situations, especially when it’s a patient with a potentially terminal disease,” recalled Cherkasova, now a philosophy professor at Suffolk University in Boston. “She told me, ‘It’s a question of the quality of life versus length of life. You have to decide: If you want to prolong your life, then what do you do it for? What am I doing in life at this point? What’s happiness?’”

***
 
This fall, as the latest crop of freshmen arrives on university campuses across the country, many students will find themselves debating similar questions, and not only in early-morning 101 courses. In dining halls and dorm rooms, as they come together with people of vastly different backgrounds and perspectives, they’ll continue the typical college traditions of late nights, long conversations and self-discovery. And when they graduate, they will face a challenge much steeper than any college exam or doctoral dissertation -- carrying that spirit of inquiry with them into the real world.

Statistical and anecdotal evidence suggests this is easier said than done. According to the Bureau of Labor Statistics, which takes an annual measure of how Americans use their time, the average person spends about 45 minutes daily “socializing and communicating.” Watching TV, meanwhile, accounts for nearly three hours of the average American’s day. And today’s laptop-scattered coffee shops don't seem to foster environments of conversation and debate, like the salons of France, often credited with incubating philosophical discussions that ushered in the Age of Reason, or the cafe culture, a backdrop for the Existentialist musings of Jean-Paul Sartre and his contemporaries.

Of course, it’s much easier to measure TV-watching than America's intellectual engagement and introspection. But for some time, scholars and observers have been documenting, often with alarm, a shift from a society structured around social gatherings to a culture of technology-driven individualism -- or, depending on your point of view, isolation. Writing more than a decade ago in Bowling Alone, Harvard public policy professor Robert D. Putnam documented the erosion of Americans’ participation in community clubs -- like bowling leagues and civic organizations -- and the disengagement from society and the self that it fostered. More recently, in Alone Together, Massachusetts Institute of Technology clinical psychologist Sherry Turkle, who studies the impact of technology on social relations, examined how hyperconnectedness has created relationships where we have the “illusion of companionship” without the “demands of friendship.”

In other words, we are moving toward a way of life that discourages the kinds of conversations that defined and sustained Cherkasova and Kornilov’s friendship.

Evgenia Cherkasova, a philosophy professor at Suffolk University, regularly has phone conversations about the Big Questions with her friend Elena Kornilov and will teach a course next year called What Is the Meaning of Life?
 

"We have stripped away so many of the conditions that make conversations like these flourish. And the condition that makes it flourish, in many cases, is the uninterrupted full attention to each other," said Turkle, who has spent the three years interviewing dozens of people from various walks of life about what they talk about with friends and how they do it for an upcoming book called Reclaiming Conversation. "These conversations are what college students are missing, they're what people at work are missing, they're what we're all missing.”

In the midst of this shift, the American university system remains an oasis of sorts, a place where the Big Questions are freely and fiercely debated -- in no small part because many students are not yet dealing with the pressures of work and family. But there’s a shift on American campuses, too. Just seven percent of graduates major in the humanities, like philosophy and literature, while majors in largely career-oriented fields have increased as more Americans pursue higher education. A half-century ago, twice as many students walked across commencement stages with humanities degrees.

New York Times columnist David Brooks, who has written and spoken extensively about the decline of the “humanist vocation,” began teaching a course at Yale University last spring about the history of character building. He said he believes there’s a shortage of people publicly asking the cosmic questions.

“People are hungry for a certain side of writing about these issues, but we no longer have that kind of group of writers widely discussing how you measure a life," said Brooks.

On occasion, an awe-inspiring commencement speech -- like David Foster Wallace's "This is Water," which was given at Kenyon College in 2005 and became a book after his death -- makes its way into pop culture. But Brooks believes we need much more. "Back in the 1950s, you had Joshua Heschel and Reinhold Niebuhr; they were writing books devoted entirely to these issues," he said.

Heschel, a rabbi who stood on the front lines of the Selma-to-Montgomery marches with Martin Luther King Jr., also was known for penning provocative theological works, like Man is Not Alone and God in Search of Man. The works of Niebuhr, a Christian theologian and professor at Union Theological Seminary, include Moral Man and Immoral Society and The Nature and Destiny of Man.

“For anyone who goes to church, these are the questions they are essentially grappling with via their faith,” said Brooks. Indeed, a measurable drop in religious affiliation and attendance at houses of worship may be a factor in the decline of a culture of inquiry and conversation. According to the Pew Research Center, 1 in 5 Americans identifies with no religion, including those who are atheist, agnostic or “spiritual but not religious.”

But the Big Questions aren't just for the faithful, and there are glimmers of hope for those who long for the days when it was easy to find souls loudly searching for what the Greeks described as eudaimonia, or “human flourishing,” considered central to a person’s and a society’s development.

On Meetup.com, a website where people organize get-togethers around mutual interests in homes, restaurants or cafes, hundreds of groups focused on philosophy, spirituality and religion have launched in recent years. TED, the conference series with the slogan “ideas worth spreading,” has an independent affiliate that hosts weekly salons in a Manhattan apartment where attendees watch taped talks then discuss them (in August, the theme of each meeting was “courage”). And from suburban Columbus, Ohio, to Seattle, individuals and nonprofits have launched grassroots efforts aimed at getting Americans to talk about death and what they desire out of life; events include Death Cafes -- monthly coffee shop-centered discussions on dying that can now be found in nearly every major American city -- and Death Over Dinner, a coordinated series of meals that took place in hundreds of homes last month.

The Adult Philosophy Club of East Greenwich, R.I., was launched just over three years ago by a drug addiction counselor who recognized what he called “existential crises” among his clients. Today, the group is open to the whole town.

Bob Houghtaling, director of the drug counseling program at the F.A.C.E.S. community center in East Greenwich, R.I., founded the Adult Philosophy Club two years ago. Today, its membership includes a dozen people who meet weekly in a community room at the local police station to discuss the Big Questions and how philosophy applies to them.
 
For 90 minutes each Tuesday in a community room at a police station, Bob Houghtaling, a 59-year-old counselor who studied philosophy as an undergraduate at Rhode Island College, leads a roundtable of a dozen citizens ranging from teenagers to retirees. Sometimes, they’re discussing a book, like Eichmann in Jerusalem, the examination of the trial of Nazi war criminal Adolf Eichmann in which political philosopher Hannah Arendt coined the term “the banality of evil.” Or they’re going to museums and films, like the Tomaquag Indian Memorial Museum in Rhode Island and a showing of the feature film "Lincoln."

“What constitutes morality? Are we moral? Is what’s right something natural or is it something that we’re taught?” Houghtaling said, recounting some of the club’s recurring themes. “People come in with strong convictions and religious views. It can get heated.”

Oftentimes, the conversation spins off of the news. With international controversy over revelations about the National Security Agency’s extensive spying programs and amid increased tensions over the Obama administration’s threat to launch strikes against Syria, the discussion frequently turns to the role of the state. “What obligation does the state have? In a critical situation like a war, can the government suspend natural rights?” said Houghtaling. “Where’s the line?”

As Houghtaling sees it, these are questions that can all too easily be swallowed by the activities and stresses of everyday life.

“We go through the perfunctory things so much, putting on our suits and ties, and putting on our titles, that we don’t get to talk about humanity and life. It’s cathartic when you get to do it,” he said. “It’s tough to sustain yourself unless there are ‘whys’ and purposes.”

Yet, there are plenty of reasons for putting off these questions. With high unemployment and economists predicting years of recovery from the recession ahead, jobs and money have a way of taking precedence over any talk of higher purpose. When Gallup researchers asked an international group of respondents a few years ago to describe their “best possible future," the responses leaned heavily toward “wealth” and “good health.” It was harder, on the other hand, for people to describe what they considered good relationships and a sense of community, and how important they were.

***
 
Given the considerable evidence and widespread perception that we are drifting away from the Big Questions, more universities have committed to sparking conversations. Many classrooms and campus greens are being turned into experimental zones where students and faculty can explore what Greek philosophers called the quest for ataraxia, or “tranquility,” in life.

Recognizing a yearning for “intellectual community,” the National Endowment for the Humanities has given $2.2 million in grants since 2009 to fund college and university courses that tackle the “enduring questions.” Cherkasova will teach one next year at Suffolk University in Boston called What Is the Meaning of Life? (its syllabus includes Ecclesiastes and Siddhartha, Hermann Hesse’s philosophical novel about a young Brahmin’s journey of self-discovery during the age of Gautama Buddha). Among the dozens of courses that the NEH has funded are What Is The Meaning of Happiness, taught at New Mexico State University, Las Cruces; an upper-level class at the College of St. Benedict in St. Joseph, Minn., titled What Am I?; and, at Ursinus College in Collegeville, Penn., What Is Love?

To advocates, these courses are more than mere intellectual exercises and bull sessions. They pose questions intimately connected to the core of everyday life.

“When you are dealing with college students, mostly what you are doing is trying to plant seeds so they are familiar with different world vocabularies,” said Brooks, whose own course was not taught with a grant but is similar in some ways to the NEH programs. “You want it to be so that when they get older and encounter challenges, they know what to do, and have books and ways of thinking to help them tackle problems.”

The West Conshohocken, Penn.-based John Templeton Foundation, best known for its annual Templeton Prize, has spent tens of millions funding largely academic endeavors looking into the "basic forces, concepts, and realities" of the universe and our place in it. They range from the esoteric, like a $5 million project to research immortality at the University of California, Riverside, to projects aimed at a wider audience, like Big Questions Online, a news site updated weekly with essays by academics and spiritual thinkers.

Tom Kaplan's Recanati-Kaplan Foundation is funding the Ethical Inquiry program at Brown University, a cross-departmental, interdisciplinary lecture and conference series tackling the Big Questions. Kaplan said his own philosophical journey began as a teen, when his mother gave him a copy of Marcus Aurelius' Meditations.
At Brown University, the New York-based Recanati-Kaplan Foundation began last year to fund a cross-departmental, interdisciplinary lecture and conference series on Ethical Inquiry. Its goal: to use Greek philosophies, among others, as a base to inspire students, faculty and the Providence community to explore the meaning of a "good life." At the core of its attempt is another big question: How can the wisdom accumulated over the generations be passed down instead of lost in the shuffle of everyday lives?

“One of the things that interests me is whether we can save young people literally decades of wasted time in coming to the conclusion that almost everyone does generation after generation: The things we thought were important in our youth when the world was open to us, when it was our oyster, when the future would bend itself to our will, really are not,” said billionaire natural gas and gold investor Thomas Kaplan, who started Recanati-Kaplan with his wife, Dafna Recanati.

Kaplan’s own interest in philosophy was set off in high school when his mother gave him a copy of Marcus Aurelius' Meditations, a major Stoic text.

“There are certain truisms. No man on his deathbed ever said, 'I wish I spent more time at the office,’” Kaplan said, describing one of the many lessons he hopes to impart through the nascent effort. One of the foundation's launch events in 2012 was a two-day conference on the “Art of Living.” Hundreds of students, faculty and Providence residents listened to philosophers, psychiatrists, experimental psychologists and scholars of other disciplines examine the "good life."

Meanwhile, at Stanford University, there’s Sophomore College, a three-week intensive course series where students meet for several hours every day with the same class and live together on campus. Among its seminars, the Meaning of Life was taught by the university’s dean of religious life, and included field trips to houses of worship and readings of George Bernard Shaw’s Major Barbara and Robert Bolt’s A Man for All Seasons. In another push, the Office of Religious Life hosts “What’s Meaningful to Me and Why,” a series of hour-long public discussions with faculty and administrators about “life questions.” Speakers are encouraged to discuss their personal struggles and reasons for pursuing their fields.

Isabelle Wijangco, 23, said the Meaning of Life course at Stanford was one of the most important classes she's taken. She credits it with helping her decide on her goal to attend medical school and focus on global women's health issues. 
 
The philosophy department chair spearheading the program at Brown, Bernard Reginster, admits the limitations of universities when it comes to changing conversations at the dinner table. The challenge, he said, is to take the questions "first, to students and faculty outside the confines of academic philosophy and second, to a wider public.” How could exploring philosophy, psychology and literature, for example, amplify the life and work of a future investment banker, economist or engineer?

Isabelle Wijangco, who graduated from Stanford last year with a degree in human biology, is among those who took the Meaning of Life seminar in Sophomore College and said the course is part of what spurred her to want to focus on global women's health issues when she attends medical school.

“One of the big questions we grappled with in Meaning of Life was how to live every moment and be fully present while also being forward-looking and planning for our hopes and dreams for ourselves and the world," she said.

Wijangco recalled that a fellow student described a way to strike that balance by repeating a bit of wisdom he heard from his father: “Lay each brick reverently. Lay a purposeful brick, but be in the moment of laying that brick. The house will form.”

She has carried that wisdom with her ever since. “It has helped serve as a metric for me in maintaining intentionality in every action,” she said, “for both present and future.”