Sunday, November 15, 2015

The Dynamics of Choosing Empathy

To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

Jamil Zaki [10.20.15]
NEW — A Reality Club Discussion with responses from: Paul Bloom, David DeSteno
If you believe that you can harness empathy and make choices about when to experience it versus when not to, it adds a layer of responsibility to how you engage with other people. If you feel like you're powerless to control your empathy, you might be satisfied with whatever biases and limits you have on it. You might be okay with not caring about someone just because they're different from you. I want people to not feel safe empathizing in the way that they always have. I want them to understand that they're doing something deliberate when they connect with someone, and I want them to own that responsibility.
JAMIL ZAKI is an assistant professor of psychology at Stanford University and the director of the Stanford Social Neuroscience Lab.  Jamil Zaki's Edge Bio Page.
Paul Bloom
"Zaki correctly describes my own position as “empathy is overrated”. I agree that empathy can sometimes motivate kind behavior. But, as I’ve argued elsewhere, it is biased, pushing us in the direction of parochialism and racism. It is short-sighted, motivating actions that might make things better in the short term but lead to tragic results in the future. It is innumerate, favoring the one over the many. It is capricious; our empathy for those close to us is a powerful force for hatred towards those who harm them. It is corrosive in personal relationships, exhausting the spirit and making us less effective at helping those we love. ..." [Read]
David DeSteno: "How do we go from wanting to harm someone to commiserating with them? The answer, I think, potentially offers a solution to the competing views of Zaki and Bloom. Whereas Zaki is right about empathy and compassion being partially subject to choice, the usefulness of such “choosing” can be called into question. After all, Bloom is quite correct in noting that our care for others is biased. Compassion isn’t dispassionate; even when people’s suffering is objectively equal, we feel more compassion for those like us. ..." [Read]
A Conversation with Jamil Zaki
I've been thinking an enormous amount about a puzzle concerning how empathy works. Before describing it, I should make sure that we're on the same page about what empathy is. To me, empathy is a useful umbrella term that captures at least three distinct but related processes through which one person responds to another person's emotions.     
Let's say that I run into you and you are highly distressed. A bunch of things might happen to me. One, I might "catch" your emotion and vicariously take on the same state that I see in you; that's what I would call experience sharing. Two, I might think about how you feel and why you feel the way you do. That type of explicit consideration of the world as someone else sees it is what I would call mentalizing. Three, I might develop concern for your state, and feel motivated to help you feel better; that is what people these days call compassion, also known as empathic concern.

It often seems like these processes—sharing someone's emotions, thinking about their emotions, and wanting to improve their emotional state—should always go together, but they split apart in all sorts of interesting ways. For instance, people with psychopathy are often able to understand what you feel, but feel no concern for your emotions, and thus can leverage their understanding to manipulate and even harm you.                 
I spent several years early in my career thinking about these empathic processes and how they interact with each other, but in the last couple of years, I've zoomed out. I've stopped thinking as much about the "pieces" that make up empathy and started thinking about why and when people empathize in the first place. This is where the puzzle comes in, because there are two different narratives that you might hear about how empathy works. They're both compelling and very well supported, and they're pretty much entirely contradictory with each other, at least at first blush.                 
The first narrative is that empathy is automatic. This goes all the way back to Adam Smith, who, to me, generated the first modern account of empathy in his beautiful book, The Theory of Moral Sentiments. Smith described what he called the "fellow-feeling," through which people take on each other's states—very similar to what I would call experience sharing.                 
His most famous example of this is a crowd watching a tightrope walker. Smith said that the crowd would become nervous watching this person wobble over a precipice. Their palms would start to sweat, they'd balance and move their own bodies as though they were trying to survive the tightrope even though they were on relatively solid ground. Smith was adamant that this was something that people could not control, it just happened to them.                 
That view dominates current theory about empathy, and not without reason. It certainly jibes with our intuition that we can't control our feeling of empathy. If I were to ask you to imagine watching someone suffer a horrendous sports injury, you probably wouldn’t think, "Well, I'd figure out how much empathy I want to feel in this moment." You'd probably predict that a wave of discomfort and empathy would just wash over you. There is lots of evidence that that is what happens. People take on each other's facial expressions within a fraction of a second of seeing someone else pose an expression, even if they're not aware that they're doing it. This type of imitation happens quite early in development. Babies, in the first weeks of their lives, will cry when they hear another infant crying. This type of sharing is probably evolutionarily old as well. Mice, which don't have the same cognitive firepower that we do, appear to take on each other's states.
My lab has been interested in another signature of empathy, which is what we call neural resonance. This is something you can capture using techniques like fMRI: When I see you experience some state—make a movement, feel pain, or exhibit some emotion—my brain generates a pattern of activity consistent with what you're experiencing, not with what I'm experiencing. It's as though my brain rehearses your experience for me so that I can understand it implicitly. We, and lots of other folks, have demonstrated that this happens, even absent any instruction to empathize and even when you distract people. This suggests that even this neural signature of empathy might be occurring outside of our awareness or control.
That's one narrative, that empathy is automatic, and again, it’s compelling—backed by lots of evidence. But if you believe that empathy always occurs automatically, you run into a freight train of evidence to the contrary. As many of us know, there are lots of instances in which people could feel empathy, but don't. The prototype case here is intergroup settings. People who are divided by a war, or a political issue, or even a sports rivalry, often experience a collapse of their empathy. In many cases, these folks feel apathy for others on the other side of a group boundary. They fail to share, or think about, or feel concern for those other people's emotions.                 
In other cases, it gets even worse: people feel overt antipathy towards others, for instance, taking pleasure when some misfortune befalls someone on the other side of a group boundary. What's interesting to me is that this occurs not only for group boundaries that are meaningful, like ethnicity or religion, but totally arbitrary groups. If I were to divide us into a red and blue team, without that taking on any more significance, you would be more likely to experience empathy for fellow red team members than for me (apparently I'm on team blue today).                 
Another interesting feature of this group-boundedness of empathy is that it doesn't just affect the amount of empathy we feel, it also affects whether we feel empathy automatically or not. Scientists have used EEG, for instance, to demonstrate that folks exhibit less neural resonance for the pain of outgroup as compared to ingroup members, and that difference appears within 200 milliseconds. It's not that you experience automatic empathy and tamp it down if you're in an intergroup setting; it seems like in those contexts empathy doesn't occur at all.                 
You've got these two narratives. On the one hand, empathy appears automatic. On the other hand, it diminishes and expands with features of your situation. How can we square these two accounts? That's what I've been asking myself a lot these days, and I feel as though I've arrived, at least preliminarily, at an answer: we can resolve the tension between those narratives by letting go of some assumptions about how empathy works. In particular, the idea that empathy is out of our control.
Lately, I've begun thinking about empathy not as something that happens to us, but rather as a choice that we make, even if we're not aware we're making it. We often make an implicit or explicit decision as to whether we want to engage with someone's emotions or not, based on the motives we might have for doing so.                 
Let me try to unpack this. Let's say that you're watching TV and you learn that the next thing coming on the station you're watching is a telethon meant to raise awareness of leukemia, and this will include kids who are suffering from leukemia telling their story. I bet you would predict, rightly, that watching this telethon would cause lots of empathy to bubble up within you. Do you stick to the channel and watch it, or do you turn away?                 
There are lots of motives you might have for watching. For one, you might be curious about the plight of folks living with leukemia. You might even feel that it's your moral responsibility to find out more about this group. You might also imagine that you'll be inspired to donate money to this cause, and that would make you feel good, as though you're living in accordance with your virtues and principles.                
There might also be reasons that you don't want to watch this telethon. For one, it might hurt. It would probably be heart wrenching to hear these stories. It might also make you experience guilt, especially if after watching this you choose not to donate. If you're strapped for cash, you might feel as though empathy will place you in a double bind, where you have to choose between your wallet on the one hand and your conscience on the other.                 
Those might be situations you want to avoid, by avoiding empathy in the first place. I use the terms empathic approach motives and empathic avoidance motives to describe drives that push people towards and away from other people's emotions. People carry those motives out in lots of different ways. For instance, if I don't want to empathize with you, one strategy is I can just avoid you altogether. People often avoid situations that they think will inspire empathy in them. I can also simply not pay attention to your emotions, or decide through some appraisal process that your emotions are not important, or at least less important than my own.                 
Over the last couple of years, I've gathered evidence in support of a motivated view of empathy. For instance, when it comes to avoiding empathy, we can go to the example I just mentioned. You might be worried that empathy will cause you to feel guilty or morally obliged to part with some of your money. It might be a costly emotion.                 
It turns out that Dan Batson, about twenty years ago, ran a beautiful study in which he demonstrated this in a simple experiment. He told some people that they'd have a chance to donate to a homeless person, and he told other people that they'd have no such opportunity. He then asked people which of two appeals they wanted to hear: one an objective story about this person's life, and another that was emotionally evocative. It turns out that people who thought they'd have a chance to donate tended to choose the objective, emotionally neutral version of the story, consistent with the idea that they wanted to avoid experiencing empathy.
Another reason you might not want to experience empathy is if you're in the position where you have to harm somebody. Let's say that you're a linebacker, for instance, and you have to deliver a vicious tackle to a running back. It probably would behoove you to not feel everything that that person is feeling and think a lot about their emotions or the pain you're causing them. This happens in much darker contexts, of course.
In war, soldiers are explicitly encouraged to dehumanize their enemy, to make it less guilt inducing when they have to harm those people. This is what my colleague Al Bandura would call a moral disengagement. Al and his colleagues a few years ago demonstrated this in a very interesting and, to me, troubling way. They found that prison guards, especially executioners, tended to downplay the suffering of death row inmates consistent with their motive to do so and avoid guilt at their own actions. You see this all the time in modern warfare. Drone strikes, for instance, are a great way to avoid empathizing with the targets of an attack.                 
Like I said, people are not just motivated to avoid empathy. People approach empathy as well. One example of this is loneliness. People who are lonely feel a deep desire to connect with others, and in many cases they do so by ramping up their empathy and focusing more on other people's minds and experiences.
Jon Maner, Adam Waytz, and others have demonstrated that if you induce someone to be lonely, they'll pay more attention to other people's minds and connect more with their emotions. They'll even pay attention to minds that are not there, for instance by anthropomorphizing objects like robots. We see this to an extreme degree in the movie Cast Away, where Tom Hanks is so lonely that he anthropomorphizes and empathizes with a volleyball, Wilson, and thinks a lot about a mind that he imagines to be there but is actually not.
Another reason you might want to empathize is when it's socially desirable to do so. If you learn that people around you value empathy, you might be encouraged to experience empathy yourself. One of my favorite studies on this concerns gender roles in empathy. Thomas and Maio, seven or eight years ago, ran a study in which they started out by demonstrating that on a standard empathy test heterosexual men fared a little bit worse than women. This of course plays into the stereotype that women are more empathic than men, but the reason I like this study so much is that Thomas and Maio demonstrated that this is not a constitutional difference in the abilities of men, but probably instead represents a difference in their motivation. In a second study, these scientists convinced men that women find sensitive guys attractive, and it turns out that this motivation eliminated the gender gap in empathy performance. Straight men who believed that being empathic would make them attractive became more empathic, consistent with this motivated account under which people choose to approach or avoid empathy depending on their goals in a given situation.
What is empathy, and where does it come from in our intellectual landscape? The term empathy was coined by German aesthetic philosophers. The term in German is Einfühlung, which is when you "feel yourself into" an art object. The term originates with Theodor Lipps, following Robert Vischer, another aesthetic philosopher. They both believed that the way we make contact with art is not by assessing its qualities in an objective sense, but by feeling into it, by projecting ourselves emotionally into a work. That was translated into English 106 years ago by Titchener, as the word empathy.
It's funny: if you use Google's Ngram Viewer to examine the popularity of the term empathy, it looks like it's got a hydraulic relationship with another word: sympathy. Sympathy used to be much more popular. It has declined in popularity, and empathy has risen at the same time. There's a meaningful distinction between these two terms. To my mind, sympathy is a more detached form of pity that you might have for someone suffering, whereas empathy requires a lot more emotional investment.
Empathy is expensive, psychologically. It costs a lot to empathize with someone, and there are many cases in which you might not want to do so. Scarcity is one thing that drives empathy down; stress is another. This is why they say that virtues are easier to abide by on a full stomach. Empathy is, as well. If you are worried about survival and the well-being of yourself and your closest kin—your family—it's much harder to extend the "diameter" of your empathy to larger social groups. Steve Pinker talks about this in The Better Angels of Our Nature. Peter Singer also talks about this in The Expanding Circle, the idea that maybe we can again expand the diameter of our concern for others, and maybe we have over the last decades. Even within one person's life, within a moment in time, there are many factors that might drive you to feel empathy or not.
The costs of empathy include when it's painful, but also the responsibility that it places on people. If you empathize with someone, it's hard to compete with them. If you empathize with nonhuman animals, it's difficult to consume them. There is a moral responsibility that comes with an experience of empathy, especially if you want to continue being an emotionally authentic person.              
It does seem as though the social norms surrounding empathy have shifted. That's important, because if you view empathy not as a fixed quality of who we are—something that just happens automatically—but instead view it as something that we choose, then cultural landscapes should shape our individual emotional landscapes. We live in a more empathy-positive time than the past. People value warmth towards others and care for others as part of what it means to be a good person, now more than ever. That can make big changes in the way that people experience empathy.                 
Erik Nook and I, along with our colleagues, ran a study recently to test whether conformity can generate empathy in people. If you believe that others around you value empathy, are you more likely to value it yourself? We found that people were. If we convinced folks that their peers experience lots of empathy, then our participants themselves reported more empathy and acted more kindly towards strangers, even if those strangers belonged to stigmatized outgroups. We think that a changing tide in our culture can change the way that people choose to engage with empathy.
Obama is probably the most empathy-focused President—or at least the President who uses the term the most—I’ve seen in my lifetime. He often talks about there being an empathy deficit, and says that one of the ways that we need to improve our society and its fabric is by increasing our empathy. I bet there are a lot of people who would disagree with that as a policy for running a state. You saw this when Obama nominated Sotomayor as a Supreme Court justice and said this is a woman who has great empathy for the plight of many people. Well, that statement was vilified. He was pilloried for saying that, and people felt as though empathy is one of the worst features that you could select for when thinking about policy, when thinking about law, when thinking about government, because empathy is an emotion subject to all sorts of irrational biases. Justice should be blind, and presumably emotionally neutral.
You see this a lot in a movement that's taken hold recently. Paul Bloom and a set of other psychologists have made what I think is a great and very interesting case that empathy is overrated, especially as a moral compass. Their view is that empathy generates kind and moral behaviors, but in fundamentally skewed ways, for instance, only towards members of your own group and not in ways that maximize well-being across the largest number of people. On this account, empathy is an inflexible emotional engine for driving moral behavior and if you want to do the right thing, you should focus on more objective principles to guide your decision-making.                 
That's a great argument. It's not one that I agree with. It follows from a somewhat incomplete view of what empathy is. If you believe that empathy is automatic and either just happens to you or doesn't, then sure, the biases that characterize empathy are inescapable and will always govern empathic decision-making. If you instead view empathy as something that people can control, then people can choose to align their empathy more with their values.
Empathy has a long tradition in lots of different fields. In philosophy, you've got Edith Stein, for instance, a nun who wrote beautifully about empathy. Also, Martin Buber—I and Thou is a beautiful book about how people connect with each other and share each other's experience.                 
Within psychology, the study of empathy has an equally long history, and one that's got a lot of players in it. For my money, the most powerful research on empathy in the 20th century comes from Dan Batson, who's for decades demonstrated the power of emotional connection to drive people to helping each other—thinking about empathy as an engine for promoting cooperation and altruism.                 
The APA recently discovered that a set of psychologists had aided and abetted a program of enhanced interrogation, hugely controversial and enormously problematic. It's so horrific to think about psychology being used in this way, but you can imagine how that works.
I mentioned earlier that individuals with psychopathy can understand what people feel, but they use that understanding not to improve other people's states, but sometimes to worsen their states. In a perverse way, the torturer needs to engage with at least some forms of empathy in order to do their job effectively. They need to know how to push someone's buttons, how to generate as much distress as they can. This is the dark side of empathy.
There is a dark side of empathy, especially when you experience one piece of empathy without the others. Understanding someone, having emotional intelligence, might just make you a better manipulator, if you're so inclined.        
Viewing empathy as a choice helps us understand the basic nature of empathy, why and when people empathize and why and when they don't. It is more powerful than that, because it can also help us address what Mina Cikara and I have called empathic failures, cases in which people don't empathize, and that generates some problems down the line.                 
I mentioned this already with respect to intergroup conflicts, but empathic failures happen in lots of other settings. For instance, when adolescents bully each other, or when physicians fail to understand the suffering of their patients, those are empathic failures. There are lots of interventionists doing hard and important work to try to mitigate the effects of empathic failures. This type of intervention tends to take on one of two flavors: either teaching people empathic skills, like how to recognize other people's emotions, or giving them opportunities to empathize, for instance, taking groups of people who are in conflict and having them spend time together.                 
This is a great approach, but viewing empathy as a motivated phenomenon encourages us to take another approach as well, not just teaching people how to empathize, but getting them to want to empathize in the first place. Not just training skills, but also building motives.                 
That is what my lab has been up to for the last couple of years; we've been generating and testing social psychological “nudges” that might encourage people to want to empathize. We're excited to bring this into a bunch of spheres, including testing whether we can reduce bullying in adolescents and help physicians be more effective in treating their patients.        
There are huge disparities in how people feel about empathy and what they think it is, depending on where they fall on political and social landscapes. Both more conservative and more liberal people can be extremely empathic or very un-empathic. The question is, empathy for whom? Folks on the right end of the political spectrum tend to be more empathic with members of their group; they're oriented towards tradition, and towards establishing connections with people who are part of those traditions.        
At least by current cultural mores, progressives tend to be more indiscriminate and to value, in some egalitarian way, the emotions of everybody. There are shifting cultural norms surrounding who should be empathic and who should not be. Men and women, for instance, are stereotyped into roles that drive them towards being less and more empathic, respectively. That is almost certainly a holdover from previous generations.
To my mind, these constructions of empathy, as a Republican or a Democratic thing or a male or a female thing, are historical more than they are embedded in the structure of who we are. One of the things that's curious to me is how empathy changes over time. A very famous and quite controversial study that came out a few years ago by Sara Konrath and her colleagues at Michigan found that college students report being much less empathic now than they did thirty years ago. There's a drop-off in empathy that's pretty steady across that thirty-year period, but especially pronounced in the last ten years. People have jumped on the idea that this has to do with electronic forms of communication, people losing out on face-to-face contact in favor of contact that's mediated by an electronic device.
That's an interesting assertion. To my mind, it's an easy conclusion to draw. I would be just as likely to believe that people are not necessarily more or less empathic, but rather, they feel that empathy is something different, and they might not be as drawn to empathy as a construct. They might not feel that it's as desirable as it was thirty years ago. This, of course, is interesting because thirty years ago is the middle of the '80s, which people probably don't consider the most empathic decade on record. Nonetheless, when we see changes in people's empathic experience across cultural lines, across time, across gender, it might reflect not only who people are, but who they want to be.
My hope for our ongoing work and this line of thinking is that it can teach people about empathy, but also teach people how to work with their own empathy. This is one of those cases where education and intervention overlap. If you believe that you can harness empathy and make choices about when to experience it versus when not to, it adds a layer of responsibility to how you engage with other people. If you feel like you're powerless to control your empathy, you might be satisfied with whatever biases and limits you have on it. You might be okay with not caring about someone just because they're different from you. I want people to not feel safe empathizing in the way that they always have. I want them to understand that they're doing something deliberate when they connect with someone, and I want them to own that responsibility.        
It's easy to overdose on empathy, and empathy can be a dangerous thing for an individual's well-being. It can cause you to burn out. There's something known as compassion fatigue among hospice nurses and physicians. These are people who overload themselves on other people's suffering to the point that they can't take care of themselves anymore.
The idea that you can control empathy is not just meant so that everyone can turn their empathy up to eleven all the time. It's just as important to know when to turn down one's empathy, especially if you need to engage in self-care. If you need to take care of yourself, sometimes it's important to not empathize.                 
My wife is a clinical psychologist, and she says that the last thing that any of her patients need, if they're depressed, is for her to be depressed as well. She needs to modulate her empathy in real time in order to be able to guide those people towards something that will help them, not just showing them that she feels the same thing as them, but being a source of comfort for them. That requires knowing not just how to turn up empathy, but also how to turn it down sometimes.
There are cases in which people can use other people's empathy to take advantage of them or manipulate them. Advertisers do this all the time, and politicians. People try to narrativize their ideas and turn them into stories about people's suffering so that you will feel more connected to them. Any ad for Save The Children starts with an example of a child who's in dire straits, and the only way that this child will survive is if you help them. This is explicitly meant to tug on people's heartstrings in a very particular way. I don't think that empathy is necessarily always morally positive or negative; it's somewhat neutral, and what matters is how you use it.

Sunday, December 28, 2014

The truth about free will: Does it actually exist?


Acclaimed philosopher Daniel Dennett explains why free will is much more complicated than many people believe 



Neo chooses the red pill, in "The Matrix" (Credit: Warner Bros.)
The following interview is excerpted from "Philosophy Bites Again"

David Edmonds: One way to exercise my freedom would be to act unpredictably, perhaps not to have a typical introduction to a “Philosophy Bites” interview, or to cut it abruptly short mid-sentence. That’s the view of the famous philosopher and cognitive scientist, Daniel Dennett. He also believes that humans can have free will, even if the world is deterministic, in other words, governed by causal laws, and he…

Nigel Warburton: The topic we’re focusing on is “Free Will Worth Wanting.” That seems a strange way in to free will. Usually, the free will debate is over whether we have free will, not whether we want it, or whether it’s worth wanting. How did you come at it from this point of view?

Daniel Dennett: I came to realize that many of the issues that philosophers love to talk about in the free will debates were irrelevant to anything important. There’s a bait-and-switch that goes on. I don’t think any topic is more anxiety provoking, or more genuinely interesting to everyday people, than free will. But then philosophers replace the interesting issues with technical, metaphysical issues. Who cares? We can define lots of varieties of free will that you can’t have, or that are inconsistent with determinism. But so what? The question is, ‘Should you regret, or would you regret not having free will?’ Yes. Are there many senses of free will? Yes. Philosophers have tended to concentrate on varieties that are perhaps more tractable by their methods, but they’re not important.

NW: The classic description of the problem is this: ‘If we can explain every action through a series of causal precedents, there is no space for free will.’ What’s wrong with that description?

DD: It’s completely wrong. There’s plenty of space for free will: determinism and free will are not incompatible at all.

The problem is that philosophers have a very simplistic idea of causation. They think that if you give the lowest-level atomic explanation, then you have given a complete account of the causation: that’s all the causation there is. In fact, that isn’t even causation in an interesting sense.

NW: How is that simplistic? After all, at the level of billiard balls on a table, one ball hits another one and it causes the second one to move. Neither ball has any choice about whether it moves; their paths are determined physically.

DD: The problem with that is that it ignores all of the higher-level forms of causation which are just as real and just as important. Suppose you had a complete atom-by-atom history of every giraffe that ever lived, and every giraffe ancestor that ever lived. You wouldn’t have an answer to the question of why they have long necks. There is indeed a causal explanation, but it’s lost in those details. You have to go to a different level in order to explain why the giraffe developed its long neck. That’s the notion of causation that matters for free will.

NW: Assuming that you’re not going to rely on Aesop here, how did the giraffe get its long neck?

DD: The lineage of giraffe-like animals gradually got longer necks because those that happened to have slightly longer necks had a fitness advantage over those with shorter necks. That’s where the explanation lies. Why is that true? That’s still a vexed question. Maybe the best answer is not the obvious one that they got long necks so that they could reach higher leaves. Rather, they evolved long necks because they needed them to drink because they had long legs, and they evolved long legs because they provided a better defense against lions.

NW: So that’s an evolutionary hypothesis about giraffes’ necks. How does it shed any light on the free will debate?

DD: If I want to know why you pulled the trigger, I won’t learn that by having an atom-by-atom account of what went on in your brain. I’d have to go to a higher level: I’d have to go to the intentional stance in psychology. Here’s a very simple analogy: you’ve got a hand calculator and you put in a number, and it gives the answer 3.333333E. Why did it do that? Well, if you tap in ten divided by three, and the answer is an infinite continuing decimal, the calculator gives an ‘E’.
Now, if you want to understand which cases this will happen to, don’t examine each and every individual transistor: use arithmetic. Arithmetic tells you which set of cases will give you an ‘E’. Don’t think that you can answer that question by electronics. That’s the wrong level. The same is true with playing computer chess. Why did the computer move its bishop? Because otherwise its queen would have been captured. That’s the level at which you answer that question.

NW: We’re often interested in intention where this is linked to moral or legal responsibility. And some cases depend on information that we get about people’s brains. For example, there are cases where people had brain lesions that presumably had some causal impact on their criminal behaviour.

DD: I’m so glad you raised that because it perfectly illustrates a deep cognitive illusion that’s been fostered in the field for a generation and more. People say, ‘Whenever we have a physiological causal account, we don’t hold somebody responsible.’ Well, might that be because whenever people give a physiological causal account, these are always cases of disability or pathology? You never see a physiological account of somebody getting something right. Supposing we went into Andrew Wiles’ brain and got a perfect physiological account of how he proved Fermat’s Last Theorem. Would that show that he’s not responsible for his proof? Of course not. It’s just that we never give causal physiological-level accounts of psychological events when they go right.

NW: I’m still having trouble understanding what an intention is. We usually think of intentions as introspectible mental events that precede actions. That doesn’t seem to be quite what you mean by an intention.

DD: When discussing the ‘intentional stance’, the word ‘intention’ means something broader than that. It refers to states that have content. Beliefs, desires, and intentions are among the states that have content. To adopt the intentional stance towards a person (it’s usually a person, but it could be towards a cat, or even a computer playing chess) is to adopt the perspective that you’re dealing with an agent who has beliefs and desires, and decides what to do, and what intentions to form, on the basis of a rational assessment of those beliefs and desires. It’s the stance that dominates game theory. When, in the twentieth century, John von Neumann and Oskar Morgenstern invented the theory of games, they pointed out that game theory reflects something fundamental in strategy. Robinson Crusoe on a desert island doesn’t need the intentional stance. If there’s something in the environment that’s like an agent, that you can treat as an agent, this changes the game. You have to start worrying about feedback loops. If you plan activities, you have to think: ‘If I do this, this agent might think of doing that in response, and what would be my response to that?’ Robinson Crusoe doesn’t have to be sneaky and tiptoe around in his garden worrying about what the cabbages will do when they see him coming. But if you’ve got another agent there, you do.

NW: So, Man Friday appears, and there are problems …

DD: As soon as Man Friday appears, then you need the intentional stance.

NW: So if you have the complexity of interaction that is characteristic of an intentional system, that’s sufficient for its having intentions. So there doesn’t seem to be any room for the mistake of anthropomorphism. Anthropomorphism, if the situation is complex enough, is simply the correct attitude to hold towards some inanimate things.

DD: We can treat a tree from the intentional stance, and think about what it needs, and what it wants, and what steps it takes to get what it needs and wants. This works to some degree. Of course, it doesn’t have a soul; it’s not conscious.
But there are certain patterns and reactions. Recently, we’ve learned that many varieties of trees have a capacity that gives them quasi-colour vision. When the light on them is predominantly reflected from green things, they change the proportion of their energy that goes into growing tall. We might say that they have sensed the competition and are taking a reasonable step to deal with the competition. Now, that’s a classic example of the intentional stance applied to a tree, for heaven’s sake! Fancier versions apply to everything from bacteria, through clams and fish and reptiles and higher animals, all the way to us. We are the paradigm cases.

What’s special about us is that we don’t just do things for reasons. Trees do things for reasons. But we represent the reasons and we reflect on them, and the idea of reflecting on reasons and representing reasons and justifying our reasons to each other informs us and governs the intentional stance. We grow up learning to trade reasons with our friends and family. We’re then able to direct that perspective at evolutionary history, at artifacts, at trees. And then we see the reasons that aren’t represented, but are active. Until you get to the level of perspective where you can see reasons, you’re not going to see free will. The difference between an organism that has free will and an organism that doesn’t has nothing to do with the atoms: you’ll never see it at the atomic level, ever. You have to go to the appropriate design level, and then it sticks out like a sore thumb.

NW: So we can adopt the intentional stance towards a chess-playing computer, and we probably ought to if we want to beat it at chess, but it doesn’t follow from that that it’s got free will, or agency?

DD: Exactly. Those beings with free will are a sub-set of intentional systems. We say ‘free as a bird’, and birds have a certain sort of free will. But the free will of a bird is nothing compared to our free will, because the bird doesn’t have the cognitive system to anticipate and reflect on its anticipations. It doesn’t have the same sort of projectable future that we have; nor does it, of course, engage in the business of persuasion. One bird never talks another bird out of doing something. It may threaten it, but it won’t talk it out of something.

NW: So let’s go back to the original topic. What is the kind of free will worth wanting?

DD: It’s the kind of free will that gives us the political freedom to move about in a state governed by law and do what we want to do. Not everybody has that freedom. It is a precious commodity. Think about promises. There are many good reasons to make promises: some long-term projects depend on promises, for example. Now, not everybody is equipped to make a promise. Being equipped to make a promise requires a sort of free will, and a sort of free will that is morally important. We can take it apart, we can understand, as an engineer might say, what the ‘specs’ are for a morally competent agent: you’ve got to be well informed, have well-ordered desires, and be movable by reasons. You have to be persuadable and be able to justify your views. And there are a few other abilities that are a little more surprising. You have to be particularly good at detecting the intent of other agents to manipulate you, and you have to be able to fend off this manipulation. One thing we require of moral agents is that they are not somebody else’s puppet. If you want the buck to stop with you, then you have to protect yourself from other agents who might be trying to control you. In order to fend off manipulation, you should be a little bit unpredictable. So having a poker face is a very big part of being a moral agent. If you can’t help but reveal your state to the antique dealer when you walk into the store, then you’re going to be taken for a ride, you’re going to be manipulated. If you can’t help but reveal your beliefs and desires to everybody that comes along, you will be a defective, a disabled agent. In order to maximize getting what you want in life, don’t tell people exactly what you want.

NW: That’s a very cynical view of human nature! There’s an alternative account, surely, in which being open about what you feel allows people to take you for what you really are, not for some kind of avatar of yourself.

DD: Well, yes, there is that. But think about courtship. You see a woman and you fall head over heels in love with her. What’s about the worst thing you can do? Run panting up to her, showing her that you’ve fallen head over heels in love.
First of all, you’ll probably scare her away, or she’ll be tempted by your very display of abject adoration to wrap you around her little finger. You don’t want that, so you keep something in reserve. Talleyrand once said that God gave men language so that they could conceal their thoughts from each other. I think that’s a deep observation about the role of language in communication. It’s essential to the understanding of communication that it’s an intentional act, where you decide which aspects of your world you want to inform people about and which you don’t.

NW: So freedom, of the important kind, of the kind worth wanting, is freedom from being manipulated. It’s about being in control of your life, you choosing to do things, rather than these things being chosen by somebody else?

DD: Yes. In order for us to be self-controllers, to be autonomous in a strong sense, we have to make sure that we’re not being controlled by others. Now, the environment in general is not an agent; it’s not trying to control us. It’s only other agents that try to control us. And it’s important that we keep them at bay so that we can be autonomous. In order to do that, we have to have the capacity to surprise.

Excerpted from “Philosophy Bites Again” by David Edmonds and Nigel Warburton. Copyright © 2014 by David Edmonds and Nigel Warburton. Reprinted by arrangement with Oxford University Press, a division of Oxford University. All rights reserved.

Sunday, November 30, 2014

Why self-proclaimed ‘type A’ personalities are overachieving monsters



30 Nov 2014 at 11:18 ET          

One of the things you get used to, when you live in New York, is encountering a large number of people who preface their statements with this phrase: “I’m a type A personality, so…”

In the last couple of weeks, I have heard the phrase used by an American woman in the final stages of pregnancy, discussing her “birth goals”; a British woman discussing the difficulties she’d been having with “the help”; and a website devoted exclusively to “alpha parents”, which I gather means parents who think very highly of themselves.

Whatever the context, using the words “I’m type A” is often a prelude to some form of conversational douche-baggery.

The people identifying as type A in these circumstances use the term as a synonym for success. Type A, in common parlance, is an advertisement for the self along the lines of: Hey, I may be a bit maddening at times, but it’s only because I have higher standards than you. Anyone who objects to the way of the A-type is merely displaying her position further down the evolutionary chain.

So universal is this interpretation of type A that it has become a principle of marketing. The New York Times just ran a story about Unplug, a new meditation franchise that has opened in Los Angeles, specifically offering “meditation for Type A personalities” and – brace yourselves – “a SoulCycle for meditation”. (Unplug may be brilliant, but this particular sales spin is bonkers: meditation seeks to dismantle the very hierarchies and categories of achievement upon which the pitch relies. SoulCycle, on the other hand, is about reinforcing those categories by pretending the stationary bike you’re on is a mountain that you are conquering – a mountain probably made out of cash and the skulls of type B personalities.)

The funny thing is, this is not at all how the term “type A” was initially intended to be used. It first reached the mainstream in a 1974 book called “Type A Behavior and Your Heart” and its 1996 follow-up, “Type A Behavior: Its Diagnosis and Treatment”. These books were not written by a psychologist but by a cardiologist, Dr Meyer Friedman, who described the type A category in mostly negative terms, as a group of angry, thoughtless people whose behaviour put them at heightened risk of a heart attack. You know who else was type A in this schema? Hitler. (Sorry.)

Anyway, since then, the meaning of “type A” has been appropriated by monsters of overachievement – or at least politely self-conscious entitlement. For those who so actively use the term, type A is made to do a lot of work in a sentence, pulling off a kind of sleight of hand that reconstitutes rudeness or bad behaviour as the inevitable side-effect of ambition. Type As in this context send food back in restaurants, yell at cab drivers and bully their personal assistants with the impunity of those on a plane so much higher than the rest of us. It’s not even their fault! I mean, what do you want from them? They’re type As. As a clinical diagnosis, it is the ultimate humblebrag.

Things might have been different if Dr Friedman had chosen other letters of the alphabet as frames for his theories – Gamma and Delta, say. It’s the hierarchical nature of “A” and “B” that has caused all the problems. The British woman who told me she was type A was someone I had called to get a reference for a woman I was thinking of hiring. She was OK, she said, although not quite snappy enough to suit type A personalities. “What personality type are you?” she said.

I was taken aback. “Er … it depends,” I said, and of course my fate was instantly sealed. Type As scorn ambivalence. A dreadful silence opened up in the conversation.

To be type A or just to be: the categories encourage and launder shitty personalities – and that’s largely unhelpful. It’s possible, one would presume, to be overachieving without being the jerk who yells at the guy working the double shift on minimum wage. Or to be decisive and effective without being totally full of yourself. In its current form, the celebration of type A turns everyone into a 1980s iteration of a Wall Street banker.

So I have an idea. Next time someone looks pleased with themselves and says to you, “I’m a type A personality, so…”, why not gently interrupt them. “Oh, I’m so sorry to hear that. Is there anything I can do to help?”

Thursday, October 30, 2014

You're probably more racist and sexist than you think



Acts of explicit bigotry make the headlines. But the evidence for subconscious prejudice keeps growing 

The Reverend Al Sharpton walks with demonstrators during a silent march to end New York's "stop-and-frisk" program. A US District Court judge has ruled that the New York Police Department deliberately violated the civil rights of tens of thousands of New Yorkers. Photograph: Seth Wenig/AP
Not surprisingly, we tend to hear the most about bigotry and prejudice when it surfaces explicitly: see Oprah Winfrey's recent experience in a high-end Swiss boutique, for example, or the New York police department's stop-and-frisk policies, ruled racially discriminatory by a judge this week. But the truth is that much prejudice – perhaps most of it – flourishes below the level of conscious thought. Which means, alarmingly, that it's entirely possible to hold strong beliefs that point in one direction while demonstrating behaviour that points in the other. The classic (if controversial) demonstration of this is Harvard's Project Implicit, made famous in Malcolm Gladwell's book Blink. You can take the test here: whatever your race, there's a strong chance you'll take a split second longer to associate positive concepts with black faces than white ones.
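The mechanics behind that "split second longer" are simple to state: implicit bias shows up as a systematic gap in response latency between pairing conditions. A toy sketch of the idea follows, with invented latencies and a made-up helper name; the real IAT computes a standardized D-score that also trims outliers and divides by a pooled standard deviation.

```python
# A toy illustration (invented numbers, simplified scoring) of the idea
# behind implicit-association measures: bias appears as a gap in mean
# response latency between "congruent" and "incongruent" pairing blocks.
from statistics import mean

def latency_gap_ms(congruent, incongruent):
    """Mean response-time difference between the two conditions, in ms."""
    return mean(incongruent) - mean(congruent)

congruent = [620, 640, 660]    # latencies when pairings match the bias
incongruent = [700, 720, 740]  # latencies when pairings run against it
print(latency_gap_ms(congruent, incongruent))  # 80: an 80 ms slowdown
```

A positive gap on this toy measure would mean the respondent was slower, on average, in the incongruent block, which is the pattern the paragraph above describes.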

Two recent pieces of research underline just how ubiquitous this kind of bias could be. One survey, reported at Inside Higher Education, involved asking white people in California how they felt about meritocracy as the basis for college admissions. The argument in favour of meritocracy, of course, is often used in opposition to affirmative action, which gives extra weighting to black students' applications. But if white Californians backed meritocracy out of pure principle, you'd expect their beliefs to hold firm no matter what race they were thinking about. Yet when you phrase the question so as to remind them that Asian American students are disproportionately successful at getting into Californian colleges, their support for meritocracy wanes.

The implication is that for at least some people, belief in meritocracy is flexible on racist grounds: it's drafted in when it helps justify white students' advantages over black ones – but it suddenly grows weaker when it risks justifying Asian American students' advantages over white ones. The urge towards system justification is strong.

A study published earlier this year is even more unsettling: it suggests that if you fail to challenge someone who expresses prejudiced attitudes, you'll actually become more prejudiced yourself. Researchers engineered a situation in which female participants heard a male experimenter make a sexist remark; some were then given the opportunity to call him out on it. Eric Horowitz explains the scenario:
In each experiment, female participants first rated their beliefs about the importance of confronting prejudice and then engaged in a “Deserted Island” task with a confederate. The task involved selecting from an existing set of people those who would be most helpful on a deserted island. The confederate … chose all males until his final selection, when he justified his choice of a female with a sexist remark (“She’s pretty hot. I think we need more women on the island to keep the men satisfied.”)
Those who had the chance to challenge this remark, but didn't do so, rated the experimenter as less sexist – and challenging sexism as less important – than the others. Which would seem to be a case of cognitive dissonance in action: when you're confronted by prejudice and you don't object to it, your own attitudes shift in a more prejudiced direction, to maintain consistency between your behaviour and your beliefs.

And then there's the finding that more intelligent white people are more likely to disavow racism, but no more likely actually to support policies that might remedy the effects of racial inequality. This news was reported as showing that clever people are just better at concealing their racism from others while harbouring bigoted thoughts. But isn't the more worrying possibility that they're concealing their racism from themselves?

Or to put it another way: those of us who reassure ourselves that we're implacably opposed to prejudice could probably do with being a lot less smug about it.

Wednesday, October 22, 2014

Wake Up! You're Living in a Trance! How to Break Out and Live!


By Cindy Locher

Wake up! We are all living in a trance, or multiple situation-specific trances, all of our lives. Many of these trances aren't beneficial in our lives. They hold us back, keep us stuck in little worlds of our own creation. Yes, you are living your life in a trance. How can I make this statement?

Easy. We are born into this world with a personality, yes, but without beliefs. A belief is "the psychological state in which an individual holds a proposition or premise to be true," or "mental acceptance of a claim as truth; something believed." The psychological state in which mental acceptance happens is the state of trance, or hypnosis. In trance or hypnosis, the critical factor of the mind is bypassed, or pushed aside, so that the information or message units presented to the mind are accepted. During the childhood years, the critical factor of the mind is undeveloped, and therefore a great majority of your beliefs are formed during childhood. The values and behaviors that you see around you, or are told about by parents, siblings, teachers and peers, are accepted much more easily and rapidly. The mind abhors a vacuum, and as the young mind encounters new situations, it requires a belief to be in place in order to know how to function within that situation, so it quickly accepts the dominant belief in its environment.

So, your beliefs were formed in trance, and because they anchor your thoughts and emotional states to that time when they were formed, you operate in a level of trance every time you function within that belief, or set of beliefs.

An example is a person who grows up in a household where money is tight. All around her, she receives messages of lack: of how difficult money is to obtain, how important it is to choose security over risk, of how inherently dangerous the world is because of how difficult it is to accumulate wealth. When that child becomes an adult, she may desire to be wealthy, but every time she considers taking an action to bring more wealth into her life (actions that often involve risk), her beliefs about money will come into effect, taking her back into the trance state that was in effect when those beliefs were formed. She will find herself having all manner of troublesome feelings, such as self-doubt, depression, and remorse about decisions involving risk, and she will consistently sabotage her desired success by not following through with actions to create wealth, pulling up short, continually returning to her comfort zone, which says that security, even security with a small amount of money, is preferable to taking risk. The beliefs in play may be stated as, "risk is bad," "it's better to be secure," "money is hard to come by."

Contrast this person to someone who is raised in an affluent family, and grows up familiar with entrepreneurialism in an environment comfortable with risk. She may grow up accumulating beliefs such as "every risk you take brings you more success," "when things don't work out as planned, you learn something valuable that gets you even further," and "there is always more than enough money." This woman will feel completely secure in leveraging her money and taking risks, fully expecting that every experience will bring more benefit and wealth into her life.

These two women could even be twins raised in different families, because while some elements of temperament are heritable, beliefs are not. Beliefs are a result of your environment. And both of these women are behaving in accordance with their beliefs; therefore they are both operating in trance. The problem is not beliefs, nor the trance that they were formed in and anchored to. The problem is that we don't get the chance to opt in. We don't get to choose and select the beliefs that will help us to live up to our potential and have the most amazing lives we can.

And while trances and beliefs (often called self-limiting beliefs) about money and abundance are much discussed, beliefs can be beneficial or detrimental in any area of your life, from your ability to learn, to your ability to form and keep good relationships, or to be fit and healthy, and on and on.

Is there an area in your life where your actual results are falling short of what you'd like to see? If so, then that is probably an area where you are carrying limiting beliefs, i.e., operating in a trance that is not effective or beneficial to you. The way out of this situation is the same way that you got in. The mechanism that formed these beliefs is what you should use to create new beliefs to replace them, and that mechanism is hypnosis. Your actions will always be congruent with your beliefs, so use hypnosis to create beliefs that will enable you to have the life you want.

The method I suggest for this is a system which starts by choosing an area of your life to improve, a specific goal. Then you enter the process by focusing on the success you already have in your life, which increases your self-esteem and confidence to reach further. Then, address the specific fears that have blocked you from taking action toward your goals, and increase your motivation to reach those goals. After accomplishing these steps, your subconscious mind will be more ready to release the old, self-limiting beliefs. You can't just do that first; you have to lay the groundwork, the foundation, that makes releasing and replacing the old belief desirable to your subconscious. The final steps are to choose new beliefs that support your goals and to use holographic visualization to create the context for the new beliefs--basically recreating the process that created the released beliefs, but this time you select your beliefs.

Following these steps is a process, and it should be done over a period of time that feels right, but not so slowly that homeostatic resistance has a chance to undo any gains. I suggest a system of mental reprogramming over the course of a month to six weeks as ideal for most. As with any change, this system relies on a combination of conscious action paired with subconscious messages, and reinforcement through repetition over time, for its success.

Cindy Locher is a clinical hypnotherapist and mind/body medicine expert specializing in areas of stress management and personal change. You can learn more about how hypnosis works in your life by visiting Cindy on the web.


Tuesday, September 16, 2014

Are People Naturally Inclined to Cooperate or Be Selfish?




Ariel Knafo, associate professor of psychology at the Hebrew University of Jerusalem, responds:

The jury is still out on whether we are fundamentally generous or greedy and whether these tendencies are shaped by our genes or environment.
Some evidence points to humans being innately cooperative. Studies show that in the first year of life, infants exhibit empathy toward others in distress. At later stages in life we routinely work together to reach goals and help out in times of need.

Yet instances of selfish behavior also abound in society. One recent study used a version of the classic Prisoner's Dilemma, which can test people's willingness to set aside selfish interests to reach a greater good. After modeling different strategies and outcomes, the researchers found that being selfish was more advantageous than cooperating. The benefit may be short-lived, however. Another study showed that players who cooperated did better in the long run.
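The short-run/long-run contrast in those two studies can be sketched with a minimal simulation of the iterated Prisoner's Dilemma. The payoff numbers below are the common textbook values, not those used in the studies mentioned, and the strategy names are illustrative.

```python
# A minimal sketch of the iterated Prisoner's Dilemma. Payoffs are the
# standard textbook values (assumed, not from the cited studies):
# mutual cooperation 3 each, mutual defection 1 each, a lone defector
# gets 5 while the exploited cooperator gets 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_moves[-1] if opponent_moves else "C"

def play(strategy_a, strategy_b, rounds):
    """Total scores for two strategies over repeated rounds."""
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(always_defect, tit_for_tat, rounds=1))    # (5, 0): one-shot, defection wins
print(play(tit_for_tat, tit_for_tat, rounds=10))     # (30, 30): reciprocators prosper
print(play(always_defect, always_defect, rounds=10)) # (10, 10): defectors stagnate
```

In a single round the defector comes out ahead, but over repeated play a pair of reciprocating cooperators out-earns a pair of defectors, which is exactly the "short-lived benefit" the paragraph above describes.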

It seems that human nature supports both prosocial and selfish traits. Genetic studies have made some progress toward identifying their biological roots. By comparing identical twins, who share nearly 100 percent of their genes, and fraternal twins, who share about half, researchers have found overwhelming evidence for genetic effects on behaviors such as sharing and empathy. In these twin studies, identical and fraternal twins are placed in hypothetical scenarios and asked, for example, to split a sum of money with a peer. Such studies often also rely on careful psychological assessments and DNA analysis.
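The arithmetic behind such twin comparisons can be sketched with Falconer's classic formula, which treats heritability as twice the gap between identical-twin and fraternal-twin correlations. The correlation values below are invented for illustration and are not taken from the studies described here.

```python
# Falconer's formula, a classic first approximation in twin research:
# identical (MZ) twins share ~100% of their genes and fraternal (DZ)
# twins ~50%, so heritability h^2 is roughly twice the difference
# between the MZ and DZ twin-pair correlations for a trait.
# The correlations below are hypothetical, for illustration only.

def falconer_heritability(r_mz, r_dz):
    """Estimate h^2 from MZ and DZ twin-pair correlations."""
    return 2 * (r_mz - r_dz)

r_mz = 0.50  # hypothetical correlation of identical twins' sharing scores
r_dz = 0.25  # hypothetical correlation of fraternal twins' scores
print(falconer_heritability(r_mz, r_dz))  # 0.5 -> ~50% of variance genetic
```

If identical twins resemble each other no more than fraternal twins do (r_mz equal to r_dz), the estimate drops to zero, reflecting a trait driven by environment rather than genes.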

Other work highlights specific genes as key players. My colleagues and I recently identified a gene linked to altruistic behavior and found that a particular variant of it was associated with more selfish behavior in preschoolers.

As for how we might have acquired a genetic blueprint for collaboration, evolutionary scientists offer several explanations. Cooperative behavior may have evolved first among relatives to promote the continuation of their genetic line. As communities diversified, such mutual support could have broadened to include individuals not linked by blood. Another possibility is that humans cooperate to gain some advantage, such as a boost in reputation. Finally, a hotly debated idea is that evolutionary processes take place at the group level. Groups of highly cooperative individuals have higher chances of survival because they can work together to reach goals that are unattainable to less cooperative groups.

Yet almost no behavior is entirely genetic, even among identical twins. Culture, school and parenting are important determinants of cooperation. Thus, the degree to which we act cooperatively or selfishly is unique to each individual and hinges on a variety of genetic and environmental influences.

This article was originally published with the title "Are People Inclined to Act Cooperatively or Selfishly? Is Such Behavior Genetic?"

Wednesday, May 7, 2014

(Updated) Dreaming to forget: the real reason why

The Human Givens Institute

Why we Dream to Forget (I don't Dream)

Joe Griffin explains why dreaming, and forgetting our dreams, fulfils a vital human need.

THE human givens approach is a set of organising ideas that provides a holistic, scientific framework for understanding the way that individuals and society work. That framework has one central, highly empowering idea at its core — that human beings, like all organic beings, come into this world with a set of needs. If those needs are met appropriately, it is not possible to be mentally ill. I do not believe a more powerful statement than that could ever be made about the human condition. If human beings' needs are met, they won't get depressed; they cannot have psychosis; they cannot have manic depression; they cannot be in the grip of addictions. It is just not possible.

To get our needs met, nature has gifted us our very own internal guidance program — this, together with our needs, makes up what we call the human givens. We come into the world with an instinctive knowledge of what we need and with a set of inner resources that can help us get our needs met, provided we use them properly and are living in a healthy environment.

In terms of the history of where our knowledge about human needs comes from, there has been a distinguished cast of contributors, going right back to ancient times. More recently William James, Sigmund Freud and Alfred Adler explored human needs, and there was an outstanding contribution by Abraham Maslow, the pioneer of humanistic psychology, who first talked about a hierarchy of needs.[1] It was Abraham Maslow who introduced the idea that, until basic needs are met, people can't engage with questions of meaning and spirituality — what he called self-actualisation.

Another contributor was William Glasser, who put forward the idea that fulfillment of people's needs for control, power, achievement and intimacy depends on their ability to behave responsibly and conscientiously; he argued vehemently that mental illness springs from these needs not being met.[2] So the human givens approach belongs to no specific people, certainly not exclusively to Ivan Tyrrell and me, although we may have named it; it belongs to the human species. We are just talking more precisely about what nature has gifted us, and there have been many great contributors down the millennia and the centuries, who have contributed to our understanding of the human givens.

What we have started to do, in what has come to be called the human givens approach, is look at human needs in the light of increasing knowledge and recent discoveries that flesh them out, so that we can define them and concretise them and make them more real. We now know that having meaning and purpose, a sense of volition and control, being needed by others, having intimate connections and wider social connections, status, appropriate giving and receiving of attention, etc, are crucial for health and well-being. (Attention needs weren't understood in Western psychology at all, before the contribution of Idries Shah.) So, on one side of the equation, we now have a much fuller understanding of human needs.

And, on the other side, we have our human resources — the innate guidance system. We are learning much more about how that works and the more we understand, the more effective we will be, for sure.

The REM state is at the core of being human

At the heart of the internal guidance system lies the REM (rapid eye movement) state. The REM state is, of course, predominantly the state in which dreaming occurs. That is how it got discovered. Sleep researchers studying in the laboratory what happens in the brain during sleep discovered that the brain becomes activated periodically throughout the night, to the accompaniment of rapid eye movements (REM). When they woke people up from sleep at those times to find out what was going on, they learned that, 80 per cent of the time, people reported that they were dreaming. But they dreamed just seven per cent of the time during non-REM sleep.[3] So there is an irrefutable link between dreaming and the REM state.

If you pick up almost any major textbook on the neurology of the brain and look for the REM state in the index, in all probability, you won't find it. Nonetheless, within the human givens approach, we say that the REM state is at the core of being human. This is no idle claim. In the human givens theory, right from the beginning, we have advanced the evidence that instinctive templates are programmed in during the REM state in the fetus[4] and are pattern matched in the environment, after a baby comes into the world. Directly linked to this is the equally significant role of REM-state dreaming — explained in our expectation fulfilment theory of dreaming. I shall be showing how the latest scientific findings now validate these ideas. But first it is important to take a look at the main theories of dreaming, because a challenge to them needs to be able to stand up to the most rigorous scientific examination.

There are four major players in the field. The theory that has dominated for the last 30 years is one put forward by Professor Allan Hobson of Harvard University and his colleague Robert McCarley. They called it the activation synthesis theory. In the light of more recent evidence, this theory has fallen apart. As a result, there has been an effort to revive Freud's theory and I shall be giving the evidence for why that doesn't stand up either. The third theory in the field is one developed by Francis Crick, who died recently. He was better known as the co-discoverer of DNA. And finally, there is the theory that dreaming facilitates memory consolidation. I am going to look at all these theories to see where they stand now.

Random barrage

I will start with Hobson and McCarley's activation synthesis theory.[5] Laboratory studies of brain waves show that, just before we go into REM sleep and during it, powerful electrical signals pass through the brain like a wave. On electroencephalogram recordings (EEGs), they appear as sudden spikes. The signals arise from the pons (P) in the brainstem, from the neurons that move the eyes, and then travel up via a part of the midbrain called the geniculate body (G) to the occipital (O) cortex in the higher brain — so are known as PGO spikes. (They also constitute what is termed our orientation response, which, when we are awake, is what directs our attention to any sudden change in the environment, such as a sound or movement.)

Hobson and McCarley's theory was that these PGO spikes were sending a random barrage of stimulation through the brain every so often, activating the whole cortex as a result; the higher brain had to try and make some sense of this random barrage, and dreams were the result. Dreams, therefore, were an epiphenomenon: they had no intrinsic meaning. They were just the brain's efforts to synthesise some sense from random signals.

Evidence has accumulated over the last 30 years to disprove this theory. The first piece of evidence that disproved it emerged once PET scanning of the brain was developed. According to Hobson and McCarley's original theory, a barrage of random stimulation coming up periodically from the brainstem was synthesised by the prefrontal cortex into dreams. But scans of the brain in the REM state showed that the cortex was very selectively activated. The emotional brain (the limbic system) and the visual brain were highly activated but the prefrontal cortex was excluded from this stimulation (the very part supposed to be doing the synthesising).[6] Indeed, Hobson himself, over the last few years, has been so drastically redrafting the theory that it is just a pale shadow of its original presentation.[7] Even he now agrees with the evidence that, instead of global forebrain activation being responsible for dream synthesis, it is the emotional brain that is responsible for dream plot formation.[6]

This evidence on its own disproves the theory. However, there is more. Research accumulated over the last 40 years, and universally accepted by dream researchers, shows that dreams are coherent and that they relate to previous waking experiences. There also tends to be continuity in the type of dream content over time and this could not be so if there were a random stimulus.

Hobson and McCarley also theorised that REM sleep serves to 'rest' the cells in the brainstem which produce serotonin and noradrenalin, because in REM sleep these particular neurotransmitters are not used by the brain. Their idea was that these neuronal pathways were being rested so that we would wake up the next day, refreshed by REM sleep. Consequently, then, the more REM sleep people had, the more refreshed they should be. But researchers looking at the sleep patterns of depressed patients found that they had massive amounts of REM sleep in proportion to slow-wave sleep and yet, far from waking up refreshed, they were waking up exhausted![8] How did Hobson account for this? He just said, "It is a paradox."

Yet another problem with this theory, which Hobson admits in his latest book, is that it can't explain why certain dreams have positive emotions and some have negative emotions.[9] But the final nail in the activation synthesis theory's coffin is the finding that deep brainstem lesions do not generally stop dreaming, whereas certain lesions in the cortex do, despite the existence of brainstem-initiated REM sleep.[10]

Who wants nightmares?

I should now like to move on to Freud's theory. There has recently been strong evidence to show that REM involves the expectation dopamine pathway. Professor Mark Solms, who holds the chair in neuropsychology at the University of Cape Town in South Africa, has been pre-eminent in synthesising this research, showing that when people go into the REM state, the motivation circuit in the brain — the expectation pathway — is activated.[10] As Freud talked about motivation and emotion, and as Hobson was clearly wrong, perhaps, says Solms, Freud was right after all. And, in a very broad sense, this activation is support for Freud. But what it leaves out of the picture is that, when you activate the expectation pathway, you are activating consciousness. You are not activating some subconscious conflict. So there is no real evidence there in support of Freud.

Secondly, Freud's theory has real difficulties explaining why people so often have anxiety dreams.[11] Dreams also involve being angry a lot of the time. Freud said dreams were for fulfilling wishes. But who would want nightmares? Who would want to get beaten up or sexually assaulted in their dreams? So Freud's theory just didn't explain in any coherent fashion the fact that dreams involve far more than wishes and that only a minority of them can be characterised as wishes. And his claim that all dreams are sexually motivated is no longer given any credence.

Freud claimed that we dream to protect sleep, to prevent us being awakened by threatening, sub-conscious wishes.[11] However, the REM state, in which most dreams occur, is a regularly occurring biological programme in humans and other mammals, and not something which arises to protect sleep.[3]

To recap, expectation pathways activate conscious, not subconscious, experience. There is no evidence at all that dreams are sexually motivated and Freud can't plausibly explain why we would wish for anxiety dreams. The REM state occurs in all mammals, so it is not just a human activity, protecting sleep, as Freud suggested. A cat is unlikely to be dreaming about its Oedipus complex. So the attempt to revive Freud's theory seems to be based more on wishful thinking than on realistic considerations of its defects.

Strange parasitical connections

Francis Crick and Graeme Mitchison's theory suggested that we dream to forget.[12],[13] Their idea came from studying work done on computer programs that simulated neural intelligence. An overload of incoming information could trigger "parasitical connections" between unrelated bits of information that interfered with memory, and an unlearning system had to be developed to knock these out of the computer systems. Crick and Mitchison postulated that a complex associational network, such as the cortex, might become overloaded in the same way, and that the PGO spikes were an unlearning mechanism, in the form of random 'bangs' coming up from the brainstem every so often, to knock out these fairly weak parasitical neural links. As, at that time, most dreams were thought to be bizarre in content, this was taken as evidence for the existence of these parasitical connections. Crick and Mitchison theorised that, if we didn't have dreaming, we would go on making more and more bizarre connections, which would imply that, if we block REM sleep, our memories should become more addled. If this theory is correct, then depressed people on antidepressants that block REM sleep (monoamine oxidase inhibitors — MAOIs) should suffer memory impairment, but they don't. If anything, depressed people on MAOIs report memory improvement rather than increased confusion in memory recall.

Yet another problem with the theory is that, over the last decade or so, there have been significant technical advances in the recording of what actually happens during dreaming. The overwhelming majority of dreams are, in fact, quite routine, everyday experiences.[14] It is the tiny percentage of dreams that we recall that seem bizarre: dreams recorded in the sleep laboratory, when sleepers are woken as soon as they go into REM sleep, are mostly not bizarre at all. As a result of this discovery, Crick revised his theory to suggest that it might still, at least, explain those few dreams that do have a bizarre component to them. In other words, his theory has been so drastically modified that very little of it remains at all.

Finally, since Crick and Mitchison formulated this theory, not a shred of evidence has arisen to show that the human brain makes parasitical connections. That is something known only to occur with computer networks.

REM's role in memory

The final theory on the table is the memory consolidation theory. Blocking REM sleep impairs the ability to perform procedural tasks: tasks that involve the learning of a skill through a sequence of steps that involve making predictions. For example, rats trying to find their way around a complex maze to get a reward remember much better how to do the task if they have had REM sleep. Without REM sleep, the knowledge becomes damaged. So REM sleep would seem to facilitate this type of learning.[15]

It has also been shown that, in complex learning, memory is improved if we have both slow-wave sleep and REM sleep, otherwise the knowledge doesn't seem to survive quite so well.[16] So, clearly, there is some connection between memory and REM sleep.

But is there evidence that the REM state and dreaming exist fundamentally to carry out memory consolidation? The first piece of evidence that goes against this possibility is that taking MAOIs, the antidepressants that suppress REM sleep, does not lead to memory impairment.[17]

So, although there is some evidence that certain types of learning do seem to be improved by REM sleep, most dreaming cannot be about retaining new learning.

The second fact that goes strongly against the memory consolidation hypothesis is that almost nobody remembers their dreams! It is very rare to remember dreams, unless you've trained yourself to do so. We dream for about two hours a night. Dream researcher Allan Hobson, an expert trained to recall his dreams, has himself pointed out the fact that, when he looked at the number of dreams he had recorded against the number he had forgotten, it was something like 0.0002 per cent.[7] So it seems hard to see how we could be dreaming to make our memories permanent if we forget the material we are processing as soon as we open our eyes. (Yet, as certain types of memory do seem to be facilitated by REM sleep, any state-of-the-art theory of dreaming must be able to account for it.)

"We need a new theory"

Writing in Behavioural and Brain Sciences, in a special issue devoted to the most widely promoted dream theories, Professor Domhoff of the University of California, recognised as one of the leading researchers in this field, commented on the evidence presented, concluding, "If the methodologically most sound descriptive empirical findings [ie the findings that are most solidly established to explain dreaming] were to be used as a starting point for future dream theorising, the picture would look like this:
1. Dreaming is a cognitive achievement that develops throughout childhood
2. There is a forebrain network for dream generation that is most often triggered by brainstem activation [the PGO spikes]
3. Much of dream content is coherent, consistent over time and continuous with past or present emotional concerns."[14]
So any theory of dreaming would have to account for those three most solidly established findings. I personally would add to that the need to explain memory consolidation as well, if a theory were to explain the full picture.

Here is Domhoff's final conclusion: "None of the papers reviewed in this commentary puts forward a theory that encompasses all three of these well-grounded conclusions. This suggests the need for a new neurocognitive theory of dreaming."[14] In other words, according to Professor Domhoff, theories that have dominated the field over the last 30 years do not explain why we dream, and there is need for a completely new one.

The expectation fulfilment theory

Let us take a look at the expectation fulfilment theory to see if it can explain the evidence that the other theories cannot explain. This theory of dreaming is quite different from the others in that it is firmly based in biology and yet explains the richness of the subjective experience of dreams. It states that: all arousals of the autonomic nervous system — the generation of an emotion, however slight — form half of a process. The second half is that the brain has to fulfil that expectation (an emotion is the same as an expectation) through an action of some kind. If that doesn't happen in reality during the daytime, it happens metaphorically in a dream at night, thus completing the arousal — de-arousal process.

First, has any evidence against the expectation fulfilment theory been put forward? Since it was first published 12 years ago, it has been looked at by professors at various universities. Professor Hans Eysenck of the Institute of Psychiatry saw it at the very beginning and advised me to publish it in book form, to do it justice. Lots of other people working in dream research have looked at it as well, and no flaws have been identified in the theory, to date.

There is, however, a considerable amount of evidence in support of it. First is the experiment I carried out, the findings of which were published in The Therapist,[18] predecessor of the Human Givens journal, and then in book form, in The Origin of Dreams,[19] and further updated in Dreaming Reality.[20] It was the first time in scientific dream research, I believe, that someone set out to predict their own dreams, with the hypothesis that dreams relate to emotional experiences of the day before, a hypothesis that has since been well validated. Dreams do involve waking emotional material.

I set up an experiment using my own dreams, waking myself up every two hours, and, for a period of a week, predicted the emotional concerns that would feature in the dreams. I found that the dreams always reflected my waking emotional concerns of the previous day, but not necessarily the most important of these. By analysing the data, I was able to show that dreams dealt not with emotional concerns per se but with those emotional concerns that had not been dealt with satisfactorily. No matter how important the emotional concern, if it got dealt with while awake, it was over and did not re-appear in a dream. The only emotional concerns that became dreams were those that I was still aroused about, for which I still had expectations that I couldn't complete.

Dreams are the fulfilment of those emotional expectations that have not been met prior to waking. They always act out the fulfilment in metaphor — ie a matching sensory pattern to the original expectation. For example, if a man feels like hitting his boss but restrains the impulse, that night he might dream of attacking another authority figure. The hypothesis was derived from a scientific experiment, which anyone can replicate, should they wish.

A second piece of evidence arises from an analysis of Freud's and Jung's specimen dreams, which they had offered as the most convincing evidence for their theories.[20] The analysis revealed that the dreams were perfect metaphorical manifestations of what was worrying them the day before, according to detailed written data they had themselves provided. The fit was structurally extremely tight. Freud's dream of Irma's injection and the expectations he had the previous day, the sequencing of the dream, the characters involved and what they actually did in the dream provided an exact mirror image of the biggest event he had on his mind before he went to bed that night. There is no ambiguity there. It was likewise for Jung's dream.

Furthermore, hundreds of other dreams for which we had the emotional data from waking have been analysed, and have validated the theory. Thousands of people have read the theory and many of them have contacted us with confirmation of their own.

In addition, the expectation fulfilment theory explains the developmental evidence that Domhoff wanted explained — that dreams are coherent (because they are metaphorical representations of uncompleted emotional expectations) and become more complex over time from childhood (as our introspective processes become more complex and develop). Indeed, it even goes further and explains how REM function in the fetus relates to REM function in adulthood in dreams — the pattern-matching templates are programmed in during REM sleep, as a fetus and in early life, and the same pattern-matching process is used in dreams to deactivate emotional arousal.

It is also consistent with Domhoff's requirement for "a forebrain network for dream generation"[14] (ie the uncompleted emotional expectations). And that is "most frequently triggered by brainstem activation" — the PGO startle response serves to alert the cortex to something happening and, as the brain is getting no information from the outside world at this point, it has to release from memory its current unfulfilled expectations, as its best guess as to what the 'something happening' might be.

The theory explains the consistency of dreams and the relationship to waking emotional concerns, so it meets Domhoff's criteria in that respect too. It explains the depression evidence, which none of the other theories does. (Depressed people have proportionally too much REM sleep because they continually worry and introspect, causing so much arousal needing to be discharged in dreams that they end up exhausted in the morning, instead of refreshed after sleep.[8]) It provides the first scientific explanation for hypnosis (showing that the REM state and the state known as hypnosis are one and the same). It is the only REM theory in the field to go beyond itself in this way and, indeed, a good theory ought to be able to do that, to enlarge our understanding by explaining other things not currently explained.

The functions of REM sleep

So what, then, are the functions of REM sleep? There are three. First, it has the function of switching off emotional expectation and thereby reducing the stress of managing increasing numbers of expectations that are no longer applicable to the current environment.

Second, it creates spare storage capacity in the cortex. If we look back to a primitive mammal such as the echidna (the spiny anteater), we see that, from one perspective, it has the most amazing brain on earth. It has the biggest cortex of any creature alive, for the amount of its body-weight. It doesn't have REM sleep. That is evidence to suggest that, if a creature doesn't have REM sleep, maybe it needs to have a massive cortex instead because, if unfulfilled expectations are not cleared out each night, a brain is going to need the ability to grow an ever bigger catalogue of expectations that it is still seeking to fulfil. So, by clearing in dreams each night expectations that haven't been acted out, there can be more spare capacity in the cortex.

Third, REM sleep has the function of preserving the integrity of our emotional templates. Up till now, we have never really said much about how this happens. We have said that somehow REM sleep removes impediments by acting out the unfulfilled expectations, but we haven't been specific about what is actually going on in the brain. And that is what I would like to do now: to carry the expectation fulfilment theory forward to show that it is consistent with the very latest neurological findings and ideas about how the brain works.

First, let us look at how intelligence systems work. Evolutionary psychology had postulated that brains, and in particular the human brain, must contain particular modules in the cortex that give us various types of intelligence. Hundreds, perhaps even thousands of these modules are in there, written into the cortex from the genes, telling us how to do all the things we might have to do — for instance, how to choose a mate, how to calculate whether there is reciprocity in a partnership, when to have sex, what tastes good, how to recognise the faces of the people we know, when to get sexually jealous and how to read other people's minds. Hundreds of little intelligence systems within the brain were postulated to explain how human beings can be so intelligent.[21]

What we've learned from bees

That view has been strongly challenged from two separate sources. The new theory is that the brain has what is termed an adaptive intelligence.[22] It starts off with some basic instincts but these instincts are modifiable, as a result of experience, and the brain can continually refine its learnings.

Of course, this is similar to the terms in which we have been talking for the last eight or so years in human givens theory — patterns being programmed into the brain in the REM state, during gestation and very early childhood, which humans seek to complete in the environment after birth, allowing our brains to be more flexible. All learnings, we have said, are about pattern refinement — and that, in effect, is what is contained in the latest scientific theory, which has been shown to be capable of explaining complex human behaviour.

The first evidence for this came from research findings that the honeybee has a neural transmitter called octopamine, which is similar to dopamine, our own motivation neurochemical. One single cell, using this neurochemical, motivates the bee to go out every morning to search for nectar (instinctive behaviour) and then that cell keeps a record of where the nectar is found.

The next time the bee goes out, it predicts, on the basis of that record, where it will get nectar today. So if the bee got nectar from a blue flower yesterday, it will pattern match and go to a blue flower today, predicting and expecting that it will get nectar there again. If it doesn't get nectar from the blue flower today, it immediately revises its memory store. So the memory store will now show that blue is not such a good predictor of nectar after all. Clearly, then, the bee has an instinct plus a capability for learning; it takes an instinctive pattern, builds on current information and modifies it, literally, on the wing.[23]

That is also what was postulated by computer scientists trying to model how the brain works. Their computer program succeeded in modelling complex bee foraging behaviour, and many other kinds of more complex behaviours, using this simple idea that you start with an instinctive core that can be modified through feedback from previous efforts.[24] It is a far more efficient system for acquiring knowledge than the one suggested in the module theory. If we had masses of modules in the brain, all occupying their own areas, the brain should be pretty well fixed, and that is exactly what the evolutionary psychologists thought was so.
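The "instinctive core modified through feedback" idea described above can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual model: the function name, starting value and learning rate are all assumptions, and the update rule is a simple prediction-error (delta-rule) adjustment, one common way such adaptive behaviour is modelled.

```python
def update_expectation(expectation, outcome, learning_rate=0.5):
    """Move the current expectation part-way toward the observed outcome.

    This is the delta rule: new = old + rate * (outcome - old).
    A positive error (nectar found) strengthens the expectation;
    a negative error (no nectar) weakens it.
    """
    return expectation + learning_rate * (outcome - expectation)


# Instinctive prior: the bee starts out expecting blue flowers to yield nectar.
blue_flower_expectation = 0.9

# Day 1: the blue flower yields nectar (outcome = 1.0) -> expectation rises.
blue_flower_expectation = update_expectation(blue_flower_expectation, 1.0)

# Day 2: no nectar (outcome = 0.0) -> the "memory store" is revised downward,
# so blue becomes a weaker predictor of nectar, just as described above.
blue_flower_expectation = update_expectation(blue_flower_expectation, 0.0)

print(round(blue_flower_expectation, 3))  # prints 0.475
```

The point of the sketch is the efficiency argument in the text: instead of a fixed module per behaviour, one small update rule plus a running memory of outcomes lets the system revise its predictions "on the wing".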

Their idea was that those modules had evolved over the two million years that we were Stone Age hunter-gatherers and that, as all of our knowledge would go back to those times, we are ill-fitted for the world we live in today — a very fatalistic view of human nature. But the new information allows us to be more optimistic about human capacity. Even bees can learn. Behaviour has been shown to be so very much more malleable than anyone had ever suspected.

The sight cells that read Braille

The second source of evidence is neurological, and became available to us once brain scans were developed that could show exactly what was happening inside the skull cap when people learn new things. For example, it is now clear that, when people are born blind, the brain cells that would have been used to generate sight learn to read Braille instead.[25] So neurons are incredibly adaptive. They can take on new tasks.

It is now well known, for example, that the hippocampal area in taxi drivers' brains grows new cells and expands, when they do 'the knowledge' — the huge number of street maps they must plot out, learn and keep in their heads. It has also been shown that we can teach even autistic children some of the basics for empathy, despite their initial marked lack of emotional understanding.[26]


Dreams are for forgetting

So this is where the cutting edge of brain research is. However, there is another important aspect of dreaming that I have alluded to but that I want to look at properly now. None of the dream theories I have discussed can satisfactorily explain why we forget almost all of our dreams. The question that has to be answered is, what is the function of forgetting our dreams? We shall see that the expectation theory of dreaming provides a satisfying explanation for this widely observed phenomenon.

Brains evolved to help animals make more accurate predictions about what behaviours would help them survive. But the type of expectations we have as humans, or that other animals have, for that matter, are infinitely more complex than those of a bee. When mammals evolved, they developed warm-bloodedness, which meant that they were no longer dependent on the sun's heat for mobility. But maintaining a constant warm body temperature required a greatly increased energy intake (estimated at up to a 500 per cent increase in calories needed). So, to meet this need, mammals had to become much better at locating food supplies while also avoiding becoming food themselves for other warm-blooded predatory mammals — all of which required a much more sophisticated prediction system, to reduce the risks.

The cortex provided the answer. The evolution of the cortex, with its much increased processing capacity, enabled mammals not just to act purely on instinct — see a food source and go for it — but to weigh up the risks and benefits of an action — do I have time to make the kill and hide it or will I get eaten by another animal while I'm doing it? In more technical terms, it enabled the ancient dopamine prediction circuits of the limbic system to be subjected to a higher-order risk analysis, based on the additional computing power provided by the cortex.

However, that left another problem to be solved. The limbic system communicates with the cortex via behavioural impulses (emotions). If these are not acted upon (for instance, because the strategy is deemed too risky or because the cortex has set other priorities — such as deciding, in certain circumstances, that it is more important to protect young than to chase a possible food source) they don't go away. In the case of humans, this state of unfulfilled expectation can also occur when we think about something in the future or the past that causes emotional arousal in the present but which can't, by its very nature, be acted upon. These uncompleted emotional impulses — expectations — stay switched on, taking up processing capacity in the expectation system.

So far, two strategies have evolved for dealing with this. The first, as I mentioned in connection with the spiny anteater, is the development of a much bigger cortex to store all these expectations whilst retaining sufficient spare computing power for making new, ongoing risk assessments. This may also be the strategy evolved by dolphins, which have an exceptionally large cortex. The muscle paralysis that accompanies REM sleep places dolphins at risk of drowning, so they can have hardly any REM sleep.

The second and much more efficient method is dreaming. In dreaming, we act out the unrealised expectations from waking by pattern matching them to analogous sensory patterns — images and events stored in memory — as it is through pattern matching that the REM system works.

I am often asked why the pattern match has to be analogical or metaphorical. Apart from the evidence I have published explaining this point,[20] there is a sound physiological reason for why it must be so. An expectation is an imagined scenario, using images from memory.

In dreaming, we are asking memory to provide a scenario that matches a scenario that is already a part of memory — the event that aroused the expectation. So the matching scenario has to be the best fit that memory can provide. Think of it this way — if I hold up my left hand and ask my brain for a best-fit pattern match, it can't use my left hand because that is the one I want a match for — so it must use my right hand, as the best-fit pattern match for my left. (This does not happen in waking because we pattern match our expectations to whatever stimulates them in the environment, not to a memory. If we want an ice cream, the expectation is fulfilled when we are actually eating it.)
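The left-hand/right-hand analogy can be put as a best-fit search in which the query item itself is excluded, so the winner is necessarily an analogue rather than the original. The sketch below is a toy illustration of that logic only, not a claim about neural implementation; the scenarios and the shared-feature similarity measure are invented for the example.

```python
# Toy illustration: find the best-fit match for a pattern in memory while
# excluding the pattern itself, so the match must be an analogue -- the
# 'right hand' standing in for the 'left hand' query.

def best_analogue(query, memory):
    """Return the stored scenario most similar to `query`,
    excluding exact matches."""
    candidates = [m for m in memory if m != query]
    # Similarity here: number of shared features.
    return max(candidates, key=lambda m: len(set(m) & set(query)))

memory = [
    ("boss", "criticism", "anger"),      # the waking event itself
    ("teacher", "criticism", "anger"),   # an analogous stored scenario
    ("beach", "holiday", "calm"),
]
match = best_analogue(("boss", "criticism", "anger"), memory)
# The closest non-identical scenario is chosen as the metaphorical stand-in.
```

Because the original scenario is barred from matching itself, the output is always a metaphorical substitute, which is the point the hand analogy makes.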

The dream, then, by fulfilling the expectation, completes the circuit and switches off the arousal. But that is not the end of the matter, for we have now converted an unrealised expectation into a factual memory of completing it. Ordinarily, the hippocampus, the conscious memory store, holds our memories of recent events and quickly deconstructs those memories and sends them to various parts of the cortex — the parts concerned with vision, hearing, touch, etc — for storage. It does that to facilitate efficient pattern matching. But, if the dream were allowed to be stored as a real memory, it would corrupt the memory store and greatly diminish our ability to predict reliably the outcome of similar experiences in the future. This is avoided by preventing the hippocampus from sending the dream information to the cortex for long-term storage.[27] As explained earlier, PET scans and other types of research have shown that, in dreaming, the prefrontal cortex is closed down.

So it is no accident that the prefrontal cortex is switched off during dreaming, and no accident that the hippocampus doesn't deconstruct the information and send it all around the brain: what the hippocampus is doing in dreaming is getting rid of expectations that didn't pan out while we were awake. It is getting them out of the way, making them inaccessible, in effect, so as to allow us to build up a proper, intelligent prediction and expectation system, an accurate store of knowledge. (This also explains the evidence for memory consolidation: if you take away all the false expectations, the memories that are consolidated are more accurate.)
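The gating idea can be caricatured as follows. This is an illustrative sketch only, assuming nothing beyond what the text says: pending expectations are discharged through dreams, and none of the dream content reaches the long-term store. The function and data names are invented for the example.

```python
# Toy sketch: during dreaming, pending expectations are discharged but the
# dream content is NOT consolidated, so the knowledge store is never
# corrupted by fictitious completions.

def sleep_cycle(pending_expectations, long_term_memory):
    """Discharge each unfulfilled expectation via a dream; store nothing."""
    for expectation in pending_expectations:
        dream = f"metaphorical completion of <{expectation}>"
        # The dream fulfils the expectation but is withheld from storage.
    pending_expectations.clear()   # arousal switched off
    return long_term_memory        # unchanged: no false memories added

memory = ["real event: ate breakfast"]
pending = ["argue back at boss", "buy that jacket"]
sleep_cycle(pending, memory)
# pending is now empty; memory still contains only genuine waking events.
```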

The expectation fulfilment theory can therefore explain why dreams are about emotionally arousing events, particularly emotionally arousing expectations. It explains why dreams are consistent over time. It explains the developmental aspects of dreaming. It can pass the other tests set by Domhoff. But, more than that, it accords with the cutting-edge evidence that the brain remains malleable throughout life, by explaining how it can be so.

The purpose of the brain is to predict, so that we can get our needs met. We need to have a system that can continually adapt itself, and the expectation fulfilment theory shows how the brain does that by cancelling out the expectations that didn't work. It enables us to have a bang-up-to-date register of what really does get needs met in our lives, so that we can more accurately predict what we need to do in the future. (But we can only work with the experiences we have had. If, as a child, a young woman experienced both abuse and love from her father, she may continually seek a relationship with abusive men, until eventually she can learn that love exists separately from abuse.)

Knowledge contained in patterns

So, in the human givens theory, right from the beginning, we have argued that the evidence shows that templates programmed in during the REM state in the fetus aren't fixed. Only ever a part of them is specified; the rest comes from learning. The new discoveries about the brain, showing, for example, that neurons in the cortex previously used for seeing can be redeployed for reading Braille, show that the cortex is highly malleable and that we can change the ways we complete patterns. What the human givens theory has postulated from the beginning, cutting-edge research is strongly supporting now: our brains are built for life-long learning.

Given this, and given that dreaming is predominantly about bringing down emotional arousal by removing failed expectations and forgetting them, it may make better sense to change the name of the expectation fulfilment theory. The name is accurate (dreams do fulfil expectations), but it doesn't make plain that we fulfil them specifically in order to forget them.

I am proposing "the elimination of emotional expectations theory". As this has three 'E's, it can be shortened to "the triple-E theory". Elimination of emotional expectation lowers emotional arousal. It frees up spare capacity in the cortex. And dreaming amnesia eliminates failed expectations and thus updates the expectation memory store — our internal guidance system. That is the function of dreaming in the REM state.

Triple E

Take a look at the triple-E symbol. To me, it summarises all that I know about dreaming, and that I firmly believe to be accurate. The figure 3 stands for the three Es —elimination of emotional expectation. The 3 is a perfect reflection of the letter E, just as dreams are metaphorical pattern matches that eliminate failed expectations. If you focus on the space in between the 3 and the E, you can see it as a type of figurine, an oddly shaped vase, and then you can bring your mind back and see the triple-E again. That emphasises the fact that dreams are hallucinations.

Just that little symbol shows that a pattern can contain an incredible amount of knowledge. And that is what we have always said: knowledge is contained in patterns.

[1]. Maslow, A H (1971). The Farther Reaches of Human Nature. Viking, New York.
[2]. Glasser, W (1965). Reality Therapy. Harper & Row, New York.
[3]. Aserinsky, E and Kleitman, N (1953). Regularly occurring periods of eye motility and concomitant phenomena during sleep. Science, 118, 273-274.
[4]. Jouvet, M (1978). Does a genetic programming of the brain occur during paradoxical sleep? In P A Buser and A Rougeul-Buser (eds), Cerebral Correlates of Conscious Experience. Elsevier.
[5]. Hobson, J A and McCarley, R W (1977). The brain as a dream-state generator: an activation-synthesis hypothesis of the dream process. American Journal of Psychiatry, 134, 1335-1348.
[6]. Maquet, P, Peters, J et al (1996). Functional neuroanatomy of human rapid eye movement sleep and dreaming. Nature, 383 (6596), 163-166.
[7]. Hobson, J A, Pace-Schott, E F and Stickgold, R (2000). Dreaming and the brain: toward a cognitive neuroscience of conscious states. Behavioral and Brain Sciences, 23, 6, 793-842.
[8]. Berger, M, Lund, R et al (1983). REM latency in neurotic and endogenous depression and the cholinergic REM induction test. Psychiatry Research, 10, 113-123.
[9]. Hobson, J A (2005). 13 Dreams Freud Never Had. Pi Press, New York.
[10]. Solms, M (2000). Dreaming and REM sleep are controlled by different brain mechanisms. Behavioral and Brain Sciences, 23, 6, 843-850.
[11]. Freud, S (1953). The Interpretation of Dreams. In J Strachey (ed), The Standard Edition of the Complete Psychological Works of Sigmund Freud. Hogarth Press, London.
[12]. Crick, F and Mitchison, G (1983). The function of dream sleep. Nature, 304, 111-114.
[13]. Crick, F and Mitchison, G (1995). REM sleep and neural nets. Behavioural Brain Research, 69, 147-155.
[14]. Domhoff, G W (2000). Needed: a new theory. Behavioral and Brain Sciences, 23, 6, 928-930.
[15]. Wilson, M A and McNaughton, B L (1994). Reactivation of hippocampal ensemble memories during sleep. Science, 265, 676-679.
[16]. Winson, J (2002). The meaning of dreams. Scientific American, 12, 1, 62-71.
[17]. Jouvet, M (1999). The Paradox of Sleep. MIT Press, London.
[18]. Griffin, J (1993). The meaning of dreams: a scientific solution to an ancient mystery. The Therapist, 1, 3, 33-38.
[19]. Griffin, J (1997). The Origin of Dreams. The Therapist Ltd.
[20]. Griffin, J and Tyrrell, I (2004). Dreaming Reality: how dreaming keeps us sane or can drive us mad. HG Publishing, East Sussex.
[21]. Tooby, J and Cosmides, L (1997). Evolutionary psychology: a primer. Center for Evolutionary Psychology, University of California, Santa Barbara.
[22]. Quartz, S and Sejnowski, T (2002). Liars, Lovers and Heroes: what the new brain science reveals about how we become who we are. HarperCollins, New York.
[23]. Hammer, M and Menzel, R (1995). Learning and memory in the honeybee. Journal of Neuroscience, 15, 1617-1630.
[24]. Montague, P R, Dayan, P et al (1995). Bee foraging in uncertain environments using predictive Hebbian learning. Nature, 377, 725-728.
[25]. Robertson, I (1999). Mind Sculpture: unleashing your brain's potential. Bantam Books.
[26]. Austin, A (2003). Good choices: autism and the human givens. Human Givens, 10, 2, 19-23.
[27]. Buzsáki, G (1996). The hippocampo-neocortical dialogue. Cerebral Cortex, 6, 81-92.
