Friday, July 23, 2004

Parag Khanna: The Metrosexual Superpower

(Foreign Policy Magazine) "The stylish European Union struts past the bumbling United States on the catwalk of global diplomacy.
According to Michael Flocker's 2003 bestseller, The Metrosexual Guide to Style: A Handbook for the Modern Man, the trendsetting male icons of the 21st century must combine the coercive strengths of Mars and the seductive wiles of Venus. Put simply, metrosexual men are muscular but suave, confident yet image-conscious, assertive yet clearly in touch with their feminine sides. Just consider British soccer star David Beckham. He is married to former Spice Girl Victoria “Posh” Adams, but his combination of athleticism and cross-dressing makes him a sex symbol to both women and men worldwide, not to mention the inspiration for the 2002 hit movie Bend It Like Beckham. Substance, Beckham shows, is nothing without style.
Geopolitics is much the same. American neoconservatives such as Robert Kagan look down upon feminine, Venus-like Europeans, gibing their narcissistic obsession with building a postmodern, bureaucratic paradise. The United States, by contrast, supposedly carries the mantle of masculine Mars, boldly imposing freedom in the world's nastiest neighborhoods. But by cleverly deploying both its hard power and its sensitive side, the European Union (EU) has become more effective—and more attractive—than the United States on the catwalk of diplomatic clout. Meet the real New Europe: the world's first metrosexual superpower.
Metrosexuals always know how to dress for the occasion (or mission). Spreading peace across Eurasia serves U.S. interests, but it's best done by donning Armani pinstripes rather than U.S. Army fatigues. After the fall of Soviet communism, conservative U.S. thinkers feared a united Germany vying with Russia for hegemony in Central Europe. Yet, by brandishing only a slick portfolio of economic incentives, the EU has incorporated many of the former Soviet republics and satellites in the Baltics and Eastern Europe. Even Turkey is freshening up with eau d'Europe. Ankara resisted Washington's pressure to provide base rights for the invasion of Iraq in 2003. But to get backstage in Brussels, it has had to smooth out its more unseemly blemishes—abolishing the death penalty, taking steps to resolve the Cyprus dispute, and introducing laws to protect its Kurdish minority.
Metrosexuals may spend a long time standing in front of the mirror, but they never shop alone. Stripping off stale national sovereignty (that's so last century), Europeans now parade their “pooled power,” the new look for this geopolitical season. As a political, economic, and military union with some 450 million citizens, a $9 trillion economy, and armies surpassing 1.6 million soldiers, Europe is now a whole greater than the sum of its parts.
Indeed, Europe actually contributes more to U.S. foreign policy goals than the U.S. government—and does so far more fashionably. Robert Cooper, one of Britain's former defense gurus now shaping Europe's common foreign policy, argues that Europe's “magnetic allure” compels countries to rewrite their laws and constitutions to meet European standards. The United States conceives of power primarily in military terms, thus confusing presence with influence. By contrast, Europeans understand power as overall leverage. As a result, the EU is the world's largest bilateral aid donor, providing more than twice as much aid to poor countries as the United States, and it is also the largest importer of agricultural goods from the developing world, enhancing its influence in key regions of instability. Through massive deployments of “soft power” (such as economic clout and cultural appeal) Europe has made hard power less necessary. After expanding to 25 members, the EU accounts for nearly half of the world's outward foreign direct investment and exerts greater leverage than the United States over pivotal countries such as Brazil and Russia. As more oil-producing nations consider trading in euros, Europe will gain greater influence in the international marketplace. Even rogue states swoon over Europe's allure; just recall how Libya's Colonel Muammar el-Qaddafi greeted British Prime Minister Tony Blair during a recent meeting in Tripoli. “You are looking good,” gushed Libya's strongman. “You are still young.”
Brand Europe is taking over. From environmental sustainability and international law to economic development and social welfare, European views are more congenial to international tastes and more easily exported than their U.S. variants. Even the Bush administration's new strategy toward the “Greater Middle East” is based on the Helsinki model, which was Europe's way of integrating human rights standards into collective security institutions. Furthermore, regional organizations such as the Association of Southeast Asian Nations, Mercosur, and the African Union are redesigning their institutions to look more like the EU. Europe's flashy new symbol of power, the Airbus A380, will soon strut on runways all over Asia. And the euro is accepted even where they don't take American Express.
But don't be deceived by the metrosexual superpower's pleatless pants—Europe hasn't lost touch with its hard assets. Even without a centralized military command structure, the EU has recently led military operations in the Democratic Republic of the Congo and Macedonia, and it will increase troop deployments to support German and British forces in stabilizing Afghanistan. European countries already provide 10 times more peacekeepers to U.N. operations than the United States. In late 2004, the EU will take over all peacekeeping and policing operations in Bosnia and Herzegovina from NATO, and Europe's 60,000-troop Rapid Reaction Force will soon be ready to deploy around the world.
In the fight against terrorism, Europe also displays the right ensemble of strengths. Europeans excel at human intelligence, which requires expert linguists and cultural awareness. French espionage agencies have reportedly infiltrated al Qaeda cells, and German and Spanish law enforcement efforts have led to the capture of numerous al Qaeda operatives. After the March 2004 terrorist attack in Madrid, Spain's incoming prime minister immediately declared his country would “return to Europe,” signaling his opposition to the Bush administration's war on terror. Indeed, U.S. Defense Secretary Donald Rumsfeld's “New Europe” is already passé, shorter lived than the bellbottom revival.
To some observers, the EU may always be little more than a cheap superpower knockoff with little substance to show but a common multilingual passport. But after 60 years of dressing up, Europe has revealed its true 21st-century orientation. Just as metrosexuals are redefining masculinity, Europe is redefining old notions of power and influence. Expect Bend It Like Brussels to play soon in capital cities worldwide.

~Parag Khanna is a fellow in global governance at the Brookings Institution."

Bruce Bawer: Hating America

(Hudson Review) "I moved from the U.S. to Europe in 1998, and I’ve been drawing comparisons ever since. Living in turn in the Netherlands, where kids come out of high school able to speak four languages, where gay marriage is a non-issue, and where book-buying levels are the world’s highest, and in Norway, where a staggering percentage of people read three newspapers a day and where respect for learning is reflected even in Oslo place names (“Professor Aschehoug Square”; “Professor Birkeland Road”), I was tempted at one point to write a book lamenting Americans’ anti-intellectualism—their indifference to foreign languages, ignorance of history, indifference to academic achievement, susceptibility to vulgar religion and trash TV, and so forth. On point after point, I would argue, Europe had us beat.
Yet as my weeks in the Old World stretched into months and then years, my perceptions shifted. Yes, many Europeans were book lovers—but which country’s literature most engaged them? Many of them revered education—but to which country’s universities did they most wish to send their children? (Answer: the same country that performs the majority of the world’s scientific research and wins most of the Nobel Prizes.) Yes, American television was responsible for drivel like “The Ricki Lake Show”—but Europeans, I learned, watched this stuff just as eagerly as Americans did (only to turn around, of course, and mock it as a reflection of American boorishness). No, Europeans weren’t Bible-thumpers—but the Continent’s ever-growing Muslim population, I had come to realize, represented even more of a threat to pluralist democracy than fundamentalist Christians did in the U.S. And yes, more Europeans were multilingual—but then, if each of the fifty states had its own language, Americans would be multilingual, too.1 I’d marveled at Norwegians’ newspaper consumption; but what did they actually read in those newspapers?
That this was, in fact, a crucial question was brought home to me when a travel piece I wrote for the New York Times about a weekend in rural Telemark received front-page coverage in Aftenposten, Norway’s newspaper of record. Not that my article’s contents were remotely newsworthy; its sole news value lay in the fact that Norway had been mentioned in the New York Times. It was astonishing. And even more astonishing was what happened next: the owner of the farm hotel at which I’d stayed, irked that I’d made a point of his want of hospitality, got his revenge by telling reporters that I’d demanded McDonald’s hamburgers for dinner instead of that most Norwegian of delicacies, reindeer steak. Though this was a transparent fabrication (his establishment was located atop a remote mountain, far from the nearest golden arches), the press lapped it up. The story received prominent coverage all over Norway and dragged on for days. My inhospitable host became a folk hero; my irksome weekend trip was transformed into a morality play about the threat posed by vulgar, fast-food-eating American urbanites to cherished native folk traditions. I was flabbergasted. But my erstwhile host obviously wasn’t: he knew his country; he knew its media; and he’d known, accordingly, that all he needed to do to spin events to his advantage was to breathe that talismanic word, McDonald’s.
For me, this startling episode raised a few questions. Why had the Norwegian press given such prominent attention in the first place to a mere travel article? Why had it then been so eager to repeat a cartoonish lie? Were these actions reflective of a society more serious, more thoughtful, than the one I’d left? Or did they reveal a culture, or at least a media class, that was so awed by America as to be flattered by even its slightest attentions but that was also reflexively, irrationally belligerent toward it?
This experience was only part of a larger process of edification. Living in Europe, I gradually came to appreciate American virtues I’d always taken for granted, or even disdained—among them a lack of self-seriousness, a grasp of irony and self-deprecating humor, a friendly informality with strangers, an unashamed curiosity, an openness to new experience, an innate optimism, a willingness to think for oneself and speak one’s mind and question the accepted way of doing things. (One reason why Europeans view Americans as ignorant is that when we don’t know something, we’re more likely to admit it freely and ask questions.) While Americans, I saw, cherished liberty, Europeans tended to take it for granted or dismiss it as a naive or cynical, and somehow vaguely embarrassing, American fiction. I found myself toting up words that begin with i: individuality, imagination, initiative, inventiveness, independence of mind. Americans, it seemed to me, were more likely to think for themselves and trust their own judgments, and less easily cowed by authorities or bossed around by “experts”; they believed in their own ability to make things better. No wonder so many smart, ambitious young Europeans look for inspiration to the United States, which has a dynamism their own countries lack, and which communicates the idea that life can be an adventure and that there’s important, exciting work to be done. Reagan-style “morning in America” clichés may make some of us wince, but they reflect something genuine and valuable in the American air. Europeans may or may not have more of a “sense of history” than Americans do (in fact, in a recent study comparing students’ historical knowledge, the results were pretty much a draw), but America has something else that matters—a belief in the future.
Over time, then, these things came into focus for me. Then came September 11. Briefly, Western European hostility toward the U.S. yielded to sincere, if shallow, solidarity (“We are all Americans”). But the enmity soon re-established itself (a fact confirmed for me daily on the websites of the many Western European newspapers I had begun reading online). With the invasions of Afghanistan and Iraq, it intensified. Yet the endlessly reiterated claim that George W. Bush “squandered” Western Europe’s post-9/11 sympathy is nonsense. The sympathy was a blip; the anti-Americanism is chronic. Why? In The Eagle’s Shadow: Why America Fascinates and Infuriates the World, American journalist and NPR commentator Mark Hertsgaard purports to seek an answer.2 His assumption throughout is that anti-Americanism is amply justified, for these reasons, among others:

Our foreign policy is often arrogant and cruel and threatens to “blow back” against us in terrible ways. Our consumerist definition of prosperity is killing us, and perhaps the planet. Our democracy is an embarrassment to the word, a den of entrenched bureaucrats and legal bribery. Our media are a disgrace to the hallowed concept of freedom of the press. Our precious civil liberties are under siege, our economy is dividing us into rich and poor, our signature cultural activities are shopping and watching television. To top it off, our business and political elites are insisting that our model should also be the world’s model, through the glories of corporate-led globalization.

America, in short, is a mess—a cultural wasteland, an economic nightmare, a political abomination, an international misfit, outlaw, parasite, and pariah. If Americans don’t know this already, it is, in Hertsgaard’s view, precisely because they are Americans: “Foreigners,” he proposes, “can see things about America that natives cannot. . . . Americans can learn from their perceptions, if we choose to.” What he fails to acknowledge, however, is that most foreigners never set foot in the United States, and that the things they think they know about it are consequently based not on first-hand experience but on school textbooks, books by people like Michael Moore, movies about spies and gangsters, “Ricki Lake,” “C.S.I.,” and, above all, the daily news reports in their own national media. What, one must therefore ask, are their media telling them? What aren’t they telling them? And what are the agendas of those doing the telling? Such questions, crucial to a study of the kind Hertsgaard pretends to be making, are never asked here. Citing a South African restaurateur’s assertion that non-Americans “have an advantage over [Americans], because we know everything about you and you know nothing about us,” Hertsgaard tells us that this is a good point, but it’s not: non-Americans are always saying this to Americans, but when you poke around a bit, you almost invariably discover that what they “know” about America is very wide of the mark.
In any event, The Eagle’s Shadow proves to be something of a gyp: for though it’s packaged as a work of reportage about foreigners’ views of America, it’s really a jeremiad by Hertsgaard himself, punctuated occasionally, to be sure, by relevant quotations from cabbies, bus drivers, and, yes, a restaurateur whom he’s run across in his travels. His running theme is Americans’ parochialism: we “not only don’t know much about the rest of the world, we don’t care.” I used to buy this line, too; then I moved to Europe and found that—surprise!—people everywhere are parochial. Norwegians are no less fixated on Norway (pop. 4.5 million) than Americans are on America (pop. 280 million). And while Americans’ relative indifference to foreign news is certainly nothing to crow about, the provincial focus of Norwegian news reporting and public-affairs programming can feel downright claustrophobic. Hertsgaard illustrates Americans’ ignorance of world geography by telling us about a Spaniard who was asked at a wedding in Tennessee if Spain was in Mexico. I once told such stories as well (in fact, I began my professional writing career with a fretful op-ed about the lack of general knowledge that I, then a doctoral candidate in English, found among my undergraduate students); then I moved to Europe and met people like the sixtyish Norwegian author and psychologist who, at the annual dinner of a Norwegian authors’ society, told me she’d been to San Francisco but never to California.
One of Hertsgaard’s main interests—which he shares with several other writers who have recently published books about America and the world—is the state of American journalism. His argument, in a nutshell, is that “few foreigners appreciate how poorly served Americans are by our media and educational systems—how narrow the range of information and debate is in the land of the free.” To support this claim, he offers up the fact that “internationally renowned intellectuals such as Edward W. Said and Frances Moore Lappé” signed a statement against the invasion of Afghanistan, but were forced to run it as an ad because newspapers wouldn’t print it for free. Hertsgaard’s acid comment: “In the United States, it seems, there are some things you have to buy the freedom to say.” Now, I didn’t know who Lappé was when I read this (it turns out she wrote a book called Diet for a Small Planet), but as for the late Professor Said, no writer on earth was given more opportunities by prominent newspapers and journals to air his views on the war against terror. In the two years between 9/11 and his death in 2003, his byline seemed ubiquitous.
Yes, there’s much about the American news media that deserves criticism, from the vulgar personality journalism of Larry King and Diane Sawyer to the cultural polarization nourished by the many publishers and TV news producers who prefer sensation to substance. But to suggest that American journalism, taken as a whole, offers a narrower range of information and debate than its foreign counterparts is absurd. America’s major political magazines range from National Review and The Weekly Standard on the right to The Nation and Mother Jones on the left; its all-news networks, from conservative Fox to liberal CNN; its leading newspapers, from the New York Post and Washington Times to the New York Times and Washington Post. Scores of TV programs and radio call-in shows are devoted to fiery polemic by, or vigorous exchanges between, true believers at both ends of the political spectrum. Nothing remotely approaching this breadth of news and opinion is available in a country like Norway. Purportedly to strengthen journalistic diversity (which, in the ludicrous words of a recent prime minister, “is too important to be left up to the marketplace”), Norway’s social-democratic government actually subsidizes several of the country’s major newspapers (in addition to running two of its three broadcast channels and most of its radio); yet the Norwegian media are (guess what?) almost uniformly social-democratic—a fact reflected not only in their explicit editorial positions but also in the slant and selectivity of their international coverage.3 Reading the opinion pieces in Norwegian newspapers, one has the distinct impression that the professors and bureaucrats who write most of them view it as their paramount function not to introduce or debate fresh ideas but to remind the masses what they’re supposed to think. The same is true of most of the journalists, who routinely spin the news from the perspective of social-democratic orthodoxy, systematically omitting or misrepresenting any challenge to that orthodoxy—and almost invariably presenting the U.S. in a negative light. Most Norwegians are so accustomed to being presented with only one position on certain events and issues (such as the Iraq War) that they don’t even realize that there exists an intelligent alternative position.
Things are scarcely better in neighboring Sweden. During the run-up to the invasion of Iraq, the only time I saw pro-war arguments fairly represented in the Scandinavian media was on an episode of “Oprah” that aired on Sweden’s TV4. Not surprisingly, a Swedish government agency later censured TV4 on the grounds that the program had violated media-balance guidelines. In reality, the show, which had featured participants from both sides of the issue, had plainly offended authorities by exposing Swedish viewers to something their nation’s media had otherwise shielded them from—a forceful articulation of the case for going into Iraq.4 In other European countries, to be sure, the media spectrum is broader than this; yet with the exception of Britain, no Western European nation even approaches America’s journalistic diversity. (The British courts’ recent silencing of royal rumors, moreover, reminded us that press freedom is distinctly more circumscribed in the U.K. than in the U.S.) And yet Western Europeans are regularly told by their media that it’s Americans who are fed slanted, selective news—a falsehood also given currency by Americans like Hertsgaard.
No less regrettable than Hertsgaard’s misinformation about the American media are his comments on American affluence, which he regards as an international embarrassment and a sign of moral deficiency. He waxes sarcastic about malls, about the range of products available to American consumers (whom he describes as “dining on steak and ice cream twice a day”), and about the fact that Americans “spent $535 billion on entertainment in 1999, more than the combined GNPs of the world’s forty-five poorest nations.” He appears not to have solicited the opinions of Eastern Europeans, a great many of whom, having been deprived under Communism of both civil rights and a decent standard of living, have a deep appreciation for both American liberty and American prosperity. But then Hertsgaard, predictably, touches on Communism only in the course of making anti-American points. For example, he recalls a man in Havana who, during the dispute over Florida’s electoral votes in the 2000 presidential contest, whimsically suggested that Cuba send over election observers. (Well, that would’ve been one way to escape Cuba without being gunned down.) Hertsgaard further sneers that for many Americans, the fall of the Berlin Wall proved that they lived in “the chosen nation of God.” Now, for my part, I never heard anyone suggest such a connection. What I do remember about the Wall coming down is the lack of shame or contrition on the part of Western leftists who had spent decades appeasing and apologizing for Soviet Communism. In any event, does Hertsgaard really think that in a work purporting to evaluate America in an international context, this smirking comment about the Berlin Wall is all that need be said about the expiration of an empire that murdered tens of millions and from which the U.S., at extraordinary risk and expense, protected its allies for nearly half a century?
The victory over Soviet Communism is not the only honorable chapter of American history that Hertsgaard trashes. World War II? Though he grants that the U.S. saved Western Europe, he puts the word “saving” in scare quotes and maintains that “America had its own reasons” (economic, naturally) for performing this service. September 11? Here, in its entirety, is what he has to say about that cataclysmic day: “Suddenly Americans had learned the hard way: what foreigners think does matter.” The Iraq War? An atrocity against innocent civilians—nothing more. There’s no reference here to Saddam’s torture cells, imprisoned children, or mass graves, no mention of the fact that millions of Iraqis who lived in terror are now free. Instead, Hertsgaard cites with approval a U.N. official’s smug comment that Americans, who never understand anything anyway, have failed to grasp “that Iraq is not made up of twenty-two million Saddam Husseins” but of families and children. For a proper response to this remark, I need only quote from an address made to the Security Council by Iraqi foreign minister Hoshyar Zebari on December 16, 2003. Accusing the U.N. of failing to save Iraq from “a murderous tyranny,” Zebari said: “Today we are unearthing thousands of victims in horrifying testament to that failure. The United Nations must not fail the Iraqi people again.”5
Hertsgaard compares America unfavorably not only with Europe but—incredibly—with Africa. If “many Europeans speak two if not three languages,” he rhapsodizes, “in Africa, multilingualism is even more common.” So, one might add, are poverty, starvation, rape, AIDS infection, state tyranny and corruption, and such human-rights abominations as slavery, female genital mutilation, and the use of children as soldiers and prostitutes. Hertsgaard contrasts America’s “frenzied pace” with the “African rhythms” that he finds more congenial and notes with admiration that “Africans live in social conditions that encourage interchange, discourage hurry, and elevate the common good over that of the individual.” In response to which it might be pointed out (a) that those “social conditions” generally go by the name of abject poverty and (b) that Hertsgaard fails to cite such recent examples of benign African “social . . . interchange” and expressions of concern for the “common good” as Mugabe’s terror regime in Zimbabwe, ethnic clashes in the Central African Republic, Somali anarchy, Rwandan genocide (800,000 dead), prolonged civil wars in Sudan (two million dead), the Democratic Republic of the Congo (1.7 million dead), Liberia (200,000 dead), the Ivory Coast, and elsewhere, not to mention massacres of Christians by Muslims in Sudan and Nigeria. To recommend Africa to Americans as a model of social harmony without a hint of qualification is not just unserious, it’s hallucinatory.6
Every nation requires serious, responsible criticism, particularly if it’s the planet’s leading economic power, the arsenal of democracy, and the center of humanity’s common culture. But Hertsgaard’s criticism of America is neither serious nor responsible. Though at one point (apropos of American medicine and science) he concedes, with breathtaking dismissiveness, that “We Americans are a clever bunch,” he usually talks about his fellow countrymen as if they’re buffoons who have mysteriously and unjustly lucked into living in the world’s richest country, while most of the rest of the species, though far brighter and more deserving, somehow ended up in grinding poverty. For him, Americans’ intellectual mediocrity would seem to be a self-evident truth, but his own observations hardly exemplify the kind of reflectiveness a reader of such a book has a right to expect. For example, when he notes with satisfaction that the young Sigmund Freud “complained . . . incessantly about [America’s] lack of taste and culture,” Hertsgaard seems not to have realized that Freud was, of course, comparing the U.S. to his native Austria, which would later demonstrate its “taste and culture” by welcoming the Nazi Anschluss. One ventures to suggest that had Freud—who escaped the Gestapo thanks to intervention by Franklin D. Roosevelt—survived to see the liberated death camps in which his four sisters perished, he might well have revised his views about the relative virtues of American and Austrian culture. . . ."

(click title for full text)

Christopher H. Whittle: Development of Beliefs in Paranormal and Supernatural Phenomena

(Skeptical Inquirer) "A new study found high levels of fictional paranormal beliefs derived from broadcasts of The X-Files in viewers who had never watched The X-Files. An examination of the origins of paranormal and supernatural beliefs leads to the creation of two models for their development. We are taught such beliefs virtually from infancy. Some are secular, some religious, and some cross over between the two. This synergy of cultural indoctrination has implications for science and skeptics.
------------------------------------------------------------------------
Two important findings emerged from a recent study I conducted on learning scientific information from prime-time television programming (Whittle 2003). The study used an Internet-based survey questionnaire posted to Internet chat groups for three popular television programs, The X-Files, ER, and Friends. Scientific (and pseudoscientific) dialogue from ER and The X-Files, collected in a nine-month-long content analysis, was used to create two scales: ER science content and The X-Files pseudoscience content. Respondents were asked to agree or disagree with statements from each program (such as "Rene Laennec used a rolled-up newspaper as the first stethoscope" [ER] and "The Wanshang Dhole, an Asian dog thought to be extinct, has pre-evolutionary features including a fifth toe pad, a dew claw, and a prehensile thumb" [The X-Files]).
My first finding, that ER viewers learned specific ER science content, is an indicator that entertainment television viewers can learn facts and concepts from their favorite television programs. The second finding was spooky. There was no significant difference in the level of pseudoscientific or paranormal belief between viewers of ER and The X-Files. This finding does not seem surprising in light of Gallup and Harris polls demonstrating high levels of paranormal belief in the United States, but the beliefs assessed in the study were fictional paranormal and pseudoscientific beliefs created by the writers of The X-Files. Paranormal researchers ask questions such as, "Do you believe in astral projection, or the leaving of the body by one's spirit?" My research asked, [Do you believe] "[d]uring astral projection, or the leaving of the body for short periods of time, a person could commit a murder?" A homicidal astral projector was the plot of an X-Files episode, but ER viewers were just as likely to acknowledge belief in that paraparanormal (a concept beyond the traditional paranormal) belief as were viewers of The X-Files!
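To make the shape of that comparison concrete, here is a minimal sketch of how such scale scores might be compared between two viewer groups. It is not Whittle's actual analysis pipeline; the responses, group sizes, and scores below are invented for illustration.

# A hypothetical sketch, not Whittle's data or code: score agree/disagree items
# into a "pseudoscience belief" scale and compare ER viewers with X-Files viewers.
from statistics import mean, stdev
from math import sqrt

def scale_score(responses):
    """Sum agree/disagree responses (1 = agree, 0 = disagree) into a scale total."""
    return sum(responses)

# Hypothetical per-respondent totals on an "X-Files pseudoscience" scale.
er_viewers = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
xfiles_viewers = [3, 2, 2, 4, 3, 1, 2, 3, 2, 2]

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(er_viewers, xfiles_viewers)
print(f"ER mean = {mean(er_viewers):.2f}, X-Files mean = {mean(xfiles_viewers):.2f}, t = {t:.2f}")
# A t statistic near zero would be consistent with the article's report of no
# significant difference in fictional-paranormal belief between the two audiences.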
Perhaps it is as Anderson (1998) pointed out in his Skeptical Inquirer article "Why Would People Not Believe Weird Things," that "almost everything [science] tells us we do not want to hear." We are born of primordial slime, not at the hands of a benevolent and concerned supreme being who lovingly crafted us from clay; we are the result of random mutations and genetic accidents.
Anderson cited quantum mechanics as a realm of science so fantastic as to have supernatural connotations to the average individual. Quantum physicists distinguish virtual particles from real particles, blame the collapse of the wave function on their inability to tell us where the matter of our universe is at any time, and tell us that in parallel universes we may have actually dated the most popular cheerleader or football quarterback in high school, whereas in this mundane universe, we did not. It is all relative. Ghosts are a fairly predictable phenomenon compared to the we-calculated-it-but-you-cannot-sense-it world of quantum physics. Most people will agree that ghosts are the souls of the departed, but quantum physicists cannot agree on where antimatter goes. It is there but it is not. Pseudoscientific and paranormal beliefs provide a sense of order and comfort to those who hold them, giving us control over the unknown. It is not surprising that such beliefs continue to flourish in a world as utterly fantastic as ours.
After researching the paranormal in an effort to discover why ER viewers might have the extraordinary paranormal beliefs indicated on their survey questionnaires, I constructed two models of paranormal belief from my research notes (heavily drawn from Goode 2000, Johnston et al. 1995, Irwin 1993, Vikan and Stein 1993, and Tobacyk and Milford 1983). Figure 1 shows the interrelationship between the natural environment, human culture, and the individual. The culture and the individual maintain General Paranormal Beliefs, which consist of at least four relatively independent dimensions: Traditional Religious Belief, Paranormal Belief (psi), Parabiological Beings, and Folk Paranormal Beliefs (superstitions). Individuals have cognitive, affective, and behavioral schema in which these beliefs are organized. Society creates and maintains paranormal beliefs through cultural knowledge, cultural artifacts (including rituals), and expected cultural behaviors. The "Need for control, order, and meaning" domain is speculative on the culture side, but supported by research on the individual side. The demographic correlates of traditional religious paranormal belief and nonreligious paranormal belief (see Rice 2003, Goode 2000, Irwin 1995, and Maller and Lundeen 1933) are highly variable and generally reveal low levels of association. It seems that almost everyone has some level of paranormal belief but scientists find few reliable predictors of these levels. [See "What Does Education Really Do?" by Susan Carol Losh, et al., Skeptical Inquirer, September/October 2003.]

Figure 1: A comprehensive model of general paranormal belief.

A first step in future work is to identify the nonbelievers in paranormal phenomena and then explore why they are nonbelievers. Belief in the paranormal begins almost from infancy. We need to expand the research on the developmental stages of belief in the paranormal, and to do that we must study young children.
I have developed a linear model for the development of paranormal and supernatural beliefs at the individual level (figure 2). As children we are taught by parents and other adults (indoctrination by authority) about our culture's beliefs and practices. Our elders' teachings are filtered through hard-wired psychological processes. These include: control (magical) thinking, which allows a helpless infant to believe that he controls the actions of those around him ("Mother fed me because I pointed at her and smiled"), reducing his frustration level; psychological needs and desires, including making order and sense out of one's environment, having an understanding of one's place in the cosmos, feeling in control of one's destiny, and having a fantasy outlet; and the desire to please and imitate adults.

Figure 2: Cultural and biological origins model of paranormal beliefs and experiences in the individual.

We are taught about angels, witches, devils, spirits, monsters, gods, etc. virtually in the cradle. Some of these paranormal beliefs are secular, some are religious, and the most pernicious are crossover beliefs, beliefs that are at times secular and at other times religious. Santa Claus, angels and vampires, ghosts and souls, and the Easter Bunny are examples of crossover beliefs. Crossover beliefs are attractive to children (free candy and presents), and on that basis they are readily accepted. The devils, ghosts, and monsters are reinforced through Halloween rituals and the mass media. As the child matures, some crossover beliefs, called "teaser" paranormal beliefs, are exposed as false. Traditional religious concepts are reinforced as "true and real." They give us Santa Claus and we believe in an omniscient, beneficent old elf and then they replace Santa with God, who is typically not as generous as Santa Claus and whose disapproval has more serious consequences than a lump of coal. We learn about God and Santa Claus simultaneously; only later are we told that Santa Claus is just a fairy tale and God is real.
In a synergy of cultural indoctrination and the individual's cognitive and affective development, a general belief in the paranormal and the supernatural forms. Once we have knowledge of the paranormal, we can then experience it. One cannot have Bigfoot's baby until one is aware that there is a Bigfoot, or aliens, or ghosts. In other words, you cannot see a ghost until someone has taught you about ghosts. Countervailing influences, experiential knowledge, and knowledge of realistic influence have little effect on paranormal beliefs because they are applied after the belief is established through cultural and familial authority.
The dismal statistics on the science literacy of scientists and science educators presented by Showers (1993) argued against a rapid increase in science literacy. Scientists and science educators (1) have high levels of paranormal and pseudoscientific belief, (2) do not use their scientific knowledge when voting, (3) use nonscientific approaches in personal and social decision-making, and (4) do not have high levels of science content knowledge outside of their specific disciplines. How can we expect nonscientists to think and act scientifically if scientists and science educators do not? If we decide to mount a concerted program to disabuse the public of paranormal and pseudoscientific beliefs, we must first ask if cultures can survive without paranormal beliefs.
The media may provide fodder for pseudoscientific beliefs and create new monsters and demons for us to believe in, but each individual's culture is responsible for laying the groundwork for pseudoscientific and paranormal belief to take root. We can inform the public through dialogue in entertainment television programming about important scientific facts and concepts. We can inform the public in formal and informal science education environments, but we probably cannot greatly reduce paranormal belief without somehow fulfilling the needs currently fulfilled by it. Science educators must focus on what changes we can make and how to best make those changes. We must involve all stakeholders in the discussion of what is an appropriate level of science literacy. To paraphrase Stephen Hawking, then we shall all, science educators, scientists, and just ordinary people, be able to take part in the discussion of why it is that pseudoscientific beliefs exist. If we find the answer to that, it would be the ultimate triumph of human reason - for then we should know the mind of God.

~Christopher H. Whittle holds a B.S. degree in Earth Sciences from the University of Massachusetts, an Ed.M. from Harvard University, and a Ph.D. from the University of New Mexico. He is currently seeking a science education professorship from which he hopes to continue his research on pseudoscience."

Ellen Winner: Art History Can Trade Insights With the Sciences

(The Chronicle Review) "In recent years, it has become clear that the study of art need not be the exclusive domain of humanists. Economists, sociologists, and anthropologists have applied the methods of their respective disciplines to determine how market, social, and cultural forces have affected the productivity of artists; psychologists have examined the effects of mental illness on the creative process, analyzed drafts of paintings as windows into that process, and documented the influences of the visual system on the perception and production of art. As a psychologist previously trained in the humanities and in studio art, I have spent my career applying the science of cognitive psychology (and recently cognitive neuroscience) to studying the creation of and response to art.
To be sure, we scientists who wander into the art museum have to guard against many pitfalls: blind empiricism (testing hypotheses that are not theoretically grounded); unconsciously finding data to fit our theories; waiting for others, rather than ourselves, to try to falsify our theories. We need to avoid reductionism: A scientific explanation of an artistic phenomenon -- say, why we are moved more by some paintings than others -- is not superior to a humanistic one, nor does it replace an explanation at the humanistic level.
Despite the dangers, however, there is much to be learned from the scientific study of art. Why, then, are so many humanists critical of it?
The very different cases of two scientists who have ventured into the field of art history, one from physics, the other from economics, provide a starting point. Both discovered a genuine phenomenon and proposed an explanation for it. The story of the physicist shows how science can make a valuable contribution to our understanding of art and suggests why humanists have failed to recognize the contribution: They are unwilling to play the science game and think like scientists. The story of the economist shows how important it is for scientists not to apply less-stringent criteria when they explain artistic phenomena than when they offer explanations of phenomena in their home discipline.
When Charles M. Falco, a physicist in the Optical Sciences Center at the University of Arizona, presented mathematical support for artist David Hockney's contention that certain early Renaissance painters used lenses to project images that they then traced, he was greeted with fury and indignation by art historians. Falco's arguments were most widely publicized in 2001 in Hockney's extensively reviewed Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters, and they were presented that same year at a high-profile New York University conference attended by Hockney and art historians; they can also be found in scientific journals.
Still, even recently, when I've broached Falco's arguments to art historians, I've been greeted with surprise that I can take them seriously. The assumption seems to be that the claims have been shown to be wrong and can be dismissed. But then I discover that these same art historians don't even know the details of the argument. The devil is in the details, and understanding the exact science does matter.
The controversy over Hockney and Falco grew out of Hockney's discovery of a sudden shift toward naturalism in the 1420s and '30s in Flanders. Hockney claimed that the shift was too abrupt to have occurred without the use of optical aids that allowed artists to project images of the 3-D world onto a canvas and trace them. With the entry of Falco, evidence took the place of opinion. Falco pointed out that concave mirrors can serve as lenses that project images and that such mirrors were available as early as the 13th century. He went on to analyze anomalies in certain paintings that were consistent with the use of a lens and -- most important -- difficult to explain otherwise.
Lorenzo Lotto's painting called "Husband and Wife," of 1523-24, depicts a carpet with a complex geometric design covering a table. The carpet recedes into space. Falco demonstrated that the lines on two of the borders of the design start off receding toward one vanishing point and then move slightly toward another vanishing point. It's strange that there are two vanishing points. It's even stranger that the vanishing points of both borders shift at approximately the same depth into the scene.
But Falco offered an intriguing (somewhat technical, very precise) explanation: Lotto's use of a lens led to systematic and predictable errors. Falco calculated that Lotto must have placed his lens 150 cm from the carpet he was painting and 84 cm from the canvas onto which he was projecting the image of the carpet. He also calculated the focal length (54 cm) and diameter (2.5 cm) that Lotto's lens must have had. Those calculations were all derived from one measurement: a comparison of the shoulder width of the woman in the painting with the average shoulder width of actual women today. The ratio of the two widths showed how far the objects on the canvas were reduced -- in this case, to 56 percent of their actual size.
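As a rough check, those figures hang together under the ordinary thin-lens relation (assumed here as a back-of-the-envelope sketch, not quoted from Falco's papers): with the carpet a distance d_o = 150 cm in front of the lens and the canvas a distance d_i = 84 cm behind it,

\[
m = \frac{d_i}{d_o} = \frac{84\ \text{cm}}{150\ \text{cm}} \approx 0.56,
\qquad
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{150\ \text{cm}} + \frac{1}{84\ \text{cm}}
\quad\Longrightarrow\quad f \approx 54\ \text{cm},
\]

which reproduces both the 56 percent reduction in size and the 54 cm focal length cited above.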
When a lens is used to project an image, the less that image is reduced in size, the lower the depth of field that the lens can project. Because Lotto was projecting images reduced to only 56 percent of their actual size, he had a problem -- he could bring only part of the receding carpet into sharp focus on the canvas at one time. Once that part was traced, he would have had to move the lens just a tiny bit to focus farther back. Hence a slightly different vanishing point and a slightly different magnification -- both subtle errors. Falco calculated exactly how much the two vanishing points would diverge and the magnification would decrease, and his calculations agreed to within 1 percent with measurements he made from the painting. Falco tested his lens hypothesis against many paintings, and found other instances in which the errors were mathematically predicted by the use of a lens. His hypothesis did not rely only on such predictions; he also found that sometimes highly complex, three-dimensional, nongeometrical objects were rendered so precisely that use of a lens was highly probable. But the litmus test of the lens hypothesis was Falco's ability to so precisely predict nonrandom errors.
The major arguments mounted by art historians against his theory fall into seven categories: (1) artists did not need to "cheat" because they were highly trained in drawing from observation; (2) artists did not need lenses because they were so talented; (3) such devices would have been too cumbersome; (4) no written proof, from artists or others, exists that lenses were used; (5) artists could have used a grid instead of a lens to get the perspective right; (6) the lens hypothesis has been overstated; and (7) even if true, it is of no interest to art historians.
The problem with numbers 1-4 is that they fail to rule out the use of optical devices. Whether or not artists had the skill and/or training to draw without lenses, whether or not the lenses were cumbersome, and whether or not anyone at the time wrote about them, artists still may have used lenses. The arguments about training and talent are also inconsistent with the general acceptance by art historians that Renaissance artists used geometry to draw in perspective (no one suggests that artists were so talented or trained that they could draw in perfect perspective just by looking closely), and that Renaissance artists sometimes used tools such as strings, grids, and planes of glass ("Leonardo's window") to get the perspective right. The problem with the grid argument (number 5) is that a grid might explain how artists got the perspective right, but it would not predict the smoking gun: the errors.
David W. Galenson, an economist from the University of Chicago, believed he could demonstrate two different kinds of creative processes in great artists, and he made his case in a book called Painting Outside the Lines: Patterns of Creativity in Modern Art (Harvard University Press, 2001). Using 19th- and 20th-century French painters and 20th-century New York painters as his sample, Galenson showed that some artists produced their greatest works (as measured by auction price) when young (the early peakers), while others produced their greatest (most pricey) works at an older age (the late peakers). While some art historians might object to the use of price as a measure of the greatness of a work, Galenson showed that auction price also correlated with frequency of the work being reproduced in art-history textbooks (clearly reflecting a value judgment made by art historians).
Consider these contrasting examples. Pablo Picasso's "greatest" painting, "Les Demoiselles d'Avignon" (1907), marked the beginning of the Cubist revolution, and was painted when Picasso was 26. Paul Cézanne's "greatest" painting, "Les Grandes Baigneuses" (1900-5), was painted when the artist was in his 60s. Galenson argued that early peakers use a different kind of creative process from late peakers. Early peakers are "conceptual innovators," who produce individual breakthrough works; late peakers are "experimental innovators," whose work changes incrementally. Conceptual artists preconceive their works and make sudden radical innovations. They are "finders." Experimental artists work by trial and error and make gradual innovations. They are "seekers."
There is no denying that Galenson demonstrated that the price curve for some artists' work peaks early in their lives and then declines, while for others it rises steadily with age. But the explanation he gave was unsatisfactory. He entered the realm of psychology, but failed to subject his psychological theory to a scientific test.
To classify artists, Galenson relied partly on the artists' own reflections on how they worked. But cognitive psychology has demonstrated the unreliability of self-report. Richard E. Nisbett, for example, showed some time ago that people don't understand the causes of their behavior and cannot report accurately on their mental processes. David N. Perkins showed that artists make claims about their creative process (for instance, that they solve problems unconsciously) that don't stand up to scrutiny.
Galenson also used another kind of evidence: Artists who made changes from initial draft to final product were classified as experimental -- they searched as they painted. Those who made preliminary sketches but (supposedly) did not make major changes after starting work were classified as conceptual -- they preconceived their paintings. The problem here is that there is no principled distinction between planning a work through preliminary sketches and planning a work by making changes on the canvas -- both involve searching. In addition, it is not possible to demonstrate that an artist does not make major revisions on the canvas, since many changes get covered up with paint. However, in the case of Picasso, classified as conceptual, we do have evidence inconsistent with his classification. Not only did Picasso make many preliminary sketches for "Guernica," painted when he was 56, but we know he made major changes in the painting after he began, because we have photographs of initial and interim states of the painting.
To test the hypothesis of the two kinds of creative process in a rigorous social-scientific way (ideally by an independent researcher) would require the following. First, list the objective, measurable criteria for classifying an artist as conceptual or experimental: for example, the existence or absence of preliminary sketches; changes or no changes once the painting was begun; some measure of the degree of radicality of the innovation; and so on. Second, use those criteria to classify a new set of artists, ones for whom we do not yet know the age at which the most valued work was painted, as either conceptual or experimental. Finally, check whether those classified as conceptual turn out to have peaked early, while those classified as experimental turn out to have peaked late. That would provide an objective test of Galenson's hypothesis. Were his theory to hold up, art history would be enriched.
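To illustrate the logic of that hold-out test, here is a minimal sketch. The scoring rule, the 40-year cutoff, and the artists are hypothetical placeholders, not Galenson's or Winner's data.

# A hypothetical sketch of the proposed test: classify artists from objective
# criteria alone, then check whether the classification predicts peak age.

def classify(artist):
    """Label an artist 'conceptual' or 'experimental' from objective criteria,
    without looking at when the artist's most valued work was made."""
    score = 0
    score += 1 if artist["preliminary_sketches"] else 0      # preconception
    score -= 1 if artist["major_changes_on_canvas"] else 0   # searching on the canvas
    score += 1 if artist["radical_innovation"] else 0        # sudden breakthrough
    return "conceptual" if score > 0 else "experimental"

# Hypothetical hold-out sample: artists not used to build the criteria.
holdout = [
    {"name": "Artist A", "preliminary_sketches": True, "major_changes_on_canvas": False,
     "radical_innovation": True, "peak_age": 29},
    {"name": "Artist B", "preliminary_sketches": False, "major_changes_on_canvas": True,
     "radical_innovation": False, "peak_age": 63},
]

# The hypothesis predicts conceptual -> early peak, experimental -> late peak.
for artist in holdout:
    predicted = classify(artist)
    expected = "early" if predicted == "conceptual" else "late"
    observed = "early" if artist["peak_age"] < 40 else "late"  # 40 is an arbitrary illustrative cutoff
    print(artist["name"], predicted, "predicts an", expected, "peak; observed peak was", observed)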
In contrast to Falco's theory, Galenson's has received little attention by art historians, despite the fact that he is a major scholar at a major university, and that his book, published by a major press, makes a revolutionary claim about the value of art and the creative process. I suspect that Falco's analysis would also have been bypassed by mainstream art history had Hockney not been involved and had the discipline not been directly attacked. Art historians felt insulted by David Hockney, who dismissed their observational skills when he wrote that the clues he noticed "could only have been seen by an artist, a mark-maker, who is not as far from practice, or from science, as an art historian."
Science has already illuminated many issues about the arts. For example, Margaret Livingstone, a neurobiologist, showed in 2002 how properties of the visual system affect how we see art. She put forth a scientific explanation for why the Mona Lisa's smile has always been found to be ambiguous. When we look at her lips with our peripheral vision (not looking directly at her mouth), our vision is blurrier and her smile seems more cheerful. When we look directly at the mouth, using central vision, we have a sharper image and the smile looks less cheerful -- the smile you thought you saw disappears.
As another example, in 1993 Kay Redfield Jamison, a psychologist, documented in her book Touched With Fire: Manic-Depressive Illness and the Artistic Temperament a strong link between bipolar disorder and artistic creativity. She pointed out how the thought processes involved in the beginning stage of mania can contribute to creative work -- during this state thought is rapid, energy is high, and categories broaden so that things not ordinarily classified together might be seen as similar, fostering metaphorical thought.
In my current work (in collaboration with Gottfried Schlaug, a neuroscientist at Harvard University), we are searching for the brain basis of musical talent. We are following children over several years, as they begin to study a musical instrument, and comparing their brain growth, through the use of magnetic resonance imaging, to that of children studying a foreign language or engaging in intensive sports. After several years of instrumental study, children will be rated by music educators in terms of their level of "talent." We will then look back at their brain images to find out whether there are any brain markers of musical giftedness prior to training, and whether the way in which music training affects brain development differs in those children with musical talent.
Of course, humanists should not uncritically accept what scientists claim about the arts. To decide whether or not to accept a scientific explanation of an artistic phenomenon, one must evaluate the evidence. One has to determine whether the evidence supports the claim, and if not, how the claim could be subjected to further, decisive test. One has to think scientifically. And therein lies the problem. Humanists are not trained to think in terms of propositions testable via systematic empirical evidence. A scientific finding about the arts may therefore be unfairly rejected without a careful evaluation of the evidence. That is, I believe, what has happened in the case of Charles Falco. When art historians argue that artists did not need lenses because they were so talented, they seem not to realize that the argument does not rule out the use of lenses. When they say that artists could have used a grid to get perspective right, they seem not to realize that the use of a grid would not predict the errors.
True, Falco and Hockney did not speak to the meaning or beauty of a work, issues that engage humanists. But why didn't art historians think it important to learn how an artist created that work? When the psychologist Howard E. Gruber analyzed Darwin's creative process (in his 1974 book, Darwin on Man: A Psychological Study of Scientific Creativity), showing that Darwin's insights were gradual and incremental rather than sudden, historians of science were interested. When Frederick Mosteller and David L. Wallace used statistical methods to determine authorship of certain disputed Federalist Papers, historians listened. Leonard Meyer, a musicologist, used the science of information theory to explain our reactions as we listen to a piece of music. There is thus ample precedent for the influence of science on humanists: Knowing that artists used lenses should interest art historians because the knowledge changes our understanding of how realism emerged and also gives us a new way to experience paintings.
But scientists also have to assume certain responsibilities when they cross the line between science and art. They must remain scientists, and must subject their propositions about art to scientific standards. Otherwise they do the cause of interdisciplinary work no good. Galenson's failure to put the psychological part of his theory to scientific test weakens his position when he wonders why art historians have ignored him.
Today neuroscience is moving into the study of the arts. Brain imaging allows us to track how the brain processes works of art, what parts of the brain are involved as artists develop a work of art, and how training in an art form stimulates brain growth. Scientists who do that kind of work will need a deep understanding of the art form they are studying. Humanists and cognitive scientists are, therefore, most likely going to be teaming up more to study humanistic phenomena from a scientific perspective.
Thus the clash of cultures may be lessening. To be sure, their methods will always differ. Humanists are trained to make judgments and support them with a range of qualitative evidence and arguments; scientists are trained to test arguments with empirical, replicable evidence and to use quantification as a tool. Interdisciplinary work will flourish when both sides realize that scientific questions about art do not replace humanistic ones. They are simply different. The disciplines of the sciences and art history ought to trade insights rather than insults.

~Ellen Winner is a professor of psychology at Boston College and a senior research associate at Project Zero at the Harvard Graduate School of Education. Her recent works include Gifted Children: Myths and Realities (Basic Books, 1996)."

Peter Grimes - Sadist? Pedophile? Misunderstood Loner? Metaphor for Nazi Germany? A Look at the Origins of Britten's Greatest Operatic Character

(Tom Rosenthal, Independent) "The conception, gestation and birth of an opera do not necessarily follow a human time span, nor is the paternity as unambiguous as it is among animals. We know that Benjamin Britten composed Peter Grimes, with a libretto by Montagu Slater, based on George Crabbe's poem The Borough and that its first performances in June 1945 at Sadler's Wells theatre in London established Britten as the first major English operatic composer since Purcell and, if we accept him as a Briton, Handel. In the nearly 60 years since then, Grimes — the story of a tortured fisherman hunted to death by a hostile community — has become the only internationally recognised operatic masterpiece of the post-war period, the most performed in the most countries, and, justly, the most lauded.
Yet there is a strong case to be made that the "onlie begetter" is E. M. Forster. In 1941 Britten and his partner, the tenor Peter Pears, were both committed pacifists and conscientious objectors, far from England at war and living in America. While staying with friends in Southern California, they came across a back number of that wonderful, but now alas defunct, intellectual weekly The Listener containing the script of a radio talk by Forster. Britten wrote immediately on 29 July 1941 to his friend Elizabeth Mayer: "We've just discovered the poetry of George Crabbe (all about Suffolk!) & are very excited — maybe an opera one day ...!!" The seed so eloquently planted by Forster, who later became a friend of Britten and co-wrote the libretto for his later opera Billy Budd, took nearly four years to come to fruition but there is a nice irony in this passionate adherent of Suffolk and, eventually, Aldeburgh's most distinguished resident, discovering Crabbe only by a happy accident.
Crabbe's poem The Borough is a set of unforgettable character sketches, entitled "Letters", devoted to the principal citizens of what he calls "The Borough" but which is in fact the fishing town of Aldeburgh, not all that far from Lowestoft where Britten's father laboured two centuries later as a dentist and Mrs. Britten dreamt of her composer son as "the fourth B" after Bach, Beethoven and Brahms. (Mrs. B. clearly had no time for Bartók, Bruckner, Bruch or even Ben's first mentor and teacher, Frank Bridge, and had apparently not heard of Berlioz.)
Crabbe was a notable character. Born in 1754, he trained as an apothecary and became the parish doctor of Aldeburgh. Apart from The Borough, he wrote a long poem called Inebriety in 1775, devoted to the perils of the demon drink, with which he was well acquainted. (He also consumed heroic quantities of laudanum.) He went to London and was a friend of Edmund Burke, Sir Joshua Reynolds, Charles James Fox and other grandees, including Doctor Johnson, who recommended that the impecunious poet take holy orders. This greatly improved his circumstances and he spent many years as a curate in Aldeburgh before gaining true preferment as Domestic Chaplain to the Duke of Rutland. He died in 1832.
Ronald Duncan, the poet who did the libretto for Britten's next opera, The Rape of Lucretia, wrote perceptively of Crabbe: "His poems are social documents of a period, and because of his insights his portraits remain as timeless as Rembrandt's." And, one might add, as uncompromisingly frank.
Britten's — and Slater's — Grimes is a true tragic hero. A man of passion and intelligence, with a strong desire to succeed as a fisherman; a man who wants to marry the schoolmistress, the widow Mrs. Ellen Orford, settle down and become respectable; a man at hopeless odds with the community who is more sinned against than sinning. He is, in short, a sanitised version of the Grimes depicted by Crabbe in 1810. In Crabbe's words:

Old Peter Grimes made Fishing his employ,
His Wife he cabin'd with him and his Boy,
And seem'd that Life laborious to enjoy:
To Town came quiet Peter with his Fish,
And had of all a civil word and wish.
He left his Trade upon the Sabbath-Day.

The trouble with that passage is that it concerns our Grimes's father, who does not appear in the opera. Crabbe liked old Grimes but could not abide the son, hence his words in the Preface to The Borough: "The character of Grimes, his obduracy and apparent want of feeling, his gloomy kind of misanthropy, the progress of his madness, and the horrors of his imagination, I must leave to the judgment and observation of my readers. The mind here exhibited is one untouched by pity, unstung by remorse and uncorrected by shame."
Here is just a little of what he wrote in the poem itself:

With greedy eye he look'd on all he saw,
He knew not Justice and he laugh'd at Law;
On all he mark'd he stretch'd his ready Hand,
He fish'd by Water and he filch'd by Land:
Oft in the Night has Peter dropt his Oar,
Fled from his Boat and saught for Prey on shore ...
He wanted some obedient Boy to stand
And bear the blow of his outrageous hand;
And hop'd to find in some propitious hour
A feeling Creature subject to his Power ...
But none enquir'd how Peter us'd the Rope,
Or what the Bruise, that made the Stripling stoop;
None could the Ridges on his Back behold ...
The savage Master, grin'd in horrid glee;
He'd now the power he ever lov'd to show,
A feeling Being subject to his Blow.

There is more of this. Quite enough to show that Grimes is a sadist and, by most criminal codes, guilty at least of manslaughter if not murder. There are also hints that he buggered his wretched victims. It is more or less clear, too, that he killed his father in a rage. Crabbe's Grimes dies in delirium in the poorhouse, driven into madness and death by dreadful visions of his victims, the apprentices led on to torment him by his blood-boltered father.
There is, of course, nothing intrinsically wrong in changing Crabbe's Grimes into Britten's Grimes. The majority of great operas based on literary works radically alter the original. An opera is not a play or a novel or a short story. To labour the obvious, it is a music drama frequently, though not always, based upon a work of art in a totally different medium. The exigencies of time and space often necessitate the excision of entire characters and whole sub-plots to convey the essence of the original in an entirely different form. From Verdi's Otello to Tchaikovsky's The Queen of Spades to Britten's Grimes, composer and librettist have altered, cut, reshaped and added in order to create a new masterpiece, and to cavil at lack of fidelity to an original work of genius is to miss the point. It's neither good nor bad, it's merely different and the interest is not in the whether but in the how and why.
In Britten, it's not only Grimes who undergoes a sea-change. In the opera, Ellen Orford is a youngish, saintly person, the voice of decency and compassion in a fundamentally hostile environment. In The Borough, she gets a full "Letter", in which she is revealed as a tragic figure "burthened with error and misfortune". She is a ruined woman, her teens spoiled by a stepfather with too many children who made her "nurse and wait on all the infant race", before she is seduced and abandoned, with an idiot child, by her first lover, a gentleman "much above me". Eventually she marries another man, her children die, one of them by hanging. Widowed, she opens a little school, goes blind and, more or less, starves.
As all these extracts show, one can see the wisdom of Forster's judgement that Crabbe "is not one of our great poets. But he is unusual, he is sincere, and he is entirely of his country". One can also, therefore, see why Britten and Pears — whose own input into the creation of the opera was significant — felt an empathy for Crabbe that went beyond the analysis of the town they both loved. But one also sees their need to sanitise Grimes, to turn him from out-and-out villain into a plausible dramatic hero.
One has to remember that for much of their lives as lovers, Britten and Pears (who created the role of Grimes) were, by their conduct, risking imprisonment. They saw Grimes (in an interview with Britten) as the great outsider: "A central feeling for us was that of the individual against the crowd, with ironic overtones for our own situation. As conscientious objectors we were out of it. We couldn't say we suffered physically, but naturally we experienced tremendous tension. I think it was partly this feeling which led us to make Grimes a character of vision and conflict, the tortured idealist he is, rather than the villain he was in Crabbe." All well and good. But as that shrewd and sympathetic critic Michael Kennedy has pointed out: "Is it seriously to be doubted that 'and homosexuals' were unspoken but implied words in that statement? [After 'conscientious objectors']."
It was surely imperative that Grimes was not portrayed as homosexual. (After all, Britten did not feel able to show frankly the homosexual writer Aschenbach in Death in Venice until 1973.)
As for Grimes's sadism, that too is played down, almost concealed and only demonstrated indirectly. In the opera you never see Grimes striking his third and last boy apprentice. You only see Ellen Orford discovering his bruises. But you do get the Chorus yelling: "Grimes is at his exercise."
As for the worst aspect of Crabbe's Grimes from our contemporary standpoint — his paedophilia — there is not a hint of it in the opera. This is important, particularly because of the recent BBC television programme Britten's Children and several attendant newspaper articles. It seems clear from Britten's life and work that he loved young boys and that he created much beautiful music for them, from the role of Miles in The Turn of the Screw to the many enchanting choral works for young, unbroken male voices. But with all the opportunities for contemporary exposure there is not a shred of evidence of any misconduct. David Hemmings, who sang the role of Miles in the first production of The Turn of the Screw, was warned by his father: "You know he's a homo, don't you?" But even Hemmings senior's robust homophobia did not cause him to fear pederasty and Hemmings has gone on record as saying that nothing untoward ever happened. His only gripe was that once his voice broke, in mid-aria, and he had to be replaced by another boy, Britten never spoke to him again.
It is also probably significant that Britten initially preferred homosexual collaborators for his librettos. For Paul Bunyan he had Auden, but his essentially controlling nature — and why, after all, should he not be in charge of his operas in all their details just as Verdi and many others had been? — was upset by the way in which Auden had presented him with a fait accompli rather than a Strauss/von Hofmannsthal-type of dialogue and collaboration. For Grimes he initially wanted Christopher Isherwood, who wrote back: "It is surely good melodramatic material and may be something more than that: the setting is perfect for an opera, I should think" — but turned the project down citing pressure of other work and lack of time.
So Britten turned to the heterosexual Montagu Slater, who shared his political views and, it is probably not too cynical to suggest, being less well known and successful than Auden and Isherwood, would prove more malleable as a colleague. As anyone who has studied the Verdi/Boito or Strauss/von Hofmannsthal correspondence can testify, in any final argument between composer and librettist it is the composer who — in my view rightly — always wins. It is, after all, his music that people pay to hear, not the librettist's words.
Slater lost so many of the battles that he took the unusual step of publishing his own version of his verse libretto in book form a year after the Sadler's Wells performances and, having written for the cinema, where the director almost invariably triumphs in any argument with the writer, set out in the preface his own analogy for the creative process of writing an opera: "In writing it I worked in the closest consultation with the composer, Benjamin Britten. We worked very much as a scriptwriter and director work on a film, the composer in this case being the director. The comparison has value, because for several reasons I believe it is useful at the present moment to dwell on how much there is in common between the arts of drama, opera, radio and film."
As disavowals of collaborators go that is quite tactfully expressed, but it's still pretty obvious that there was a fair amount of friction between them, as there often was with Britten when he felt he was right.
When Grimes was revived and put on at Covent Garden the director was the redoubtable Tyrone Guthrie. The then-staff director, Ande Anderson, recalled that: "We had a brilliant production by Tony Guthrie, but Ben didn't like it at all because the accent was thrown on the sea. Ben said: 'No, it's got nothing to do with the sea. It has to do with the people in the village'. Tony said: 'But Ben, the sea made the people what they were', and Ben replied: 'No, these people would be the same wherever they were'."
Britten, once having created Peter Grimes, could not be in charge of it in perpetuity. If a Guthrie, an acknowledged master in his own field, could earn his disapproval, so could other interpreters. For many of us the defining performance of Grimes is that of the Canadian Jon Vickers, who performed the role in the Sixties and Seventies. While the part may have been written, for fairly obvious reasons, for Peter Pears, it was Vickers, with a voice no less distinctive than that of Pears but of radically different timbre, who most brilliantly and movingly brought the persecuted fisherman to life. Yet he and Britten fell out badly, not because Vickers was ever anything other than a great artist, but because he would occasionally ride roughshod over the score, changing words, eliding syllables here and there, creating different emphases — and Britten was justifiably furious. There is a strange irony in this in that Vickers, more than any other interpreter of the role I've heard, evoked the elements of tragedy, pity, terror and catharsis with raw, unforgettable emotional power. Yet his view of Grimes's character was quite startlingly at odds with that of his creator. In an interview Vickers claimed that Grimes is "totally symbolic" and that he, Vickers, could "play him as a Jew" or "paint his face black and put him in a white society" while at the same time maintaining that "I will not play Peter Grimes as a homosexual" because this "reduces him to a man in a situation with a problem and I'm not interested in that kind of operatic portrayal".
From its opening performances the opera attracted strong opinions. Within days, the conductor of the 38 bus travelling along Rosebery Avenue was heard to declaim, "Next stop Sadler's Wells Theatre to hear the sadistic fisherman Peter Grimes."
Joan Cross, who sang Ellen Orford at the premiere, recalled the reception at the final curtain: "At first we didn't know. There was silence at the end and then shouting broke out. The stage crew were stunned: they thought it was a demonstration. Well it was, but fortunately of the right kind."
The great American literary critic Edmund Wilson, who heard it in an early performance in 1945, thought at first that it was an opera about war and that Grimes was Germany. While there were several favourable notices from British music critics, there were many dissenting voices. The editor of Music Review thought it an "opera virtually without melody" and described the music as "poverty stricken". Several used that ultimate English condemnation, "clever". Even Neville Cardus, reviewing the 1947 Covent Garden production, felt that, "Grimes is not a strong enough character, not psychologically realised. For this reason the opera cannot strike the authentic tragic note."
But there were many supporters. Philip Hope-Wallace thought it the most important operatic event since Hindemith's Mathis der Maler and was the first critic to point out the connections with Shostakovich's Lady Macbeth of the Mtsensk District. William Glock wrote of the Prologue that, "could Verdi have been there he would have sat back in admiration, if not always in comfort".
Perhaps the most appreciative as well as favourable student of the work was Desmond Shawe-Taylor, who noted that many people had chorused along the lines of, "At last! After so many amateurs, a professional composer of operas!" "If they are right," wrote Shawe-Taylor, "it is none too soon. With the death of Puccini and the long decrescendo of Strauss, the species looked like becoming extinct." In fact the last internationally successful opera had been Puccini's Turandot (1926) and how interesting to think that Shawe-Taylor, for many years the doyen of music criticism in this country, ignored Janáček in his assessment. But then Janáček (who died in 1928) was more or less unknown in Britain in 1945. For Shawe-Taylor, "one can scarcely avoid seeing in Benjamin Britten a fresh hope, not only for English, but for European opera".
Edmund Wilson, once he had got over his bizarre Second World War analogy, was another staunch, and perceptive, admirer: "You feel, during the final scenes, that the indignant, shouting, trampling mob which comes to punish Peter Grimes is just as sadistic as he. And when Balstrode gets to him first and sends him out to sink himself in his boat, you feel that you are in the same boat as Grimes."
To return to the "onlie begetter" of Peter Grimes, E. M. Forster, it was surely fitting that at the first Aldeburgh Festival in 1948 — and one wonders whether it could have been so beautifully established and managed by Britten and Pears without the more or less instant success of Grimes three years earlier — a lecture was given by Forster entitled "George Crabbe and Peter Grimes". In it the novelist muses on how he might himself have effected the transposition from poem to operatic stage:
"It amuses me to think what an opera on Peter Grimes would have been like if I had written it. I should certainly have starred the murdered apprentices. I should have introduced their ghosts in the last scene, rising out of the estuary, on either side of the vengeful greybeard, blood and fire would have been thrown in the tenor's face, hell would have opened, and on a mixture of Don Juan and the Freischütz I should have lowered my final curtain. The applause that follows is for my inward ear only. For what in the actual opera have we? No ghosts, no father, no murders, no crime in Peter's part except what is caused by the far greater crimes committed against him by society. He is the misunderstood Byronic hero. In a properly constituted community he would be happy, but he is too far ahead of his surroundings, and his fate is to drift out in his boat, a private Viking, and to perish unnoticed while work-a-day life is resumed. He is an interesting person, he is a bundle of musical possibilities, but he is not the Peter Grimes of Crabbe."
A new production of Peter Grimes — which I have seen in its Brussels staging — opens at Covent Garden next week. The director Willy Decker has certainly emphasised the outsider elements. Decker is not supervising the London version; that is in the hands of his assistant director François de Carpentries, with whom I discussed Decker's interpretation. When I asked whether, when Grimes appears in the Prologue before Lawyer Swallow's inquest on his dead second apprentice, we are to assume that the heavy coffin Grimes bears is meant to contain the boy's corpse, he agreed that this was so. He demurred at the idea that it was also to recall images of Christ carrying his cross, but said that it was a symbol of Grimes's feelings of guilt. He also confirmed that for this production Decker has delved deeply into Crabbe as well as Britten. However, he stated that there was not meant to be any indication of sexual abuse in the marvellously directed scene between Grimes and the new boy apprentice in Grimes's hut. Only fear and the uncertainty caused by that fear are manifested by the boy.
When I asked about what is, for me, the most startling aspect of the production, the scene in which Balstrode (the sea captain) and Ellen Orford, Grimes's only loyal supporters in the Borough, join their fellow citizens in the final chorus after Grimes had drowned himself, de Carpentries said that this was to indicate their wish to stay alive as part of the community. They did not join in the hunting and the destruction of Grimes but they do need to survive.
Perhaps all that this shows, in this extraordinary, emotionally shattering opera, is that no matter how much Britten has altered Crabbe's original, no matter how hard and successfully he has laboured to create a tragic hero, Crabbe's view of this dysfunctional fisherman is too deeply ingrained ever to disappear entirely."

Be warned, this could be the matrix: The multiverse theory has spawned another - that our universe is a simulation, writes Paul Davies.

(Sydney Morning Herald) "If you've ever thought life was actually a dream, take comfort. Some pretty distinguished scientists may agree with you. Philosophers have long questioned whether there is in fact a real world out there, or whether "reality" is just a figment of our imagination.
Then along came the quantum physicists, who unveiled an Alice-in-Wonderland realm of atomic uncertainty, where particles can be waves and solid objects dissolve away into ghostly patterns of quantum energy.
Now cosmologists have got in on the act, suggesting that what we perceive as the universe might in fact be nothing more than a gigantic simulation.
The story behind this bizarre suggestion began with a vexatious question: why is the universe so bio-friendly? Cosmologists have long been perplexed by the fact that the laws of nature seem to be cunningly concocted to enable life to emerge. Take the element carbon, the vital stuff that is the basis of all life. It wasn't made in the big bang that gave birth to the universe. Instead, carbon has been cooked in the innards of giant stars, which then exploded and spewed soot around the universe.
The process that generates carbon is a delicate nuclear reaction. It turns out that the whole chain of events is a damned close-run thing, to paraphrase Lord Wellington. If the force that holds atomic nuclei together were just a tiny bit stronger or a tiny bit weaker, the reaction wouldn't work properly and life might never have happened.
The late British astronomer Fred Hoyle was so struck by the coincidence that the nuclear force possessed just the right strength to make beings like Fred Hoyle that he proclaimed the universe to be "a put-up job". Since this sounds a bit too much like divine providence, cosmologists have been scrambling to find a scientific answer to the conundrum of cosmic bio-friendliness.
The one they have come up with is multiple universes, or "the multiverse". This theory says that what we have been calling "the universe" is nothing of the sort. Rather, it is an infinitesimal fragment of a much grander and more elaborate system in which our cosmic region, vast though it is, represents but a single bubble of space amid a countless number of other bubbles, or pocket universes.
Things get interesting when the multiverse theory is combined with ideas from sub-atomic particle physics. Evidence is mounting that what physicists took to be God-given unshakeable laws may be more like local by-laws, valid in our particular cosmic patch, but different in other pocket universes. Travel a trillion light years beyond the Andromeda galaxy, and you might find yourself in a universe where gravity is a bit stronger or electrons a bit heavier.
The vast majority of these other universes will not have the necessary fine-tuned coincidences needed for life to emerge; they are sterile and so go unseen. Only in Goldilocks universes like ours where things have fallen out just right, purely by accident, will sentient beings arise to be amazed at how ingeniously bio-friendly their universe is.
It's a pretty neat idea, and very popular with scientists. But it carries a bizarre implication. Because the total number of pocket universes is unlimited, there are bound to be at least some that are not only inhabited, but populated by advanced civilisations - technological communities with enough computer power to create artificial consciousness. Indeed, some computer scientists think our technology may be on the verge of achieving thinking machines.
It is but a small step from creating artificial minds in a machine, to simulating entire virtual worlds for the simulated beings to inhabit. This scenario has become familiar since it was popularised in The Matrix movies.
Now some scientists are suggesting it should be taken seriously. "We may be a simulation ... creations of some supreme, or super-being," muses Britain's astronomer royal, Sir Martin Rees, a staunch advocate of the multiverse theory. He wonders whether the entire physical universe might be an exercise in virtual reality, so that "we're in the matrix rather than the physics itself".
Is there any justification for believing this wacky idea? You bet, says Nick Bostrom, a philosopher at Oxford University, who even has a website devoted to the topic (http://www.simulation-argument.com). "Because their computers are so powerful, they could run a great many simulations," he writes in The Philosophical Quarterly.
So if there exist civilisations with cosmic simulating ability, then the fake universes they create would rapidly proliferate to outnumber the real ones. After all, virtual reality is a lot cheaper than the real thing. So by simple statistics, a random observer like you or me is most probably a simulated being in a fake world. And viewed from inside the matrix, we could never tell the difference.
Or could we? John Barrow, a colleague of Martin Rees at Cambridge University, wonders whether the simulators would go to the trouble and expense of making the virtual reality foolproof. Perhaps if we look closely enough we might catch the scenery wobbling.
He even suggests that a glitch in our simulated cosmic history may have already been discovered, by John Webb at the University of NSW. Webb has analysed the light from distant quasars, and found that something funny happened about 6 billion years ago - a minute shift in the speed of light. Could this be the simulators taking their eye off the ball?
I have to confess to being partly responsible for this mischief. Last year I wrote an item for The New York Times, saying that once the multiverse genie was let out of the bottle, Matrix-like scenarios inexorably follow. My conclusion was that perhaps we should retain a healthy scepticism for the multiverse concept until this was sorted out. But far from being a dampener on the theory, it only served to boost enthusiasm for it.
Where will it all end? Badly, perhaps. Now the simulators know we are on to them, and the game is up, they may lose interest and decide to hit the delete button. For your own sake, don't believe a word that I have written.

~Paul Davies is professor of natural philosophy at Macquarie University's Australian Centre for Astrobiology. His latest book is How to Build a Time Machine."

Everything you wanted to know about 'The Simulation Argument' but were afraid to ask. . .

"This website features scholarly investigations into the idea that you might currently be literally living in a computer simulation, running on a computer built by some advanced civilization. Films like The Matrix and novels like Greg Egan's Permutation City have explored the idea that we might be living in virtual reality. But what evidence is there for or against this hypothesis? And what are its implications? The original paper featured here, "Are You Living in Computer Simulation?", presents a striking argument showing that we should take the simulation-hypothesis seriously indeed, and that if we deny it then we are committed to surprising predictions about the future possibilities for our species.

Note: The paper has attracted a huge amount of attention, both from academic philosophers and scientists and from the more general audience; it's also been widely covered in the media. Rather than attempting to provide a comprehensive set of links to all this, I have selected here a few of the most substantial contributions and commentaries. (For the rest, there is always Google.) I try to do my best to reply to email enquiries, but occasionally the volume just gets too big; so my apologies in advance if you write and I fail to get back. - N.B. . . ."

(click title for links)

From the archives: Nick Bostrom: The Simulation Argument: Why the Probability that You Are Living in a Matrix is Quite High

(Times Higher Education Supplement) "The Matrix got many otherwise not-so-philosophical minds ruminating on the nature of reality. But the scenario depicted in the movie is ridiculous: human brains being kept in tanks by intelligent machines just to produce power.
There is, however, a related scenario that is more plausible and a serious line of reasoning that leads from the possibility of this scenario to a striking conclusion about the world we live in. I call this the simulation argument. Perhaps its most startling lesson is that there is a significant probability that you are living in a computer simulation. I mean this literally: if the simulation hypothesis is true, you exist in a virtual reality simulated in a computer built by some advanced civilisation. Your brain, too, is merely a part of that simulation. What grounds could we have for taking this hypothesis seriously? Before getting to the gist of the simulation argument, let us consider some of its preliminaries. One of these is the assumption of “substrate independence”. This is the idea that conscious minds could in principle be implemented not only on carbon-based biological neurons (such as those inside your head) but also on some other computational substrate such as silicon-based processors.
Of course, the computers we have today are not powerful enough to run the computational processes that take place in your brain. Even if they were, we wouldn’t know how to program them to do it. But ultimately, what allows you to have conscious experiences is not the fact that your brain is made of squishy, biological matter but rather that it implements a certain computational architecture. This assumption is quite widely (although not universally) accepted among cognitive scientists and philosophers of mind. For the purposes of this article, we shall take it for granted.
Given substrate independence, it is in principle possible to implement a human mind on a sufficiently fast computer. Doing so would require very powerful hardware that we do not yet have. It would also require advanced programming abilities, or sophisticated ways of making a very detailed scan of a human brain that could then be uploaded to the computer. Although we will not be able to do this in the near future, the difficulty appears to be merely technical. There is no known physical law or material constraint that would prevent a sufficiently technologically advanced civilisation from implementing human minds in computers.
Our second preliminary is that we can estimate, at least roughly, how much computing power it would take to implement a human mind along with a virtual reality that would seem completely realistic for it to interact with. Furthermore, we can establish lower bounds on how powerful the computers of an advanced civilisation could be. Technological futurists have already produced designs for physically possible computers that could be built using advanced molecular manufacturing technology. The upshot of such an analysis is that a technologically mature civilisation that has developed at least those technologies that we already know are physically possible, would be able to build computers powerful enough to run an astronomical number of human-like minds, even if only a tiny fraction of their resources was used for that purpose.
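To get a feel for why the number comes out astronomical, here is a minimal back-of-envelope sketch in Python. Every constant in it is an illustrative assumption, not a figure taken from the article; the point is only the shape of the calculation, in which the operations available dwarf the cost of simulating whole lifetimes.

# Back-of-envelope sketch of the "astronomical number of minds" step.
# Every constant is an illustrative assumption, not a figure from the article.
OPS_PER_SECOND = 1e42        # assumed total computing power of a mature civilisation
OPS_PER_MIND_SECOND = 1e17   # assumed cost of simulating one human mind for one second
SECONDS_PER_LIFE = 100 * 365.25 * 24 * 3600   # a simulated lifetime of roughly 100 years
RESOURCE_FRACTION = 1e-6     # only a tiny fraction of resources devoted to simulations
RUNTIME_SECONDS = 1e9        # roughly 30 years of running the simulations

ops_available = OPS_PER_SECOND * RESOURCE_FRACTION * RUNTIME_SECONDS
ops_per_life = OPS_PER_MIND_SECOND * SECONDS_PER_LIFE
print(f"simulated lives: {ops_available / ops_per_life:.1e}")
# With these made-up inputs the result is on the order of 1e18 simulated lives,
# against the roughly 1e11 humans estimated ever to have lived.

Shift any of the assumed inputs by several orders of magnitude and the conclusion survives, which is all the sketch is meant to show.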
If you are such a simulated mind, there might be no direct observational way for you to tell; the virtual reality that you would be living in would look and feel perfectly real. But all that this shows, so far, is that you could never be completely sure that you are not living in a simulation. This result is only moderately interesting. You could still regard the simulation hypothesis as too improbable to be taken seriously.
Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chance that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small
(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours
(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).
While the full simulation argument employs some probability theory and formalism, the gist of it can be understood in intuitive terms. Suppose that proposition (1) is false. Then a significant fraction of all species at our level of development eventually becomes technologically mature. Suppose, further, that (2) is false, too. Then some significant fraction of these species that have become technologically mature will use some portion of their computational resources to run computer simulations of minds like ours. But, as we saw earlier, the number of simulated minds that any such technologically mature civilisation could run is astronomically huge.
Therefore, if both (1) and (2) are false, there will be an astronomically huge number of simulated minds like ours. If we work out the numbers, we find that there would be vastly many more such simulated minds than there would be non-simulated minds running on organic brains. In other words, almost all minds like yours, having the kinds of experiences that you have, would be simulated rather than biological. Therefore, by a very weak principle of indifference, you would have to think that you are probably one of these simulated minds rather than one of the exceptional ones that are running on biological neurons.
So if you think that (1) and (2) are both false, you should accept (3). It is not coherent to reject all three propositions. In reality, we do not have much specific information to tell us which of the three propositions might be true. In this situation, it might be reasonable to distribute our credence roughly evenly between the three possibilities, giving each of them a substantial probability.
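The "weak principle of indifference" step can be reduced to a one-line fraction. In the sketch below (my own illustration; the symbols and numbers are not drawn from the article), f stands for the fraction of civilisations at our stage that go on to run such simulations and n for the number of simulated minds each produces per biological mind; the share of all minds that are simulated is then f*n / (f*n + 1), which rushes towards 1 as soon as n is astronomically large.

# Illustrative arithmetic for the indifference step; the inputs are assumptions,
# not figures from the article.
def simulated_share(f, n):
    """Share of all human-like minds that are simulated, if a fraction f of
    civilisations run simulations and each yields n simulated minds per
    biological mind."""
    return (f * n) / (f * n + 1)

for f in (1.0, 0.01, 1e-6):
    print(f"f = {f:g}: simulated share = {simulated_share(f, 1e12):.6f}")
# Even if only one civilisation in a million runs simulations, a
# trillion-to-one ratio of simulated to biological minds still leaves the
# simulated share at about 0.999999.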
Let us consider the options in a little more detail. Possibility (1) is relatively straightforward. For example, maybe there is some highly dangerous technology that every sufficiently advanced civilisation develops, and which then destroys it. Let us hope that this is not the case.
Possibility (2) requires that there is a strong convergence among all sufficiently advanced civilisations: almost none of them is interested in running computer simulations of minds like ours, and almost none of them contains any relatively wealthy individuals who are interested in doing that and are free to act on their desires. One can imagine various reasons that may lead some civilisations to forgo running simulations, but for (2) to obtain, virtually all civilisations would have to do that. If this were true, it would constitute an interesting constraint on the future evolution of advanced intelligent life.
The third possibility is the philosophically most intriguing. If (3) is correct, you are almost certainly now living in a computer simulation that was created by some advanced civilisation. What kind of empirical implications would this have? How should it change the way you live your life?
Your first reaction might be to think that if (3) is true, then all bets are off, and that one would go crazy if one seriously thought that one was living in a simulation.
To reason thus would be an error. Even if we were in a simulation, the best way to predict what would happen next in our simulation is still to use the ordinary methods – extrapolation of past trends, scientific modelling, common sense and so on. To a first approximation, if you thought you were in a simulation, you should get on with your life in much the same way as if you were convinced that you are living a non-simulated life at the bottom level of reality.
The simulation hypothesis, however, may have some subtle effects on rational everyday behaviour. To the extent that you think that you understand the motives of the simulators, you can use that understanding to predict what will happen in the simulated world they created. If you think that there is a chance that the simulator of this world happens to be, say, a true-to-faith descendant of some contemporary Christian fundamentalist, you might conjecture that he or she has set up the simulation in such a way that the simulated beings will be rewarded or punished according to Christian moral criteria. An afterlife would, of course, be a real possibility for a simulated creature (who could either be continued in a different simulation after her death or even be “uploaded” into the simulator’s universe and perhaps be provided with an artificial body there). Your fate in that afterlife could be made to depend on how you behaved in your present simulated incarnation. Other possible reasons for running simulations include the artistic, scientific or recreational. In the absence of grounds for expecting one kind of simulation rather than another, however, we have to fall back on the ordinary empirical methods for getting about in the world.
If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so. Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).

~Nick Bostrom is a British Academy postdoctoral fellow in the philosophy faculty at Oxford University. His simulation argument is published in The Philosophical Quarterly."

From the archives: Nick Bostrom: ARE YOU LIVING IN A COMPUTER SIMULATION?

(Philosophical Quarterly) "ABSTRACT

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. . . ."

(click title for full text)

Dave Johnson: It Started in April -- Smearing the Report

(Seeing the Forest Weblog) "There are three well-planned, coordinated Republican smear operations underway, designed to discredit key accusers who told us that the Bush administration was asleep on the job before 9/11.
I'll bet if you took a poll of voters, at least 40% of the likely voters would call these the biggest stories of the election, while most of those on our side of the political spectrum have barely even heard about them. The context of the smears is to destroy the credibility of those accusing Bush of not paying attention before 9/11, and of lying about WMD before the Iraq war, and, finally, to blame Clinton for all of it. It is ALL OVER the Right-wing media, but is largely "under the radar" for most of us.
The first smear is the "Gorelick memo." This is a bit complicated, but is a key to this effort to shift blame from Bush to Clinton. It started in April, and lays the groundwork for the second smear I'll be talking about. During the 9/11 hearings Attorney General Ashcroft accused former Clinton Justice Department official Jamie Gorelick of having written a memo that caused agencies of the government to not share information that would have been crucial to learning that the 9/11 plot was underway. It is called the "Wall of Separation" memo. (Note the allusion to the hated "Wall of Separation" between church and state.) Some background from an April National Review column:
"In his public testimony before the 9/11 Commission the other day, Attorney General John Ashcroft exposed Commissioner Jamie Gorelick's role in undermining the nation's security capabilities by issuing a directive insisting that the FBI and federal prosecutors ignore information gathered through intelligence investigations. But Ashcroft pointed to another document that also has potentially explosive revelations about the Clinton administration's security failures. Ashcroft stated, in part:
... [T]he Commission should study carefully the National Security Council plan to disrupt the al Qaeda network in the U.S. that our government failed to implement fully seventeen months before September 11.
The NSC's Millennium After Action Review declares that the United States barely missed major terrorist attacks in 1999 -- with luck playing a major role. Among the many vulnerabilities in homeland defenses identified, the Justice Department's surveillance and FISA operations were specifically criticized for their glaring weaknesses. It is clear from the review that actions taken in the Millennium Period should not be the operating model for the U.S. government.
What is Ashcroft talking about? An article in Reader's Digest, "Codes, Clues, Confessions" (March 2002; by Kenneth R. Timmerman), provides some valuable insight. It states, in part:
[. . .] When the Department of Justice began interviewing "Norris"/Ressam, they didn't have a clue who he was. But Judge Bruguière did. He called the Department of Justice, and offered prosecutors his file on Ressam and his ties to al Qaeda. At the time, Bruguière said, DOJ had no idea what a big catch they had, nor did DOJ have access to any intelligence about Ressam's ties to al-Qaeda. Ultimately, because of "The Wall" Bruguière had to testify for seven hours in Seattle to lay out the al Qaeda connection to help U.S. prosecutors make their case against Ressam.
In other words, the "wall of separation" constructed by Jamie Gorelick made it virtually impossible for U.S. authorities to stop Ahmed Ressam, the "Millennium Bomber," by design or intention. It was left to blind luck. The NSC's Millennium After Action Review -- which, based on Attorney General Ashcroft's testimony, must be devastating in its analysis of not only this event but of the Gorelick policy -- remains classified. And, most significantly, it's likely the Review's criticisms and warnings were either ignored or rejected by the Clinton Justice Department. ..."
More on this later. (Other related April Gorelick stories from the Right here, here, here, here, here, here, here, and so on. This just touches the surface of the attention the Right gave to this.)
The second is this week's Sandy Berger smear. If you just read the newspapers, it doesn't seem like a big deal. But if you pay attention to the Right's channels of communication, it is a very big deal. On talk radio it is the ONLY thing.
The NY Times has a little story about Berger today, A Kerry Adviser Leaves the Race Over Missing Documents.
Mr. Berger's aides acknowledged that when he was preparing last year for testimony before the Sept. 11 commission, he removed from a secure reading room copies of a handful of classified documents related to a failed 1999 terrorist plot to bomb the Los Angeles airport. Republicans accused him on Tuesday of stashing the material in his clothing, but Mr. Breuer called that accusation "ridiculous" and politically inspired. He said the documents' removal was accidental.
No big deal.
But all day yesterday on Limbaugh's show, and Beck's, and others, it was a different story. Limbaugh, Trousergate: Serious: Theft of Papers Showing Al-Qaeda in US Under Clinton is HUGE:
"The 9/11 commission leaked this. This is a 9/11 commission leak, I think, and I'm wondering. The White House claims they didn't know about this investigation, even though the justice department was doing it. I'll tell you what this does. This puts this into even greater context. You remember when Ashcroft showed up and testified on television even before the commission and outed Jamie Gorelick with her memo that built the wall? I think this places a lot of that in greater context now, why he did that. I think he might have been -- he couldn't discuss the investigation, but he was letting everybody know what he did know. [. . .] When he went in there to "inadvertently" purloin these documents and stuff 'em down his pants, there was no Clinton administration. He was sent in there by Bill Clinton, not the Clinton "administration."
[. . .] Here I am laughing about it, but it's big. This is big, and I'll tell you why. It's the stuff that was stolen, the stuff that's probably now been shredded, the stuff that he just inadvertently sloppily can't find.
[. . .] You know what those documents contained? Elements of evidence that Al-Qaeda was in the country in 1999! It's all part of this millennium plot that the Clinton administration tried to take a lot of credit for stopping when in fact it was just good police work by a single Customs agent. It was not the results of any directive. This all came out in the 9/11 commission report as well, or hearings. But what's missing is that there are documents elevating, or detailing elements of, Al-Qaeda entry into the United States in 1999, and so when Sandy Burglar says, "Yeah, well, I was sent by the Clinton administration," da-da-da-da-da-da-da-da-da-da, of course he was sent there by Bill Clinton to get the evidence out. That's what one of the suspicions is, because the whole point of all this has been to shove every bit of Al-Qaeda, 9/11 blame onto the Bush administration. So, you know, none of this is an accident. You don't go in there and inadvertently take things out when you're the national security advisor! You know what the rules are.
[. . .] And you know who he's working for now is John Kerry. Now, how much of what he saw did he pass on to John Kerry? Is it time maybe for John Kerry to have something to say about this? I mean, look at two of Kerry's advisors: Joe Wilson -- now patented liar -- and Sandy Berger, thief. Well, presumed, alleged thief. Oh, he admitted it. He's a thief. He admitted he took the documents, a sloppy, sloppy thief. I think it's time for Senator Kerry here to maybe tell us a little bit more than just that he went to Vietnam: what he thinks of some of his advisors.
[. . .] Now, look, there are many of us, uh, ladies and gentlemen, who suspect that one of the objectives of the 9/11 commission Democrats is to deflect any blame or association for any acts of terrorism on this country to inaction or lackadaisical behavior, laziness on the part of the Clinton administration -- and the reason we believe this is because we know that the Clinton people have been hauling ass trying to rewrite a legacy for this man.
They have been doing everything they can to erase the Monica Lewinsky image from everybody's frontal lobe when they think and hear the name Bill Clinton, and so Clinton has been doing everything he can to rehab his image. He has a very large coterie of loyal supporters, one of whom is on the 9/11 commission, one of whom should have been a witness, not a member -- one of them, Jamie Gorelick, whose memo erected the wall that prevented intelligence from sharing information it gathered with law enforcement, and now we find out that Sandy Burglar, Clinton's #1 spook outside of the CIA. I mean this is the national security advisor guy!
[. . .] So you will pardon us if we have some doubts and suspicions about this when it's the critical assessments that are suspiciously missing. The former national security advisor himself, Sandy Burglar, had ordered his anti-terror czar Richard Clarke in early 2000 to write the after-action report. He has spoken publicly about how the review brought to the forefront a realization that Al-Qaeda had reached America's shores and required more attention. That's what's missing. Berger testified that during the millennium period, "We thwarted threats, and I do believe it was important to bring the principals together on a frequent basis to consider terror threats more regularly."
[. . .] Now, let's go back, and ask: "What is this really all about, folks?" because this, despite the obvious humorous aspects, this is really serious stuff because there is an ongoing effort to spare the Clinton administration -- and Bill Clinton personally -- of any responsibility whatsoever for anything that has happened deleteriously to this country in the world of terrorism.
[. . .] And something very, very suspicious about this information that was never put into action, and that's I think another reason why it's vanished. But this information clearly illustrates and I think points out how Al-Qaeda in 1999 and 2000 are in the country, and the United States government knew it, and they didn't put any plan into action to deal with it, and that's what they are deathly afraid of having been seen. So Sandy Berger has fallen on the sword -- and as Webb Hubbell had to do, may have been asked to roll over here. The information was so obviously damning that he risked his career and freedom to take this information out of there and do who-knows-what with it, and that means, folks, that that report and those documents related to it provided advice and information relevant to the 9/11 attacks, some kind of complete breakdown which was not improved later otherwise it wouldn't have been necessary to get rid of it, and that's the bottom line. Take all this sloppiness out. Take all this inadvertently out." [all emphasis added]
Let me clear one thing up - nothing is "missing". The documents that Berger took out were copies of drafts of the memos. But the entire premise of Limbaugh's - and the rest of the Right's - massive explosion yesterday is that Berger took and shredded the only copies of documents criticizing Clinton. It is just a lie. But it is repeated and repeated and repeated -- and Limbaugh's audience is very large. And for those that missed it on Limbaugh the same story was on every other right-wing talk show I tuned in to yesterday.
The third component is the Joe Wilson story. Joe Wilson is the guy who went to Niger, came back and said Iraq was not trying to buy uranium, and went public with this after Bush claimed Iraq WAS trying to. So in retaliation the Bush administration "outed" his wife, a covert CIA agent tracking down people who peddle WMDs. In preparation for the Berger story, and to counter the damage done by the White House's "outing" of his wife, the Right has been circulating a new batch of lies about Wilson. In A Right-Wing Smear Is Gathering Steam, Wilson writes,
"For the last two weeks, I have been subjected ? along with my wife, Valerie Plame ? to a partisan Republican smear campaign. In right-wing blogs and on the editorial pages of the Wall Street Journal and the National Review, I've been accused of being a liar and, worse, a traitor."
This story is all over the Right-wing media. From the same Limbaugh show,
"I mean, look at two of Kerry's advisors: Joe Wilson -- now patented liar -- and Sandy Berger, thief."
In other words, don't believe anything you may have heard about the White House "outing" a CIA agent, and, by extension, anything about Bush lying about WMDs in Iraq.
Did you wonder why the Republican machine made such a big deal about Gorelick, and demanded that she resign from the 9/11 Commission? Well, now we know -- it was all preparation for this week. So, we have Gorelick, and by extension Clinton, preventing the government from sharing information. We have Wilson, and by extension Clarke and other accusers, discredited. And now we have Berger, the guy who led the effort to stop the Millennium bombing and who was trying to get the incoming Bush Administration to pay attention to al Qaeda, discredited.
Note that even Limbaugh credits Ashcroft with setting this all up in his April testimony to the 9/11 panel, obliquely referencing the things that were "leaked" this week. Remember, by April the entire Berger situation was over. But Ashcroft knew about it and they were using it to weave this tale to discredit critics of Bush.
And why this week? Because this week the 9/11 Commission releases its report. And what happened was that the Clinton Administration was ALL OVER the terrorism threat, while the Bush Administration ignored it and went on vacation. That is the essence of what happened. That's the big picture. So how do they counter that? The same way they're countering ANOTHER big picture - that Kerry is a war hero and Bush didn't show up for even the light duty his daddy had arranged for him. How they do that is they spread a fog of smears so thick that people lose track of what really happened.
As Richard Clarke told us, when the government detected increased "chatter" in 1999 they TOOK ACTION. They convened a task force to see what was going on, and put top people on the problem, and coordinated, and stayed up nights, AND THEY CAUGHT THE MILLENNIUM BOMBERS. Contrast that with the Bush Administration before 9/11 -- on vacation, literally. And the 9/11 commission report comes out this week, and it is probably going to SAY that. Even if they don't explicitly say that, it's there and it will be the story. The Clinton Administration did their job, and governed. The Bush administration was never about governing, and we all have to live with the consequences.
So the Republicans have to knock this story down. The way Republicans fight back is with smears to discredit their accusers. They constructed a three-part discrediting operation, phased in over time and coming to a conclusion just before the commission releases its report."