28 April 2010

The Intellectual Situation

A Diary

We love a parade. Up and down Fifth Avenue, voices ring against the burnished buildings.

“WE WON! WE WON!”

Who won? Won what?

“Something to do with the internet,” says a kneeling mailman, reaching with his chained key to open an olive-green postbox on the curb.

“WI-FI! WI-FI!”

The marchers split into cells, pushing off in crowds of sixes and sevens. Like torn-off dough hurled back into the kneaded mix, they enact a complex choreography: blob, stretch, blob, blob; then, at a whistle, they grapevine into another blob, trailing ropy bands of demonstrators.

“SER-VER ERR-OR! WE WON! FA-TAL EX-CEP-TION!”

Confetti rains down from the cornices, as trombone and tuba blow “Happy Days Are Here Again.”

A flying wedge splits the full street, pantsless, its constituents taking pictures of one another without pants. “NO PANTS DAY!” they cheer.

Lawrence Lessig passes by on a float drawn by free-range chickens.

“THIS IS WHAT THE INTERNET LOOKS LIKE! THIS IS WHAT THE INTERNET LOOKS LIKE!”

“Looks like a Verizon commercial,” says a white-haired dog walker, his schnauzer tangled up with us on the sidewalk.

“This is a protest against the skeptics!” retorts a 30-something man with a soul patch. He hands us a leaflet. “Get out of the new road if you can’t lend a hand! This is a demonstration! Read our program!”

But the leaflet is blank.


INTERNET AS SOCIAL MOVEMENT

Alexander Blok was enchanted by the Bolshevik Revolution. The leading poet of the pre-revolutionary symbolist school, Blok and his pale handsome face had been freighted in the years before 1917 with all the hopes and dreams of the Russian intelligentsia. In early 1918, when that intelligentsia was still making fun of the crudeness, the foolishness, the presumption of the Bolsheviks—the way contemporary intellectuals once made fun of Wikipedia—Blok published an essay urging them to cut it out. “Listen to the Revolution,” he counseled, “with your bodies, your hearts, your minds.”

Three years later, Blok was dead, and Vladimir Mayakovsky, the tribune of the Revolution, wrote his obituary. “Blok approached our great Revolution honestly and with awe,” Mayakovsky wrote. But it was too much for him: “I remember, in the first days of the Revolution, I passed a thin, hunched-over figure in uniform warming itself by a fire near the Winter Palace. The figure called out to me. It was Blok. We walked together. . . . I said, ‘How do you like it?’ ‘It’s good,’ said Blok, and then added: ‘They burned down my library.’”

A group of peasants had torched Blok’s country house. Blok, however, refused to choose between the “good” he saw in the Revolution and the burned library, Mayakovsky wrote. “I heard him this past May in Moscow. In a half-empty auditorium, he read some old poems, quietly and sadly, about Gypsy songs, about love, about a beautiful woman—the road led no further. Further on was death, and it came.”


Ninety years later, we are living through a different revolution. Like the Russian one, it will seem in retrospect—may already seem—like a smooth inexorable process, but was in fact a series of discrete advances: First, the creation of easy-to-use web interfaces (the first recognizable browser, Mosaic, launched in 1993) and blogging platforms (Movable Type, 2001), which enabled non-specialists to navigate and publish on the web. Second, the improvement of search technology, so that search spam could be weeded out and relevant results delivered (the most radical advance in this field was made by a Stanford graduate student named Larry Page in 1998; his PageRank algorithm would also prove the eventual financial salvation of the internet, via search-based advertising). Third, the digital integration of various media other than text (through, first, their easy digitization, and then the increase in bandwidth that allowed their continuous broadcast), including music, photos, and videos, so that more and more things could be placed online. Fourth, most recently, the spread of the internet to wireless and handheld technology, which has freed the web and its user from the shackles of the deskbound networked computer.
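(A technical aside, for the curious: the intuition behind PageRank is that a page matters if pages that matter link to it, and that this importance can be computed by letting rank flow repeatedly along the links. The sketch below, in Python, is only an illustration of that idea, not Google's implementation; the toy link graph, the damping factor, and the iteration count are hypothetical, chosen to make the recursion visible.)

# Minimal illustrative sketch of the PageRank idea: repeated "power iteration"
# over a link graph. Not Google's code; all parameters here are hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # a dead-end page shares its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                 # otherwise rank flows out along the page's links
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy three-page web: both "a" and "b" link to "c", so "c" ends up ranked highest.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))

On this toy web the most linked-to page comes out on top, which is the whole point: ranking by the structure of links rather than by keywords alone.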

All of this was difficult, amazing, perplexing, astonishing—but so was the laying of the railroads and the sending of telegraph signals across the ocean. And historians of technology like to point out that great fanfare and promises have greeted all sorts of new devices, from the radio to the fax machine. But even before former Grateful Dead lyricist John Perry Barlow penned his “Declaration of the Independence of Cyberspace” (“Governments of the industrial world,” it began, “you weary giants of flesh and steel”), the internet was no mere fax machine.1 From the first, and in no small part because of its fervent supporters, it has felt less like a technology and more like a social movement—like communism, like feminism, like rock and roll. An ideology we could call webism. While the rest of us look up movie times, buy sweaters, and post jihadi videos, the webists proclaim the new age.

In its purest form, webism comes from a specific place: California. The computer and the internet spent their childhoods there. If the rhetoric of the webists sometimes sounds like nothing so much as a mutant futuristic strain of hippie-speak, this is why. Stewart Brand, creator of the great hippie handbook Whole Earth Catalog (1968–72, mostly), was a firm believer in technology as a pathway to a better, more liberated life; influenced by the techno-transcendentalism of geodesic dome–builder Buckminster Fuller and the oracular techno-apocalyptic pronouncements of Marshall McLuhan, Brand was the founder of one of the first online communities, the WELL, which in turn influenced the founding editors of Wired. Almost all the great computer companies and innovations have come from a very small stretch of California known as Silicon Valley, which is essentially an extension of Stanford University. The founders of Hewlett-Packard, Yahoo!, and Google all came from Stanford—as did Stewart Brand. An hour north lies San Francisco, the historic home of the counterculture. And as Fred Turner—a communications professor at Stanford—has convincingly argued, it is a mixture of the technophilia of Stanford and the countercultural ethos of San Francisco that has created the ideology of the web as we know it. The first business venture of Steve Jobs and Steve Wozniak, the founders of Apple, both college dropouts who grew up in Silicon Valley, was to build little circuit boxes to steal long-distance service from the phone company in the early ’70s. They sold them in the dorms of nearby universities—for $100 you could get one of these little circuit boxes and save some money on your long-distance bill. Best of all, with this gadget you could stick it to the man. Thus the era of the great freeload began.

Computers, initially, were meant to keep track of weapons and personnel in the era of mass warfare; the internet to keep alive flirtatious intragovernmental email in the event of nuclear attack. Silicon Valley sought to effect an ideological reversal. “One person, one computer,” Apple sloganeered in the early ’80s; “the web is for everyone,” Netscape said when it launched its first browser in 1994. From the mechanism of our mass administration, the computer would be the means of our individual liberation. Another early Apple slogan: “Why 1984 won’t be like 1984.” This was pronounced at the end of the famous commercial (directed by Ridley Scott and aired during Super Bowl XVIII) in which a young female athlete threw a sledgehammer at a gigantic screen on which Big Brother was delivering his latest motivational lecture (“We have created, for the first time in all history, a garden of pure ideology,” Big Brother was saying). Which reminds us of the other, related source of utopian webism: the collapse of a utopian dream elsewhere, in Soviet Russia. There is something uncanny about the fact that Tim Berners-Lee wrote his first proposal for the world wide web in March 1989, eight months before the Berlin Wall came down.


“It’s not a revolution if no one loses,” leading webist Clay Shirky has written. The first ones the internet revolution came for were the travel agents, those nice people who looked up flight times and prices for you on a computer, before you could do it yourself at home. Then Captain Kirk returned from the future to zap them all. Next to fall was the music industry. And have you been to a mall lately? (Have you been outside lately?) Ultimately, very few industries were unaffected. Google, built on Larry Page’s search algorithm, is now a giant corporation, perched with its $22 billion in annual revenue right next to Delta Airlines, Coca-Cola, and Bristol-Myers Squibb. (Apple, at $32 billion, is in even more exalted company.)

And then the internet came for the print media. This process has been longer, more intricate, and more emotionally fraught than the interaction of the internet with any other media.

At first the idea was merely to transfer some aspects of the print world online, with slight wrinkles. The early web magazines were Slate, Salon, Feed, and Suck—as with the best print magazines, you could count them on the fingers of one hand. Some used the unlimited space of the internet to run longer features; others used the limited attention span of internet readers to run short. Slate (funded by Microsoft) was a lighter-hearted New Republic, minus the great book reviews; Feed (funded by venture capitalists) was a more earnest version of the Village Voice. Things changed after the crash of the tech stocks in 2000. At this point the founding editor of Feed, Steven Johnson, announced that Feed and its sister webzine Suck were folding and being replaced by something called Plastic.com. Plastic.com was a new kind of site: a news aggregator. User-contributors would post links to interesting articles, with a summary, and then everyone would discuss them. This would be called “user-generated content.” It was the future of the internet, Johnson explained. Here was a man who’d burned through more than a million dollars of funding by paying a living wage to his writers and editors to produce a high-quality product that competed with traditional print media. The world, it turned out, was not ready for that. It’s still not ready.

At the time, the world wasn’t ready for Plastic.com, either. In the years to come its formula would be copied with some vulgarization and more success by sites like Reddit and Digg. News aggregator blogs like Boing Boing and Gawker would also find glory in curating and annotating news items (and only then inviting commenters in). But the apotheosis of user-generated content would come with the rise of the social networks, where the content being generated by users was not just links to interesting news items but entire photo albums, playlists, recipes, recommendations . . . in short, entire selves.


Web 2.0 has been revelatory in lots of ways—user-generated naked photos, for one—but the torrent of writing from ordinary folks has certainly been one of the most transfixing. Over the past five years the great American public has blogged and Tweeted and commented up a storm and fulfilled a great modernist dream: the inclusion, the reproduction, the self-representation of the masses. Walter Benjamin spoke of “modern man’s legitimate claim to being reproduced” by film, a claim denied modern man by the capitalist film industry; James Joyce’s Leopold Bloom lamented the fact that the wisdom of the street found no outlet in literature. Now, through a million open channels, the wisdom of the people is represented, and they can write back to power—or at least to posters of YouTube videos. A lot of this writing has been insightful, strange, and witty. A comparable amount has been racist, homophobic, misogynistic—and a great many people have simply posted very cute photos of their pets.

It is unfortunate (though also logical) that this desacralization of the written word should be taking place simultaneously with the economic destruction of the once proud print institutions. One can imagine a world in which a million voices declared that the Times was a piece of shit—and yet the Times marched on. In fact that world existed for a hundred years. Remember Noam Chomsky in Manufacturing Consent, demonstrating with twenty years of painstakingly collected press clippings that the Gray Lady was misrepresenting the plight of East Timor, Burma, Nicaragua? Remember Rick Perlstein explaining that David Halberstam’s reporting in the early 1960s pulled us into Vietnam? The New York so-called Times helped prop up dictatorships (as well as our two-party system), pushed the increasing technophilia of our culture, was a patsy for the Bush Administration’s Iraq War strategy—and intolerably elitist to boot. To denounce the imperialist Times was a rite of passage for young American leftists. And yet, as the Trotskyist Irving Howe once wrote, “Blessed New York Times! What would radical journalism in America do without it?” For all its defects, if you read to the end of the endless articles you got most of the facts. It was the best and most comprehensive newspaper in the world.

In the past five years, no institution has wrestled with the implications of the internet as painfully, and as publicly, as the New York Times. It has devoted tremendous resources to keeping its website updated with fresh stories, and it has assigned some of its best young talent to the various Times blogs. Cursed by its own authority, and the limitations this authority places on what it can say and do, the Times has been outhustled or outshined or simply mocked by the blogosphere, but has persevered. The Times has also devoted as much room to the “story” of the new media as anyone. One of its best critics, Virginia Heffernan, now writes almost exclusively about the internet; one of the paper’s most commented-on stories in the past two years was a Times Magazine essay about the life of a compulsive blogger. (There were so many comments, and many of them were so angry, that the Times shut the comment thread down.)

Often all this attention to the new media has been to the detriment of serious reporting: last year during the protests after the rigged presidential election in Iran, there was almost as much in the Times about Twitter, and the “Twitter Revolution,” as there was about the situation on the Iranian streets. At other times, the obsession with new media has led to strange outbursts—as when the writer of a piece on Robert Caro’s monumental 1,200-page biography of Robert Moses suddenly and entirely irrelevantly bemoaned the “age when sentence fragments on a blog pass for intellectual argument.” Even as the institution itself was struggling desperately to adapt, this sort of dig at the internet emerged from the editorial desk on a regular basis, like a cry of pain.

And then there was the Times’s media critic David Carr. It fell to Carr to describe the destruction of his way of life. In the face of collapse, Carr was stoical. He did not sing and dance, but neither did he moan and weep. He wrote a memoir of his crack addiction, and (in a move to out-tradition any traditionalist, in the age of the partly fake memoir) fact-checked it. But Carr also agreed to flatter the self-regard of the young: “For every kid that I bump into who is wandering the media industry looking for an entrance that closed some time ago, I come across another who is a bundle of ideas, energy and technological mastery. The next wave is not just knocking on doors, but seeking to knock them down.” This just a few days before various websites broke the news about the hundred Times staffers taking a “buyout” and leaving the paper for good. The door being knocked down was to Carr’s house.

On the other hand, Carr had 245,000 followers on Twitter—microblogging waited like an escape helicopter on his roof.


The webists met the Times’s schizophrenia with a schizophrenia of their own. The worst of them simply cheered the almost unbelievably rapid collapse of the old media, which turned out, for all its seeming influence and power, to be a paper tiger, held up by elderly white men. But the best of them were given pause: themselves educated by newspapers, magazines, and books, they did not wish for these things to disappear entirely. (For one thing, who would publish their books?) In fact, with the rise of web 2.0 and the agony of the print media, a profound contradiction came into view. Webism was born as a technophilic left-wing splinter movement in the late 1960s, and reborn in early ’80s entrepreneurial Silicon Valley, and finally fully realized by the generation born around 1980. Whether in its right-leaning libertarian or left-leaning communitarian mode it was against the Man, and all the minions of the Man: censorship, outside control, narrative linearity. It was against elitism; it was against inequality. But it wasn’t against culture. It wasn’t against books! An Apple computer—why, you could write a book with one of those things. (Even if they were increasingly shaped and designed mostly so you could watch a movie.) One of the mysteries of webism has always been what exactly it wanted, and one of the paradoxes that emerged during the long death of print was that the webists wanted to help. They wanted to “spread the word” about new books. They wanted to “make reading exciting again.” Over and over, in increasingly irritated and sometimes outright aggressive tones (most recently at the “New Think for Old Publishers” panel at last year’s SXSW), they urged the print companies to learn how to “use the web.” They meant this in all sincerity; their anger at the publishers for failing to “use” them properly was proof of this. But to urge the “use” of something was to think of it as merely a technology. It was to forget that the amazing and powerful thing about the web was precisely that it was not a toaster; it was not a hammer. The web could not simply be “used.”

In the end one got the sense that the Times was going to be all right—that it was taking in so much of the internet (neighborhood blogging, idea blogging, slide shows, video) that eventually, after many missteps, it would hit upon the right formula. (Even if it declined to take most readers with it—see “Addled,” below.) Much less easy to imagine was a situation in which the book publishers could be made whole again. The internet could certainly be used to sell physical books (Amazon overtook Barnes & Noble as the largest bookseller in the US in early 2007); no doubt it can also be used to sell digital copies of books. But these will be the same old books, repackaged a little to fit the file requirements of the e-platforms. And to those searching for the “new think,” that will be—already has been—disappointing.

There’s a very good reason that publishers’ moves in the direction of webism—setting up author websites, blogging boringly about new books, Tweeting obligatorily about positive reviews—have been so tepid and lame when compared with the struggle of the New York Times. Ultimately, untranscendably, the publisher needs to sell you a book for $20, or $10, whether you download it or buy it at a big-box store. And if you’re a webist, and believe in crowd-sourcing, collaboration, and above all in free, you may not be buying. The confusion surrounding the internet’s relation to the book has been created by the fact that many webists emerged from the culture of the book (rather than television, say); that they themselves genuinely liked books; and that communications online took place in the medium of text. “The internet is the largest group of people who care about reading and writing ever assembled in history,” posited the SXSW publishers’ panel in 2009. But what kind of reading, what kind of writing? The internet is the largest group of people ever assembled, period. Some join Infinite Jest discussion groups. Others can’t read to the end of a wire story. Book-length literature is the product of certain historical conditions, of a certain relationship to written language. Assimilate book-ism to webism and the book looks like nothing so much as an unreadably long, out of date, and non-interactive blog post.


“The Russian Revolution,” Wired founding editor Louis Rossetto once said, “was like a schoolyard game compared to the change that’s been driven by the digital revolution.” It’s an interesting comparison. In 1917, the Bolsheviks seized power in the world’s largest country, moved the capital to Moscow, terrorized their enemies, wrote poems in praise of themselves. They began tearing down the monuments they didn’t like, and building new ones. And, of course, in the 1930s, under Stalin, they started terrorizing and murdering people who weren’t their enemies at all.

Artistically, the revolution helped usher in an explosion of public creativity—in theater, architecture, and film it was the era of the triumph of the avant-garde. After about a decade, the explosion was stifled, and the history of that stifling, under Stalin, is always read as tragedy. And it was a tragedy. But socialist realism in literature and film, Stalinist architecture on the streets, “folk” paintings in the museums and metro stations—these were more popular, by far, than the products of the avant-garde. The rejection of the avant-garde did accord with the conservative tastes of Stalin and his circle, but it also accorded with the tastes of the great Soviet people. Someone had to denounce the Formalists, the Constructivists, the theater of Meyerhold, to Stalin.  Someone told the police about Mandelshtam’s anti-Stalin poem. It came from above, but it came even more from below. Between high modernism and Stalinism, Stalinism definitely got more hits.

History of course does not repeat itself quite so neatly. History, as the great Bolshevik-Trotskyist writer Victor Serge once said, is a series of rooms, and one needs to keep opening the doors. So, yes, the internet these days displays Stalinoid tendencies (been “denounced” on the internet lately? Give it a minute), but that doesn’t mean the commenters will soon be lining us up against the wall. And, yes, the most successful, innovative sites on the internet are mostly devoted to celebrity gossip, but that doesn’t mean they won’t eventually be supplanted. The nobler goals of this revolution are to disseminate information to parts of the world that do not have it, to strengthen democracy, to give a voice to everybody, and to speak truth to power. At the same time, if you believe that the internet is a revolution, then you must take seriously the consequences of that revolution as it is. The mistake that many supporters of the Bolsheviks made was to think that once the old order had been abolished the new order would be fashioned in the image of the best of them, rather than the worst. But the revolution is not just something you carry inside you; the web is not your dream of the web. It is a real thing, playing out its destiny in the world of flesh and steel—and pixels, and books. At this point the best thing the web and the book could do for one another would be to admit their essential difference. This would allow the web to develop as it wishes, with a clear conscience, and for literature to do what it’s always done in periods of crisis: keep its eyes and ears open; take notes; and bide its time.


Higher up, in the fastnesses of lower Midtown, we see a little crowd between the lions of the 42nd Street Public Library. The lions seem like bookends, propping up the decorous line of bespectacled young women and aged, swaying men in navy blazers ranged across a middle step.

What’s this? we ask.

“PEN’s introducing a petition they got up. Against the internet. There’s Arundhati Roy! And—Baryshnikov, that’s Baryshnikov!”

“Him with the beard, that’s Doctorow, I t’ink,” says a Bronx voice.

Ohmygod, we say, joining in, that’s . . . Henry James. A portly, waistcoated gentleman has stepped frontstage, smoothing his ascot and smiling while a nervous intern holds, near his lips, a mic.

“One had had intimations, from the first—not just glimmerings, but positively flashings, as from the packet boat’s lantern, when it bore the ‘mail sack’ in—that the new sorts of messages in transit, from station to station, upon wires buzzing unseen as a spirit telegraph, would develop new centres. Living upon the old route, and trying to recover, blamelessly, the emoluments and exactions requisite to his trade, the proprietor of a coach-house might very well cry thief! at the digital guides, as if the shilling spent to speed a communication, by any artifice unknown to his father’s generation, were bread taken from his children’s mouths. So, too, the older champions of the novel, our blest form, who raised the alarm from her ‘thousand windows.’ No less than these, the clerks at the counting desks of the blest revues made their cry; even at the ‘daily paper’ which has provided half the reading—if I shouldn’t perhaps say, the half-reading—of our North American cities, for a century and more, the monitors warned that the printed traffic upon which they charged the toll had so slackened, as not to remunerate the minding. Our press would go to ruin.”

“Who is that, Anthony Hopkins?”

“It’s a movie they’re making, they’re making The Ambassadors, see, but it’s Henry James himself, he’s his own hero, in love with Chad Newsome. James Franco’s playing in it, that’s why he’s in school at Columbia.”

“It’s like, what, it’s Shakespeare in Love?”

“Hey,” a new arrival says, “why’s Scrooge McDuck at the Public Library?”

“Nevertheless, the conveyances of thought and life will find a way round. In twitterings, then; in open ‘web’ ‘logs,’ as those of captains rounding the Horn, once; or, apprenticed to the wiki, by replasterings, improvements, to the encyclopedia of the many; mightn’t a young person conscious of life, and assured a living as by independent means, have a desire to ‘log on,’ and put in, or daub his corner of the portrait of the age, illuminated by so many fellow amateurs, though without tincture of gold?”

“That’s great CGI! That’s, like, Oscar-worthy! Jar Jar Binks, baby, meet HJ! Can’t you see that green screen behind him?”

But isn’t a green screen where the effects go, and mustn’t the person before it be real?

“Yeah, ‘real’! Ten to one it’s Lucas! Industrial Light and Magic!”

“Yet the young person conscious of the felt life, still, might sense, I find, that the image so figured would be, at best, but half the matter. The dreadnought revue drew more water; and its liability, in these present shallows, to run aground, inspirational of laughter, meant, even unknown to its pilots, an irresistible stirring of the depths, in soundings as long, at least, as its column-inches. Nor are most of our young people freed from anxiety of means by traveling ‘online.’ Until that old drab, money-getting, revisits his quarters, however much she bring down the bon ton, the young person of feeling can afford, as far as literary feeling goes, but touch-up work, shading, only, of a morsel of canvas his predecessor might have used just to light his pipe; while his regular footstool, at the easel, where he grubs for hireling work, may be any moment handed over to another young copyist, who takes less time.”

The climax to the speech is lost when the microphone loses power, and Henry James doesn’t know to stop talking. We are all slow to applaud, like the symphony audience not wanting to mistake the performance for over. The elegant figure steps back into shadows. There is a scent of laurel. The Master has gone back to Elysium. “Hey, this Henry James,” says the intern for an online stalking site. “That’s his real name?”


ADDLED

Pick up a newspaper or magazine these days and you find yourself judging its health by the quantity of advertising. Harper’s, the Nation, the New Republic—they are pitifully bare of ads. “Page” (online, of course) through an old copy of the New Yorker, look up Edmund Wilson’s essays on the Dead Sea Scrolls, and feel the self-confidence of another age: almost three pages of ads for every column of text. Reading the magazine online brings out an analogy that a physical copy would obscure—the huge ads, dominating the text, remind you of nothing so much as a flashy website.

A big mystery of the internet has been why the online editions of newspapers and magazines can’t make money when, with huge skyscraper ads covering half the homepage, their websites so closely resemble the most successful publications of the past. These aren’t regular old newspaper ads either but what amount to TV ads—all the better, you’d think, since you can click through to buy the product on offer without picking up a phone. What’s more, the New York Times has ten times as many readers online as it does in print (15 million versus 1.5 million)! Amid all the anxiety about the future of journalism it’s easy to overlook the absurdity of the situation: the Times is going bankrupt—while showing more ads to more readers than ever before.

What happened? One standard answer is that advertisers overpaid for ad placement in the past, and now the Gray Lady, confronted with precise readership metrics, is finally getting paid the pittance she always deserved. This seems implausible: could perpetually rationalizing, efficiency-maximizing capitalism really have misjudged the efficacy of print advertising for more than a century? Another notion is that Google, by removing the ad men from the transaction, has dropped the glamorizing “sizzle” of the hard sell—an idea only Don Draper could buy.


The numbers don’t add up. As average internet usage has risen from six hours a week in 2004 to twelve hours a week in 2009, time spent with TV, radio, and magazines has held about steady. Part of this can be accounted for by the advent of workplace computers, which, as no one familiar with the devices will be surprised to learn, has not led to any revolution in productivity: thanks to YouTube, you can now watch old music videos and get paid. Once you get home, you add the new medium to the old ones. Over the past decade, while taking out a second mortgage, Americans also bought new flatscreen TVs and equipped the whole family with laptops and cell phones. Gathered round the dinner table, we can watch TV, check email, and text all at once. In this way the Blade Runner nightmare of a universe wallpapered with ads—huge corporate blimps projecting ads onto the walls of buildings—has given way to a nimbler and more domestic reality. Along with ads on every public surface there are now ads in our homes—not just on the TV stand but atop our desks, on our laps, and in our pockets. The ads line our intimate communications on Gmail and Facebook, and with the development of the Kindle, the Nook, and the iPad, they will before long infiltrate our books.

With so many new surfaces available to ads, newspapers will never make close to what they formerly earned, no matter how often we reload the Times website. As the space open to advertising continually expands, the value of each individual ad must correspondingly decline. Of course, ad revenue could go up if companies started increasing ad budgets, but over the past ninety years, through the rise of TV, radio, and the internet, total advertising spending has remained almost constant at between 2 and 3 percent of GDP. Ads themselves are premised on the infiniteness and malleability of human desire; ad budgets, on the other hand, recognize the relatively fixed and inelastic nature of disposable incomes.

It’s at this intersection of ever expanding advertising, stagnant median income, and constant ad budgets that journalism will have to live. The primary theory of the internet economy still comes from Chris Anderson, prophet-in-chief of Wired, who back in 2004 envisioned the world of the long tail, in which a few cultural commodities (Dan Brown novels) would retain huge popularity and account for half of sales, while the other half would come from selling single copies of a massive variety of more obscure items (n+1 Issues 1 through 8). This model, based largely on Amazon.com, works okay so long as what’s sold retains its value over time—unlike, say, newspaper articles. Yet journalists and editors were peculiarly well disposed to placing a low initial value on their work. The music, film, and book industries all fought piracy from the start; most newspapers simply gave away the store. Lulled by decades of massive ad profits, newspapers thought of subscriptions as little more than fees for printing and delivery; it seemed only natural, where physical costs were eliminated, to drop subscriptions as well.

In retrospect, it’s apparent that the commercial liability of newspapers and national magazines was the same as their cultural strength: they addressed issues of general interest in an all-purpose public sphere. But to advertisers this civics-class “everybody” was a consumer “nobody”: it meant the press didn’t know who its audience was, or what they could afford. To pack a reporter off to Congo or Pakistan was to spend a lot of money catering to a phantom demographic. When this was the best that advertisers could do, it’s what they did: if Macy’s was holding a sale, it advertised it in the front section of the paper between news of the defense of Kinshasa and the latest scandal in Congress, figuring that “everybody” saw it, one way or another. If Ford had a new truck to market, off it went in search of football games to interrupt. But how much more reasonable and efficient—for everyone, really—to advertise clothing sales to people who want clothing, and Ford trucks not to sports fans, but to people in the market for a truck. Before, the advertisers had to guess; now, with all the information we provide with keyword searches, on social networks, and in emails, advertising can be more precise. On top of that, the “content” of social networks, email, search engines, blogs—it somehow magically produces itself, that is to say the users produce it, that is to say it’s free. The extension of advertising to the domain of private chatter undermines the competitiveness of anything that costs more than private chatter to produce. Marx blamed the below-subsistence wages of the proletariat on the reserve army of labor; the below-subsistence revenues of the Times can be blamed on the reserve army of the social networks.

In the past we imagined a regime of total advertising as thoroughly desensitizing: you would be shown more and more ads for more and more things you didn’t need and couldn’t even want. Thus David Foster Wallace’s Year of the Depend Adult Undergarment, a calendarwide publicity campaign of questionable effectiveness for those under 60, and Year of the Whopper, less than suitable for the growing Hindu market. Truly perfect advertising, the ad-topia of the near future, will be different—personalized as well as pervasive. Tucked inside the Year of the Whopper will be, for one person, Boca Burger “Original Vegan” Week and for another person a month of Smartwater™ Sundays. Instead of desensitizing, it will be hypersensitizing. It may even be useful.


Today we Google ourselves to see what the world knows about us; tomorrow we’ll just watch the ads. The outlines of this can already be discerned in Gmail’s sometimes tactless data mining of your emails: write a friend that your cat has died and you learn, cruelly, of discounts on litter. And the extension of precision-guided advertising into social life is also there to be seen in Facebook’s “friend recommendations,” where, once we’ve added all our close friends and colleagues and vague acquaintances, we see nothing but ads for people we know of but can’t possibly ask to “friend” us—exes of significant others, secret crushes, CEOs. Personalized advertising will first solicit our minimal discretionary income and then, accidentally, show us what we badly want but can’t have.

In Brooklyn the cable companies already personalize TV ads based on demographics, and they’re about to expand the program nationwide. Our cell phones meanwhile find where we are on a street map and show sales in nearby stores—they can even send a coupon to flash to the clerk for a personalized discount. Our cell phone provider therefore knows where we go, how long we stay there, and which coupons we actually use. The company can use this information for its own advertising purposes, or sell it to another company. Combine all the different data streams and you get what’s called “reality mining.” Irwin Gotlieb, the “King of Advertising,” whose firm GroupM controls $60 billion of ad buys worldwide, gives a little taste of what this future will look like:

Today, if I decide I need to sell a high-end watch, who’s the prospect? I can identify people with discretionary income. I can identify males or females fifty or older. But down the road, I will know you’re a watch collector because I will have that data on you. How? I will know your purchase behavior. A lot of retailers have loyalty programs, and they will share this information. If consumers have searched on Google or eBay to look at watches, all these searches are data trails. So instead of assuming that because you’re wealthy you might buy a watch, I can narrow my target to the small percentage of watch collectors.

The upside is we won’t have to look at watch ads anymore. The downside: pick up a copy of even today’s slender New Yorker—about forty pages shorter than an issue from two years ago—and check out the ads that remain. There’s nothing you can afford, but don’t get offended—imperfect advertising is a dying form of progressive taxation. Not so many people are in the market for a Rolex, but they have to print the same magazine for everybody. So all subscribers shell out the same $47 a year—a cost held down by the surtax paid by a few Rolex-buyers. With perfect advertising, we won’t be able to keep riding the coattails of all the ads intended for our betters.

Newspapers and magazines (including the Times) say they’re going to start charging for online content, but the most likely way they’ll survive is by capitalizing on their tony readership. People refer to the success of the Wall Street Journal and the Financial Times, which make the most popular stories available for free (“Bear Stearns CEO Smokes Pot”) while hiding everything else that’s fit to print behind a pay wall. The next stage in online newspapers will be to throw a few scraps of gossip to the public at large; offer free and total access to readers wealthy enough to be worth advertising to; and charge everyone else on a per-story basis. In this way we will fork over a few bucks at a time to read reviews of the hardcovers we can’t buy (automatically sent to the iPads of the rich, with ads of course) and to read recaps of the Springsteen concert we can’t attend (sponsored by the Depend Adult Undergarment—tickets delivered by SMS to a “lucky” few). This is the real meaning of Chris Anderson’s latest book, Free: everything will be free, to those with purchasing power. This model may even save the New York Times—but not for those who can’t afford it.


Night falls. We walk the border of the Park, past the green padlocked book kiosks. We’ll just step in at this gate.

Someone is gently strumming a guitar. Navigating by the sound, we ought to be able to find the path here, even in the darkness.

Stumble—slip—crash. Twisted ankle. Roots of trees. Childish voices, as we’re surrounded by an unearthly blue light. A street gang, coed, holds open its cell phones like torches, with little girls and boys pushing in between them to see. “Yer out a little late for an oldster,” growls a goth-lashed, red-haired Rapunzel.

A child’s voice says, “Tell us what it was like. Before the internet.”

“Yes, yes,” the innocents say. “Before. Is it true? Was it better?”

“Well.” We’re stumped. “You certainly had more free time. And you didn’t have to look everything up—all those things, that you don’t need to know. You’d wait. Or use your memory.”

“I told you, we could do that!” the first girl says to the group, looking around the circle. “We could write things with pens! And, like, if we have to go someplace, we have those old maps, and you can use them if you know the street names!”

“Right,” one of the teens says derisively, “but how’d you ever make plans to meet anybody?”

“You’d agree to meet somewhere. In advance. Or you’d call them.”

“On a cell phone?” pipes a little girl.

“There were phones on the street. In glass booths. Even in the subway station, reception didn’t matter. You wrote everyone’s phone number down in small leather-bound books.”

“You mean you killed animals to save people’s phone numbers?” one of the kids screams.

The littlest ones are led away quickly, while we bump chests with an aggressive punked-out kid, denim vest over studded leather jacket. He’s sporting a necklace of flash drives. “U come wiv us! Now!”

The group quick-steps us deep into the park, stopping at Bethesda Fountain. “Show it?” the thug asks the group.

“Show it,” the group replies.

We turn under an arch, into a stone doorway we’ve never noticed. In the middle of a short corridor, there’s a stairway into the depths. Following the glowing Bluetooth headsets, like will-o’-the-wisps, we descend to the gang’s hideout.

The walls are decorated with scribblings on yellow legal paper and stick-figure drawings around a poster of a Leonardo man-in-circles from the Metropolitan Museum gift shop.

There at the back of the little room—in the direction of all eyes—we can make out a long table, with a billboard rising vertically out of the far end. And a familiar face. Jane Fonda—her legs in thigh-high fur boots, held apart in a shooter’s stance—is pointing a space blaster at some red-eyed, bearded astro-primitive. A knob protrudes from the front, and, on each side, a little trigger. We study the bumpers, ramps, the points we need to win an extra play.

The leader turns to us mistily, clears his throat. His indoor speaking voice is surprisingly gentle, despite the eyeteeth filed to points.

“We tried to boot it up. Can you make it work, old man?”


CAVE PAINTING

“That deaf, dumb, and blind kid / sure plays a mean pinball!” the Who sang about the eponymous hero of their rock opera Tommy. And when the audience responded too rowdily to one live performance, the drummer Keith Moon is said to have yelled back, “Have some respect! It’s a fucking opera!”

Tommy was widely understood at the time to be campaigning for the aesthetic dignity of rock and roll, a battle that has long since been won. Less apparently, this was also the opening salvo in a similar battle on behalf of games: “arcade games” at the time, and computer games as we know them now. Computer games are the latest cultural form to benefit from the collapse of the old and now embarrassing categories of high-, low-, and middlebrow. Once a slightly seditious form of loafing in teenage wastelands of the ’70s, games have won ever greater cultural legitimacy in our own unibrow period. Their promotion has followed the by now predictable trajectory of the post-’60s transvaluation of values. First games cast off the vaguely masturbatory funk of shame that came with fiddling knobs, buttons, and joysticks while doing stuff mostly inside your own head. “Everything bad is good for you,” Steven Johnson declared about the digital games that displaced the analog ones, celebrating games “that have no fixed narrative path, and thus reward repeat play with an ever-changing complexity.” These games, vastly more sophisticated than Tommy’s pinball machine or the Atari consoles of the ’80s, made children smarter, Johnson claimed, and prepared them for the competitive and insecure labor market they would enter as adults.

A next level of respectability required infiltrating academia. The easiest way was to go through the perpetually crisis-ridden, terminally confused literature departments. Under the heading of “New Literacy Studies,” Palgrave, an academic imprint of Macmillan, brought out What Video Games Have to Teach Us About Learning and Literacy (2003). The University of Pennsylvania’s Institute for the Future of the Book sponsored Ken Wark’s Gamer Theory (2007), a book of many brilliant insights, complete with an online supplement of responses and comments, but not a book about books or their future. Finally, the New York Times, having dropped the “Leisure” from its old “Arts and Leisure” rubric (everything was art now), started running video game reviews instead of stories about whether Grand Theft Auto induces teens to kill.

Yet a certain outsider sense of grievance, part of the avant-garde script from Courbet to Keith Moon, still prevails among gamers. Writing in the London Review of Books, the critic and game aficionado John Lanchester complained that “from the broader cultural point of view, video games barely exist.” He was referring to the arts pages of dead-tree newspapers and journals, which, true, don’t cover computer games in proportion to either the hours or the dollars we spend on them. In China and other economies less moribund than our own, you can even get a factory job as a gamer, acquiring “virtual gold” and special virtual weapons, which your company then sells for actual dollars to other (recreational) players from once-wealthy nations who are looking to save time on their way to the top of one or another virtual hierarchy. And what do the gamers-for-hire do during their downtime? The Times tells us that they blow a lot of their money on arcade games. Only, here, at last, they play for themselves! That kind of irony has yet to make it into any computer game, no matter how avant-garde it is.

Lanchester allowed that computer games would never tell us as much about character as other forms of narrative, but pointed out two great virtues of the form: “The first is visual: the best games are already beautiful, and I can see no reason why the look of video games won’t match or surpass that of cinema. The second is to do with this sense of agency, that the game offers a world in which the player is free to act and to choose.” And both points are right. The best games do look great, and we do have a lot of choice, not just inside game worlds but among them. Raised on the flashing cursors of Zork, we’ve learned to adore the newer, pert, pretty avatars, so much sexier and more powerful than we’d ever dare imagine ourselves. We too have played the games with lush graphics inspired by Breughel and Bosch and Kurosawa; the first-person shooter games; the strategy games in intricately wrought alternate worlds or ages past; the Sims; the online worlds of Warcraft and Second Life; the sports hero simulations and guitar hero simulations. Even the Beatles (if not yet the Who) are a video game now.

We have sometimes played these games until dawn peeps through the airshaft window. Go and lie down, and the game replays itself on your retina. Part of your brain is now imprinted, perhaps forever, with a map of feudal Japan, and the exact position of your armies at the moment you decided—unwisely—to chance your band of samurai against a much larger group of peasant spearmen. Another bad decision was to spend your allotment of rice recruiting 10 samurai instead of 200 peasants. Elitist! Worse yet was the moral debate, before the console, about whether to reboot at the moment right before disaster—or to samurai on, in the lifelike knowledge that things weren’t working out exactly as planned.

But do these games, in fact—as Lanchester and many others claim—amount to art? What Lanchester doesn’t seem to notice is that the two traits he names, of beauty and goal-oriented participation, work against one another. Or so, once upon a time, most philosophers of art would have claimed. For Kant, disinterestedness was the hallmark of aesthetic experience, which temporarily suspended the private desires and wishes of the viewer, reader, or listener. And the experience of playing games is nothing if not interested, the desire to win being almost the definition of an “interest.”

This naturally has consequences for beauty. Art-beauty is not the same as being good-looking, or else Bond movies might be the most beautiful films ever made. The beauty of an image within a story depends on its place within an irreversible narrative. A famous example: toward the end of Lolita, Humbert Humbert hears the cries of children playing (non-video) games outdoors. A nice sound no matter what, some would say. But the beauty is changed if you find yourself thinking, as Humbert does, “The hopelessly poignant thing was not Lolita’s absence from my side, but the absence of her voice from that concord.” The contemporary video game, no matter how technologically perfect, has no capacity for the beauty that comes from the unrebootable.

There is a moral difference too. A tragic video game would require that you never cheat, turn off the computer when you’ve screwed up, or save the game at a point when things are going well. Even then, the tragedy witnessed would only be the tragedy of your second life, not the life of an independent entity in whom you could take a disinterested interest. Video games encourage you to identify rather than sympathize—That’s me! you say, not I feel for him.

So from the standpoint of Kant’s “purposiveness without a purpose,” the answer to the question Are video games art? appears to be an emphatic no. Kant’s was a theory of spectatorship, not participation. An art object allows our minds to play freely over it, not with it; it may fill us with joy or terror for somebody else, but these impersonal feelings are no spur to any action or skill-set enhancement. Not that this musty question of aesthetics would matter much if only video games were at issue in the issue of video games. But the preference for identification over sympathy pervades the contemporary reception of nongame narratives too. What’s a more common complaint about a novel these days than that its main character isn’t “relatable,” i.e., available to readerly identification? Meanwhile, attempted defenses of artistic “difficulty” succumb to the utilitarianism of a Steven Johnson: a few years back, Ben Marcus championed difficult fiction for the workout of the brain’s Wernicke’s area that it provided. On these grounds, you might consider Cortázar’s Hopscotch usefully difficult—but not as much so as the Saturday crossword puzzle. If video games have turned out to be art, then what has art turned out to be?

History has given Kant’s “pleasure without interest” a beating. In German philosophy, the rout began with Nietzsche’s felicitous borrowing of a phrase from Stendhal: “Beauty is only the promise of happiness.” “Who is right?” Nietzsche asked himself, “Kant or Stendhal?” Since the ’60s, American (which is to say global) culture has opted for Stendhal, or at any rate for an aesthetics of pleasure and gratification.

The post-’60s culture consumer no longer wants to be a passive spectator or a mere appreciator, neither of the free beauties of nature nor of autonomous human endeavor. Perversely, the more Nietzschean we’ve become in our attitude to the arts, the more a certain telltale ressentiment shows itself. Like an insulted gentleman, the public now demands satisfaction from its art. We want to be the ones doing it—whatever it is. We don’t want to be left out! Let us play too! Behind every gamer’s love of the game lurks a hideous primal scene: watching other children at play.

And really nothing could be more legitimate than this disconsolate playground feeling, this frustrated desire to participate. It’s at this point that computer games’ bid for dignity (never mind the “art” part) starts making some sense. The specific activities most games imitate are those associated with what has come to be called “the military-entertainment complex.” And it’s often proposed that the dignity of games therefore lies in their future utility: play Doom now so you can pilot a Predator drone later, or learn to reduce your workforce with a click of a mouse. But the most potent allure of games surely lies in their fantasized, not their realistic, relationship to work. Here, control is angstless, effortless, and enormous: you can watch rioters take to the streets of your Roman city for two minutes of gametime, send out the police, cut taxes, shelter the rich, and watch your city blossom with gentrified villas some five minutes later. There is no game, at least not yet, in which you accomplish the mission only to learn you’ve been torturing an innocent man, or get passed over for promotion. Neither is your guitar heroism cut short by an overdose of heroin or rooted in coping with your abusive father. Here is a very un-labor-force-like experience of meaningful activity.

For the best writers on video games, games are not art and don’t need to be. Games are, by design, what Plato believed epic poetry to be: ethics manuals for inhabitants of the cave. Games like Warcraft or Vice City or Civilization teach us a certain relentless, captivating logic. The logic goes like this: It doesn’t matter how beautiful your city, or character, or civilization is, so long as it dominates. We, the game masters, have given you many chances to spend your time and game resources unwisely—to build beautiful things, and to train your samurai—but the wise player knows that the winning strategy is of the scorched-earth variety. Don’t cultivate, or build, or train into expertise; amass lots of the lowest and cheapest items on your market menu. Conquer, overpopulate, overpollute, or the computer will do this to you! These actions have clear beneficial consequences for your side, even if only sociopaths and corporations would consciously take them in real life.

The games are paradoxical. Succumbing to total self-interest, you can forget the particulars of yourself for a few hours; adapting yourself to the ruling global order, you can be the one giving orders for a while. The accompanying feeling of chagrin mixed with grandiosity, of absorption more than fun—this is more like drugs than art (not for nothing are games called “nerd crack”). And as with drugs you never know how much you might still need them in a better society. In an achieved utopia, would we still be playing these games? Would even the citizens of a happier planet—or they especially—need sumptuous regular holidays from morality, homeopathic plunges into narcissistic devastation? It would be interesting to find out. In the meantime, it’s pretty suspicious how closely the logic of game worlds resembles that of our world-system. With only the difference—a big one—that in games alone can you identify with yourself and your world at one and the same time. Your interest can be your world’s; its interests, yours.

For now we don’t need a new Parnassus in which games take their place alongside novels, poetry, film, and opera. But one can also always hope that something of the antiquated aspiration to high art will resurface among future game designers and that this will make games more morally complex as the technology advances. One day—not so far off if we believe the tireless futurists of Wired and the designers of virtual-reality suits—it might be possible to commit a virtual murder in real time that will look, sound, and perhaps even smell exactly like killing someone. That is mere technique. But maybe some designer will also be able to make us experience something like this:

Just at that moment a shaft of sunlight fell on his left boot; on the toe of the sock which protruded from the torn boot there seemed to be some stains. He took off his boot: “Yes, those are bloodstains! The whole toe of the sock is soaked in blood!” He must have carelessly stepped into the pool of blood on the floor of the old woman’s room. “But what am I to do about it now? How am I to dispose of this sock, the frayed edges of my trousers, and the pocket?”

1 To be fair to the fax machine, many people credit it with helping the Tiananmen Square activists in 1989. (See “Fax Against Fictions,” Time, June 19, 1989.)
