8/11/2005
Playgrounds of the Self
by Christine Rosen
In France in the 1540s, a wealthy man from the region of Languedoc abandoned his wife and family and was not heard from again until one day, years later, when he reappeared and resumed his old life. He was Martin Guerre—or so he said. But four years after his return, his wife became convinced otherwise, and she took him to court to prove he was an imposter. The resulting trial was a sensation; at the last minute, the real Martin Guerre, who had been wounded in battle and now relied on a wooden peg leg, dramatically returned, and the imposter, a charlatan named Arnaud du Tilh, ended up swinging from a gibbet for the crime of impersonating another man. The essayist Montaigne, among others, wrote about Guerre, and the story was frequently retold in France down through the centuries, including in a French movie starring Gerard Depardieu. Identity theft, it seems, has a long history. It also, thanks to technology, has a thriving future. As a New York Times article put it a few years ago, “the Internet is giving millions of people a taste of reinvention....People could grow more comfortable with the notion that they can make up a new life story, learn to navigate the tricky process of living a lie.”
Although imposture happened long before the Internet, identity theft has taken on a new meaning in our technological age. We all, in some sense, have two forms of identity—one external, one internal. External identity is the way others see us, and internal identity is the way we see ourselves. External identity is also how the world categorizes us, and includes markers such as our credit report, Social Security number, health insurance, political affiliation, and even the history of our consumer purchases. It is this external identity—routinized, bureaucratized, entered into databases, largely unalterable and largely out of our control—that we talk about when we talk about identity theft, of which 9.3 million U.S. adults were the victims last year, according to a 2005 Better Business Bureau survey. Already we can avail ourselves of clean-up services that remove the traces of ourselves left on our technical devices. Unlike an old toaster, an old computer contains personal information, which means you cannot simply throw it away without also tossing out clues to who you are.
The fact that we have to go to such lengths to defend and protect our identity suggests deeper questions about the meaning of identity itself. Identity, in our world, is both a commodity and a self-conception, something capable of being sold, misused, and misconstrued by strangers at the click of a mouse, and something we can alter ourselves through reinvention. Geographical mobility allows us unlimited opportunities to switch homes and social circles; medical advances allow us to alter our appearance through cosmetic surgery or change our sex through gender reassignment surgery. In the very near future, there will likely be a market for a service that erases those disparate shards of personal information that appear when someone “googles” you—like a makeup artist covering blemishes, so that your public identity is unmarred by past mistakes. In other words: cosmetic surgery for the virtual self.
Our identity develops as a result of our personal experiences, our interactions with others, and our sense of place in the world. It may “change direction several times in a lifetime, or even twice in a year,” as Roger Scruton put it. But the restless search for identity—like the “joyless quest for joy”—pervades every aspect of modern life. And this form of identity—the evolving self, you might call it—needs a stage on which to play out its many incarnations. In earlier ages, this could be a public square, a courtroom, or even the drawing room of a private home. For the false Martin Guerre it was all three of these places. As work moved outside of the home, the factory and the office became new stages as well. Yet these spaces were often felt to be too constricting. In a 1920 essay, “Every Man’s Natural Desire to Be Somebody Else,” Samuel McChord Crothers observed, “As civilization advances and work becomes more specialized, it becomes impossible for anyone to find free and full development for all his natural powers in any recognized occupation. What then becomes of the other selves? The answer must be that playgrounds must be provided for them outside the confines of daily business.”
Technology has allowed the “daily business” of our occupations to intrude more often on our home life and on our leisure time, blurring the boundaries of formal and informal space. But we have adapted by creating places where playful, vindictive, inspiring, fantastic, or bizarre identities can temporarily frolic; places where we can borrow another person’s identity and, under its cloak, pursue activities we might never contemplate performing otherwise. We have created new worlds that allow us to change our names, our sex, our race, and even our humanity so that we might, at least for a while, experience what it’s like to be something or someone else. We have created video games, the new playgrounds of the self. And while we worry, with good reason, about having our identity stolen by others, we ignore the great irony of our own mass identity theft—our own high-tech ways of inventing and reinventing the protean self, where the line between reality and virtual reality ultimately erodes and disappears.
Games People Play
I know a World War II veteran; he stormed the beach at Normandy and acquitted himself well on subsequent battlefields. He is also an urban planner. And a car thief. Once in a while he’s a bikini-clad suburbanite who enjoys a dip in the hot tub after dinner. He is a video game enthusiast. You probably know one too. According to the Entertainment Software Association (ESA), the trade group that represents the computer and video game industry, half of all Americans now play video games, and the average gamer is thirty years old. According to one recent study, “some 92 percent of American kids from age two to age seventeen have regular access to video games,” but “only 80 percent live in households with computers.” Gaming is poised to challenge the financial might of both the music and the movie industries: the worldwide film industry currently earns about $45 billion and the worldwide video game industry $28 billion, but given the growth of mobile and online gaming revenue, the video game industry will likely surpass the movie industry in the near future. As for the music industry, the ESA notes, “PricewaterhouseCoopers reported last year that video games will eclipse music as the second most popular form of entertainment by 2008.”
Video games are now a permanent part of mainstream culture, one to which people devote a considerable amount of time. According to the ESA, the average adult woman gamer plays 7.4 hours per week and the average adult man gamer, 7.6 hours. Other analysts have reported even higher rates of play: International technology analysts at IDC estimated that the average gamer (not the heavy user) spends about two and one half hours gaming every day—17.5 hours per week. And gamers have racked up years of play. In a recent speech, Douglas Lowenstein, the president of the ESA, noted that recent consumer research reports found that “66 percent of gamers between [the ages of] 18 and 25 have been playing games for at least ten years, and nearly 100 percent of gamers between 12 and 17 have been playing since [the age of] 2.” The average gamer, Lowenstein noted, has been playing for 9.5 years, and gamers older than 18 average 12 years of play. How are they making time for games? More than half claim that video game play comes largely at the expense of television viewing.
Today, the three behemoths in the world of gaming hardware are Sony’s PlayStation (the latest version of which, the PlayStation 3, is set to be released in spring 2006); Microsoft’s Xbox, whose latest incarnation will be on sale for the 2005 holiday season; and Nintendo, whose new game box, the Revolution, has no projected release date yet. In addition, game software is available for personal computers, and portable gaming devices, such as the Game Boy and cell phones that include video games, are also popular. The market for video games continues to broaden; the marketing research firm NPD Group reported that first-quarter 2005 U.S. retail sales of video games came to “over $2.2 billion, a 23 percent increase over the same period last year.” And video games are global; a recent article in the Wall Street Journal noted that in South Korea, “online tournaments attract thousands of spectators, and professional video game players can make six-figure incomes.”
A market in game accessories and publications also thrives, producing books, manuals, and magazines. Game Informer magazine, for example, enjoys a circulation of more than 2 million—more subscribers than Rolling Stone or Vogue. There is even a dedicated video game TV network called G4. Games have entered the educational arena as well, with scholarly publications such as the International Journal of Intelligent Games and Simulation that study various aspects of the industry. “High schools are using Rollercoaster Tycoon to help teach physics,” Dan Hewitt, a spokesman for the ESA, told me recently. “It’s great to see that teachers are realizing that video games are an entertainment medium that can be used to teach educational principles.”
The earliest games were a far cry from classroom tools. According to Steven L. Kent, author of the exhaustive Ultimate History of Video Games, it is the pinball machine and other coin-operated amusement games that lay rightful claim to being video games’ closest ancestors. In the 1930s and 1940s, you could even find precursors of today’s “first-person shooter” video games in the form of “shooting arcades,” usually located in taverns, where “players shot tiny ball bearings at targets” located in a cabinet. In 1961, a student at the Massachusetts Institute of Technology named Steve Russell created Spacewar, considered the first interactive computer game. Arcade games such as Computer Space soon followed, and by the early 1970s, the new company Atari, founded by Nolan Bushnell, was manufacturing a computerized ping-pong game called Pong. By the late 1970s and early 1980s, arcade video games were easy to find in local malls, and Atari, which had released popular games such as Football and Asteroids, was facing competition from other game manufacturers such as Midway, which distributed the popular Space Invaders game in the United States. Atari responded by taking video games out of the arcade and into the home; by the early 1980s, Atari’s home Video Computer System allowed gamers to play games such as Pong and the wildly popular Pac-Man in the privacy of their dens.
Competitors soon followed Atari into the home, with the Japanese company Nintendo (whose name means “leave luck to heaven”) entering the market in the mid-1980s with Super Mario Brothers and Legend of Zelda. In 1989, Nintendo released the first portable video game player, the Game Boy, and by the turn of the century, companies such as Sega, Sony, and Microsoft had entered the gaming industry, releasing their own console gaming systems for the home. The video arcade, as a result, steadily declined in popularity. In under a century, gaming has moved from the midway, to the tavern, to the mall, and into the home—where it has taken up permanent residence.
For people whose only familiarity with video games is their rudimentary 1970s incarnations, the intricate stories, sophistication, and sheer technological wizardry of today’s video games make early games such as Pong or Pac-Man feel as cutting-edge as a hand of whist. “There are games out there for everyone of every age,” Hewitt told me. There are the obvious genres—strategy, action, sports, racing—and the not so obvious—survival horror, first-person shooter, and brawler games. According to the ESA, the best-selling games, by genre, in 2004 were action, sports, and shooter games on gaming consoles, and strategy, children’s entertainment, and shooter games on computers. Adult-oriented video games, though not as popular as action and strategy titles, also exist: games such as Leisure Suit Larry are rated “M” for mature audiences only and feature simulated sexual intercourse and nudity. Porn star Jenna Jameson recently released Virtually Jenna, an interactive, online game using 3D technology that allows users to romp and role-play pornographically; eventually the game will allow players to insert photographs of themselves so that they can enjoy a “realistic sex simulation” with the popular porn star.
For the uninitiated, the gaming world, like any subculture, can be a confusing place. It nurtures its own vocabulary and shorthand (MMORPGs—for massive multiplayer online role-playing games—and FPSs—for first-person shooters); there is also a slightly annoying, interminable hipness to the industry. “Old timers” are thirty-year-olds, and if you go to Bungie.net, for example, the homepage of the gaming company that created the successful game Halo, its employees seem like a bunch of precocious children—their cheeky biographical sketches are full of saucy (and often obscure) pop culture references and goofy pictures. They seem both incredibly creative and entirely divorced from the real world, and one is almost relieved to find out that they have the guidance of a stern parent (the company is owned by Microsoft). Similarly, on the website for the video-game TV network G4, you can find instant polls like this one: “Do you enjoy WWII-themed shooters?” The answers: “1) They’re awesome. That’s my favorite war. 2) I’d prefer something more modern; 3) I’d prefer something from before that, like the Civil War or ancient China; 4) I don’t enjoy shooters, really.” It’s a far cry from The Greatest Generation. Only a cohort largely untouched by war could speak so blithely about “my favorite war.”
The marketing of video games also offers clues about their intended audience and effect. A sampling of advertisements in Game Informer magazine reveals a strong emphasis on two things: hyperbole and the supreme control the gamer will exercise. A sample of the sloganeering: “Do you have what it takes to escape death? Do you have the power to stop the show?” (Wrestle Mania 21); “War is hell—violent and bloody. Experience the uncensored story of the Normandy invasion. The lives of your men are in your hands.” (Brothers in Arms: Road to Hill 30); “Control the masses. Control the battlefield. Conquer the world. Relive the complete span of human history as you lead your civilization through 12,000 years of conquest.” (Empire Earth II). And, for Star Wars buffs, the simple but effective: “Before you see the movie, live it.” (Episode III: Revenge of the Sith video game). The games clearly appeal to the ordinary person’s desire to do something extraordinary—even if these extraordinary acts happen in virtual, rather than real, worlds.
Games also borrow from pop culture and, in return, create pop culture (Tomb Raider was a video game before it was ever a movie). A recent article in Game Informer described a new game in the “brawler” genre: The Warriors, based on the 1970s cult movie of the same name and slated for release in September 2005. As Game Informer relates, “The game focuses on brutal grapples and ground attacks,” and an image from the game shows a large black man, dressed in high seventies chic, whose face is utterly without expression as he pummels a white man (a member of a rival gang) to death outside a simulacrum of a Coney Island arcade. “Players will display progressive damage like cuts and bruises,” the magazine enthuses, “and steal car stereos for extra cash.” To gain extra power, you can take “flash,” the video game version of the popular seventies drug amyl nitrite (“poppers,” in street parlance). The Warriors includes a staple of the brawling genre: the Rage Meter. “As you continue fighting,” Game Informer notes, “your meter will fill and when you’ve accumulated enough Rage, pressing R1 and L1 simultaneously will send your Warrior into a murderous frenzy.” The article even offers helpful tips on how to wiggle your analog stick just so to achieve an effective chokehold when you mug someone.
But video games are not merely mindless violence; they are often works of great creativity and imagination. Browsing through some of the new Xbox games, I came across Gears of War, a “sci-fi shooter game” whose scenes look like a cross between an El Greco painting and the movie Alien, with unearthly dark colors, menacing skies, and ritualistic imagery. Ghost Recon 3, an “urban combat” game, was less creative, offering the realistic over-the-gun-barrel perspective that is standard for first-person shooter games. But Kameo, an upcoming adventure game, featured landscapes that look like a Willy Wonka-inspired utopia. Game researchers, such as Robin Hunicke at Northwestern University, are working on games that will use forms of artificial intelligence to heighten the responsiveness of play. Hunicke, who as an undergraduate at the University of Chicago studied “narrative,” sees great potential for increasing the interactivity of games. “It is possible to imagine game narratives that change depending on what an individual player chooses to do,” she says. “By investing games with real consequences, I think we push them forward from the level of ‘interesting toys’ to ‘artistic statements’ that help people explore the morality of different choices and ways of behaving.”
Video games already offer some identifiable benefits for those who play them. As has often been noted in reports about video games, gaming improves hand-eye coordination: in 2003, research conducted at the University of Rochester and published in the journal Nature confirmed that “action video games can give a person the ability to monitor more objects in their visual field and do so faster than a person who doesn’t play such games.” The findings weren’t universally positive; researchers noted, “exercises that demand prolonged attention, such as reading or solving math problems, are likely not helped at all by extensive game-playing.” Still, it seems like a long time ago that President Ronald Reagan was lampooned (in books such as Reagan’s Reign of Error) for saying, as he did in 1983, “I recently learned something quite interesting about video games. Many young people have developed incredible hand, eye, and brain coordination in playing these games. The air force believes these kids will be our outstanding pilots should they fly our jets.” Reagan’s gaffe is today’s conventional wisdom.
Modern Vauxhall
It’s important to remember that brawler games such as The Warriors are only one segment of the video game industry. Nor is the player with a victim in a chokehold necessarily a disaffected youth. “The biggest misperception is that gamers are teenage boys playing games in their parents’ basements,” says Dan Hewitt of the ESA. “The average age of gamers is now 30. These are adults who played video games as children and kept playing as adults.”
Nonetheless, parents and politicians have raised concerns for years. In 1993, two Democratic Senators, Joseph Lieberman and Herb Kohl, sponsored hearings on video game violence. Parents frequently complain about the confusing rating system for video games, developed by the Entertainment Software Rating Board, which includes categories for early childhood, teens, mature audiences, and adults only, among others. Last year, as the New York Times reported, New York City councilman Eric Gioia, a Democrat representing Queens, called for the city’s mass transit authority to remove advertisements for the best-selling game Grand Theft Auto: San Andreas, because he claimed the game encouraged “bigotry, racism, misogyny, and hate.”
“The brouhaha recently about state legislation is really a cyclical issue,” Hewitt told me. “In the 1920s and 1930s people were concerned about movies and morals. In the 1950s it was Elvis swiveling his hips. Today it’s video games. Ten or fifteen years from now we won’t be having this conversation.” Nobody really expects this conversation to set any legal limits, since video games are considered a protected form of First Amendment speech. But Hewitt is correct that changing mores will likely yield greater tolerance for more explicit gaming fare. And experts disagree about the impact of violent images, whether in video games or television and movies. Regardless of whether there is a provable, causal link between playing violent video games and committing acts of real violence, however, the images in video games are of a peculiarly searing nature. Hours after I had played a first-person shooter game, I could not erase the over-the-gun-barrel perspective from my mind, nor could I expunge from memory the image of my enemy’s head exploding in a profuse, bloody mess when I shot him (or the fleeting feeling of satisfaction that my “kill” gave me). As with pornography, violent video game images might not lead to antisocial behavior (indeed, for some people, they might serve as a substitute for antisocial behavior). But images have influence, and it is not merely moralizers who are concerned about their long-term effects, particularly on children who are already living in an image-saturated culture.
The industry is not immune to the public’s lingering perceptions of video games as mindless, violent entertainment. In his “State of the Industry” speech at this year’s “E3,” the annual industry confab in Los Angeles, ESA president Douglas Lowenstein called on the industry to move beyond existing genres to create games with social and political relevance. “If we can make games about terrorism,” Lowenstein said, “why can’t we make compelling games about politics or global warming? Why can’t there be games which force players to struggle with weighty moral and ethical issues within compelling game worlds?” The possibilities are endless: a first-person syringe-shooter game called “Dr. Kevorkian’s Waiting Room”? A scientific fantasy game called “Grand Stem Cell Challenge”? Or perhaps a special add-on for “The Sims: The Aging Society,” which would feature bespectacled and bow-tied Brookings Institution scholars who lead neighborhood discussions about Social Security reform, and Sims neighbors whom you can organize for a March on Washington to demand prescription drug benefits.
Video games are also merely one component of the entertainment and technology industry’s holy grail: the digital living room. Time magazine, interviewing Bill Gates about the new Xbox recently, reported that Gates’s hope is to turn “the U.S. living room into a digital, wireless, networked nerve center.” The reporter got a little carried away, calling the Xbox “the Tolstoy Machine,” and praising the sophisticated graphics and the new emotional range of the characters. But he is correct to identify the company’s larger ambitions. Microsoft’s new Xbox is, perhaps, the Trojan Horse that will eventually deliver access to more than video games in the American living room: Gates hopes to see digital music, photos, and movies and television on demand grow out of the Xbox platform. “You gotta get in there because certain members of the family think it’s a must-have type thing,” Gates said of the Xbox. “But the way to cement it is as a family experience.” Microsoft has even developed an acronym for this effort: it is promoting the D.E.L., or “digital entertainment lifestyle.” It is also a lifestyle we are rapidly embracing, as the television replaces the hearth and new technologies from cell phones to the Internet mediate every dimension of home life.
Video games are, in effect, our modern Vauxhall. Like those old pleasure gardens of eighteenth-century London, they are a masterpiece of faux reality. They are the result of much careful and creative planning. They invite us to assume different identities, at least for the time we stroll through their imaginary worlds. And they exist to give us pleasure by tricking our senses. Among the attractions of Vauxhall’s elm-lined Grand Walk were three arches meant to mimic the Ruins of Palmyra and pavilions meant to resemble ancient temples. But as in those pleasure gardens of yesteryear, dangers exist for undisciplined users—dangers the video-game industry and many video game enthusiasts are loath to acknowledge.
Imagined Communities
Improved hand-eye coordination is not the reason most people play video games. It is the opportunity to be somebody else—somebody else with limitless powers and absolute control. As one gamer told the authors of the recent study, Got Game, “Games give us freedom to be, think, do, create, destroy. They let us change the answer to the question, ‘who am I?’ in ways never before possible. Games let us reach the highest highs and the lowest lows, let us play with reality and reshape it to our own ends. They give us hope and meaning, show us that our journey through life is not pointless, and help us accomplish something at the end of the day.” “With first-person shooter [games] I can get out my frustration,” a woman told a Wyoming newspaper recently. “It’s therapeutic for me.” Indeed, there is something strangely elating about figuring out how to manipulate the controller in the perfect way during a gunfight, or the feeling of satisfaction when, having adjusted to the sophisticated, tilt-a-whirl-style dimensions of the game, you successfully take down an opponent. You physically respond to the virtual world in which you’re engaged—your heart races, you experience momentary shock or disorientation when a new enemy emerges to clobber you, you shift your body to mimic the movements of your character. In other words: You’re in the game.
Shooter games might offer the frustrated housewife a therapeutic outlet for her rage, but it is role-playing games that offer players the widest range of possibility. These games allow individuals to choose their own characters and play out complicated fantasies, often online with hundreds of thousands of others. Gamers create an imaginary self and form emotional connections in the virtual world. One study from Nottingham Trent University discovered that 15 percent of people who play MMORPGs routinely switch genders. So connected and realistic are these video game worlds that they can, at times, begin to impinge on the real one. Writing more than ten years ago about an early online role-playing game, Julian Dibbell recounted the aftermath of “a rape in cyberspace,” a virtual assault, committed in an online meeting space, that set off a chain reaction of soul-searching among the strangers involved. After narrating the efforts to censure and punish the offender and the range of sincere and vociferous reactions, Dibbell concluded that what happened in this simulated world “is neither exactly real nor exactly make-believe, but nonetheless profoundly, compellingly, and emotionally true.” Or, as Howard Rheingold asked in The Virtual Community, “What kind of multiple distributed system do I become when I live part of the day as a teenage girl in a chatroom, part of the day as a serious professional in a webconference, part of the day slaying enemies as Zaxxon, the steel-eyed assassin of an online gaming tribe?” So powerful is the lure of online role-playing that in the growing niche of Christian game makers, developers are wary of role-playing games that promote any form of spiritual or moral relativism. As Jonathan Dee noted in the New York Times Magazine in May, “The Christian gamers’ position is that, while you may fight the Devil and lose, you may not fight as the Devil.”
Among frequent gamers, according to the ESA, 43 percent report that they play games online, an increase from the 31 percent who did so in 2002. One of the most popular online role-playing games, a fantasy game called EverQuest, has hundreds of thousands of registered players. (Sony, which created the game, reported for several years that it had more than 400,000 subscribers.) On average these EverQuest players spend 22 hours per week playing the game, hence the nickname: “EverCrack.” “I started playing in 1998,” one user noted in a review on Amazon.com. “They have a little feature in the game that will tell you how ‘old’ your character (’toon) is. I remember when I looked at the ‘age’ of the character and saw 70 days. 70 days? 70 days at 24 hours a day = 1,680 hours. What a tragic waste of valuable time in my life. I pulled the plug.” He advised other potential EverQuest purchasers to forgo the game and instead “LIVE LIFE!”
Friends and family members of EverQuesters often feel abandoned and angry about the amount of time (nearly one day a week for the average player) that their loved ones spend with the game. A few have even formed support groups, such as “EverQuest Widows,” an e-mail listserv that has more than 3,000 members and describes itself as “a forum for partners, family, and friends of people who play EverQuest compulsively. We turn to each other because it’s no fun talking to the back of someone’s head while they’re retrieving their corpse or ‘telling’ with their guild-mates as you speak.”
“These games are meant to be addicting,” says Dr. Maressa Hecht Orzack, director of the Computer Addiction Studies Center at McLean Hospital and a clinical assistant professor at Harvard Medical School. “EverQuest keeps extending the life of the program, so that if someone gets to Level X, the game maker suddenly adds on more levels. You can understand why some people on EverQuest Widows are so upset and devastated by what happens.” Most people, Orzack says, start playing these games “just for fun.” But “they really get hooked. The fantasy is immediate and it offers immediate gratification. What they are getting is what we call the three AAAs—an affordable, accessible, and anonymous experience.”
Orzack is no stranger to technological addictions. She became interested in them after finding herself spending hours playing the card game Solitaire on her computer. “Solitaire was my way of escaping,” she says. “I would find myself losing sleep, I was missing deadlines, and one time my late husband found me asleep at the computer.” As she worked her way out of her own habit (“Instead of playing to win, I’d play for time”), she developed forms of cognitive therapy to help others. “Your thoughts determine your feelings,” she says, “so I ask patients, what do you expect to find when you turn on the computer?” Their responses are varied. “Some want a sense of belonging or a sense of identity,” she says. “Others want a sense of power—they can be leaders of a guild, they can be people who are well respected online, but might not necessarily be otherwise.”
It is this feeling of control in the simulated world—in contrast with the real world, which is often an exercise in frustration and helplessness—that is part of what concerns Orzack when it comes to children’s use of computer and video games. “When the grades go down. That’s when the parents call me,” she says, in a slightly exasperated tone. “They use these as electronic babysitters, and that’s not conducive to good parenting.” When I asked Dan Hewitt his thoughts on video game addiction, a subject the ESA doesn’t address, he said, “It’s up to the parents to see that children are using video games responsibly and making sure that games are part of a larger, well-rounded lifestyle.” True enough—but also a way to avoid any responsibility for the sophisticated marketing and hyper-addictive quality of the games themselves.
Some parents find out about a child’s gaming habits too late. “We’re just people trying to help each other. We’re modeled after Alcoholics Anonymous,” says Liz Woolley, the founder of On-Line Gamers Anonymous (OLGA). Woolley started the organization in the summer of 2002 after her 21-year-old son, Shawn, who was undergoing treatment for depression, became hooked on playing EverQuest. He committed suicide on Thanksgiving Day 2001, and he was sitting in front of his computer screen when he did. “I had no intention of getting this involved,” Woolley told me, “but when my son died, and an article went out about him, it received a big response. I was shocked because I didn’t know this was happening to so many people. Nobody was talking about it.” Woolley calls gaming addiction an “underground epidemic.”
OLGA now offers support to people online, and Woolley wants to see a broader education effort focused on gaming addiction. “In high schools they talk about drugs, alcohol, cigarettes, and AIDS,” Woolley notes. “Why not include games in those lectures?” Woolley has little patience for those who claim that gaming brings real-world benefits. “It’s called denial,” she says briskly. “If they are learning all that stuff on games, I ask them, then why aren’t you a leader instead of sitting in front of a screen?” As for connection in cyberspace, Woolley says that many of the people she talks to who are hooked on gaming say, “I have friends in the game.” “Is this our society?” she asks plaintively. “We consider a pixel a relationship?”
It appears that we do. A 2005 AOL Games poll found that one in ten gamers claims to be addicted, and more than one in four admits to losing a night of sleep to play games. A study conducted by the Pew Internet & American Life Project found that gaming has become a major part of student life on college campuses, with nearly half of the college students who play video games admitting that games interfere with their studying “some” or “a lot.” Writing a few years ago in the Weekly Standard, Christopher Caldwell ruefully described his brief love affair with the game Snood, which was “starting to eat up whole afternoons” of his time. “I also begin to understand for the first time what an addiction is,” he confesses. “It’s a desperate need to simplify. An addiction is a gravitation towards anything that plausibly mimics life while being less complicated than life.”
Don’t Worry, Be Virtually Happy!
The GameStop in Montgomery Mall in Bethesda, Maryland at first glance looks like a slightly seedy video store. The brightly lit sign on the storefront (“Game” in white and “Stop” in candy apple red) is plain, and the only other attempts to lure passersby are two small cardboard displays flanking the entrance. One says “HOT!” in red letters and the other contains two promotional posters: one promising “Intense Action!” from the video game Untold Legends and another for Unreal Championship 2 that says, “Bring a knife to a gunfight” and features an obscenely buxom, blue-haired, armor-clad woman brandishing a sword.
Inside, the store is nondescript, and far less elegant than the Mimi Maternity shop and Nordstrom’s department store on either side of it. Unlike those stores, with their ambient lighting and hardwood floors, GameStop has stained gray carpeting and buzzing fluorescent lights and a slightly chaotic layout. But on this beautiful spring Sunday afternoon, it is packed, mainly with boys between the ages of eight and twelve, most of whom are slightly overweight. Many of them are lingering around the three gaming displays—one for each of the corporate gaming giants: PlayStation, Nintendo, and Xbox. The demo games are placed prominently in the store’s floor-to-ceiling glass front, turning the gamers themselves into a kind of human window display.
They don’t seem to notice. One chubby boy, about ten years old, is deep into the PlayStation 2 game, Lego Star Wars, which features roly-poly characters modeled after the “Star Wars” movies. The boy has chosen the role of Obi-Wan Kenobi and is dueling a bad guy with his lightsaber. Occasionally he grunts, or twists his shoulders sharply as he maneuvers the controller, even though his eyes never leave the screen in front of him and he rarely blinks. Once he mutters, “I need more power.” He works his way through the game with considerable skill.
After completing his first game, he is joined by a taller, slightly older boy, who asks if he can play too. It’s clear they’ve never met before, but the first boy mumbles, “Sure,” and soon they’ve started a two-person version of the game. Within moments they’re engaged in an odd sort of mediated conversation, offering each other playing tips, praising a move, or groaning in sympathy when one of their Jedi knights is slain. “Get him!” “No, over there, you need that.” “Try going through that door.” But they never look at each other, and when the taller boy’s mother comes to retrieve him, the younger boy says nothing, continuing to stare raptly at the screen as he cues up another game. When he finishes the game and turns to walk away, I ask him how long he’s been playing video games. Looking at me with a slightly unfocused expression, he answers, “Since forever,” and walks away. As he leaves, I notice him reaching into the pocket of his baggy athletic shorts for something: it is a Game Boy Advance SP, and as he disappears from view, I see he has already started playing another game, adopting the hunched, fixated posture before his handheld screen that he had assumed before the PlayStation moments earlier.
If this small glimpse into the world of the suburban gamer awakens concern about short attention spans, poor social skills, and video-game-inspired violence, then you haven’t been reading the latest literature about this new generation of gamers. In Got Game: How the Gamer Generation is Reshaping Business Forever, John C. Beck and Mitchell Wade make a sweeping claim: that the generation gap between Boomers and those who came after can be explained by a simple difference—the younger generation’s experience “growing up with video games.” Based on a survey of 2,500 Americans, largely in business professions and with varying levels of gaming experience, Beck and Wade offer a relentlessly positive portrait of gaming.
To find the good in gaming, however, often requires strenuous leaps of logic and specious interpretations of the survey results. For example, among the standard virtues taught by games, Beck and Wade note, are the following messages: “You’re the star,” “You’re the boss,” “You’re the customer and the customer is always right,” “You’re an expert,” and “You’re a tough guy.” The authors report that gamers are far more likely to consider themselves as “knowledgeable” or a “deep expert” in their chosen fields, regardless of actual experience or objective assessments of their abilities. This generation, in short, is not lacking in self-esteem. “They are so confident of their skills, in fact, that they believe they don’t have to work as hard as other people,” the authors write, and in a startling turnabout of the work ethic, “the more experience respondents have with digital games, the less likely they are to describe themselves as hard workers.” Common sense suggests that this might pose a challenge in a workplace environment, where real expertise and hard work are often essential. But Beck and Wade see this as an unalloyed good—just the kind of overconfidence the business world needs, evidently. “In ways they themselves don’t even notice,” the gaming generation “really seem to believe that the world is their video game.”
The most intriguing differences in Beck and Wade’s study are not the ones between the younger gaming generation and the older nongaming generation; they are the contrasts between young gamers and young nongamers. For example, in answer to the question “I play to do the things I can’t in real life,” 40.8 percent of younger, frequent gamers said yes, compared with only 14 percent of younger nongamers. Nongamers also appear to bring different values and priorities to the workplace. To the statement, “It is important to receive a high salary and good benefits,” 75.4 percent of younger frequent gamers answered yes, compared to 66.3 percent of younger nongamers. The only bit of bad news the authors allow is this: “In our survey we also found that gamers can be emotionally volatile. By their own estimate, they are more likely than other groups to be easily annoyed or upset. In a word, they can be irritable.” But one person’s irritable child is another person’s master of the universe—savvy, self-confident, and more sociable than their peers.
Similar claims have come from the ESA, which notes that “gamers devote more than triple the amount of time spent playing games each week to exercising or playing sports, volunteering in the community, religious activities, creative endeavors, cultural activities, and reading. In total, gamers spend 23.4 hours per week on these activities.” But when I asked about the survey data, Dan Hewitt of the ESA told me that it was a random national telephone sample of 802 people (small by polling standards), conducted by Peter D. Hart Research in September 2004. “The interesting thing,” Hewitt told me, “is that the more they play games, the more they are involved in their communities.” The responses were merely self-reported opinions; they did not include more accurate follow-up studies using techniques such as time-use diaries or observational monitoring. This is hardly a scientific basis for video-game euphoria, and one might forgive the skeptic who sees such surveys for what they probably are: ways to make video games seem innocent at worst and praiseworthy at best.
The Self-Flattery Curve
The authors of Got Game are not the only video game enthusiasts putting pen to paper to defend the medium. “Think of it as a kind of positive brainwashing,” says Steven Johnson, author of the recent book Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter. “The popular media steadily, but almost imperceptibly, making our minds sharper, as we soak in entertainment usually dismissed as so much lowbrow fluff.” Johnson dubs this the “Sleeper Curve,” after the 1973 Woody Allen film that hilariously parodies science fiction movies, and he’s not especially modest about its supposed effects: “I believe the Sleeper Curve is the single most important new force altering the mental development of young people today, and I believe it is largely a force for good: enhancing our cognitive faculties, not dumbing them down.”
Johnson includes video games in his list of new and improved fluff, and argues that a “strong case can be made that the power of games to captivate involves their ability to tap into the brain’s natural reward circuitry.” Games promote something he calls “collateral learning” and make us adept at “exercising cognitive muscles.” Video games and television should be seen, Johnson argues, “as a kind of cognitive workout, not as a series of life lessons.”
Johnson’s book has been well received, earning kudos from no less a trend-spotting guru than Malcolm Gladwell at the New Yorker. It has done so for a simple, and democratic, reason: it flatters our own self-image. Johnson is not, as he repeatedly claims, challenging the conventional wisdom; he is reaffirming it. In a democratic culture, people want to be told that fulfilling their desires is actually good for them, that self-interest is also self-improvement, that the most time-consuming habit is also time well-spent. Attacking popular culture, which is the underpinning of so much of our conventional wisdom, usually earns one the sobriquet of Puritan or crank. Praising popular culture, which few people can resist, can give any modern-day guru a temporary following.
“The sky is not falling,” Johnson reassures us. “In many ways, the weather has never been better. It just takes a new kind of barometer to tell the difference.” In other words, it isn’t that things are getting worse, or that we’ve exchanged one form of entertainment for another that is more passive and less inspiring to the imagination. We’re simply not looking at the world in quite the right way, with quite the right instruments. This, of course, is a tried and true formula of persuasion. It is the method of the quack. Johnson is the modern inversion of the form—unlike the old-fashioned quack, who falsely tells you that you are sick when you are well, Johnson tells us that we’re actually healthy when we might be sick. Quacks always give the public what they want; this is the key to their success. And Johnson is our modern St. John Long, the illiterate nineteenth-century charlatan who, according to Brewer’s Dictionary of Phrase and Fable, claimed to have created a liniment that allowed him to distinguish “between disease and health.” Like Long’s liniment, Johnson’s “Sleeper Curve” is a temporarily comforting but ultimately irritating device of little long-term value.
Quacks are also notoriously disingenuous, altering their message to suit their audience. In his book, Johnson says, “The television shows and video games and movies that we’ll look at in the coming pages are not, for the most part, Great Works of Art,” later adding, “I want to be clear about one thing: The Sleeper Curve does not mean that Survivor will someday be viewed as our Heart of Darkness, or Finding Nemo our Moby Dick.” But writing on his personal blog the week after his book was released, Johnson argued just that: “We don’t have a lot of opportunities in culture to tell a story that lasts a hundred hours, but that’s exactly what we’re taking in on The Sopranos or Lost or Six Feet Under. I feel totally confident that those shows will stack up very nicely against Madame Bovary a hundred years from now, if not sooner.” Like all good mountebanks, Johnson, aiming to please as broad an audience as possible, finds consistency a crutch. The difference between Johnson and an ordinary charlatan, however, is that Johnson seems to have had the foolish bad luck to start believing his own nostrums.
And nostrums are plentiful in this book. In order to sustain his sweeping claims about popular culture, Johnson must ignore the opportunity costs of doing things like playing video games; as a result he does not adequately distinguish between gaming and other forms of intellectual activity. Nor does he give a thought to where these games are being played—in the home—and how that fact has transformed family life. As Johnson himself notes, the average video game takes forty hours to complete. For him, these games need not do more in that time than entertain and exercise some of our cognitive faculties. “Those dice baseball games I immersed myself in didn’t contain anything resembling moral instruction,” he writes of the games he played in the pre-video game era, “but they nonetheless gave me a set of cognitive tools that I continue to rely on, nearly thirty years later.” Perhaps they did, although if this book is the evidence, his thesis is clearly a failure. But what Johnson does not recognize is that the choice to play games necessarily means that other activities will not occur, whether reading, making music, or even playing real, rather than virtual, baseball. We might point to the complex nutrients in dog food, but the fact remains: a dog that does little but eat will be unhealthy, no matter how many nutrients his food happens to contain, or how often he exercises his jaws in doing so.
The evidence Johnson enthusiastically marshals to convince the reader of his claims is risible, rendering his sweeping case for the intellectual significance of video games unsustainable. He is keen, for example, on noting an increase in IQ since television and video games became more sophisticated, and cites this as evidence of his ballyhooed “Sleeper Curve.” Of this rising IQ, called the “Flynn effect,” he concedes that it is most pronounced for g—or “fluid intelligence.” “Tests that measure g often do away with words and numbers,” Johnson writes, “replacing them with questions that rely exclusively on images.” What this proves, then, is that we’re becoming more of an image-based culture, more adept at reading visual signs and symbols; this does not necessarily mean we’ve become objectively smarter. As even Johnson admits, “If you look at intelligence tests that track skills influenced by the classroom—the Wechsler vocabulary or arithmetic tests, for instance—the intelligence boom fades from view.”
Johnson is also selective in his use of evidence, a practice that renders his arguments consistently unreliable. The second half of the book, which makes the case for the edifying effects of television, is the most egregious example. Johnson never mentions the fact that we spend more time watching television than we do engaged in any other activity besides sleeping and working, and he ignores entirely research by neuroscientists that has demonstrated the negative effects of television on young children’s brain development. “Parents can sometimes be appalled at the hypnotic effect that television has on toddlers,” Johnson says soothingly. “They see their otherwise vibrant and active children gazing silently, mouth agape at the screen, and they assume the worst: the television is turning their child into a zombie....But these expressions are not signs of mental atrophy. They’re signs of focus. The toddler’s brain is constantly scouring the world for novel stimuli.” Johnson’s claim is entirely specious: the American Academy of Pediatrics “recommends no more than one to two hours of quality TV and videos a day for older children and no screen time [including computers and video games] for children under the age of 2.” A study released last year in the journal Pediatrics found a link between hours of television viewing in young children and increased risk for developing Attention-Deficit/Hyperactivity Disorder. Everything bad is evidently not good enough for Johnson to include in his book when it contradicts his questionable thesis.
Finally, it is worth asking: if everything bad is now good for you, what happened to the old “good” things? And how do we now order our priorities? What, in other words, is the new bad? At the heart of Johnson’s argument is a desire for the complete erosion of the distinction between high and low culture; why should we recognize a difference between a game and a book, after all, if both exercise our “cognitive muscles”? What is important, Johnson says, is that the new media are improving. “It’s important to point out that even the worst of today’s television...doesn’t look so bad when measured against the dregs of television past,” he says. But doesn’t this set standards too low? There has always been an important and healthy suspicion of popular culture and mass entertainment—much of it stemming from sheer snobbery, of course, but some of it from a recognition of the often-steep opportunity costs of consuming low culture rather than high, and of indulging comfortable distractions at the expense of industry. Long before the era of video games, Boswell lamented that he was spending far too much time staring into the fire and had better get himself back to work; he realized, in other words, that although enjoyable, sitting in front of the fire takes a person away from other, more productive pursuits. In a world where even our lowest entertainments are an unlimited good, how can we encourage moderation and self-regulation of any entertainment?
In the end, Johnson’s argument rests on two great errors. He tries to defend the utility of video games and other amusements as a route to self-improvement by seeing them as a form of mental gymnastics. But we are left to wonder whether other workouts of the mind, so to speak, might not serve us much better. He also argues that video games and television are just as good as other kinds of leisure—like reading a great book, conversing seriously with friends, or playing a real sport. But are we really better off devoting ourselves to the seductive pleasures of the virtual realm, however sophisticated and entertaining? To say that something is a creative pleasure is one thing; to claim that it is, in fact, actively good for you is quite another. Chocolate is a pleasure, as is champagne, and both, in the right hands, can be made and experienced creatively. But a steady diet of chocolate and champagne is not healthy. This is a distinction that Johnson fails to recognize.
What Are Games For?
Johnson’s book largely simplifies and synthesizes the work of others. In What Video Games Have to Teach Us About Learning and Literacy, for example, James Paul Gee, the Tashia Morgridge Professor of Reading at the University of Wisconsin–Madison, outlines the benefits of video games in even greater detail. Like Saul on the road to Damascus, Gee was struck by the power of video games after trying to play one of his son’s games and realizing how challenging it was. He posits that “better theories of learning are embedded in the video games” many schoolchildren play “than in the schools they attend,” and argues that “the theory of learning in good video games fits better with the modern, high-tech global world today’s children and teenagers live in.” Gee becomes so enthusiastic about games and their “semiotic domains” that he claims the virtual relationships children develop “have important implications for social justice.”
Gee, in other words, is eager to put the Xbox in the sandbox. “Games encourage exploration, personalized meaning-making, individual expression, and playful experimentation with social boundaries,” he enthuses, “all of which cut against the grain of the social mores valued in school.” He argues for a “new model of learning through meaningful activity in virtual worlds as preparation for meaningful activity in our post-industrial, technology-rich real world.” But Gee doesn’t show us how these virtual gaming skills are actually transferable to real-world situations. Like the authors of Got Game, Gee is hopeful that they are transferable and convinced that they will improve children’s educational experience. But wishful thinking is not the same as evidence, and evidence is certainly needed when such broad claims are being made on behalf of electronic entertainments. Although Gee’s research suggests intriguing possibilities for new educational tools, we must first answer the question of their effectiveness before we put a video game in every classroom. And we must grapple with the evidence on the other side of this equation. As William Winn, who heads the University of Washington’s Human Interface Technology Laboratory, told the authors of Got Game, gamers really do think differently: “They leap around,” he writes. “It’s as though their cognitive structures were parallel, not sequential.” Lost amid the enthusiasm for gaming, however, are two questions: Does “different” mean better? And what, in the end, are games for?
One of the first board games, created for children in the eighteenth century, was “A Journey through Europe,” a game that offered instruction in geography as well as play. By the beginning of the nineteenth century, children were playing games with titles such as “The New Game of Virtue Rewarded and Vice Punished, For the Amusement of Youth of Both Sexes,” in which virtues such as Temperance and Honesty were rewarded and Obstinacy and Sloth harshly punished. Games were a structured form of play used to train children in the virtues of a particular society, a practice that continued into our own era. As media critic Michael Real has argued, even less didactic board games “such as Monopoly and Clue tended to teach young people to develop strategy, think and plan ahead, be patient, play fairly, take turns, and follow written directions.” Video games are different, says Real. They “are based instead on classical conditioning theory: the sights, sounds, and colors are reinforcers. They are fast-paced and entertaining. They teach some of the same abilities as older board games, yet they reduce, without necessarily eliminating, the interpersonal interaction.”
Games can be appealing outlets for adults as well. As Christopher Lasch once observed, “Among the activities through which men seek release from everyday life, games offer in many ways the purest form of escape. Like sex, drugs, and drink, they obliterate awareness of everyday reality, but they do this not by dimming awareness but by raising it to a new intensity of concentration.” But there is something unusual about the games people play today. As Steven Johnson notes, enthusiastically, “one of the unique opportunities of this cultural moment lies precisely in the blurring of lines between kid and grownup culture: fifty-year-olds are devouring Harry Potter; the median age of the video game-playing audience is twenty-nine; meanwhile, the grade-schoolers are holding down two virtual jobs to make ends meet with a virtual family of six in The Sims.” Another man, writing in the New York Times in 2004, boasted, “I was able to gain heroic status in the eyes of my daughter by helping her fight off a nasty gang of thugs” in a video game. “Perhaps these are becoming the new essential skills for parents.”
In fact, adult enthusiasm for video games is part of a broader transformation—what communications professor Joshua Meyrowitz in his book No Sense of Place described as “the blurring of childhood and adulthood.” Children and adults now dress more alike, he noted, and they have “begun to behave more alike.” It isn’t unusual, Meyrowitz notes, to see adults engaged in “children’s play.” “The latest generation of playthings—video and computer games—are avidly played by both adults and children.” Whether this is a good thing or not “is difficult to say,” Meyrowitz concludes, “for better or worse, though, childhood and adulthood, as they were once defined, no longer exist.”
Critics have long recognized a difference between structured games and unstructured play, particularly for children. Play is supposed to leave something to your imagination. Hearing about an ogre from a fairy tale, children are free to imagine any number of frightening creatures: perhaps a more terrifying version of the neighbor’s scary dog, or the monster from another book, such as Where the Wild Things Are. For the video-game generation, an ogre need not exist in their mind’s eye. It is already a fully realized creature, like the ogre in Capcom’s new game, Haunting Ground, whose heroine, Fiona, awakens “dressed only in a towel, locked in a cage within a decrepit mansion,” according to the New York Times. “After escaping her cage and changing into a miniskirt, Fiona discovers that in spite of her sexy outfit she is not beset by lecherous suitors but by a hulking ogre who wants to eat her.” Video games represent the commodification of imagination (and, not surprisingly, the homogenization of fantasy). They are, at root, the expression of someone else’s fantasies—fantasies that you can enter, and manipulate a bit—but fantasies that you cannot alter at a fundamental level.
Video game fantasies, although graphic and sophisticated, are also sanitized in a way that real play is not. Video games carry none of the risk of physical harm or personal embarrassment found in real games and real sports. When a child plays outdoors, he might at least risk skinning a knee; when a child plays soccer on a team, she might get nervous as she stands on the field waiting for the opening whistle or embarrassed when she makes a mistake. But this is not the case with video games. It is perhaps telling that the biggest risks to gamers are ailments associated with modern adult work: carpal tunnel syndrome and eye strain.
Video games also take us indoors, and as Richard Louv, author of Last Child in the Woods, argues, this contributes to “nature-deficit disorder.” Among the research cited in his book is a 2002 study conducted in Britain that found that eight-year-old children were perfectly capable of identifying the Pokémon video-game characters but were flummoxed when presented with images of otters, beetles, and oak trees. Louv argues that a combination of television, technological toys, and irrational parents fearful of child abduction keeps children indoors. “When you’re sitting in front of a screen,” Louv told a reporter on NPR recently, “you’re not using all of your senses at the same time. Nowhere [other] than in nature do kids use their senses in such a stimulated way.”
Doom or Dickens?
In Inter-Personal Divide: The Search for Community in a Technological Age, Michael Bugeja, director of the Greenlee School of Journalism and Communication at Iowa State University, argues that games are not our cultural salvation. He describes our age as in the grips of an “interpersonal divide,” which is the “social gap that develops when individuals misperceive reality because of media over-consumption and misinterpret others because of technology overuse.” This form of “displacement,” as Bugeja calls it, fosters “an unfathomable feeling of isolation not only in our hometowns but also in our homes—connected, wired, and cabled to the outside world.” Included among the effects of displacement are a “clash of environments, virtual and real” and the “blurring of role and identity.” This is a physical divide as well. In the past decade, Bugeja notes, “many children went from playing in parks in front of neighbors...to playing in mall arcades in front of parents...to playing in living-room consoles in front of each other...to playing online in their rooms in front of no one in a place that is actually not there.” As a result, “our identities no longer are associated with community but with psychographics—statistics categorizing us according to the products that we purchase and the services that we perceive to need.” Johnson’s cognitive gym and Gee’s game-enabled classroom are Bugeja’s consumer-friendly isolation chambers. “We need to spend more leisure time in the real rather than the virtual world,” Bugeja argues.
We are, as Bugeja observes, a nation more enthusiastic about entertainment than any since ancient Rome. With the First Amendment firmly entrenched in the nation’s politics and culture, critics of video games are fighting a doomed battle by focusing exclusively on content; they should look instead to the broader social transformation wrought by leisure technologies. The popularity of video games does not mean, as boosters would have it, that we face a future of improved hand-eye coordination and well-exercised “cognitive muscles” with no negative consequences. We must grapple with the opportunity costs of replacing old forms of play with new ones like video games, and we must come to terms with what it means for the development of identity when we so eagerly blur the line between reality and fantasy and between childhood and adulthood.
In some sense, video games are creating a new group: the isolated, childlike crowd. Gamers share a passion, a mindset, and even certain physical skills. They are committed and aroused. Yet they are physically separated from each other. What does their immersion in virtual worlds do to their experience of the real one? Are the skills learned through video-game play truly applicable in the real world? As Meyrowitz reminds us, “Exposure to information is not the same as perception and integration of information.” Nor might the cognitive tricks learned by play in the virtual world have much use, in the long run, in the real one.
In previous eras, games were supposed to provide more than mere play; they were supposed to improve us morally or physically. The conceit of contemporary times is that games improve our intelligence, and that they do this so well that we ought to integrate them into more spheres—the classroom, the boardroom, the playground—as replacements for less advanced ways of learning. Our embrace of video games is yet another chapter in the ongoing story of technology-as-liberation.
But this story isn’t as simple as it first appears, and we are failing to ask some important questions. With video games, we focus so much on how we can make the virtual world more like the physical world that we forget to ask about movement in the opposite direction. In an age when people are spending much of their work time and most of their leisure time in front of computers, televisions, and video-game screens, how is the virtual world affecting the physical one? Are we becoming so immersed in virtual reality that we end up devoting more time to the care and tending of our multiple, virtual identities than to the things in the real world that contribute to the formation of healthy identity? After all, there are a great many things you can’t do in virtual reality: you cannot satisfy material needs for food, water, or genuine physical affection; you cannot build character or develop true interpersonal skills; and although some people might be able to satisfy certain emotional needs, such satisfactions are rarely permanent.
Today’s video games are works of creativity, technical skill, and imagination. They are, in appropriate doses, healthy and satisfying playgrounds for experimentation with different identities and exploration of different worlds. But video games carry the risk—as all amusements do—of becoming the objects on which we lavish so much time and attention that we neglect the true and lasting things around us, such as our family, our friends, and our communities. Societies get the games they deserve. But when a society claims for its games the insights, sophistication, and deeply humane wisdom that other forms of culture and community have long offered—when it places Dickens alongside Doom and replaces the family hearth with an Xbox—it is well on its way to finding something more alarming than its identity stolen. It risks becoming a society without true loves or longings, filled with individuals who find solace only in make-believe worlds where the persons they really are do not really exist.
Technology has allowed the “daily business” of our occupations to intrude more often on our home life and on our leisure time, blurring the boundaries of formal and informal space. But we have adapted by creating places where playful, vindictive, inspiring, fantastic, or bizarre identities can temporarily frolic; places where we can borrow another person’s identity and, under its cloak, pursue activities we might never contemplate performing otherwise. We have created new worlds that allow us to change our names, our sex, our race, and even our humanity so that we might, at least for a while, experience what it’s like to be something or someone else. We have created video games, the new playgrounds of the self. And while we worry, with good reason, about having our identity stolen by others, we ignore the great irony of our own mass identity theft—our own high-tech ways of inventing and reinventing the protean self, where the line between reality and virtual reality ultimately erodes and disappears.
Games People Play
I know a World War II veteran; he stormed the beach at Normandy and acquitted himself well on subsequent battlefields. He is also an urban planner. And a car thief. Once in a while he’s a bikini-clad suburbanite who enjoys a dip in the hot tub after dinner. He is a video game enthusiast. You probably know one too. According to the Entertainment Software Association (ESA), the trade group that represents the computer and video game industry, half of all Americans now play video games, and the average age of the gamer is thirty years old. According to one recent study, “some 92 percent of American kids from age two to age seventeen have regular access to video games,” but “only 80 percent live in households with computers.” Gaming is poised to challenge the financial might of both the music and the movie industries: the worldwide film industry currently earns about $45 billion, and the worldwide video game industry $28 billion, but given the increase in mobile gaming and online gaming revenue, the video game industry likely will surpass the movie industry in the near future. As for the music industry, the ESA notes, “PricewaterhouseCoopers reported last year that video games will eclipse music as the second most popular form of entertainment by 2008.”
Video games are now a permanent part of mainstream culture, one to which people devote a considerable amount of time. According to the ESA, the average adult woman gamer plays 7.4 hours per week and the average adult man gamer, 7.6 hours. Other analysts have reported even higher rates of play: technology analysts at the research firm IDC estimated that the average gamer (not the heavy user) spends about two and a half hours gaming every day—17.5 hours per week. And gamers have racked up years of play. In a recent speech, Douglas Lowenstein, the president of the ESA, noted that recent consumer research reports found that “66 percent of gamers between [the ages of] 18 and 25 have been playing games for at least ten years, and nearly 100 percent of gamers between 12 and 17 have been playing since [the age of] 2.” The average gamer, Lowenstein noted, has been playing for 9.5 years, and gamers older than 18 average 12 years of play. How are they making time for games? More than half claim that video game play comes largely at the expense of television viewing.
Today, the three behemoths in the world of gaming hardware are Sony’s PlayStation (the latest version of which, PlayStation 3, is set to be released in spring 2006); Microsoft’s Xbox, whose latest incarnation will be on sale for the 2005 holiday season; and Nintendo, whose new game box, the Revolution, has no projected release date yet. In addition, game software is available for personal computers; portable gaming devices, such as the Game Boy, and cell phones that include video games are also popular. The market for video games continues to broaden; marketing research firm NPD Group reported that U.S. retail video games in the first quarter of 2005 “saw sales of over $2.2 billion, a 23 percent increase over the same period last year.” And video games are global; a recent article in the Wall Street Journal noted that in South Korea, “online tournaments attract thousands of spectators, and professional video game players can make six-figure incomes.”
A market in game accessories and publications also thrives, producing books, manuals, and magazines. Game Informer magazine, for example, enjoys a circulation of more than 2 million readers—more subscribers than Rolling Stone or Vogue. There is even a dedicated video game TV network called G4. Games have entered the educational arena as well, with scholarly publications such as the International Journal of Intelligent Games and Simulation that study various aspects of the industry. “High schools are using Rollercoaster Tycoon to help teach physics,” Dan Hewitt, a spokesman for the ESA, told me recently. “It’s great to see that teachers are realizing that video games are an entertainment medium that can be used to teach educational principles.”
The earliest games were a far cry from classroom tools. According to Steven L. Kent, author of the exhaustive Ultimate History of Video Games, it is the pinball machine and other coin-operated amusement games that lay rightful claim to being video games’ closest ancestors. In the 1930s and 1940s, you could even find precursors of today’s “first person shooter” video games in the form of “shooting arcades,” usually located in taverns, where “players shot tiny ball bearings at targets” located in a cabinet. In 1961, a student at the Massachusetts Institute of Technology named Steve Russell created Spacewar, considered the first interactive computer game. Arcade games such as Computer Space soon followed, and by the early 1970s, the new company Atari, founded by Nolan Bushnell, was manufacturing a computerized ping-pong game called Pong. By the late 1970s and early 1980s, arcade video games were easy to find in local malls, and Atari, which had released popular games such as Football and Asteroids, was facing competition from other game manufacturers such as Midway, which had released the popular Space Invaders game in the United States. Atari responded by taking video games out of the arcade and into the home; by the early 1980s, Atari’s home Video Computer System allowed gamers to play games such as Pong and the wildly popular Pac-Man in the privacy of their dens.
Competitors soon followed Atari into the home, with the Japanese company Nintendo (whose name means “leave luck to heaven”) entering the market in the mid-1980s with Super Mario Brothers and Legend of Zelda. In 1989, Nintendo released the first portable video game player, the Game Boy, and by the turn of the century, companies such as Sega, Sony, and Microsoft had entered the gaming industry, releasing their own individual console gaming systems for the home. The video arcade, as a result, steadily declined in popularity. In under a century, gaming has moved from the midway, to the tavern, to the mall, and into the home—where it has taken up permanent residence.
For people whose only familiarity with video games is their rudimentary 1970s incarnations, the intricate stories, sophistication, and sheer technological wizardry of today’s video games make early games such as Pong or Pac-Man feel as cutting-edge as playing a hand of whist. “There are games out there for everyone of every age,” Hewitt told me. There are the obvious genres—strategy, action, sports, racing—and the not-so-obvious—survival horror, first-person shooter, and brawler games. According to the ESA, the best-selling games, by genre, in 2004, were action, sports, and shooter games on gaming consoles; and strategy, children’s entertainment, and shooter games on computers. Although not as popular as action and strategy games, there are also adult-oriented video games, such as Leisure Suit Larry, which are rated “M” for mature audiences only and feature simulated sexual intercourse and nudity. Porn star Jenna Jameson recently released Virtually Jenna, an interactive, online game using 3D technology that allows users to romp and role-play pornographically; eventually the game will allow players to insert photographs of themselves so that they can enjoy a “realistic sex simulation” with the popular porn star.
For the uninitiated, the gaming world, like any subculture, can be a confusing place. It nurtures its own vocabulary and shorthand (MMORPGs—for massive multiplayer online role-playing games—and FPSs—for first-person shooters); there is also a slightly annoying, interminable hipness to the industry. “Old-timers” are thirty-year-olds, and if you go to Bungie.net, for example, the homepage of the gaming company that created the successful game Halo, its employees seem like a bunch of precocious children—their cheeky biographical sketches are full of saucy (and often obscure) pop culture references and goofy pictures. They seem both incredibly creative and entirely divorced from the real world, and one is almost relieved to find out that they have the guidance of a stern parent (the company is owned by Microsoft). Similarly, on the website for the video-game TV network G4, you can find instant polls like this one: “Do you enjoy WWII-themed shooters?” The answers: “1) They’re awesome. That’s my favorite war. 2) I’d prefer something more modern; 3) I’d prefer something from before that, like the Civil War or ancient China; 4) I don’t enjoy shooters, really.” It’s a far cry from The Greatest Generation. Only a cohort largely untouched by war could speak so blithely about “my favorite war.”
The marketing of video games also offers clues about their intended audience and effect. A sampling of advertisements in Game Informer magazine reveals a strong emphasis on two things: hyperbole and the supreme control the gamer will exercise. A sample of the sloganeering: “Do you have what it takes to escape death? Do you have the power to stop the show?” (Wrestle Mania 21); “War is hell—violent and bloody. Experience the uncensored story of the Normandy invasion. The lives of your men are in your hands.” (Brothers in Arms: Road to Hill 30); “Control the masses. Control the battlefield. Conquer the world. Relive the complete span of human history as you lead your civilization through 12,000 years of conquest.” (Empire Earth II). And, for Star Wars buffs, the simple but effective: “Before you see the movie, live it.” (Episode III: Revenge of the Sith video game). The games clearly appeal to the ordinary person’s desire to do something extraordinary—even if these extraordinary acts happen in virtual, rather than real, worlds.
Games also borrow from pop culture and, in return, create pop culture (Tomb Raider was a video game before it was ever a movie). A recent article in Game Informer described a new game in the “brawler” genre: The Warriors, based on the 1970s cult movie of the same name and slated for release in September 2005. As Game Informer relates, “The game focuses on brutal grapples and ground attacks,” and an image from the game shows a large black man, dressed in high seventies chic, whose face is utterly without expression as he pummels a white man (a member of a rival gang) to death outside a simulacrum of a Coney Island arcade. “Players will display progressive damage like cuts and bruises,” the magazine enthuses, “and steal car stereos for extra cash.” To gain extra power, you can take “flash,” the video game version of the popular seventies drug amyl nitrite (“poppers,” in street parlance). The Warriors includes a staple of the brawling genre: The Rage Meter. “As you continue fighting,” Game Informer notes, “your meter will fill and when you’ve accumulated enough Rage, pressing R1 and L1 simultaneously will send your Warrior into a murderous frenzy.” The article even offers helpful tips on how to wiggle your analog stick just so to achieve an effective chokehold when you mug someone.
But video games are not merely mindless violence; they are often works of great creativity and imagination. Browsing through some of the new Xbox games, I came across Gears of War, a “sci-fi shooter game” whose scenes look like a cross between an El Greco painting and the movie Alien, with unearthly dark colors, menacing skies, and ritualistic scenes. Ghost Recon 3, an “urban combat” game, was less creative, offering the realistic over-the-gun-barrel perspective that is standard for first-person shooter games. But Kameo, an upcoming adventure game, featured landscapes that look like a Willy Wonka-inspired utopia. Other game researchers, such as Robin Hunicke at Northwestern University, are working on games that will use forms of artificial intelligence to heighten the responsiveness of play. Hunicke, who as an undergraduate at the University of Chicago studied “narrative,” sees great potential for increasing the interactivity of games. “It is possible to imagine game narratives that change depending on what an individual player chooses to do,” she says. “By investing games with real consequences, I think we push them forward from the level of ‘interesting toys’ to ‘artistic statements’ that help people explore the morality of different choices and ways of behaving.”
Video games already offer some identifiable benefits for those who play them. As has often been noted in reports about video games, gaming improves hand-eye coordination: in 2003, research conducted at the University of Rochester and published in the journal Nature confirmed that “action video games can give a person the ability to monitor more objects in their visual field and do so faster than a person who doesn’t play such games.” The findings weren’t universally positive; researchers noted, “exercises that demand prolonged attention, such as reading or solving math problems, are likely not helped at all by extensive game-playing.” Still, it seems like a long time ago that President Ronald Reagan was lampooned (in books such as Reagan’s Reign of Error) for saying, as he did in 1983, “I recently learned something quite interesting about video games. Many young people have developed incredible hand, eye, and brain coordination in playing these games. The air force believes these kids will be our outstanding pilots should they fly our jets.” Reagan’s gaffe is today’s conventional wisdom.
Modern Vauxhall
It’s important to remember that brawler games such as The Warriors are only one segment of the video game industry. Nor is the player with a victim in a chokehold necessarily a disaffected youth. “The biggest misperception is that gamers are teenage boys playing games in their parents’ basements,” says Dan Hewitt of the ESA. “The average age of gamers is now 30. These are adults who played video games as children and kept playing as adults.”
Nonetheless, parents and politicians have raised concerns for years. In 1993, two Democratic Senators, Joseph Lieberman and Herb Kohl, sponsored hearings on video game violence. Parents frequently complain about the confusing rating system for video games, developed by the Entertainment Software Rating Board, which includes categories for early childhood, teens, mature audiences, and adults only, among others. Last year, as the New York Times reported, New York City councilman Eric Gioia, a Democrat representing Queens, called for the city’s mass transit authority to remove advertisements for the best-selling game Grand Theft Auto: San Andreas, because he claimed the game encouraged “bigotry, racism, misogyny, and hate.”
“The brouhaha recently about state legislation is really a cyclical issue,” Hewitt told me. “In the 1920s and 1930s people were concerned about movies and morals. In the 1950s it was Elvis swiveling his hips. Today it’s video games. Ten or fifteen years from now we won’t be having this conversation.” Nobody really expects this conversation to set any legal limits, since video games are considered a protected form of First Amendment speech. But Hewitt is correct that changing mores will likely yield greater tolerance for more explicit gaming fare. And experts disagree about the impact of violent images, whether in video games or television and movies. Regardless of whether there is a provable, causal link between playing violent video games and committing acts of real violence, however, the images in video games are of a peculiarly searing nature. Hours after I had played a first-person shooter game, I could not erase the over-the-gun-barrel perspective from my mind, nor could I expunge from memory the image of my enemy’s head exploding in a profuse, bloody mess when I shot him (or the fleeting feeling of satisfaction that my “kill” gave me). As with pornography, violent video game images might not lead to antisocial behavior (indeed, for some people, they might serve as a substitute for antisocial behavior). But images have influence, and it is not merely moralizers who are concerned about their long-term effects, particularly on children who are already living in an image-saturated culture.
The industry is not immune to the public’s lingering perceptions of video games as mindless, violent entertainment. In his “State of the Industry” speech at this year’s “E3,” the annual industry confab in Los Angeles, ESA president Douglas Lowenstein called on the industry to move beyond existing genres to create games with social and political relevance. “If we can make games about terrorism,” Lowenstein said, “why can’t we make compelling games about politics or global warming? Why can’t there be games which force players to struggle with weighty moral and ethical issues within compelling game worlds?” The possibilities are endless: a first-person syringe-shooter game called “Dr. Kevorkian’s Waiting Room”? A scientific fantasy game called “Grand Stem Cell Challenge”? Or perhaps a special add-on for “The Sims: The Aging Society,” which would feature bespectacled and bow-tied Brookings Institution scholars who lead neighborhood discussions about Social Security reform, and Sims neighbors whom you can organize for a March on Washington to demand prescription drug benefits.
Video games are also merely one component of the entertainment and technology industry’s holy grail: the digital living room. Time magazine, interviewing Bill Gates about the new Xbox recently, reported that Gates’s hope is to turn “the U.S. living room into a digital, wireless, networked nerve center.” The reporter got a little carried away, calling the Xbox “the Tolstoy Machine,” and praising the sophisticated graphics and the new emotional range of the characters. But he is correct to identify the company’s larger ambitions. Microsoft’s new Xbox is, perhaps, the Trojan Horse that will eventually deliver access to more than video games in the American living room: Gates hopes to see digital music, photos, and movies and television on demand grow out of the Xbox platform. “You gotta get in there because certain members of the family think it’s a must-have type thing,” Gates said of the Xbox. “But the way to cement it is as a family experience.” Microsoft has even developed an acronym for this effort: it is promoting the D.E.L., or “digital entertainment lifestyle.” It is also a lifestyle we are rapidly embracing, as the television replaces the hearth and new technologies from cell phones to the Internet mediate every dimension of home life.
Video games are, in effect, our modern Vauxhall. Like those old pleasure gardens of eighteenth-century London, they are a masterpiece of faux reality. They are the result of much careful and creative planning. They invite us to assume different identities, at least for the time we stroll through their imaginary worlds. And they exist to give us pleasure by tricking our senses. Among the attractions of Vauxhall’s elm-lined Grand Walk were three arches meant to mimic the Ruins of Palmyra and pavilions meant to resemble ancient temples. But as in those pleasure gardens of yesteryear, dangers exist for undisciplined users—dangers the video-game industry and many video game enthusiasts are loath to acknowledge.
Imagined Communities
Improved hand-eye coordination is not the reason most people play video games. It is the opportunity to be somebody else—somebody else with limitless powers and absolute control. As one gamer told the authors of the recent study, Got Game, “Games give us freedom to be, think, do, create, destroy. They let us change the answer to the question, ‘who am I?’ in ways never before possible. Games let us reach the highest highs and the lowest lows, let us play with reality and reshape it to our own ends. They give us hope and meaning, show us that our journey through life is not pointless, and help us accomplish something at the end of the day.” “With first-person shooter [games] I can get out my frustration,” a woman told a Wyoming newspaper recently. “It’s therapeutic for me.” Indeed, there is something strangely elating about figuring out how to manipulate the controller in the perfect way during a gunfight, or the feeling of satisfaction when, having adjusted to the sophisticated, tilt-a-whirl-style dimensions of the game, you successfully take down an opponent. You physically respond to the virtual world in which you’re engaged—your heart races, you experience momentary shock or disorientation when a new enemy emerges to clobber you, you shift your body to mimic the movements of your character. In other words: You’re in the game.
Shooter games might offer the frustrated housewife a therapeutic outlet for her rage, but it is role-playing games that offer players the widest range of possibility. These games allow individuals to choose their own characters and play out complicated fantasies, often online with hundreds of thousands of others. Gamers create an imaginary self and form emotional connections in the virtual world. One study from Nottingham Trent University discovered that 15 percent of people who play MMORPGs routinely switch genders. So connected and realistic are these video game worlds that they can, at times, begin to impinge on the real one. Writing more than ten years ago about an early online role-playing game, Julian Dibbell recounted the aftermath of “a rape in cyberspace,” a virtual assault, committed in an online meeting space, that set off a chain reaction of soul-searching among the strangers involved. After narrating the efforts to censure and punish the offender and the range of sincere and vociferous reactions, Dibbell concluded that what happened in this simulated world “is neither exactly real nor exactly make-believe, but nonetheless profoundly, compellingly, and emotionally true.” Or, as Howard Rheingold asked in The Virtual Community, “What kind of multiple distributed system do I become when I live part of the day as a teenage girl in a chatroom, part of the day as a serious professional in a webconference, part of the day slaying enemies as Zaxxon, the steel-eyed assassin of an online gaming tribe?” So powerful is the lure of online role-playing that in the growing niche of Christian game makers, developers are wary of role-playing games that promote any form of spiritual or moral relativism. As Jonathan Dee noted in the New York Times Magazine in May, “The Christian gamers’ position is that, while you may fight the Devil and lose, you may not fight as the Devil.”
Among frequent gamers, according to the ESA, 43 percent report that they play games online, an increase from the 31 percent who did so in 2002. One of the most popular online role-playing games, a fantasy game called EverQuest, has hundreds of thousands of registered players. (Sony, which created the game, reported for several years that it had more than 400,000 subscribers.) On average these EverQuest players spend 22 hours per week playing the game, hence the nickname: “EverCrack.” “I started playing in 1998,” one user noted in a review on Amazon.com. “They have a little feature in the game that will tell you how ‘old’ your character (’toon) is. I remember when I looked at the ‘age’ of the character and saw 70 days. 70 days? 70 days at 24/hours a day = 1,680 hours. What a tragic waste of valuable time in my life. I pulled the plug.” He advised other potential EverQuest purchasers to forgo the game and instead “LIVE LIFE!”
Friends and family members of EverQuesters often feel abandoned and angry about the amount of time (nearly one day a week for the average player) that their loved ones spend with the game. A few have even formed support groups, such as “EverQuest Widows,” an e-mail listserv that has more than 3,000 members and describes itself as “a forum for partners, family, and friends of people who play EverQuest compulsively. We turn to each other because it’s no fun talking to the back of someone’s head while they’re retrieving their corpse or ‘telling’ with their guild-mates as you speak.”
“These games are meant to be addicting,” says Dr. Maressa Hecht Orzack, director of the Computer Addiction Studies Center at McLean Hospital and a clinical assistant professor at Harvard Medical School. “EverQuest keeps extending the life of the program, so that if someone gets to Level X, the game maker suddenly adds on more levels. You can understand why some people on EverQuest Widows are so upset and devastated by what happens.” Most people, Orzack says, start playing these games “just for fun.” But “they really get hooked. The fantasy is immediate and it offers immediate gratification. What they are getting is what we call the three AAAs—an affordable, accessible, and anonymous experience.”
Orzack is no stranger to technological addictions. She became interested in them after finding herself spending hours playing the card game Solitaire on her computer. “Solitaire was my way of escaping,” she says. “I would find myself losing sleep, I was missing deadlines, and one time my late husband found me asleep at the computer.” As she worked her way out of her own habit (“Instead of playing to win, I’d play for time”), she developed forms of cognitive therapy to help others. “Your thoughts determine your feelings,” she says, “so I ask patients, what do you expect to find when you turn on the computer?” Their responses are varied. “Some want a sense of belonging or a sense of identity,” she says. “Others want a sense of power—they can be leaders of a guild, they can be people who are well respected online, but might not necessarily be otherwise.”
It is this feeling of control in the simulated world—in contrast with the real world, which is often an exercise in frustration and helplessness—that is part of what concerns Orzack when it comes to children’s use of computer and video games. “When the grades go down. That’s when the parents call me,” she says, in a slightly exasperated tone. “They use these as electronic babysitters, and that’s not conducive to good parenting.” When I asked Dan Hewitt his thoughts on video game addiction, a subject the ESA doesn’t address, he said, “It’s up to the parents to see that children are using video games responsibly and making sure that games are part of a larger, well-rounded lifestyle.” True enough—but also a way to avoid any responsibility for the sophisticated marketing and hyper-addictive quality of the games themselves.
Some parents find out about a child’s gaming habits too late. “We’re just people trying to help each other. We’re modeled after Alcoholics Anonymous,” says Liz Woolley, the founder of On-Line Gamers Anonymous (OLGA). Woolley started the organization in the summer of 2002 after her 21-year-old son, Shawn, who was undergoing treatment for depression, became hooked on playing EverQuest. He committed suicide on Thanksgiving Day 2001, and he was sitting in front of his computer screen when he did. “I had no intention of getting this involved,” Woolley told me, “but when my son died, and an article went out about him, it received a big response. I was shocked because I didn’t know this was happening to so many people. Nobody was talking about it.” Woolley calls gaming addiction an “underground epidemic.”
OLGA now offers support to people online, and Woolley wants to see a broader education effort focused on gaming addiction. “In high schools they talk about drugs, alcohol, cigarettes, and AIDS,” Woolley notes. “Why not include games in those lectures?” Woolley has little patience for those who claim that gaming brings real-world benefits. “It’s called denial,” she says briskly. “If they are learning all that stuff on games, I ask them, then why aren’t you a leader instead of sitting in front of a screen?” As for connection in cyberspace, Woolley says that many of the people she talks to who are hooked on gaming say, “I have friends in the game.” “Is this our society?” she asks plaintively. “We consider a pixel a relationship?”
It appears that we do. A 2005 AOL Games poll found that one in ten gamers claims to be addicted, and more than one in four admits to losing a night of sleep to play games. A study conducted by the Pew Internet & American Life Project found that gaming has become a major part of student life on college campuses, with nearly half of the college students who play video games admitting that games interfere with their studying “some” or “a lot.” Writing a few years ago in the Weekly Standard, Christopher Caldwell ruefully described his brief love affair with the game Snood, which was “starting to eat up whole afternoons” of his time. “I also begin to understand for the first time what an addiction is,” he confesses. “It’s a desperate need to simplify. An addiction is a gravitation towards anything that plausibly mimics life while being less complicated than life.”
Don’t Worry, Be Virtually Happy!
The GameStop in Montgomery Mall in Bethesda, Maryland, at first glance looks like a slightly seedy video store. The brightly lit sign on the storefront (“Game” in white and “Stop” in candy apple red) is plain, and the only other attempts to lure passersby are two small cardboard displays flanking the entrance. One says “HOT!” in red letters, and the other contains two promotional posters: one promising “Intense Action!” from the video game Untold Legends and another for Unreal Championship 2 that says, “Bring a knife to a gunfight” and features an obscenely buxom, blue-haired, armor-clad woman brandishing a sword.
Inside, the store is nondescript, and far less elegant than the Mimi Maternity shop and Nordstrom’s department store on either side of it. Unlike those stores, with their ambient lighting and hardwood floors, GameStop has stained gray carpeting, buzzing fluorescent lights, and a slightly chaotic layout. But on this beautiful spring Sunday afternoon, it is packed, mainly with boys between the ages of eight and twelve, most of whom are slightly overweight. Many of them are lingering around the three gaming displays—one for each of the corporate gaming giants: PlayStation, Nintendo, and Xbox. The demo games are placed prominently in the store’s floor-to-ceiling glass front, turning the gamers themselves into a kind of human window display.
They don’t seem to notice. One chubby boy, about ten years old, is deep into the PlayStation 2 game, Lego Star Wars, which features roly-poly characters modeled after the “Star Wars” movies. The boy has chosen the role of Obi-Wan Kenobi and is dueling a bad guy with his lightsaber. Occasionally he grunts, or twists his shoulders sharply as he maneuvers the controller, even though his eyes never leave the screen in front of him and he rarely blinks. Once he mutters, “I need more power.” He works his way through the game with considerable skill.
After completing his first game, he is joined by a taller, slightly older boy, who asks if he can play too. It’s clear they’ve never met before, but the first boy mumbles, “Sure,” and soon they’ve started a two-person version of the game. Within moments they’re engaged in an odd sort of mediated conversation, offering each other playing tips, praising a move, or groaning in sympathy when one of their Jedi knights is slain. “Get him!” “No, over there, you need that.” “Try going through that door.” But they never look at each other, and when the taller boy’s mother comes to retrieve him, the younger boy says nothing as his companion leaves, continuing to stare raptly at the screen as he cues up another game. When he finishes the game and turns to walk away, I ask him how long he’s been playing video games. Looking at me with a slightly unfocused expression, he answers, “Since forever,” and walks away. As he leaves, I notice him reaching into the pocket of his baggy athletic shorts for something: it is a Game Boy Advance SP, and as he disappears from view, I see he has already started playing another game, adopting the hunched, fixated posture before his handheld screen that he had assumed before the PlayStation moments earlier.
If this small glimpse into the world of the suburban gamer awakens concern about short attention spans, poor social skills, and video-game-inspired violence, then you haven’t been reading the latest literature about this new generation of gamers. In Got Game: How the Gamer Generation is Reshaping Business Forever, John C. Beck and Mitchell Wade make a sweeping claim: that the generation gap between Boomers and those who came after can be explained by a simple difference—the younger generation’s experience “growing up with video games.” Based on a survey of 2,500 Americans, largely in business professions and with varying levels of gaming experience, Beck and Wade offer a relentlessly positive portrait of gaming.
To find the good in gaming, however, often requires strenuous leaps of logic and specious interpretations of the survey results. For example, among the standard virtues taught by games, Beck and Wade note, are the following messages: “You’re the star,” “You’re the boss,” “You’re the customer and the customer is always right,” “You’re an expert,” and “You’re a tough guy.” The authors report that gamers are far more likely to consider themselves as “knowledgeable” or a “deep expert” in their chosen fields, regardless of actual experience or objective assessments of their abilities. This generation, in short, is not lacking in self-esteem. “They are so confident of their skills, in fact, that they believe they don’t have to work as hard as other people,” the authors write, and in a startling turnabout of the work ethic, “the more experience respondents have with digital games, the less likely they are to describe themselves as hard workers.” Common sense suggests that this might pose a challenge in a workplace environment, where real expertise and hard work are often essential. But Beck and Wade see this as an unalloyed good—just the kind of overconfidence the business world needs, evidently. “In ways they themselves don’t even notice,” the gaming generation “really seem to believe that the world is their video game.”
The most intriguing differences in Beck and Wade’s study are not the ones between the younger gaming generation and the older nongaming generation; they are the contrasts between young gamers and young nongamers. For example, in answer to the question “I play to do the things I can’t in real life,” 40.8 percent of younger, frequent gamers said yes, compared with only 14 percent of younger nongamers. Nongamers also appear to bring different values and priorities to the workplace. To the statement, “It is important to receive a high salary and good benefits,” 75.4 percent of younger frequent gamers answered yes, compared to 66.3 percent of younger nongamers. The only bit of bad news the authors allow is this: “In our survey we also found that gamers can be emotionally volatile. By their own estimate, they are more likely than other groups to be easily annoyed or upset. In a word, they can be irritable.” But one person’s irritable child is another person’s master of the universe—savvy, self-confident, and more sociable than their peers.
Similar claims have come from the ESA, which notes that “gamers devote more than triple the amount of time spent playing games each week to exercising or playing sports, volunteering in the community, religious activities, creative endeavors, cultural activities, and reading. In total, gamers spend 23.4 hours per week on these activities.” But when I asked about the survey data, Dan Hewitt of the ESA told me that it was a random national telephone sample of 802 people (small by polling standards), conducted by Peter D. Hart Research in September 2004. “The interesting thing,” Hewitt told me, “is that the more they play games, the more they are involved in their communities.” The responses were merely self-reported opinions; the survey included no more rigorous follow-up using techniques such as time-use diaries or observational monitoring. This is hardly a scientific basis for video-game euphoria, and one might forgive the skeptic who sees such surveys for what they probably are: ways to make video games seem innocent at worst and praiseworthy at best.
The Self-Flattery Curve
The authors of Got Game are not the only video game enthusiasts putting pen to paper to defend the medium. “Think of it as a kind of positive brainwashing,” says Steven Johnson, author of the recent book, Everything Bad is Good For You: How Today’s Popular Culture is Actually Making Us Smarter. “The popular media steadily, but almost imperceptibly, making our minds sharper, as we soak in entertainment usually dismissed as so much lowbrow fluff.” Johnson dubs this the “Sleeper Curve,” after the 1973 Woody Allen film that hilariously parodies science fiction movies, and he’s not especially modest about its supposed effects: “I believe the Sleeper Curve is the single most important new force altering the mental development of young people today, and I believe it is largely a force for good: enhancing our cognitive faculties, not dumbing them down.”
Johnson includes video games in his list of new and improved fluff, and argues that a “strong case can be made that the power of games to captivate involves their ability to tap into the brain’s natural reward circuitry.” Games promote something he calls “collateral learning” and make us adept at “exercising cognitive muscles.” Video games and television should be seen, Johnson argues, “as a kind of cognitive workout, not as a series of life lessons.”
Johnson’s book has been well-received, earning kudos from no less a trend-spotting guru than Malcolm Gladwell at the New Yorker. It has done so for a simple, and democratic, reason: it flatters our own self-image. Johnson is not, as he repeatedly claims, challenging the conventional wisdom; he is reaffirming it. In a democratic culture, people want to be told that fulfilling their desires is actually good for them, that self-interest is also self-improvement, that the most time-consuming habit is also time well-spent. Attacking popular culture, which is the underpinning of so much of our conventional wisdom, usually earns one the sobriquet of Puritan or crank. Praising popular culture, which few people can resist, can give any modern-day guru a temporary following.
“The sky is not falling,” Johnson reassures us. “In many ways, the weather has never been better. It just takes a new kind of barometer to tell the difference.” In other words, it isn’t that things are getting worse, or that we’ve exchanged one form of entertainment for another that is more passive and less inspiring to the imagination. We’re simply not looking at the world in quite the right way, with quite the right instruments. This, of course, is a tried and true formula of persuasion. It is the method of the quack. Johnson is the modern inversion of the form—unlike the old-fashioned quack, who falsely tells you that you are sick when you are well, Johnson tells us that we’re actually healthy when we might be sick. Quacks always give the public what they want; this is the key to their success. And Johnson is our modern St. John Long, the illiterate nineteenth-century charlatan who, according to Brewer’s Dictionary of Phrase and Fable, claimed to have created a liniment that allowed him to distinguish “between disease and health.” Like Long’s liniment, Johnson’s “Sleeper Curve” is a temporarily comforting but ultimately irritating device of little long-term value.
Quacks are also notoriously disingenuous, altering their message to suit their audience. In his book, Johnson says, “The television shows and video games and movies that we’ll look at in the coming pages are not, for the most part, Great Works of Art,” later adding, “I want to be clear about one thing: The Sleeper Curve does not mean that Survivor will someday be viewed as our Heart of Darkness, or Finding Nemo our Moby Dick.” But writing on his personal blog the week after his book was released, Johnson argued just that: “We don’t have a lot of opportunities in culture to tell a story that lasts a hundred hours, but that’s exactly what we’re taking in on The Sopranos or Lost or Six Feet Under. I feel totally confident that those shows will stack up very nicely against Madame Bovary a hundred years from now, if not sooner.” Like all good mountebanks, Johnson, aiming to please as broad an audience as possible, finds consistency a crutch. The difference between Johnson and an ordinary charlatan, however, is that Johnson seems to have had the foolish bad luck to start believing his own nostrums.
And nostrums are plentiful in this book. In order to sustain his sweeping claims about popular culture, Johnson must ignore the opportunity costs of doing things like playing video games; as a result he does not adequately distinguish between gaming and other forms of intellectual activity. Nor does he give a thought to where these games are being played—in the home—and how that fact has transformed family life. As Johnson himself notes, the average video game takes forty hours to complete. For him, these games need not do more in that time than entertain and exercise some of our cognitive faculties. “Those dice baseball games I immersed myself in didn’t contain anything resembling moral instruction,” he writes of the games he played in the pre-video game era, “but they nonetheless gave me a set of cognitive tools that I continue to rely on, nearly thirty years later.” Perhaps they did, although if this book is the evidence, his thesis is clearly a failure. But what Johnson does not recognize is that the choice to play games necessarily means that other activities will not occur, whether reading, making music, or even playing real, rather than virtual, baseball. We might point to the complex nutrients in dog food, but the fact remains: a dog that does little but eat will be unhealthy, no matter how many nutrients his food happens to contain, or how often he exercises his jaws in doing so.
The evidence Johnson enthusiastically marshals to convince the reader of his claims is risible, rendering his sweeping case for the intellectual significance of video games unsustainable. He is keen, for example, on noting an increase in IQ since television and video games became more sophisticated, and cites this as evidence of his ballyhooed “Sleeper Curve.” Of this rising IQ, called the “Flynn effect,” he concedes that it is most pronounced for g—or “fluid intelligence.” “Tests that measure g often do away with words and numbers,” Johnson writes, “replacing them with questions that rely exclusively on images.” What this proves, then, is that we’re becoming more of an image-based culture, more adept at reading visual signs and symbols; this does not necessarily mean we’ve become objectively smarter. As even Johnson admits, “If you look at intelligence tests that track skills influenced by the classroom—the Wechsler vocabulary or arithmetic tests, for instance—the intelligence boom fades from view.”
Johnson is also selective in his use of evidence, a practice that renders his arguments consistently unreliable. The second half of the book, which makes the case for the edifying effects of television, is the most egregious example. Johnson never mentions the fact that we spend more time watching television than we do engaged in any other activity besides sleeping and working, and he ignores entirely research by neuroscientists that has demonstrated the negative effects of television on young children’s brain development. “Parents can sometimes be appalled at the hypnotic effect that television has on toddlers,” Johnson says soothingly. “They see their otherwise vibrant and active children gazing silently, mouth agape at the screen, and they assume the worst: the television is turning their child into a zombie....But these expressions are not signs of mental atrophy. They’re signs of focus. The toddler’s brain is constantly scouring the world for novel stimuli.” Johnson’s claim is entirely specious: the American Academy of Pediatrics “recommends no more than one to two hours of quality TV and videos a day for older children and no screen time [including computers and video games] for children under the age of 2.” A study released last year in the journal Pediatrics found a link between hours of television viewing in young children and increased risk for developing Attention-Deficit/Hyperactivity Disorder. Everything bad is evidently not good enough for Johnson to include in his book when it contradicts his questionable thesis.
Finally, it is worth asking: if everything bad is now good for you, what happened to the old “good” things? And how do we now order our priorities? What, in other words, is the new bad? At the heart of Johnson’s argument is a desire for the complete erosion of the distinction between high and low culture; why should we recognize a difference between a game and a book, after all, if both exercise our “cognitive muscles”? What is important, Johnson says, is that the new media are improving. “It’s important to point out that even the worst of today’s television...doesn’t look so bad when measured against the dregs of television past,” he says. But doesn’t this set standards too low? There has always been an important and healthy suspicion of popular culture and mass entertainment—much of it stemming from sheer snobbery, of course, but some of it from a recognition of the often-steep opportunity costs of consuming low culture rather than high, and of indulging comfortable distractions at the expense of industry. Long before the era of video games, Boswell lamented that he was spending far too much time staring into the fire and had better get himself back to work; he realized, in other words, that although enjoyable, sitting in front of the fire takes a person away from other, more productive pursuits. In a world where even our lowest entertainments are deemed good for us, how can we encourage moderation and self-regulation of any entertainment?
In the end, Johnson’s argument rests on two great errors. He tries to defend the utility of video games and other amusements as a route to self-improvement by casting them as a form of mental gymnastics. But we are left to wonder whether other workouts of the mind, so to speak, might not serve us much better. He also argues that video games and television are just as good as other kinds of leisure—like reading a great book, conversing seriously with friends, or playing a real sport. But are we really better off devoting ourselves to the seductive pleasures of the virtual realm, however sophisticated and entertaining they may be? To say that something is a creative pleasure is one thing; to claim that it is, in fact, actively good for you is quite another. Chocolate is a pleasure, as is champagne, and both, in the right hands, can be made and experienced creatively. But a steady diet of chocolate and champagne is not healthy. This is a distinction that Johnson fails to recognize.
What Are Games For?
Johnson’s book largely simplifies and synthesizes the work of others. In What Video Games Have to Teach Us About Learning and Literacy, for example, James Paul Gee, the Tashia Morgridge Professor of Reading at the University of Wisconsin–Madison, outlines the benefits of video games in even greater detail. Like Saul on the road to Damascus, Gee was struck by the power of video games after trying to play one of his son’s games and discovering how challenging it was. He posits that “better theories of learning are embedded in the video games” many schoolchildren play “than in the schools they attend,” and argues that “the theory of learning in good video games fits better with the modern, high-tech global world today’s children and teenagers live in.” Gee becomes so enthusiastic about games and their “semiotic domains” that he claims the virtual relationships children develop “have important implications for social justice.”
Gee, in other words, is eager to put the Xbox in the sandbox. “Games encourage exploration, personalized meaning-making, individual expression, and playful experimentation with social boundaries,” he enthuses, “all of which cut against the grain of the social mores valued in school.” He argues for a “new model of learning through meaningful activity in virtual worlds as preparation for meaningful activity in our post-industrial, technology-rich real world.” But Gee doesn’t show us how these virtual gaming skills are actually transferable to real-world situations. Like the authors of Got Game, Gee is hopeful that they are transferable and convinced that they will improve children’s educational experience. But wishful thinking is not the same as evidence, and evidence is certainly needed when such broad claims are being made on behalf of electronic entertainments. Although Gee’s research suggests intriguing possibilities for new educational tools, we must first answer the question of their effectiveness before we put a video game in every classroom. And we must grapple with the evidence on the other side of this equation. As William Winn, who heads the University of Washington’s Human Interface Technology Laboratory, told the authors of Got Game, gamers really do think differently: “They leap around,” he says. “It’s as though their cognitive structures were parallel, not sequential.” Lost amid the enthusiasm for gaming, however, are two questions: Does “different” mean better? And what, in the end, are games for?
One of the first board games, created for children in the eighteenth century, was “A Journey through Europe,” a game that taught geography in the course of play. By the beginning of the nineteenth century, children were playing games with titles such as “The New Game of Virtue Rewarded and Vice Punished, For the Amusement of Youth of Both Sexes,” in which virtues such as Temperance and Honesty were rewarded and Obstinacy and Sloth harshly punished. Games were a structured form of play used to train children in the virtues of a particular society, a practice that continued into our own era. As media critic Michael Real has argued, even less didactic board games “such as Monopoly and Clue tended to teach young people to develop strategy, think and plan ahead, be patient, play fairly, take turns, and follow written directions.” Video games are different, says Real. They “are based instead on classical conditioning theory: the sights, sounds, and colors are reinforcers. They are fast-paced and entertaining. They teach some of the same abilities as older board games, yet they reduce, without necessarily eliminating, the interpersonal interaction.”
Games can be appealing outlets for adults as well. As Christopher Lasch once observed, “Among the activities through which men seek release from everyday life, games offer in many ways the purest form of escape. Like sex, drugs, and drink, they obliterate awareness of everyday reality, but they do this not by dimming awareness but by raising it to a new intensity of concentration.” But there is something unusual about the games people play today. As Steven Johnson enthusiastically notes, “one of the unique opportunities of this cultural moment lies precisely in the blurring of lines between kid and grownup culture: fifty-year-olds are devouring Harry Potter; the median age of the video game-playing audience is twenty-nine; meanwhile, the grade-schoolers are holding down two virtual jobs to make ends meet with a virtual family of six in The Sims.” One father, writing in the New York Times in 2004, boasted, “I was able to gain heroic status in the eyes of my daughter by helping her fight off a nasty gang of thugs” in a video game. “Perhaps these are becoming the new essential skills for parents.”
In fact, adult enthusiasm for video games is part of a broader transformation—what communications professor Joshua Meyrowitz in his book No Sense of Place described as “the blurring of childhood and adulthood.” Children and adults now dress more alike, he notes, and they have “begun to behave more alike.” It isn’t unusual, Meyrowitz observes, to see adults engaged in “children’s play.” “The latest generation of playthings—video and computer games—are avidly played by both adults and children.” Whether this is a good thing or not “is difficult to say,” Meyrowitz concludes; “for better or worse, though, childhood and adulthood, as they were once defined, no longer exist.”
Critics have long recognized a difference between structured games and unstructured play, particularly for children. Play is supposed to leave something to the imagination. Hearing about an ogre from a fairy tale, children are free to imagine any number of frightening creatures: perhaps a more terrifying version of the neighbor’s scary dog, or the monster from another book, such as Where the Wild Things Are. For the video-game generation, an ogre need not exist in the mind’s eye. It is already a fully realized creature, like the ogre in Capcom’s new game, Haunting Ground, whose heroine, Fiona, awakens “dressed only in a towel, locked in a cage within a decrepit mansion,” according to the New York Times. “After escaping her cage and changing into a miniskirt, Fiona discovers that in spite of her sexy outfit she is not beset by lecherous suitors but by a hulking ogre who wants to eat her.” Video games represent the commodification of imagination (and, not surprisingly, the homogenization of fantasy). They are, at root, the expression of someone else’s fantasies—fantasies that you can enter and manipulate a bit, but fantasies that you cannot alter at a fundamental level.
Video game fantasies, although graphic and sophisticated, are also sanitized in a way that real play is not. Video games carry none of the risk of physical harm or personal embarrassment that attends real games and real sports. When a child plays outdoors, he might at least risk skinning a knee; when a child plays soccer on a team, she might get nervous as she stands on the field waiting for the opening whistle, or feel embarrassed when she makes a mistake. But this is not the case with video games. It is perhaps telling that the biggest risks to gamers are ailments associated with modern adult work: carpal tunnel syndrome and eye strain.
Video games also take us indoors, and as Richard Louv, author of Last Child in the Woods, argues, this contributes to “nature-deficit disorder.” Among the research cited in his book is a 2002 study conducted in Britain that found that eight-year-old children were perfectly capable of identifying the Pokémon video-game characters but were flummoxed when presented with images of otters, beetles, and oak trees. Louv argues that a combination of television, technological toys, and parents’ often irrational fears of child abduction keeps children indoors. “When you’re sitting in front of a screen,” Louv told a reporter on NPR recently, “you’re not using all of your senses at the same time. Nowhere [other] than in nature do kids use their senses in such a stimulated way.”
Doom or Dickens?
In Interpersonal Divide: The Search for Community in a Technological Age, Michael Bugeja, director of the Greenlee School of Journalism and Communication at Iowa State University, argues that games are not our cultural salvation. He describes our age as being in the grip of an “interpersonal divide,” which is the “social gap that develops when individuals misperceive reality because of media over-consumption and misinterpret others because of technology overuse.” This form of “displacement,” as Bugeja calls it, fosters “an unfathomable feeling of isolation not only in our hometowns but also in our homes—connected, wired, and cabled to the outside world.” Included among the effects of displacement are a “clash of environments, virtual and real” and the “blurring of role and identity.” This is a physical divide as well. In the past decade, Bugeja notes, “many children went from playing in parks in front of neighbors...to playing in mall arcades in front of parents...to playing in living-room consoles in front of each other...to playing online in their rooms in front of no one in a place that is actually not there.” As a result, “our identities no longer are associated with community but with psychographics—statistics categorizing us according to the products that we purchase and the services that we perceive to need.” Johnson’s cognitive gym and Gee’s game-enabled classroom are Bugeja’s consumer-friendly isolation chambers. “We need to spend more leisure time in the real rather than the virtual world,” Bugeja argues.
We are, as Bugeja observes, a nation more enthusiastic about entertainment than any since ancient Rome. With the First Amendment firmly entrenched in the nation’s politics and culture, critics of video games are fighting a doomed battle by focusing exclusively on content; they should look instead to the broader social transformation wrought by leisure technologies. The popularity of video games does not mean, as boosters would have it, that we face a future of improved hand-eye coordination and well-exercised “cognitive muscles” with no negative consequences. We must grapple with the opportunity costs of replacing old forms of play with new ones like video games, and we must come to terms with what it means for the development of identity when we so eagerly blur the line between reality and fantasy and between childhood and adulthood.
In some sense, video games are creating a new group: the isolated, childlike crowd. Gamers share a passion, a mindset, and even certain physical skills. They are committed and aroused. Yet they are physically separated from each other. What does their immersion in virtual worlds do to their experience of the real one? Are the skills learned through video-game play truly applicable in the real world? As Meyrowitz reminds us, “Exposure to information is not the same as perception and integration of information.” Nor might the cognitive tricks learned by play in the virtual world have much use, in the long run, in the real one.
In previous eras, games were supposed to provide more than mere play; they were supposed to improve us morally or physically. The conceit of contemporary times is that games improve our intelligence, and that they do this so well that we ought to integrate them into more spheres—the classroom, the boardroom, the playground—as replacements for less advanced ways of learning. Our embrace of video games is yet another chapter in the ongoing story of technology-as-liberation.
But this story isn’t as simple as it first appears, and we are failing to ask some important questions. With video games, we focus so much on how we can make the virtual world more like the physical world that we forget to ask about movement in the opposite direction. In an age when people are spending much of their work time and most of their leisure time in front of computers, televisions, and video-game screens, how is the virtual world affecting the physical one? Are we becoming so immersed in virtual reality that we end up devoting more time to the care and tending of our multiple, virtual identities than to the things in the real world that contribute to the formation of healthy identity? After all, there are a great many things you can’t do in virtual reality: you cannot satisfy material needs for food, water, or genuine physical affection; you cannot build character or develop true interpersonal skills; and although some people might be able to satisfy certain emotional needs, such satisfactions are rarely permanent.
Today’s video games are works of creativity, technical skill, and imagination. They are, in appropriate doses, healthy and satisfying playgrounds for experimentation with different identities and exploration of different worlds. But video games carry the risk—as all amusements do—of becoming the objects on which we lavish so much time and attention that we neglect the true and lasting things around us, such as our family, our friends, and our communities. Societies get the games they deserve. But when a society claims for its games the insights, sophistication, and deeply humane wisdom that other forms of culture and community have long offered—when it places Dickens alongside Doom and replaces the family hearth with an Xbox—it is well on its way to having something more alarming than its identity stolen. It risks becoming a society without true loves or longings, filled with individuals who find solace only in make-believe worlds where the persons they really are do not really exist.