Daniel Tosh is Important

1 04 2012

Daniel Tosh is a stand-up comedian and television host.  I doubt many people would describe him as particularly socially conscious in either of those roles.  His show, Tosh.0, is a hybrid of stand-up, sketch comedy, and internet video commentary and includes potentially offensive material in bits such as “Is It Racist?” and “Rico’s Black Clip of the Week.” I think that Daniel Tosh, and Tosh.0 in particular, is a prime example of postmodern entertainment that pushes the boundaries of social issues in a way that results in elevated discourse rather than crass exploitation.

Tosh.0 is Postmodern

The television show is nowhere near original. Despite my description above, it is inherently a clip show.  Its reliance on home videos posted on the internet makes it the America’s Funniest Home Videos of the 21st Century.  The format of a host in front of a green screen commenting on clips owes its existence to Talk Soup (later renamed The Soup), originally hosted by Greg Kinnear.

Of course, something doesn’t have to be original to be entertaining.  Tosh’s style in delivery and class clown grin make the show engaging and somehow personal, and the adult content of both the videos and the commentary give the show a bite not found in either television predecessor.  The show plays like a highlight reel of internet comment posts, weeding out the merely shocking, racist, or pithy and showcasing the truly snarky and hilariously cynical.

The unoriginality of the show seems to categorize it as mere pastiche, but Tosh.0 is unabashedly self-aware.  From the inclusion of the writing and production crew in sketches to the mockingly prophetic sign-offs before the final commercial break of each episode (Tosh signs off with a reference to a cancelled Comedy Central show:  “We’ll be right back with more Sarah Silverman Program!”), Tosh highlights not only the mechanisms of the show’s production, but also the reality that the lifespan of the show itself is limited.  The sign-off was perhaps more prescient in the early days of the show.  As with many Comedy Central shows, its low production costs come with low expectations from the network—cancellation of a Comedy Central show is a foregone conclusion.  That is, of course, until it catches fire like South Park did, or Chappelle’s Show, or even The Daily Show.

Tosh has also made reference to his predecessors on air.  “Hey, I heard there’s some show called The Soup that totally ripped off our format!  The idea for this show came to me in a dream!  With Greg Kinnear, except it really wasn’t Greg Kinnear…”  In this season’s Web Redemption of a horrible sketch comedy trio, Tosh led the segment saying, “Hey, sketch comedy is hard.  If someone brilliant like Dave Chappelle can go crazy doing it, what makes you think you’ll be any good?”

Tosh.0 is Socially Conscious

The fraternity with Chappelle is based on more than hosting popular Comedy Central programs.  Richard Pryor paved the way for Dave Chappelle, and Dave Chappelle paved the way for Daniel Tosh.

Chappelle is credited with approaching issues of race in a comedic way on television unflinchingly and uncompromisingly.  He made fun of racism—not just white racism toward blacks, but also black racism toward whites and Asians, and even other blacks.  It can be cynically concluded that Chappelle and Pryor (who did the same thing thirty years earlier in stand-up comedy) could get away with calling out black racism because they themselves were (are) black.  Daniel Tosh proves that the race of the commentator is not the determining factor for this kind of statement.

The clip that spawned the recurring bit “Is It Racist?” was a video of an Asian toddler in a pool, held afloat by his or her head suspended in a plastic floating ring.  Among many jokes, Tosh cracked, “Is it racist if I can’t tell if her eyes are open or not?”  After a brief pause, he said indignantly, “I’m saying ‘Is it?’  Yes… yes, I’m being told by the audience that yes, it is racist.”

Jokes about racism regarding African Americans, Latinos, Asians, Jewish people, and even white people are all approached with a level of honesty and self-effacement that makes them engaging rather than mean.  In a web redemption from this season, Tosh interviews a couple whose wedding was ruined by a sandstorm.  The groom was Mexican and the bride was white.  Rather than shy away from racial comments when in the actual presence of a minority, Tosh addresses it head-on.  Any menace in this line of questioning is deadened by the fact that Tosh is conducting the interview in a heart-shaped hot tub.  He often uses the physical appearance of his own nearly-nude body to neutralize potentially heated or offensive confrontations.  It also helps that during these interviews, he is unabashedly positive, which is unexpected given the bite of the rest of the show.

Context is key for Tosh’s approach to topics like race, sexuality, abortion, and religion.  He is making jokes, yes.  But his delivery and his appearance, as well as the jokes themselves, communicate an awareness of his own place in the larger issue underlying the comedic bit.  In comparison, it is much harder to see positivity in the comments by viewers on Tosh.0 blog posts.  Many comments come across as simply racist, rather than as addressing racism.

Below is the clip of the Asian “Neck Tube Baby” bit from Tosh.0.  Not only is it an example of Tosh’s approach to race, it also includes the show’s characteristic reflexivity, acknowledging the production of the bit itself.

Daniel Tosh is Uplifting

I’ll be honest.  For the first two seasons of Tosh.0, I changed the channel or left the room during the “Web Redemption” segment.  I’ve never been a fan of cringe-inducing comedy, and the idea of taking someone’s most embarrassing moment, already broadcast to the entire internet, and making a seven-minute television segment based entirely on that moment, seemed too mean-spirited and too awkward for me to watch comfortably.  My fears were unfounded.

Tosh brings the people in question to Los Angeles and interviews them to begin the segment.  The interview includes the cracking of jokes, of course, but Tosh is truly laughing with the interviewee.  The redemption part of the segment is typically cheesy.  The person gets a second chance to complete whatever task went awry and made them internet famous.  A girl gets a chance to walk down stairs in a prom dress without tripping.  A guy gets a chance to park a Ford Mustang in a garage without running it through the wall.  Typically, in these bits, Tosh is the main point of comedy—often employed through the use of a goofy costume such as the Pope outfit worn for the redemption of the married couple mentioned earlier.  Most of the time, the person succeeds in their attempt to redeem themselves, even if that redemption offers little in the way of a payoff.  They still have the internet embarrassment out there, though by now they’ve probably come to terms with it.  Heck, they did agree to be on a show knowing full well that the embarrassing moment was the reason for their appearance.

In some cases, however, the person fails in their comedic-sketch attempt at redemption.  Tosh then aims the humor away from the person involved.  An appearance by Ron Jeremy after a girl falls down the stairs in a prom dress for a second time becomes a joke about Ron Jeremy (Ron Jeremy is his own joke about himself).  Dennis Rodman appears from nowhere to block a man’s attempted trick basketball shot.  That was perhaps my favorite save.  On returning to the set (these bits are shot on location and shown as clips during the hosted show), Tosh points out that for $5,000, you can have Dennis Rodman show up at your house and do whatever you want… for about five minutes, which mocks the show for paying that much for the cameo and Rodman for shilling himself out so shamelessly.

Daniel Tosh is Important

Daniel Tosh is not what I would consider an activist comedian.  He’s not out to make some great social change in the world.  He’s out to make people laugh and, if you believe his shtick, make a lot of money doing it.  But performers don’t necessarily have to be performing ABOUT an issue to make a difference regarding an issue.  It’s often a matter of bringing the conversation up.  If that approach is comedic, the conversation is that much easier to start.  Tosh’s approach is more high-brow than it may seem at first glance, and for that, we thank you.


Neil deGrasse Tyson is Wrong

4 03 2012

I like Neil deGrasse Tyson.  I think he is a warm and engaging face for science on television.  He’s no Adam Savage or Jamie Hyneman—I have yet to see him blow up anything.  To my eyes, he’s no Bill Nye.  That is one titanic bowtie to try to fill.  But, as celebrities of the hard sciences go, Neil deGrasse Tyson is a shining example.

As host of Nova scienceNOW on PBS, he has proven to be engaging and photogenic.  He makes astrophysics something that at least seems accessible to a large audience.  He is the director of the Hayden Planetarium and a research associate in astrophysics at the American Museum of Natural History.  When it comes to astrophysics, Neil deGrasse Tyson knows his stuff.  However, when it comes to the cultural mindsets of the Twentieth and Twenty-first Centuries, he is mistaken.

Clip of Feb. 27 Interview on The Daily Show

I am basing my criticism on an interview he gave last week with Jon Stewart of The Daily Show, promoting his book, Space Chronicles:  Facing the Ultimate Frontier.  Stewart characterizes the book as lamenting the fact that the United States, as a culture, no longer prioritizes space exploration.  Tyson acknowledges that the Cold War, fear, and the military industrial complex were the driving forces behind the rapid advancements in space exploration from the 1960s until 1972, the year of the last manned mission to the moon.  I will add that moon missions stopped around the same time the Vietnam War ended, drawing to a close the hot part of the Cold War.

Tyson claims that it was the space race that inspired society to “think about ‘Tomorrow’—the Homes of Tomorrow, the Cities of Tomorrow… all of this was focused on enabling people to make Tomorrow come.”  This is where he is wrong.  The space race was a symptom of this mindset, but it is the mindset of modernism he is talking about, not just of the space age.  A focus on technological progress is one of the most rudimentary tenets of modernism, with its roots in the Enlightenment.  We see it in the Industrial Revolution, we see it in the advancement of movements in Modern Art, and we see it in the development of technology for war, transportation, and communication before, during, and after the space race:  from airplanes to telephones to iPods.  Tyson even cites the World’s Fair as an example of an event geared around the space race.  While the World’s Fairs of the 1960s certainly reflected the interest in space exploration in particular, the institution itself has roots in early modernism—in the Nineteenth Century.

Chicago World's Fair, 1893: long before the space race

Despite being incorrect about its origins, Tyson is correct in pointing out that the drive for progress was the great economic engine of the Twentieth Century, and that careers in science and technology were essential for that progress.  The combined factors of fear, war, and modernist pursuit of progress meant that those careers were celebrated as important for the betterment of society.  Little Jimmy wanted to be an astronaut or a rocket scientist because it was a glamorous and important part of society, an attitude that was reflected in films, news broadcasts, and federal funding.

Stewart assumes that the diminished interest in space exploration came from achievements failing to keep pace with expectations: we expected to be on Mars by 1970, and since we weren’t there, we got tired of waiting.  Tyson augments his assumption, saying that the diminished interest came from not advancing a frontier.  “The Space Shuttle boldly went where hundreds had gone before.”  This is not the frontier exploration that gains headlines in a world looking for better, faster, stronger, bolder, and further.

Aside from being wrong about the societal motivation behind the space race and the connected advancements in technology, Neil deGrasse Tyson clings to that modernist mindset.  His solution for society is to increase funding for NASA in order to mount a manned mission to Mars, which he believes will excite the populace to value the activity of scientists and technologists, thus fueling the economies of the Twenty-first Century.

Maybe Tyson just wants to revive the careers of Gary Sinise and Tim Robbins. It does promise to be thrilling and exhilarating.

As I have written before, I am skeptical about the notion that we are in an era outside of modernist influence.  While originality in art or even in invention is not necessarily the hallmark of progress that it used to be, advancement is nonetheless necessary for success in our creative, corporate, and governmental evaluations.  A person only needs to look at one very celebrated company—Apple—to understand that advancement and progress are still very much parts of our ideology, and that is the second instance where Tyson is wrong.

Contemporary society does value the activity of scientists.  It might not value the same kinds of scientists that made big, physical advancements like space exploration or the atom bomb, but it does value the kinds of scientific advancements that power the new economic driver: information.  According to postmodern theorist Jean-François Lyotard, the purpose of science is no longer the “pure” goal of its Enlightenment origins. “Instead of hovering above, legitimation descends to the level of practice and becomes immanent in it.”  For Lyotard, scientists are no longer trying to find an absolute “Truth” about the universe (that might come from the exploration of, say, space), but seeking to advance the commoditization of knowledge—the consumption of information.

In a way, Tyson one-ups Lyotard.  By acknowledging the driving force of fear in the space race, he acknowledges that the societal motivation for scientific advancement was outcome-based (winning the Cold War), rather than ideologically based Truth-seeking.  Even at the height of modernism, pure science was a myth.  Nonetheless, the ideas of Lyotard underlie the entire undertaking of contemporary science.  It isn’t about an authoritative Truth; it’s about consumable truths. For scientists, those consumable truths are technological advancements—however minute, however arbitrary. We do value scientists, as long as they are working toward something we can consume.

The fact that, in this photo, the iPhone resembles the monolith from 2001: A Space Odyssey is pure coincidence.

The space race produced consumables (Tang, Velcro, the Tempur-Pedic bed), but those reached the consumer market indirectly.  Today’s advancements are aimed directly at consumers: tablet computers, smart phones, and cars that park themselves.  These advancements aren’t a byproduct of some high-minded pursuit of pure scientific exploration; they are researched, experimented upon, and produced directly for us.

I sympathize with Neil deGrasse Tyson.  He wants a modernist society where the pursuit of Truth motivates a populace and advances a culture.  But, as he acknowledges, that pure science may never have been the real motivator at all.  Science is now inextricably linked to product value in technology.  The advancements are more accessible, but they are less tangible.

Works Cited:

Tyson, Neil deGrasse. Interview by Jon Stewart. The Daily Show. Comedy Central. Comedy Partners, New York. 27 Feb. 2012. Television.

Fraser, Nancy and Nicholson, Linda.  “Social Criticism Without Philosophy:  An Encounter Between Feminism and Postmodernism,” Universal Abandon:  The Politics of Postmodernism.  Ross, Andrew, ed. Minneapolis:  University of Minnesota Press, 1988, p. 87.

Pure Art Sells Out

6 01 2012

The specialized treatment of art education at the university level separates art from other aspects of life. As I have stated before, a qualification for something to be considered “High” or “Fine” art is that the entire purpose of the object is to be art and art alone.  This is the culmination of the modernist mandate for authority and therefore for purity.  To be an expert in something, one must study it extensively and exclusively.  To become an authority in art, an artist must be entirely focused on art and therefore what is produced is art for art’s sake—a pure art.

Jean-Michel Basquiat's studio: a working temple of art

This isn’t an attitude that is limited to art.  Other disciplines follow the pattern:  music, religion, philosophy, science, etc.  It is in science, and in the Enlightenment approach to science that so influenced modern thought, that we see how important specialization is.  I could use Theoretical Physics as an example of a form of science that is almost entirely detached from any aspect of the everyday existence of an average person living on planet Earth.  String Theory and inquiry into the status of light as a particle or a wave have little bearing on the day-to-day work of a plumber.  However, I think the scientific method itself is a prime example of how specialization and singular focus work in science, which we can then see echoed in larger areas of study like art.

The television show Mythbusters is a fantastic pop-culture example of the use of the scientific method.  The cast will start with a myth or bit of urban lore.  Say, for this episode, they are taking a scene from the movie Lethal Weapon 2 where Roger Murtaugh (Danny Glover) discovers explosives under his toilet, knowing that if he stands up, his house will be blown to bits.  The solution, in the film, is for Murtaugh and Martin Riggs (Mel Gibson) to jump into the adjacent cast-iron bathtub, which will shield them from the force of the explosion.  The question the Mythbusters pose is, “Will a cast-iron bathtub shield a person from the force of a C-4 explosion like it did in the movie?”

The scientific method requires focused inquiry.  Adam Savage and Jamie Hyneman aren’t looking at the plausibility of Murtaugh and Riggs’ car chase which leads to the discovery of South African Krugerrands and the subsequent attacks by “South African thugs,” or into any of the other spectacular stunts depicted in the film.  The scene is picked apart, with one specific aspect tested after another.  They test how easily one man can pull another into a bathtub from a toilet if the man on the toilet is unable to use his legs due to numbness.  For the show-finishing test, they focus on the shock protection of a cast-iron bathtub.  After determining what variables are acceptable in their experiment and which need to be removed (namely, actual people and a real house), they construct a bathroom on a bomb range with pressure-sensors and a ballistics dummy to record how much of the shock wave reaches inside the bathtub.

This photo isn't from the same episode, but it's still badass.

The Mythbusters engage in solid science, and in solid science, each experiment is designed to test one hypothesis.  Whether the results confirm the hypothesis or disconfirm it, the science is still solid.  In fact, one of the defining factors of so-called “hard science” is that genuine possibility of failure.  If a hypothesis is tested in a way where a result could be produced that neither confirms nor disconfirms it, the science is faulty—there are too many variables that must be eliminated from the experiment in order to make the hypothesis falsifiable.

The results of hard science carry absolute authority:  a hypothesis is either confirmed or disconfirmed; there is no way to argue for one or the other once the experiment has been carried out.  It is the singular focus of science that gives it this authority, and therefore other areas of study echo that kind of inquiry.  The study of art focuses on art itself—to be an authority is to be an expert, and to be an expert is to study something solely and exhaustively.  This is how we have modeled education.  High school specializes by class (1st period, you study Science; 2nd period, you study Latin; etc.), trade schools specialize by, well, trade, and universities specialize by major and therefore department.

In art, an education focused entirely on art produces artists who make art that is, at its core, about art.  Though we think ourselves to be past the “art-for-art’s-sake” mantra of Abstract Expressionism or Minimalism, the work we produce is referencing other works, other periods of art history, and is a product of our focused education.  An artist like me might propose that anything can be considered art, which is true.  In a bizarre paradox, the supposed non-art activities that artists bring into the fold as art are justifiable as such because our sole area of expertise is art.  We are artists, so anything we do is art.

What this produces, as Howard Singerman and others lament, is a circular production of artist-educators.  The non-art activities produced as art—the “Alternative Media,” the “New Genre,” the weird, out-there, crazy stuff like performance and video and installation and earthworks and sound art—do not have much of a place in the art market. These artworks are difficult to quantify and commodify, and are therefore difficult to sell as objects.  Since they can’t really be sold on the primary market, there’s little to sell on the secondary market (auction houses frequented by collectors) and therefore the path to the institutions of legitimation, namely, museums, is obstructed.

With a lack of accessibility to the market, the path to legitimation instead leads through the institutions of education.  Enter the artist-educator.  Enter the visiting artist.  Enter the special lecturer.  The majority of students graduating from MFA programs are qualified to make art, certainly (really—how much qualification do you need?  More in another blog), but they are qualified for little else in a world that requires “employment” in order to have enough money to live.  Since many graduates focus on the ephemeral or the experiential rather than on saleable products, their education seems to limit their job possibilities to education.  MFA graduates become art instructors, teaching a new generation in a manner as focused and limited as the one in which they were taught. They can also become visiting artists, touring the lecture circuit of universities and art schools, earning not only stipends and lecturer fees, but also legitimation and a place in the pantheon of art history.  The most obvious example I can think of is Chris Burden, an artist who did not produce much in the way of art objects but is nonetheless immortalized in textbooks thanks to his performances, and perhaps even more because he legitimized those performances and installations through his role as an instructor and visiting artist.

'Shoot,' by Chris Burden (1971) was entirely experiential. Even the documentation is lacking. Yet, it is a seminal work, and is known by any student studying performance art.

As I can tell you from experience, finding a place in the ivory tower of academia is no easy task.  There are few positions available for the thousands upon thousands graduating with MFAs every spring, and in an economy like this, with budgets slashed and art budgets the first on the chopping block, even those positions are dwindling.  Young graduates and emerging artists are forced to cope with existence in a world where their newly-gained and accredited expertise will not get them very far.  Outside of Graphic Design courses, little mention is made in university art curricula of self-marketing and business practices, even in courses with such promising titles as “Professional Practices.”  Outside of the miracle of gallery representation and excessive sales, how is a given artist expected to make it in a workaday world and still have the time, resources, and opportunities to both make and exhibit their work?  While the chances of being an institutionally-legitimized “successful” artist are low, how does one still manage to be a success?

It may be that the definitions of success and legitimation for artists need to shift for our current age of art.  I am certain that the qualification of art as something that is only made to be art has to change.  For someone to be successful at making art, one needs the support of both other artists and a community that finds the art both accessible and important.  High-minded artists and aficionados might argue that what I’m suggesting is that artists sell out and dumb down their work—that they make kitsch in order to survive.  The pugilist in me wants to quote Lars Ulrich of Metallica:  “Yeah, we sell out—every seat in the house.”

Just because something is good business doesn’t make it bad art.  Metallica earned the scorn of purists by suddenly gaining mass-market appeal with their self-titled 1991 album, also known as The Black Album.  It wasn’t “metal” enough if it appealed to people outside the “educated” and the “specialized.”  But Metallica’s music, when looked at over the span of thirty years, is a continually evolving thing—and I argue that the band has always been unafraid to take risks in order to explore a new idea musically.  Sometimes it appealed to a large audience and thus brought more people into the world of heavy music than may have become interested in it otherwise.  Sometimes it failed—I give you St. Anger.  However, the exploration that Metallica engages in, however popular or unpopular, is an example of the kind of thing you’re taught to do in art or in music.  The problem is that it is seen as being less than pure by those more focused specifically on metal.

Remember how upset "purists" were when the members of Metallica cut their hair?

Metallica’s wide success depended upon appealing to listeners outside of the pure focus of metal music.  They eschewed the institutions of metal legitimation (whatever those may be—sweaty sets in dive bars attended by 50 people?) and adopted a new institution, in this case, mass approval (this was a tactic adopted by pop music long ago, moving away from the academic approval implied by classical and even academic jazz).  The success of artists may too depend on appealing to audiences outside of the institutions of legitimation as we know them. This may or may not include “selling out,” and will certainly require an attitude toward producing art that veers from the purity of art as taught in an academic setting.

As a suggestion for a possible route to take in this regard, allow me to relate a conversation I recently had with a friend.  While he was, at one point, an artist, this friend has been involved in business for 8 years.  He was suggesting a way to earn money toward an artistic venture that, initially, seemed too tied to marketing to be acceptable in an art setting. He wanted to use a crowdfunding site (like Kickstarter) to raise enough money to buy a CNC router.  He proposed using the router to create images on plywood.  Buyers would select from stock images that were provided or would supply their own images to be created on the wood.  To me, this sounded like a very basic, kitsch-based business scheme: make images of people’s babies or dogs on plywood and charge them $300.  His business model seemed sound, but it seemed like just that:  business.

Using a computer program, the router bores different sized holes into plywood that has been painted black.

Here you can see both the texture of the holes and the image itself.

“I don’t want to just make crappy kitsch prints for people—where’s the art in that?”  I complained.

“You don’t get the router just for that!” he explained.  “You need to offer people who are investing on Kickstarter something in return—they aren’t getting dividends for this investment.  You make them the 4’ by 4’ half-tone image of their grandmother and you then have this awesome router that you can make anything you want with and you didn’t have to pay for out of your pocket!  Now that you’ve got it, you can make, like, a topographical map and fill all the lakes with fiberglass resin, or crazy computer-designed three-dimensional sculpture or whatever this tool is capable of.  The kitsch stuff is just what you do to pay for the tool.”

In this model, the artist is engaging in creative production, albeit with half of it in the realm of the “low,” the “kitsch.”  He or she isn’t becoming lost to art in the world of the work-week, nor is he or she becoming lost to the wider world in the insulated baffles of academia. Is it “selling out”?  From the viewpoint of pure art, yes.  It may also be an option for success as an artist outside of academia and outside of the art market as we know it.

I don’t have a prescription for how to be successful as an artist in an age after art.  It may be a matter of each individual working out a way to continue creative production while at the same time making some sort of a living.  The art market no longer operates in the “traditional” manner of speculative production, sale through a dealer, and eventual historical recognition in the hands of a museum.  Likewise, the closed system of academia loses its power of legitimation as artists in so-called “alternative” areas find venues and audiences outside of the ivory tower.  The idea of legitimation is all but ignored, so a question remains as to how history will immortalize what is produced in this age after art.  Although, if we accept that we are in an age after art—where art is no longer something to be isolated and produced in and of itself—it may be that history is in the same boat.  In an age after history, the question of legitimation may be moot.

Musings on Methods of Communication

28 10 2011

Looking out my window, there is a man with a small child—probably four or five years old—walking down the sidewalk.  The man is looking into his cell phone, probably at a text.  The child is tugging at the man’s pants, trying to get him to go the other direction—trying to get his attention to look at something fascinating like a squirrel or a dead bug.  But the man is distractedly continuing.  He’s not necessarily ignoring his child—he is tugging back as if to say, “No, we have to go this way,” but he is detached.  He is otherwise engaged in whatever is on the screen of that phone.

Distracted parents are nothing new, and we can travel back in time and see the same scene with other devices.  Ten years ago, the person would be talking on the phone.  Thirty years ago, the man may have been hurrying home to a land-line to retrieve a message on an answering machine.  Forty years ago, the man may have been engrossed in a newspaper story as he walked down the sidewalk.  While the distractedness and preoccupation are not new, overall there does seem to be a shift back to communicating via text as opposed to verbally.

Methods of communication have changed over time.  From Gutenberg to the telegraph to fax machines to smart phones, technology has facilitated grand sweeping changes to the methods we use to transmit information from one person to another.  The curmudgeon in me wants to rail at the tide of progress, lamenting the “less personal” approach taken in the present time, but surely a person in the Renaissance may have said the same thing about moveable type.  “What?  You can just mass-produce copy after copy of this manuscript?  Where’s the time spent pondering the true meaning of the text?  If you’re just blindly churning them out, you aren’t spending the hours with each letter, forcing you to ponder what is really behind it.”

I am finally getting a new cell phone plan today, and I have come to the realization that I will need to break down and allow for more text and data and fewer calls.  Texting is something that I have a hard time with.  Without the nuances of inflection and intonation, I have had many a text message received poorly.  What’s more, I think in longer sentences than the text message is designed for.  It takes me forever to type out a response to someone’s question that may be as simple as, “Where are you going for lunch?”  The straightforwardness of the language required and the expected brevity of the messages lead me to connect the text message with the telegraph.  It’s like we’re moving backwards.  The only difference between now and 1909 is that we don’t need a messenger to deliver the text to us—that messenger is in our pockets all the time.

These phones are more than telegram-delivery boys.  They can instantly send our messages out—not just between cell phones, but to the entire internet.  Maybe you’re even reading this blog on a smart phone.  We are no longer tied to our homes or wi-fi hotspots to post a blog, status update, or tweet to the entire world.  Everyone can see what we have to say!  And yet, we walk along sidewalks, gazing into our phones, ignoring each other as we pass by in real life.  We can communicate with everybody and yet we talk to nobody.

If we are communicating without contact, I question how real the communication is.  Through all our posts and texts and blogs, are we saying anything of consequence?  Is there any action that comes from all this information transmission?  Are those actions and consequences real, or are they hyperreal?  Of course there are real-world consequences resulting from digital communication.  Just ask Anthony Weiner.  But inadvertent results are far from intentional.  With the power of such mass communication, what more can we learn about and from each other, and what can that help us learn about ourselves?

For Contemporary Critique, I sit at a computer and type essays with the intent that they will be read by many, many people.  Sixty years ago, I would have needed a publisher to do this.  Twenty years ago, I still would have needed access to a relatively cheap copy shop and a few friends to help add content for a ‘zine.  With this blog, I need no editor and no outside evaluation or affirmation; I can simply type, post, and know that out there, somewhere, at least one person has read and understood what I am saying.

As simple as they may seem, it takes at least a few people to put together an effective 'zine.

I am fond of warning artists against what I call “masturbatory” art—art that is made solely for the artist himself, disregarding its impact on any outside viewer.  Additionally, one of the chief purposes of object-based art is communication.  So it follows that I warn against masturbatory communication as well.  In text message- and internet post-based communication, we are working in a one-way fashion similar to art objects or television.  The artist makes the object with a specific intent, and the viewer is left to decipher that intent on his own.  I can send you a text message, but I can’t adjust my statement to a quizzical look or fine-tune my intent with a certain inflection.  With this one-way method of communication, it seems imperative that whoever chooses to use it put as much thought into their statements as an artist puts into his product.

Does this mean we need MFA programs for blog posts?  Editors for text messages?  Publisher-approval for tweets?  Those may all be a bit extreme.  But keeping an audience in mind, whatever the method of communication, may lead to clearer choices, and clearer understanding down the road.

Mark Zuckerberg and Troy Davis

23 09 2011

Karl Marx famously stated that “religion is the opiate of the masses.”  By this, he meant that the institution of religion keeps the masses satiated and compliant to the will of those in power—those with the capital, those whose ultimate goal was profit above all else. Clement Greenberg had similar ideas about kitsch (though he would be appalled to be so closely linked to Marx). Kitsch satisfies the uncultured, the uneducated, and can be used by those in power to manipulate the opinions and will of the people to their own ends.  Walter Benjamin saw similar potential in entertainment, specifically in film—both Benjamin and Greenberg saw the way Hitler used propaganda and mass-appeal to his own ends as examples of entertainment, rather than religion, as the opiate of the 20th Century masses.

“Entertainment” as such is a bit more difficult to pin down in the age of simulacra, pastiche, and the hyperreal.  It has become ubiquitous and constant in American culture—the rise of the smart phone has brought the power of the internet into the palm of your hand.  Yet this power is used more often to play Scrabble with friends or fiddle around on Facebook or Twitter. One never has to wait to be entertained by an inane post from a friend, a Star Wars/kitten meme, or an insipid 140-character rant from a B-list celebrity or athlete.  Even while waiting in line at the movie theater to be entertained, we seek entertainment from our phones.

The infinite power of the internet... at its best.

The power of Facebook is both greater and less than it is purported to be, either by media outlets or by its own self-promotion.  There was fanfare and congratulations over the role the social media site played in the Arab Spring, with much media attention centered on its use in Egypt.  “Facebook made democracy possible in an oppressed country,” seems to be the underlying attitude of many.  But Facebook did not liberate Egypt:  Egyptians did.  Surely, there was some communication between protesters that did take place on the site, but it was the protests and actions taken by the people, and their resistance against being put down by force, that ultimately resulted in regime change.

Still, the perception of Facebook as the ambassador of democracy to a troubled region has led to an inflated sense of both pride and confidence among Americans. Since the ideology of democracy is at the core of our identity (i.e. “Democracy is Good”), and Facebook helped bring democracy to the Middle East, then Facebook is an example of democracy at its finest.  This, of course, is not true.

Facebook was invented, and is only possible, in a country that holds the Freedom of Speech in high regard.  Facebook is not a democracy; it is a corporation—a private enterprise.  The content placed on Facebook is the sole property of Facebook itself, which can censor anything it chooses (so far, it has chosen not to censor and has been banned in China since 2009 as a result).  It can also make changes however it pleases, regardless of what its users may think about those changes.  This week saw a major change to the layout of the site, with some new features added, and brought the “wrath” of its customers, with countless angry posts (on Facebook) complaining and demanding a change back.  Mark Zuckerberg is not going to change it back.

This twerp's a billionaire. Do you honestly believe he gives a crap about what you think?

This is not the first time Facebook has made changes, and not the first time its users have been upset.  In the end, by and large, the users don’t leave.  There was an outcry over rules changes in 2009 that ultimately did pressure a reversal of stance by the website.  However, as far back as 2006, the “Students Against Facebook News Feed” group pressured the site to give users some control to “opt out” of the news feed feature.  In 2009, those controls were removed.  In 2010, nobody was complaining about their lack of control of the news feed.  A year ago, the site made a gradual change to a “New Profile” that initially seemed voluntary, until the “New Profile” was the only option.

What is dangerous is that Facebook provides the illusion of democracy outside of itself.  Jean Baudrillard made similar statements about the hyperreality of Disneyland.  Baudrillard is not concerned with the fiction that Disneyland presents (i.e. a cleaned-up version of American Main Street), but its function as a “deterrence machine… It’s meant to be an infantile world, in order to make us believe that the adults are elsewhere, in the ‘real’ world, and to conceal the fact that real childishness is everywhere.”  For Baudrillard, the fiction of Disneyland allows us to think that the real world is just that—real.  When, for him it is hyperreal (especially Los Angeles):  a series of images and simulacra.

Facebook is not a democracy. However, the use of Facebook, even when acknowledging that it is an autocracy, allows users to believe that democracy is real outside of Facebook.  The ubiquity of the site—the fact that so many people use it—makes it seem as if it is the perfect vehicle to enact democracy, even if it isn’t one itself.  However, this ubiquity feeds the notion that enacting democracy can be as simple as posting a link or a status or a profile picture.  “I’m communicating with so many people,” seems to be the thought, “of course this will make a difference.”  Posting on the internet, without any real-world action, is lazy activism.  It is akin to wearing a sandwich board on the sidewalk, shouting through a megaphone.

The same day Facebook users were writing outraged posts over the new layout, convicted murderer Troy Davis was put to death in Georgia.  The execution was controversial, not just because of the fact that it was an execution, but because many of the witnesses who had testified in the trial had changed or recanted their testimony.  Yet, through all the appeals and Supreme Court hearing requests, the verdict remained unchanged.

There was plenty of Facebook traffic regarding the case.  Many, many of my Facebook friends posted messages of hope for a stay or a pardon, dismay at the fact that the execution was carried out, and scoldings of the people posting about Facebook while a man who may have been innocent was put to death.

The similarity of the Facebook and Troy Davis posts struck me.  In a week, or a month, or a year, who will remember what the old Facebook layout even looked like?  Can you remember the layout in 2009?  Two days after the changes, I see no posts complaining about the layout, when it seemed to be all anyone talked about on Wednesday.  Those so passionately posting about Troy Davis are today posting about their writing, their workdays, their plans for the weekend.  There is no mention of injustice.  There are no links to websites organizing protests against capital punishment.  I am not saying there is no Facebook activity regarding Troy Davis—there are numerous pages and posts—I am saying that the traffic in my circle of friends has very little to do with the case two days after the execution.

This image puts the issues into perspective, but it highlights the limits of Facebook activism (I found this image on Facebook).

Facebook provides the illusion, not necessarily of democracy, but of involvement.  You can post, you can have your say, you can feel like you’ve been a part of something.  Then you can go back to your own life, back to your minutiae, back to being entertained.  When speech is not followed up with action, nothing changes.  When nothing changes, the powerful maintain power over the masses—whether it is Mark Zuckerberg, the State of Georgia, or a dictator in the Middle East.

As an Epilogue, I must say that I believe in the power of the Freedom of Speech, and I believe that Facebook (or, say, blogs) can act as a key communication tool to foment change—to act as the spark of activism.  But the key to activism is action—which takes work in real life, not just online.  I am curious to hear how people go about following up their internet communication with action, especially from those who may be rightfully angry about my dismissal of posts regarding Troy Davis.

The Nostalgia of 9/11

9 09 2011

Here we are nearing the middle of September, a time when, once again, we start to see a buildup in cultural production—television programming, radio interviews, news commentary, etc.—centered around the topic of remembering the attacks on the World Trade Center towers and the Pentagon on September 11, 2001.  This year, marking the tenth anniversary of the event, has the familiar commemorative speeches, memorial services and monument dedications that we have come to expect.

The further away we get from the date of those attacks, and the more memorializing that happens concerning them, the less impact the events seem to have.  The iconic images are, by now, quite familiar—the video shots of planes hitting the towers, the collapse of each, almost in slow motion, the people fleeing from the onrushing cloud of dust and debris, the thousands walking across the Brooklyn Bridge, the photo of the firemen raising a flag on a damaged and twisted flagpole.  The repetition of those images, especially over time, begins to obscure our own personal memories, our own personal experiences, of that day.

Jean Baudrillard argues that the attacks, to most of the world, were in fact a non-event.  I was living in Spokane, Washington, nowhere near New York City, Pennsylvania, or the Pentagon.  My experience of that day was through the images, not in the events themselves.  The attacks did not really happen to me.  But in a hyperreal world, “factual” experience isn’t the end of the story.  While the physical attacks had no bearing on my experience, the symbol of the attacks did.  The images were repeated over and over again that day, and in the weeks and months that followed, on television, radio (if you’ll remember, all radio stations switched to whatever news format they were affiliated with for about a week), and the internet.  The images were re-born in conversations between friends, family, and acquaintances.  The violence did not happen to us, but the symbol of violence did.  As Baudrillard states, “Only symbolic violence is generative of singularity.”  Rather than having a pluralistic existence—each person with their own experience and understanding of any given topic—our collective experience is now singular.  Nine-eleven didn’t physically happen to me, so it’s not real, but it is real. It’s more real than real.  It’s hyper-real.

But in the ten years since, the hyperreality of the attacks seems to be fading into something else.  As the vicarious (for most of us) experience fades into memory, the singularity of that symbolic violence is shifting into one of nostalgia.  The events as historic fact are replaced by our contemporary ideas about that history as it reflects our own time.  Nostalgia films of, say, the 1950s aren’t about the ‘50s.  They are about how we view the ‘50s from 2011.

The 1950s scenes in Back to the Future don't show us the 1950s. They show us the 1950s as seen from the 1980s.

We’ve seen this nostalgia as early as the 2008 Presidential campaign, which included many candidates using the shorthand for the attacks (“Nine-eleven”) to invoke the sense of urgency or unity or the collective shock of that day.  The term “nine-eleven” no longer just refers to the day and attacks, but to everything that went with them and to the two resulting wars and nearly ten years of erosion of civil liberties.  What happens with this nostalgia is that details become muted and forgotten, and we end up molding whatever we are waxing nostalgic about into something we want to see—to a story we can understand and wrap our heads around.

The Daily Show With Jon Stewart: “Even Better Than the Real Thing”

This morning I listened to a radio interview of a man who carried a woman bound to a wheelchair down some 68 floors of one of the towers on the day of the attacks.  He was labeled a hero, but in subsequent years, slid into survivor’s (or hero’s) guilt and general cynicism.  He looked around the United States in the years after the attacks and saw the petty strife, the cultural fixation on celebrity trivialities, and the partisan political divide seemingly splitting the country in two.  He longed for the America of the time immediately following the attacks, “Where we treated each other like neighbors,” the kind of attitude, as suggested by the interviewer, that led him to offer to help this woman he did not know in the first place.

Certainly, there was the appearance of national unity after the attacks.  Signs hung from freeway overpasses expressing sympathy for those in New York.  Flags hung outside every house in sight.  People waited for hours to donate blood on September 12, just to try to do something to help.  The symbols of unity were abundant, but division abounded as well.  Many were still angry, skeptical, and suspicious of George W. Bush, who had been granted the presidency by a Supreme Court decision which, to some, bordered on illegal.  Within communities, fear and paranoia led to brutal attacks on Muslim (and presumed-Muslim) citizens.  Fear led to post offices and federal buildings being blockaded from city traffic.  In Boise, a haz-mat team was called due to suspicious white dust, feared to be anthrax, on the steps of the post office.  It turned out to be flour placed there to help direct a local running club on their course.  The flags were still flying, but the supposed sense of unity and “neighborhood” was, in actuality, suspicion.

To look back at September 11th, 2001 and view it as a time of unity in comparison to the contemporary political divide is nostalgia.  The view is not of the historical time period, but of what one wants that time period to have been, which then acts as an example of what the present “should” be.  Perhaps nostalgia is inevitable.  As time passes and memories fade, the repeated symbols of any given time or event become re-purposed, gaining new meaning from the reality (or hyperreality) from which they are viewed.  The goal for many regarding the attacks is to “never forget.”  The repetition of the images keeps us from forgetting, but it also contributes to the memory changing.

Sources:  Baudrillard, Jean.  “The Gift of Death.”  Originally published in Le Monde, Nov. 3, 2001.

Here and Now (radio show).  “A Reluctant 9/11 Hero Looks Back.”  Airdate:  Sept. 9, 2011

Not Knowing

1 07 2011

Last fall, my father, brother, and I all went to a Boise State University football game.  It was an auspicious occasion, as the Broncos were facing the Oregon State Beavers and the game was nationally televised in prime time.  It was an exciting game, the Broncos won, and a good time was had by all.  Seeing a sporting event in person provides a full-immersion sensory experience–the game, the crowd, the weather, the sounds of the bands and the public-address announcer, the smell of the grass (or blue field-turf in this case) and concessions, even the dog that runs out to retrieve the tee after each kickoff—that you don’t get from watching the game at home on TV.  The difference that I found most refreshing, however, confuses many people I try to explain it to.  I like the fact that, when you’re watching the game in person, you don’t know everything that’s going on.

There's a little white speck in the top left part of this photo. That's me! I think.

Depending on the network and the stakes, a nationally televised football game has somewhere in the neighborhood of twenty cameras at work.  When you’re watching a game at home, in purely visual terms, it’s as if you’re watching it from twenty different positions within the stadium.  You don’t just have the “best seat in the house,” you have the twenty best seats in the house.  When you’re at the game, you have one seat.  And it might be a bad seat.  At the game I went to, we were high up in the stands, just behind the left corner of the south end zone.  With only one vantage point and one set of eyes, my perception of what was happening was limited.  Watching at home, you can be watching the ball during the play, but then be taken back for a replay of what you just watched, except this time you’re seeing what the wide receiver was doing away from the ball.  “Oh, Brent, that corner is starting to get under the receiver’s skin.  It’s getting pretty chippy out there,” Kirk Herbstreit might say as you view the slow-motion footage of the two athletes shoving each other while running down the field.

To be sure, at home one has the opportunity to see the game from many more physical viewpoints than the person at the live event.  But that experience of the game is mediated.  Football is a complicated game.  Players are split between offensive and defensive sides for each team.  There are long- and short-yardage specialists for both sides.  Each side has its own coordinating coach and, the higher the stakes, the more individual position coaches are used—the quarterbacks coach, the offensive line coach, the defensive backs coach.  To compare sports to war can be dangerous, but in the complexity of the strategy involved, football is closer to military conflict than, say, tiddlywinks.  Because of this complexity, the broadcast analyst plays a crucial role in the television viewer’s understanding of the game.  Without exception, the major-network analysts for pro and college games are men who either played or coached at that level.  They have years of education and experience with the strategies and tactics of the game, and the good ones are able to communicate what they are seeing and how it is affecting the situation of either team.

This is what makes all those camera angles and slo-mo replays possible.

So, when one is watching a football game at home, that person is getting a more thorough and insightful presentation of the event that is taking place.  That experience, thorough as it is, is mediated.  The camera angles that are shown are chosen by the director, and those individual shots are composed and focused by each cameraman.  The viewer’s knowledge of what factors are affecting the outcome (say, a lack of running game or an injury to a key player) is clarified and contextualized by the analyst.

In fact, that analyst is assisted by field reporters, producers, and the director in what to address, through the choice of replays shown and the information made available.  Yes, the home experience of the football game is broad, but it is packaged and delivered by a team of cameramen, directors, producers, and analysts.  You may feel like you know everything about the game you’re watching, but what you know is limited to what they provide.  Your experience isn’t even their experience (it must be something else entirely to watch a game with a director and a producer telling you through an earphone what the next replay will be, while you’re also supposed to be speaking about the game you’re watching both on the field and through a monitor in front of you); it’s the experience they have made for you.

On the other hand, the experience one has at a football game is his or hers alone.  You may be watching the runner with the ball and miss the excellent swim-move made by the defensive end right before the tackle.  You might be having a conversation with the face-paint-clad fan next to you and miss the time-out performance by the cheer squad.  You probably won’t be aware of the trouble the Bronco offense is having running to the left side due to a thumb injury to the left guard, or that this field goal kicker is 48% from this range.  But you have just as full of an experience of the game.  Your opinions on strategy and understanding of what has taken place are first-hand experience, not mediated by a network team of dozens of people.  You know what you’ve seen, but you don’t know “everything.”

I attempted to explain my attitude to my father as we were watching a replay of the game the next night, which seemed almost surreal.  Here we were, supplementing our experience of the game we’d seen first-hand with a second-run airing of the same game as shown to a third party, as if to make our experience more complete, more real.  While it did seem a little trippy, surreal isn’t the right term.  What we were engaging in was hyper-real.

Jean Baudrillard explored the notion of the hyperreal.  For him, hyperrealism is a defining characteristic of postmodernity.  It is the collapse of the distinction between the representation and what it is representing—between the representation and the “real.”  I am not arguing here that the game I witnessed was more real than the game that was broadcast on ESPN.  I’m saying that both games were real.  Hyperrealism is the acknowledgment that what is represented IS reality.

In another context, Michel Foucault argues that discourse is reality, meaning that the discussion about a topic (sexuality for Foucault, football for us) constitutes what that topic is and what it means.  Discourse can be history books, movies, or football telecasts, and all constitute how we understand history as reality.  An example of this is the discourse on Vietnam provided by television and movies.  Increasingly, especially for those of us who did not live through or have any direct experience of that war, what we see in films like Full Metal Jacket or Platoon constitutes our experience, and therefore our knowledge of the Vietnam conflict.  For us, the films aren’t about Vietnam, they are Vietnam.

The "Vietnam" scenes of Full Metal Jacket were filmed at an abandoned gasworks outside London.

Hyperrealism is pervasive.  A week ago, a friend of mine got a text message from his girlfriend that we both made a joke about.  He immediately went onto Facebook and posted an extension of that joke onto my wall.  The conversation and the joke spanned three realities—the text, the actual interaction, and Facebook.  None of these is more “real” than the others, and though two are representations of conversations on different digital planes, they are all intertextual extensions of the same conversation.

To connect this to the football game, the game I witnessed was no more or less real than the game broadcast on television.  And once I watched the game in the rebroadcast, both experiences became my one singular experience of the game.  The real and the represented are one thing, and my trip to the BSU game is now hyperreal.

For me, there is a lure to the unmediated first-hand experience of watching the game in person, of not “knowing” all of what happened.  My experience of the game was subjective—no one else saw the game exactly the way I did.  To not know is to be a single person in a single place at a single time.  To not know is to be human on a very basic level.  To not know is to be a part of reality instead of hyperreality, if only for a moment.

Bibliographic information:

Storey, John.  An Introduction to Cultural Theory and Popular Culture.  Athens, GA:  The University of Georgia Press, 1998.
