Intention

29 04 2012

Artists, critics, and academics insist that the defining factor for any object or action to be art is intent.  Even in a postmodern mindset where anything—any act, any work of cultural production, or any object (any thing)—can be art, what makes that thing art is the intent that it is art.  This, of course, is rooted in the Modernist ideology of authority.

Modern thought places the utmost importance on authority, because it is through authoritative figures, statements, and processes that we can determine Truth.  And capital-T “Truth” is the utmost authority.  For this purpose, fields of study are singled out and highly educated experts spend their time investigating and advancing their knowledge of these fields, producing work that is True Science or True Music or True Art.  By designating himself as an Artist, a person then declares his intent to make art.  From then on, what he decides is art—what he intends art to be—is just that.  His justification is manifest in his position as an Authority on Art, an authority granted by specialization and expertise.

In the period of High Modernism (namely, the movements of Abstract Expressionism and Minimalism), the intent of being art was enough justification for a thing to be art. During the Postmodern period, however, simply being art was not enough justification for an object.  Beginning in the late 1960s, art gained (or re-gained) the requirement of meaning.  In order to have impact, the work needed to do more than just be art; it needed to mean something.

Barbara Kruger, Your Body is a Battleground, 1989

In some circles, this “meaning something” depended on shock—a tool inherited from early Modernist painters who seemed intent on forcing the advancement of society, which was another topic of Modernist importance.  High Modernists like Picasso and Pollock aimed their shock inward—the shock of non-representational painting pushing art to a more advanced, more specialized place.  However, like Realist painters such as Manet, Postmodern artists like Barbara Kruger and Ed Kienholz aimed their shock outward, putting society itself in the crosshairs.

Activist artists like Judy Chicago, Mel Chin, Guillermo Gomez-Peña, and Sue Coe made artwork with dual intent:  to be art and to disrupt.  The requirement for rupture seems to have become inherent, especially in art produced and justified in an academic setting.  Disruption may not always be readily apparent, and thus artists’ statements emerge as a way to explain what is disruptive about a particular work or a particular artist’s oeuvre.

What is peculiar about the supremacy of rupture as a requirement of art is that the intent of rupture seems to have the capability of being granted after the fact.  Artists who do not intend their work to be disruptive in the present tend to be dismissed, and artists who created rupture in the past, whether or not they set out to do so, are elevated.  A reader commented on my last post (Thomas Kinkade is Dead.  Long Live Thomas Kinkade) on Facebook, arguing against my comparison of Kinkade to Andy Warhol:

Even if he claims that he did not intend to, Warhol’s imagery (as banal as it was) at the time forced an examination of the boundaries of art (rupture). That’s pioneering. Kinkade’s imagery (although his methods of production and commercialism could be argued as similar to Warhol’s) does not hold the same power of rupture, just based on content alone.

Warhol was famously non-committal about his intentions regarding meaning in his work.  He made works with a popular appeal in a businesslike way that seemed to challenge the accepted specialized, reified nature of art. Critics, history books, and hero-worship have assigned the intent of rupture to Warhol, not Warhol himself.  If intent is all-important in the status of an artist, is assigned intent just as powerful as declared intent?

It appears that this is the case.  The reader concluded her comments by writing, “I believe Kinkade’s illuminated cottage scenes are more along the lines of an allopathic art—an easy sell.”  Kinkade was about business and selling, and Warhol was about critiquing the art world and/or society. However, Warhol’s own statement on the matter was that “Being good at business is the most fascinating kind of art.”

The figure of Andy Warhol has been ascribed the role of sly critic of mass consumer culture and big-money art markets even with the facts and trappings of his fame and wealth readily apparent.  A similar statement can be made about the work and person that is Jeff Koons.  My favorite statement regarding Koons comes from Robert Hughes:  “If cheap cookie jars could become treasures in the 1980s, then how much more the work of the very egregious Jeff Koons, a former bond trader, whose ambitions took him right through kitsch and out the other side into a vulgarity so syrupy, gross, and numbing, that collectors felt challenged by it.”

Hughes goes on to say, and I agree, that you will be hard-pressed to find anyone in the art world who claims to actually like Koons’ work.  But because it is ultra-kitsch and still presented as art, we assume the intent is to critique the vulgarity and simplicity of consumer culture or of the art market itself.  Koons is a businessman, and a shrewd one at that.  He makes a lot of money by “challenging” collectors while stating directly that he is not intending to critique or challenge art, beauty, or kitsch.

Of course, he is challenging them.  It is not his stated intent that is accepted as fact, but the intent we as viewers and critics have assigned to him.  In a postmodern view, the authority has shifted to the reader, to the viewer—to the end consumer of a cultural product.  We are no longer interested in a Truth of art, but instead we accept the personal truths of our own subjective views.  Saying you didn’t intend to go over the speed limit does not mean you didn’t do it, and Jeff Koons, Andy Warhol, or even Thomas Kinkade saying they don’t intend to create disruptive art work doesn’t mean they aren’t doing it.

If rupture is the new defining characteristic of art, then intent no longer can be.  A child doesn’t intend to disrupt a funeral, but it will because it wants attention.  Attention is the intent, but rupture occurs nonetheless.  Kinkade just wanted attention and fame, but that shouldn’t stop us from viewing the work as a disruptive critique of the market.  It hasn’t stopped us from doing the same with Warhol.

Jeff Koons next to his own sculpture, Pink Panther (1988)

The reader’s comment used the word “allopathic.”  Allopathic, according to Merriam-Webster online, is “relating to or being a system of medicine that aims to combat disease by using remedies (as drugs or surgery) which produce effects that are different from or incompatible with those of the disease being treated.”  In this case, the system of art critique is allopathic.  Typically, critique is aimed at works of art that intend to be art in a certain way.  Here, we are critiquing work in a way different from or incompatible with its supposed intentions when it was produced.  In a world of relative truths, that doesn’t make the critique any less valid.





Neil deGrasse Tyson is Wrong

4 03 2012

I like Neil deGrasse Tyson.  I think he is a warm and engaging face for science on television.  He’s no Adam Savage or Jamie Hyneman—I have yet to see him blow up anything.  To my eyes, he’s no Bill Nye.  That is one titanic bowtie to try to fill.  But, as celebrities of the hard sciences go, Neil deGrasse Tyson is a shining example.

As host of NOVA scienceNOW on PBS, he has proven to be engaging and photogenic.  He makes astrophysics something that at least seems accessible to a large audience.  He is the director of the Hayden Planetarium and a research associate in astrophysics at the American Museum of Natural History.  When it comes to astrophysics, Neil deGrasse Tyson knows his stuff.  However, when it comes to the cultural mindsets of the Twentieth and Twenty-first Centuries, he is mistaken.

Clip of Feb. 27 Interview on The Daily Show

I am basing my criticism on an interview he gave last week with Jon Stewart of The Daily Show, promoting his book, Space Chronicles: Facing the Ultimate Frontier.  Stewart characterizes the book as lamenting the fact that the United States, as a culture, no longer prioritizes space exploration.  Tyson acknowledges that the Cold War, fear, and the military-industrial complex were the driving forces behind the rapid advancements in space exploration from the 1960s until 1972, the year of the last manned mission to the moon.  I will add that moon missions stopped around the same time the Vietnam War ended, drawing to a close the hot part of the Cold War.

Tyson claims that it was the space race that inspired society to “think about ‘Tomorrow’—the Homes of Tomorrow, the Cities of Tomorrow… all of this was focused on enabling people to make Tomorrow come.”  This is where he is wrong.  The space race was a symptom of this mindset, but it is the mindset of modernism he is describing, not just that of the space age.  A focus on technological progress is one of the most rudimentary tenets of modernism, with its roots in the Enlightenment.  We see it in the Industrial Revolution, we see it in the advancement of movements in Modern Art, and we see it in the development of technology for war, transportation, and communication before, during, and after the space race:  from airplanes to telephones to iPods.  Tyson even cites the World’s Fair as an example of an event geared around the space race.  While the World’s Fairs of the 1960s certainly reflected the interest in space exploration in particular, the institution itself has roots in early modernism—in the Nineteenth Century.

Chicago World's Fair, 1893--long before the space race

Despite being incorrect about the origins of that mindset, Tyson is correct in pointing out that the drive for progress was the great economic engine of the Twentieth Century, and that careers in science and technology were essential for that progress.  The combined factors of fear, war, and the modernist pursuit of progress meant that those careers were celebrated as important for the betterment of society.  Little Jimmy wanted to be an astronaut or a rocket scientist because it was a glamorous and important part of society, an attitude that was reflected in films, news broadcasts, and federal funding.

Stewart assumes that the diminished interest in space exploration had to do with expectations of achievement not matching the pace of execution—that we expected to be on Mars by 1970 and, since we weren’t there, we got tired of waiting.  Tyson augments his assumption, saying that the diminished interest came from not advancing a frontier.  “The Space Shuttle boldly went where hundreds had gone before.”  This is not the frontier exploration that gains headlines in a world looking for better, faster, stronger, bolder, and further.

Aside from being wrong about the societal motivation behind the space race and the connected advancements in technology, Neil deGrasse Tyson clings to that modernist mindset.  His solution for society is to increase funding for NASA in order to mount a manned mission to Mars, which he believes will excite the populace to value the activity of scientists and technologists, thus fueling the economies of the Twenty-first Century.

Maybe Tyson just wants to revive the careers of Gary Sinise and Tim Robbins. It does promise to be thrilling and exhilarating.

As I have written before, I am skeptical about the notion that we are in an era outside of modernist influence.  While originality in art or even in invention is not necessarily the hallmark of progress that it used to be, advancement is nonetheless necessary for success in our creative, corporate, and governmental evaluations.  A person only needs to look at one very celebrated company—Apple—to understand that advancement and progress are still very much parts of our ideology, and that is the second instance where Tyson is wrong.

Contemporary society does value the activity of scientists.  It might not value the same kinds of scientists that made big, physical advancements like space exploration or the atom bomb, but it does value the kinds of scientific advancements that power the new economic driver: information.  According to postmodern theorist Jean-François Lyotard, the purpose of science is no longer the “pure” goal of its Enlightenment origins. “Instead of hovering above, legitimation descends to the level of practice and becomes immanent in it.”  For Lyotard, scientists are no longer trying to find an absolute “Truth” about the universe (that might come from the exploration of, say, space), but seeking to advance the commoditization of knowledge—the consumption of information.

In a way, Tyson one-ups Lyotard.  By acknowledging the driving force of fear in the space race, he acknowledges that the societal motivation for scientific advancement was outcome-based (winning the Cold War), rather than ideologically-based Truth-seeking.  Even at the height of modernism, pure science was a myth.  Nonetheless, the ideas of Lyotard underlie the entire undertaking of contemporary science.  It isn’t about an authoritative Truth, it’s about consumable truths. For scientists, those consumable truths are technological advancements—however minute, however arbitrary. We do value scientists, as long as they are working toward something we can consume.

The fact that, in this photo, the iPhone resembles the monolith from 2001: A Space Odyssey is pure coincidence.

The space race produced consumables—Tang, Velcro, the Tempur-Pedic bed—but those were indirect in reaching the consumer market.  Today’s advancements are aimed directly at consumers, with tablet computers, smart phones, and cars that park themselves.  These advancements aren’t a byproduct of some high-minded pursuit of pure scientific exploration, but directly researched, experimented upon, and produced for us.

I sympathize with Neil deGrasse Tyson.  He wants a modernist society where the pursuit of Truth motivates a populace and advances a culture.  But, as he acknowledges, that pure science may never have been the real motivator at all.  Science is now inextricably linked to product value in technology.  The advancements are more accessible, but they are less tangible.

Works Cited:

Tyson, Neil deGrasse. Interview by Jon Stewart. The Daily Show. Comedy Central. Comedy Partners, New York. Feb. 27, 2012. Television.

Fraser, Nancy, and Linda Nicholson.  “Social Criticism Without Philosophy:  An Encounter Between Feminism and Postmodernism.”  Universal Abandon:  The Politics of Postmodernism.  Ed. Andrew Ross.  Minneapolis:  University of Minnesota Press, 1988. p. 87.





Heroes

25 11 2011

Cultural figures regarded as heroes often follow a similar path to other, mythical heroic figures.  From Superman to Hercules to Jackson Pollock to Kurt Cobain, there are components that we tend to latch onto in order to label the person as “great.”  Aside from skill in a particular field, the chief requirement is that the hero must be, in some way, separate from society.  In mythology, the hero must make a trip to the underworld.  “Real world” heroes, it seems to follow, must also take a trip to the underworld, but they don’t end up returning.  “Real world” cultural heroes must be dead.

Even Superman made a trip to the afterlife.

In the classic Western, the man without a name shows up in a seemingly sleepy town that is overrun by a criminal cattle-rustling gang (Tombstone), a corrupt sheriff (Unforgiven), or two families vying for its control (A Fistful of Dollars).  The hero is a symbol of something from outside of society, as represented in the town.  Superman is outside of the society of Earth, as he is from Krypton.  Spider-Man is a bit trickier to fit into this mold—Spider-Man is a teenage boy, not necessarily something outside of the society of New York.  However, Stan Lee purposely created Spider-Man (and many of his heroes) to be a teenager—teenagers, almost without fail, feel alienated from the society of which they are a part.  Since they feel themselves to be outside of society, they see society as an outsider would—even if they really aren’t outside it.

With real-world cultural heroes, it is a similar stretch to see how a given person may exist outside of society.  However, it is often what is glamorized about the person.  Take Vincent Van Gogh for example.  If a person on the street knows nothing else about Van Gogh, they will know that he was in some way crazy and they will certainly know the story about his cutting off his own ear—which is a crazy thing to do.  A person afflicted with mental illness is outside of the normal boundaries of societal expectations.  This also shows up in the chemical dependency of many cultural heroes.  Ernest Hemingway was an alcoholic.  So was Jackson Pollock.  Sigmund Freud was hooked on cocaine.  For Elvis Presley it was pills, for Kurt Cobain it was heroin, for Hunter S. Thompson it was every drug under the sun.

Proof of the cultural influence of the counter-cultural.

For each of these, and for many more, we see the figure as being outside of the normal confines of expected social behavior.  They are, in some way, “other” than us.  Hunter S. Thompson might be close to the perfect example because, not only did he exist outside of society, he did it in a purposeful manner.  He plunged headfirst into Gonzo journalism and brought the rest of us along for the ride—to see the seedy underbelly of Las Vegas not as a participant, but as a mentally altered, “objective” observer.  His writing is from the point of view of alienation, and through that, we can put ourselves in the position of the hero, if only for a short while.

The real world cultural heroes I have listed here have something in common other than substance abuse.  They are all dead.  Classical Greek heroes make a trip to the underworld.  So did the Roman copy of the Greek hero, Aeneas.  So did the American version of Hercules:  Superman.  So did the basis for the Christian faith:  Jesus.

Non-mythical and non-religious figures have a difficult time returning from the dead, but figures who leave some sort of artifacts have a way to continue “existing” after they have died, even if they are not technically alive.  Van Gogh’s paintings draw crowds and high prices well into the 21st Century. The songs of Presley and Cobain continue to get airplay or to be downloaded onto iPods; even the work of Sigmund Freud, largely abandoned in professional psychology, finds its way into literary, artistic, and academic production.

The longevity of the work of these individuals is the indication of their heroic impact. However, the impact of the works themselves is largely dependent on the fact that they are dead.  Once an artist is no longer capable of creating new work, their oeuvre is complete.  They won’t be around to create new work—so the supply is fixed (hence, with increased demand, prices can go up—see sales figures for Van Gogh’s Sunflowers or Warhol’s collection of kitsch cookie jars).  Also, the work is static—unchanging. We can think of Jackson Pollock’s work as the drip action-paintings of the 1950s and not have to worry that he may have been influenced by Minimalism or Pop or some postmodern abhorrence later on in life.  He wasn’t around to be affected by those.  His work can remain pure in his death.

In poets, artists, and musicians especially (and certainly in other professions that heroize historical figures), the pattern of substance abuse and death influences the behavioral patterns of students and young professionals in the field.  In some ways, it seems that art students want to find some sort of chemical dependence in order to be like the artists they are taught to revere.  On the flip side of that, one might argue that the “creative mind” is already inclined toward such behavior, since to be truly creative requires an ability to think outside of the accepted confines of societal thought—to exist outside of society.

Personally, I am wary of any broad generalizations made about “creative minds,” as if they are sentenced to be artists and addicts and have no way to behave as, say, an engineer or someone with a “scientific mind.”  While some truly creative people are truly troubled mentally or chemically, many, many more are wannabe hipsters who think that if they drink enough or take enough drugs they’ll be able to be like their heroes—addicted, then dead.

To that end, I am reminded of Sid Vicious.  Sid was no great bass player and really didn’t have an ounce of musical or poetic talent in him.  He was recruited to be in the Sex Pistols because he had the punk look—he seemed to embody the attitude of a group desperately rebelling against society. Maybe that’s all that punk truly was (or is)—an all-encompassing, willful effort to exist outside of society, not necessarily to change it in any way or to contribute some “great” work of art to make general progress.  If that was the goal, Sid Vicious can certainly be seen as punk’s patron saint.

Sid Vicious: A whole lot of style, very little substance

This attitude of nihilism, however, doesn’t line up with the notion of the heroic cultural figure.  Heroes, in existing outside of society, in some way progress or protect society as a whole.  The good guy in the Western chases the corrupt officials out and the city can be civilized again.  Superman fights for “Truth, Justice, and The American Way.”  Jackson Pollock influences the direction of abstraction in art, and the reaction against abstraction, to this very day.

Kurt Cobain existed at the intersection of the outsider and the cultural paragon.  He wanted so much to be outside of the popular culture he was so much an influence on that, in the end, it killed him.  Rather, he killed himself.  True cultural heroes, whether they want it or not, are as much a part of the greater culture as anything they project themselves to be apart from.  Perhaps it is that paradox that drives them further away.  Perhaps it is the paradox itself we end up elevating as heroic.





Musings on Methods of Communication

28 10 2011

Looking out my window, there is a man with a small child—probably four or five years old—walking down the sidewalk.  The man is looking into his cell phone, probably at a text.  The child is tugging at the man’s pants, trying to get him to go the other direction—trying to get his attention to look at something fascinating like a squirrel or a dead bug.  But the man is distractedly continuing.  He’s not necessarily ignoring his child—he is tugging back as if to say, “No, we have to go this way,” but he is detached.  He is otherwise engaged in whatever is on the screen of that phone.

Distracted parents are nothing new, and we can travel back in time and see the same scene with other devices.  Ten years ago, the person would be talking on the phone.  Thirty years ago, the man might have been hurrying home to a land-line to retrieve a message on an answering machine.  Forty years ago, he might have been engrossed in a newspaper story as he walked down the sidewalk.  While the distractedness and preoccupation are not new, overall there does seem to be a shift back to communicating via text as opposed to verbally.

Methods of communication have changed over time.  From Gutenberg to the telegraph to fax machines to smart phones, technology has facilitated grand sweeping changes to the methods we use to transmit information from one person to another.  The curmudgeon in me wants to rail at the tide of progress, lamenting the “less personal” approach taken in the present time, but surely a person in the Renaissance may have said the same thing about moveable type.  “What?  You can just mass-produce copy after copy of this manuscript?  Where’s the time spent pondering the true meaning of the text?  If you’re just blindly churning them out, you aren’t spending the hours with each letter, forcing you to ponder what is really behind it.”

I am finally getting a new cell phone plan today, and I have come to the realization that I will need to break down and allow for more text and data and fewer calls.  Texting is something that I have a hard time with.  Without the nuances of inflection and intonation, I have had many a text message received poorly.  What’s more, I think in longer sentences than the text message is designed for.  It takes me forever to type out a response to someone’s question that may be as simple as, “Where are you going for lunch?”  The straightforwardness of the language required and the expected brevity of the messages lead me to connect the text message with the telegraph.  It’s like we’re moving backwards.  The only difference between now and 1909 is that we don’t need a messenger to deliver the text to us—that messenger is in our pockets all the time.

These are more than telegram-delivery boys.  They can instantly send our messages out—not just between cell phones, but to the entire internet.  Maybe you’re even reading this blog on a smart phone.  We are no longer tied to our homes or wi-fi hotspots to post a blog, status update, or tweet to the entire world.  Everyone can see what we have to say!  And yet, we walk along sidewalks, gazing into our phones, ignoring each other as we pass by in real life.  We can communicate with everybody and yet we talk to nobody.

If we are communicating without contact, I question how real the communication is.  Through all our posts and texts and blogs, are we saying anything of consequence?  Is there any action that comes from all this information transmission?  Are those actions and consequences real, or are they hyperreal?  Of course there are real-world consequences resulting from digital communication.  Just ask Anthony Weiner.  But inadvertent results are far from intentional.  With the power of such mass communication, what more can we learn about and from each other, and what can that help us learn about ourselves?

For Contemporary Critique, I sit at a computer and type essays with the intent that they will be read by many, many people.  Sixty years ago, I would have needed a publisher to do this.  Twenty years ago, I still would have needed access to a relatively cheap copy shop and a few friends to help add content for a ’zine.  With this blog, I need no editor and no outside evaluation or affirmation; I can simply type, post, and know that out there, somewhere, at least one person has read and understood what I am saying.

As simple as they may seem, it takes at least a few people to put together an effective 'zine.

I am fond of warning artists against what I call “masturbatory” art—art that is solely made for the artist himself, disregarding its impact on any outside viewer.  Additionally, one of the chief purposes of object-based art is communication.  So it follows that I warn against masturbatory communication as well.  In text message- and internet post-based communication, we are working in a one-way fashion similar to art objects or television.  The artist makes the object with a specific intent, and the viewer is left to decipher that intent on his own.  I can send you a text message, but I can’t adjust my statement to a quizzical look or fine-tune my intent with a certain inflection.  With this one-way method of communication, it seems imperative that whoever may choose to use it put as much thought into their statements as an artist puts into his product.

Does this mean we need MFA programs for blog posts?  Editors for text messages?  Publisher-approval for tweets?  Those may all be a bit extreme.  But having an audience in mind, whatever the method of communication, may lead to clearer choices and clearer understanding down the road.





The Nostalgia of 9/11

9 09 2011

Here we are nearing the middle of September, a time when, once again, we start to see a buildup in cultural production—television programming, radio interviews, news commentary, etc.—centered around the topic of remembering the attacks on the World Trade Center towers and the Pentagon on September 11, 2001.  This year, marking the tenth anniversary of the event, has the familiar commemorative speeches, memorial services and monument dedications that we have come to expect.

The further away we get from the date of those attacks, and the more memorializing that happens concerning them, the less impact the events seem to have.  The iconic images are, by now, quite familiar—the video shots of planes hitting the towers, the collapse of each, almost in slow motion, the people fleeing from the onrushing cloud of dust and debris, the thousands walking across the Brooklyn Bridge, the photo of the firemen raising a flag on a damaged and twisted flagpole.  The repetition of those images, especially over time, begins to obscure our own personal memories, our own personal experiences, of that day.

Jean Baudrillard argues that the attacks, to most of the world, were in fact a non-event.  I was living in Spokane, Washington, nowhere near New York City, Pennsylvania, or the Pentagon.  My experience of that day was through the images, not in the events themselves.  The attacks did not really happen to me.  But in a hyperreal world, “factual” experience isn’t the end of the story.  While the physical attacks had no bearing on my experience, the symbol of the attacks did.  The images were repeated over and over again that day, and in the weeks and months that followed, on television, on the radio (if you’ll remember, all radio stations switched to whatever news format they were affiliated with for about a week), and on the internet.  The images were re-born in conversations between friends, family, and acquaintances.  The violence did not happen to us, but the symbol of violence did.  As Baudrillard states, “Only symbolic violence is generative of singularity.”  Rather than having a pluralistic existence—each person with their own experience and understanding of any given topic—our collective experience is now singular.  Nine-eleven didn’t physically happen to me, so it’s not real, but it is real. It’s more real than real.  It’s hyper-real.

But in the ten years since, the hyperreality of the attacks seems to be fading into something else.  As the vicarious (for most of us) experience fades into memory, the singularity of that symbolic violence is shifting into one of nostalgia.  The events as historic fact are replaced by our contemporary ideas about that history as it reflects our own time.  Nostalgia films of, say, the 1950s aren’t about the ‘50s.  They are about how we view the ‘50s from 2011.

The 1950s scenes in Back to the Future don't show us the 1950s. They show us the 1950s as seen from the 1980s.

We’ve seen this nostalgia as early as the 2008 Presidential campaign, which included many candidates using the shorthand for the attacks (“Nine-eleven”) to invoke the sense of urgency or unity or the collective shock of that day.  The term “nine-eleven” no longer just refers to the day and attacks, but to everything that went with them and to the two resulting wars and nearly ten years of erosion of civil liberties.  What happens with this nostalgia is that details become muted and forgotten, and we end up molding whatever we are waxing nostalgic about into something we want to see—to a story we can understand and wrap our heads around.

Clip from The Daily Show:  “Even Better Than the Real Thing” (www.thedailyshow.com)

This morning I listened to a radio interview of a man who carried a woman bound to a wheelchair down some 68 floors of one of the towers on the day of the attacks.  He was labeled a hero, but in subsequent years, slid into survivor’s (or hero’s) guilt and general cynicism.  He looked around the United States in the years after the attacks and saw the petty strife, the cultural fixation on celebrity trivialities, and the partisan political divide seemingly splitting the country in two.  He longed for the America of the time immediately following the attacks, “Where we treated each other like neighbors,” the kind of attitude, as suggested by the interviewer, that led him to offer to help this woman he did not know in the first place.

Certainly, there was the appearance of national unity after the attacks.  Signs hung from freeway overpasses expressing sympathy for those in New York.  Flags hung outside every house in sight.  People waited for hours to donate blood on September 12, just to try to do something to help.  The symbols of unity were abundant, but division abounded as well.  Many were still angry, skeptical, and suspicious of George W. Bush, who had been granted the presidency by a Supreme Court decision which, to some, bordered on illegal.  Within communities, fear and paranoia led to brutal attacks on Muslim (and presumed-Muslim) citizens.  Fear led to post offices and federal buildings being blockaded from city traffic.  In Boise, a haz-mat team was called due to suspicious white dust, feared to be anthrax, on the steps of the post office.  It turned out to be flour placed there to help direct a local running club on their course. The flags were still flying, but the supposed sense of unity and “neighborhood” was, in actuality, suspicion.

To look back at September 11th, 2001 and view it as a time of unity in comparison to the contemporary political divide is nostalgia.  The view is not of the historical time period, but of what one wants that time period to have been, which then acts as an example of what the present “should” be.  Perhaps nostalgia is inevitable.  As time passes and memories fade, the repeated symbols of any given time or event become re-purposed, gaining new meaning from the reality (or hyperreality) from which they are being viewed.  The goal for many regarding the attacks is to “never forget.”  The repetition of the images keeps us from forgetting, but it also contributes to the memory changing.

Sources:  Baudrillard, Jean.  “The Gift of Death.”  Originally published in Le Monde, Nov. 3, 2001.

Here and Now (radio show).  “A Reluctant 9/11 Hero Looks Back.”  Airdate:  Sept. 9, 2011





On Connoisseurship

2 09 2011

Connoisseur.  The word itself reeks of snobbery. It brings to mind men in sport coats with leather elbow patches wearing ascots while sitting in overstuffed leather chairs smoking pipes and holding snifters of 100-year-old scotch.  Connoisseurs are experts, people who enjoy, appreciate, or critique something based on knowledge of details and subtleties.  Connoisseurs know why 100-year-old scotch is superior to others, what separates a good work of art from a bad one, and the difference between a masterwork by Tennyson and the vulgar work of a slam poet.

The Ladies Man knows a lot about wine... you might call him a "Wine-Know."

The difference between a connoisseur and a layperson is, supposedly, one of education and taste.  In theory, one must be taught to appreciate the subtleties of fine scotch—one must know the details of the process of production, how to detect the smoky bouquet of flavors provided by the aging process and the burnt layer inside the oak barrels, the consistency of the fluid against the roof of the mouth, blah, blah, blah.  What is required to become a scotch connoisseur is the ability to speak eloquently to justify his opinion, and, above all, access to the high-end scotch he is justifying his opinion about.  Why is it expensive?  Because it’s good.  Why is it good?  Because it’s expensive.  It’s exclusive.  Not everyone has access to it, therefore it is rare, therefore it is something to be coveted, praised, and held in high regard.  Connoisseurs can afford it, so they only drink “good” beer and “good” whiskey.

The rest of us, in the words of poet Kristen Smith, know in our heart of hearts that “no beer or whiskey is ever bad!”  Whiskey, beer, steak, art, poetry—the common attitude of laypeople is that they like what they like.  To each his own in matters of opinion.  This is, at heart, a pluralist attitude.  What is good to one person may not be good to another, but neither opinion carries any more cultural weight than the other.  I like the Beatles.  A former student professes to hate the Beatles, but likes jazz.  I am not going to convince him that the Beatles hold a higher cultural place than jazz, just as he isn’t going to convince me of the reverse.  So we just leave each other to our own opinions and move on with our day.  What each of us prefers is dependent on our own personal tastes.  A connoisseur might see this statement and remark, “There’s no accounting for taste.”

While populists might not want to acknowledge it, the statement is true.  There is no accounting for personal taste—it isn’t quantified, justified, or legitimized.  Those are all key components provided by connoisseurs and institutions to answer questions of taste with definitive categorizations.  I could argue until I’m blue in the face that Rolling Rock is just as good as Samuel Adams Boston Lager, but the continued awards won by the latter prove that it holds a higher place in American beer culture.  It is the institutions of legitimation of art that arrange the strata of artistic output—the museums, galleries, and auction houses identify, define, and quantify what art is good and how much it is worth.  In this case, it is the role of the critic, acting as publicized connoisseur, to educate the wider public on how these works fit into the overall picture of quality that has been painted by these institutions.  Much like the Samuel Adams TV ads in which the CEO and brewers tell you how the beer is made and that you should appreciate it, the role of the critic in art is that of marketing.

Don't drink the beer to see if it's good! Shove your nose in hops! That's how you know it's good!

Clement Greenberg exemplifies this role in regard to Abstract Expressionism.  As America’s “art critic laureate,” he was not only able to see for himself the qualities that made the work of Pollock and de Kooning “good” art; he was able to write the justification for why convincingly enough that, in the end, the greater American public agreed with him.  They acknowledged the primacy of abstraction in painting, and the position of the galleries, auction houses, and museums became the accepted truth in regard to quality in art.

However, Greenberg’s formalist criticism and attachment to a universal idea of beauty in art, regardless of historical period, led him to be the model for the caricature of the out-of-touch, snobby art critic.  He wanted no knowledge of the person or process of making in a work (or so he claimed), and would not look at a work until he was viewing it all at once—as if expecting it to overwhelm him with its greatness, if it indeed possessed it.  He would stand with his back to a work and wheel around to view it, or cover his eyes until he was ready to take it in, or simply have the lights off in the room so he couldn’t see it until they were turned on and, like a flash, the painting overtook him.

To see this in action, view a scene from the film Who the #$&% is Jackson Pollock?  The documentary follows the path of a painting discovered in a second-hand store by a truck driver that may or may not be by Jackson Pollock.  To help to solve the dispute, former director of the Metropolitan Museum of Art, Thomas Hoving, is called in.  The painting is installed in a room, and Hoving walks in, covering his eyes.  He sits in a chair directly in front of the painting and looks at the floor for a few seconds before abruptly raising his head, eyes wide open, in order to have the presence or absence of Pollock-ness hit him square in the face.

This is Thomas Hoving.

From his actions to his dress to his manner of speech, Hoving personifies the stereotype of the connoisseur.  The film ultimately brands the art establishment as snobs and hypocrites—using Hoving’s and others’ refusal to acknowledge the painting based on lack of provenance, pitted against a CSI-like forensics investigation that seems to place the painting in Pollock’s Long Island workshop itself.

But, to dismiss connoisseurship in favor of pluralism is problematic.  Whether it is based on marketing, public relations, or personal involvement, people have opinions and a collective group will ultimately pass judgment on a given cultural product one way or another.  Groups that are more invested will be more passionate in their arguments, groups with more education and skill in persuasion will be more convincing, and groups with access to funds or institutions of legitimation will ultimately make their opinions into acknowledged classifications.  Legitimation comes with the acquiescence of the greater public.

Inevitably, in discussions on connoisseurship and legitimation, an artist will eventually argue that he or she makes work for him- or herself, not for any general public or for anyone else at all.  This is a lie.  A work of visual art is made to be seen—to be seen by someone other than the artist.  If it were not, the artist could just think of the image, never execute it, and be happy with it.  A work of poetry or prose is written to be read or performed to be heard.  All art, from writing to painting to film to music, is, at its heart, communication, and communication must take place between at least two people.  This is true of traditional artworks focused on communicating beauty, and equally true of artworks based on sharing experience.  Once the work in question is in the public sphere, the general impulse is to evaluate it.  Enter the experts; enter the judgments; enter the machinery of legitimation.

The second a work is on display, the process of judgment begins.

Still, a painter or poet may argue that they don’t ever show their work to anyone, that they write it and leave it in a notebook, or they paint it and put it in a closet.  Surely, this kind of masturbatory production of art occurs.  However, these artists then make the argument that, because they don’t exhibit to any public, their art shouldn’t be judged as “bad.”  I suppose that is valid.  I can’t say an artwork is bad if I haven’t seen it.  However, it is inconsequential.  It has no place in the greater cultural discourse that is art.  Masturbating doesn’t mean you’re good or bad in the realm of sex; it means you aren’t a part of sex.  Making work only “for yourself” doesn’t make it bad art, because it isn’t even involved in the rest of art.

Connoisseurship is ubiquitous, and it happens even in areas of cultural production deemed “low” by experts of high standing.  Slam Poetry is a niche art form, widely dismissed by literary poets as too easy, too steeped in cliché, and too obvious to be considered high art.  Even so, there is connoisseurship within slam itself—audience members who go to as many shows as possible and have opinions on one poet over another or even rank poems by a single poet.  A certain type of “hostage poem” (one that uses topics that stir universal emotions, such as rape or cancer) is generally panned by poets, but often scores well among audiences.  The structure of slam itself is geared toward qualitative evaluation:  there are scores, there is a winner.  Even for an artist outside of the kind of art accepted by so-called experts, to dismiss evaluation doesn’t work.  Within every kind of production—artistic, cultural, or otherwise—there are experts, there is evaluation.

From art to poetry to metal, any cultural product has its share of connoisseurs.

A connoisseur can be Thomas Hoving, all houndstooth jacket and condescending speech.  A connoisseur can also be an expert in street art, or carpentry, or Norwegian cooking.  We see critical writing and opinion on everything from video games to symphonies.  Our cultural output seems to be built to be evaluated, and we look to experts to help us classify what is and isn’t worth our time.





Let’s Talk About Lady Gaga

26 08 2011

Originality just doesn’t seem to be all that important anymore.  Oh, sure, there seems to be a cultural drive toward innovation, but just how much innovation can we, as a society, take?  Technological innovation is not my primary target here, and surely there are examples of technological originality that drive cultural shifts in behavior, such as smartphones or iPods.  Still, some areas of technological advancement are somewhat hindered by a societal push against originality.  No matter how revolutionary the electric cars developed by one automotive firm or another may be, they all maintain the same general “look” of the kinds of automobiles we have been used to for over eighty years.  When cars move drastically in style from the typical design with a longer front end to house an engine, they seem silly (take the BMW Isetta for example—there’s a reason it was used as the nerd Steve Urkel’s car in the sitcom Family Matters).  There is no real purpose for having this space in electric or hybrid or even gas-powered cars, but cars that change drastically don’t sell, because the public chooses the familiar over the innovative.

Really, though. This is a ridiculous car.

Culturally, at least “mass-culturally,” we do not seek out the truly innovative, strange, or original.  As I write this, the film Fright Night is opening.  I actually had no idea it was a remake of a 1980s B-movie, though I certainly saw no reason to put it into a category of “ground-breaking films.”  From the previews, it seems like a vampire-filled version of the Scream films, which, while reflexive, were themselves rehashing a horror-movie formula that has been around since the dawn of the genre.  My point is that the non-original quality of contemporary entertainment is not limited to remakes of previous cultural production, but that the formulas are used over and over again, and the cultural quotation that occurs between the individual instances of using those formulas is so universal that we often don’t realize anything is being quoted.

As an example, let’s look at an ad from the current Foot Locker campaign:

It seems harmless enough, if a bit stupid.  However, the average television viewer may or may not be aware of the internet video series this ad is similar to:  Drunk History.

Four years after the first Drunk History video, the Foot Locker ad is using many of the same triggers for humor:  a loose grasp of historical facts and contemporary language and behaviors used by historical figures in re-enactments.  It is also using a similar laid-back delivery in narration.  It’s not drunken delivery, but it is somewhat slow and a bit monotone.

As I have outlined before in Ad-stiche, pastiche can be seen as quotation that makes no indication that it is aware of the fact that it is a quotation.  It isn’t satirical or mocking of the original source, nor is it an homage.  It is simply a hollow parody.  Ads as pastiche may seem too easy, too obvious.  Advertisers copying something popular is practically encouraged as a way to tap into the contemporary consciousness.  However, Drunk History is a bit on the obscure side and, more importantly, it is old:  four years have passed since its first burst of popularity, with thousands of memes, viral videos, and flying pop-tart cats produced and distributed in the meantime.  In this case, the parody becomes subsumed, unconscious; hence, it becomes pastiche.

Perhaps my favorite example of originality’s lack of importance in contemporary culture is Lady Gaga.  Some years ago, there was a bit of noise raised over the similarity in sound between her song, “Alejandro” and Ace of Base’s 1993 song, “All That She Wants.”


There is obvious quotation in the opening few bars with the flute/synthesizer, arguably in the melody, and perhaps even in the narrative.  I don’t see “Alejandro” as an Ace of Base rip-off, but as a knowing acknowledgment of a type of fetishization of Latin American men in popular music—not just with Ace of Base and Lady Gaga, but with ABBA as well.  The use of the name “Fernando” is, to me, an obvious allusion to the ABBA song of the same name.  It even has flutes!  The reason this becomes pastiche is because, while some of the target audience for the Lady Gaga song might be familiar with Ace of Base, they are largely unaware of ABBA, and overall they are unaware of the fact that Gaga is seeking to quote and allude to these earlier songs, not to steal the work.

More recent claims about Lady Gaga stealing from previous songs have been made regarding stylistic similarities between “Born This Way” and Madonna’s “Express Yourself.”  And there are, of course, many. (FYI–if you click the “Born This Way” link, it’s the full music video, complete with extended movie-intro.  You may just want to skip to the middle to get the gist of the song.)  In fact, Lady Gaga’s entire persona is built on the kind of performance-based, identity-shifting, strong female presence that Madonna embodied in the 1980s and 90s. But Madonna also used pastiche and quotation, most obviously of the look of glamorous, golden-age Hollywood stars like Greta Garbo and Marilyn Monroe, just as much as Gaga uses it.  Gaga is just more blatant, or perhaps I should say, more open, and possibly, more aware.  Lady Gaga’s name itself is a quotation, referring to the 1984 Queen song, “Radio Ga Ga.”

Madonna and Marlene Dietrich. You could do this, side by side, with Greta Garbo as well.

As before, I am not denouncing this trend toward unoriginality and pastiche.  Nor am I disparaging Lady Gaga for employing them.  I love Lady Gaga.  If I could find a way to incorporate her into every single class I teach, I would do it.  What I am doing here is highlighting areas in which we see cultural production on a very commercially and critically successful level, where the originality of that production is not the most important draw—it’s the personae, it’s the performance.  Originality is no longer the touchstone of cultural achievement; packaging is.

Packaging... egg... Gaga... it's a metaphor! Get it?





Not Knowing

1 07 2011

Last fall, my father, brother, and I all went to a Boise State University football game.  It was an auspicious occasion, as the Broncos were facing the Oregon State Beavers and the game was nationally televised in prime time.  It was an exciting game, the Broncos won, and a good time was had by all.  Seeing a sporting event in person provides a full-immersion sensory experience–the game, the crowd, the weather, the sounds of the bands and the public-address announcer, the smell of the grass (or blue field-turf in this case) and concessions, even the dog that runs out to retrieve the tee after each kickoff—that you don’t get from watching the game at home on TV.  The difference that I found most refreshing, however, confuses many people I try to explain it to.  I like the fact that, when you’re watching the game in person, you don’t know everything that’s going on.

There's a little white speck in the top left part of this photo. That's me! I think.

Depending on the network and the stakes, a nationally televised football game has somewhere in the neighborhood of twenty cameras at work.  When you’re watching a game at home, simply visually, it’s as if you’re watching it from twenty different positions within the stadium.  You don’t just have the “best seat in the house,” you have the twenty best seats in the house.  When you’re at the game, you have one seat.  And it might be a bad seat.  At the game I went to, we were high up in the stands, just behind the left corner of the south end zone.  With only one vantage point and one set of eyes, my perception of what was happening was limited.  Watching at home, you can be watching the ball during the play, but then be taken back for a replay of what you just watched except this time you’re seeing what the wide receiver was doing away from the ball.  “Oh, Brent, that corner is starting to get under the receiver’s skin.  It’s getting pretty chippy out there,” Kirk Herbstreit might say as you view the slow-motion footage of the two athletes shoving each other while running down the field.

To be sure, at home one has the opportunity to see the game from many more physical viewpoints than the person at the live event.  But that experience of the game is mediated.  Football is a complicated game.  Players are split between offensive and defensive sides for each team.  There are long- and short-yardage specialists for both sides.  Each side has its own coordinating coach and, the higher the stakes, the more individual position coaches are used—the quarterbacks coach, the offensive line coach, the defensive backs coach.  To compare sports to war can be dangerous, but in the complexity of strategy involved, football is closer to military conflict than, say, tiddlywinks.  Because of this complexity, the broadcast analyst plays a crucial role in the television viewer’s understanding of the game.  Without exception, the major-network analysts for pro and college games are men who either played or coached at that level.  They have years of education and experience with the strategies and tactics of the game, and the good ones are able to communicate what they are seeing and how it is affecting the situation of either team.

This is what makes all those camera angles and slo-mo replays possible.

So, when one is watching a football game at home, that person is getting a more thorough and insightful presentation of the event that is taking place.  However, that experience, however thorough, is mediated.  The camera angles that are shown are being chosen by the director, and those individual shots are being composed and focused by each cameraman.  The viewer’s knowledge of what factors are affecting the outcome (say, a lack of running game or an injury to a key player) is being clarified and contextualized by the analyst.

In fact, that analyst is being assisted by field reporters, producers, and the director in deciding what to address, via which replay is shown and what information is available.  Yes, the home experience of the football game is broad, but it is packaged and delivered by a team of cameramen, directors, producers, and analysts.  You may feel like you know everything about the game you’re watching, but what you know is limited to what they provide.  Your experience isn’t even their experience (it must be something completely different to watch a game with a director and a producer telling you through an earphone what the next replay will be while you’re also supposed to be speaking about the game you’re watching both on a field and through a monitor in front of you); it’s the experience they have made for you.

On the other hand, the experience one has at a football game is his or hers alone.  You may be watching the runner with the ball and miss the excellent swim-move made by the defensive end right before the tackle.  You might be having a conversation with the face-paint-clad fan next to you and miss the time-out performance by the cheer squad.  You probably won’t be aware of the trouble the Bronco offense is having running to the left side due to a thumb injury to the left guard, or that this field goal kicker is 48% from this range.  But you have just as full an experience of the game.  Your opinions on strategy and understanding of what has taken place are first-hand experience, not mediated by a network team of dozens of people.  You know what you’ve seen, but you don’t know “everything.”

I attempted to explain my attitude to my father as we were watching a replay of the game the next night, which seemed almost surreal.  Here we were, supplementing our experience of the game we’d seen first-hand with a second-run airing of the same game as shown to a third party, as if to make our experience more complete, more real.  While it did seem a little trippy, surreal isn’t the right term.  What we were engaging in was hyper-real.

Jean Baudrillard explored the notion of the hyperreal.  For him, hyperrealism is a defining characteristic of postmodernity.  It is the collapse of the distinction between the representation and what it is representing—between the representation and the “real.”  I am not arguing here that the game I witnessed was more real than the game that was broadcast on ESPN.  I’m saying that both games were real.  Hyperrealism is the acknowledgment that what is represented IS reality.

In another context, Michel Foucault argues that discourse is reality, meaning that the discussion about a topic (sexuality for Foucault, football for us) constitutes what that topic is and what it means.  Discourse can be history books, movies, or football telecasts, and all constitute how we understand history as reality.  An example of this is the discourse on Vietnam provided by television and movies.  Increasingly, especially for those of us who did not live through or have any direct experience of that war, what we see in films like Full Metal Jacket or Platoon constitutes our experience, and therefore our knowledge of the Vietnam conflict.  For us, the films aren’t about Vietnam, they are Vietnam.

The "Vietnam" scenes of Full Metal Jacket were filmed at an abandoned gasworks outside London.

Hyperrealism is pervasive.  A week ago, a friend of mine got a text message from his girlfriend that we both made a joke about.  He immediately went onto Facebook and posted an extension of that joke on my wall.  The conversation and the joke spanned three realities—the text, the actual interaction, and Facebook.  Two of these are representations of conversations on different digital planes, yet none of the three is more “real” than the others; they are all intertextual extensions of the same conversation.

To connect this to the football game, the game I witnessed was no more or less real than the game broadcast on television.  And once I watched the game in the rebroadcast, both experiences became my one singular experience of the game.  The real and the represented are one thing, and my trip to the BSU game is now hyperreal.

For me, there is a lure to the unmediated first-hand experience of watching the game in person, of not “knowing” all of what happened.  My experience of the game was subjective—no one else saw the game exactly the way I did.  To not know is to be a single person in a single place at a single time.  To not know is to be human on a very basic level.  To not know is to be a part of reality instead of hyperreality, if only for a moment.

Bibliographic information:

Storey, John.  An Introduction to Cultural Theory and Popular Culture.  Athens, GA:  The University of Georgia Press, 1998.





Ad-stiche

10 06 2011

I often find myself attempting to illustrate Fredric Jameson’s notion of pastiche.  It’s not the easiest thing to explain, in that, on the surface, it is so similar to parody.  Jameson defines pastiche as “blank parody.”  According to him, pastiche lacks the “ulterior motive” of parody, “amputated of the satiric impulse, devoid of laughter and of any conviction that alongside the abnormal tongue [it has] momentarily borrowed, some healthy linguistic normality exists.”  To paraphrase:  pastiche is a copy, like parody; however, it doesn’t acknowledge and isn’t even aware that it is copying anything, nor that there is anything to be copied.

Satirical parody uses mimicry to point out some sort of fallacy or folly concerning the thing being copied.  The Daily Show uses its format as a parody of television news to make apparent the falseness and hypocrisy of mainstream and cable news sources.  Satirical motives aren’t necessarily required for parody, however.  I doubt that anyone would argue that any song by Weird Al Yankovic is seeking to point out any specific folly of the original song or artist—but there is absolutely acknowledgment that the Weird Al song is a copy of something else.  Here, he performs parody on two levels, copying both Dire Straits and The Beverly Hillbillies.

Yankovic is not engaged in pastiche.  He is aware, acknowledging, and mindful of what exactly his work mimics and the relationship it has with the original.  For quite some time, I have used the television show Family Guy, specifically the Star Wars episodes, as an example of pastiche.  When those episodes are compared with Mel Brooks’ Spaceballs, the lack of pointedness is apparent.  Brooks seeks to lampoon the tropes and even the commercial success of the Star Wars franchise, while Seth MacFarlane (creator of Family Guy) seems content to simply re-enact the films with his own characters, throwing in a few tired jokes here and there.

I’m no longer as convinced as I once was by the example of Family Guy’s Star Wars episodes as pastiche.  I think it’s more fitting to see the entire series of Family Guy as a pastiche of the animated sitcom to which it owes its existence:  The Simpsons.  To accuse Family Guy of ripping off The Simpsons is too generous, I think.  From the look, writing, and plot structure of the show, it appears that Family Guy has no awareness that it is the same show—albeit less funny and less original.  The fact that MacFarlane now has two shows that are pastiches of Family Guy itself (American Dad and The Cleveland Show) adds to my point.

My points about Family Guy often fall on the indignant ears of students or friends.  Because of the show’s popularity, people miss how unoriginal it really is.  And, surely, my own dislike for the show colors my arguments and, perhaps to some extent, my judgment.  However, recent television advertising has provided me with some stellar examples of pastiche that may make the concept clearer.

This ad is obviously targeted toward twenty-something hipsters, so the lack of historical awareness regarding the punchline may be a product of knowing that the target audience has little idea where it came from.  But that does not stop it from being pastiche—in fact, it more firmly establishes it as such.

The pastiche of this commercial lies in the line “Can I get a hot tub?”  The actor delivers the line, inexplicably, in the style of James Brown.  The hot tub appears, the agent says, “Nice,” and the commercial ends.  Of course, what the actor is alluding to is a Saturday Night Live sketch from the 1980s starring Eddie Murphy.

http://www.hulu.com/embed/b7jfI71jOPQMAU4LOO_TLQ

In the sketch, Murphy is parodying the voice, singing style, and stage presence of James Brown, though changing the scene from the expected concert stage to what appears to be a low-budget cable-access talk show centered around a hot tub.  Eddie Murphy is making fun of James Brown.  The actor in the State Farm commercial is unaware (or at least doesn’t acknowledge) that he is imitating Eddie Murphy imitating James Brown.  The commercial is a representation of a representation (here, an imitation of an imitation), and therefore a simulacrum.  It is also blank parody—with no awareness that there is a norm from which it diverges.  The delivery of the line “Can I get a hot tub?” is presented as natural and true excitement from the character, not as a parody of something that came twenty-five years before.

This entire ad campaign is pastiche.  It has the exact same premise, structure, and visual aesthetic as a Macintosh ad campaign from a few years ago that featured John Hodgman and Justin Long as a nerdy PC and a cool-kid Mac.

T-Mobile’s ad is the same ad, only for a different product.  Yet there is no acknowledgment that it is parodying or copying anything.  What is the ad agency’s motive here?  Is it to visually associate T-Mobile with Macintosh, luring Mac users to the phone company?  That makes little sense, considering the ad campaign began when Apple’s iPhone was available exclusively on the AT&T network.  While the makers of the ad likely were familiar with the previous campaign (how else could they copy it so completely?), there is no awareness within the ad itself.  It is blank parody.  It is pastiche.

Jameson comes across as being decidedly negative about pastiche.  And, as we can see from my writing, I come across in a similar vein.  However, the opinions of cultural critics aside, pastiche is really just a component of our contemporary society.  Writers are fond of comparing the postmodern condition to schizophrenia and even drug addiction, in that society seems completely focused on the present.  What is happening right now is all-important, and events, products, and even people from the past are left by the wayside and forgotten.  In the twenty-four-hour news cycle, Anthony Weiner’s penis obscured reporting about long-running wars in Iraq and Afghanistan, Donald Trump was reported on as a front-runner for the Republican presidential nomination without ever even running, and Charlie Sheen dominated headlines for weeks.  Remember that?  Charlie Sheen was everywhere.  Does he even matter anymore?

Pastiche reflects this attitude.  Quotations from earlier eras of production can be made, but no awareness that they are quotations is necessary on the part of the viewer.  What matters is what is being produced and experienced right now.  Now, if you’ll excuse me, I have a sudden urge to switch cell phone carriers.





The When and The Where

13 05 2011

Last Thursday was the first Thursday of May.  First Thursdays in Boise are designated with a special name:  “First Thursday,” and are part of a promotional package for the city to showcase its arts and culture.  Galleries and the Boise Art Museum stay open later than usual (the museum is free all day), the “Artist in Residence” program (a number of unused retail spaces in a downtown building that have been given over to artists for a few months at a time) is open to the public, and various other special events take place throughout downtown.  This is not something unique to Boise—other cities have First Fridays, “Art Walks,” Third Tuesdays, Every Other Fourth Wednesday, etc.  Whatever the city and whatever the day, it seems to be a bit of an attention-getting (and downtown business-boosting) advertisement to remind people that, hey!  There’s culture in this town!

In Boise, the first Thursday of May has taken on a sort of “Super First Thursday” status with the annually growing interest in what is now known as Modern Art.  For this event, The Modern Hotel (originally a Travelodge motel in the 1960s that was renovated into an über-hip boutique hotel in 2007) turns over all of its rooms to local artists, who create temporary installations, stage one-night-only performances, or simply turn the room into their own personal gallery.  Originally, it was known as “Art at the Modern” and did not take up the entire hotel.  It has grown to the point where not only is each room utilized, but the action has spilled into the courtyard, the parking lot, even the street.  This year’s event required streets to be closed down to accommodate the thousands in attendance.

As you can see, it was exceptionally well-attended and even a flash-mob was able to fit right in.  It was a Boise-specific Art Holiday and everyone, it seems, got exactly what they wanted, just like Christmas.  Surely, when art is something this special, everyone involved can appreciate something, even if they weren’t necessarily art-inclined before.  However, when art is treated as something so specialized and unique, it runs the risk of becoming marginalized and, in the long run, less capable of creating a true impact.

http://www.fox12idaho.com/global/video/popup/pop_playerLaunch.asp?vt1=v&clipFormat=flv&clipId1=5823337&at1=News&h1=Modern Art In Boise&flvUri=&partnerclipid=

This kind of news story is not uncommon.  Actually, I think this gives more weight to the art created and the event itself than some other reports, like the bemused curiosity or even hostility that Morley Safer so expertly wielded on 60 Minutes.  But even though the intent of the story is to promote art rather than marginalize it, it treats the event as a curiosity.  “What are these crazy artists up to?  This guy’s covering himself in dirt and writing poetry.”  Without the anchor saying or showing it, the implication here is a roll of the eyes and a nudging, “Can you believe this?”

But press is press, even when it’s on television or YouTube or some guy’s blog.  Even though it may be treated as a trivial curiosity, when art is reported on as something special, it generates interest and familiarizes the public, in this case, with “alternative media” productions like installations and interventionist performance art.  But with the attention, the frame of “Art” descends on the event and separates it from the rest of life.  Art isn’t just for every day.  Art is special, like church, and therefore it needs special days.  Church has Sunday.  Art has First Thursday.  It needs special buildings.  Church has, well, a church.  Art has Galleries or Art Museums or Hotels that are turned into galleries for one night a year.  And with this attitude, art, like church for many people, becomes something we don’t think about until those specific times and places.  We go on with our lives every other day of the week, not thinking about art until it’s the right time and place.

But what about art that doesn’t wait for the “right” time and place?  What about graffiti?  Is it only “Street Art” when Banksy and Shepard Fairey do it in London and LA, but vandalism if some no-name does it in Boise or Olympia?  What about activist art?  Is it only art when it’s been canonized in a history book like Suzanne Lacy and Leslie Labowitz’s In Mourning and in Rage (1977)?  What about art that is a public access television show?  A YouTube video?  A low-traffic blog?  A walk along sand dunes?  The dissolution of a township?  What about the 20 year-old man who was arrested in North Boise yesterday for walking around naked?  Might he be art?

http://www.idahostatesman.com/2011/05/12/1647084/police-arrest-naked-man-from-street.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IdahostatesmancomLocalNewsBoise+%28IdahoStatesman.com+Boise%29&utm_content=My+Yahoo

Of course, placing the frame of “art” over objects and activities that exist outside of that frame has the potential to lessen their impact.  To the broader public, Judith Baca’s ongoing Great Wall of Los Angeles may be seen as less representative of the collaborative social impact of the project when attached to the name of Baca as its mastermind or “artist.”  Labeling something as “art” that could also be something else takes the opportunity of the “something else” out of the equation.  The object or action in question becomes simply and solely art.

These examples of Lifelike Art, as Allan Kaprow calls them, work best when they occupy the spaces in between the frames of “art” and, say, “theater” or “social work.”  For instance, I have the opportunity to be a part of a show of faculty artwork this coming fall.  Since my primary works involve television, performance art, or performance poetry, my expectation would be to use something along those lines for this exhibition.  However, my work in television relies on the fact that the viewers who come across it do not think they are viewing art.  The effectiveness of the satire comes from the surprise in their response.  It is the same with interventionist performance.  The unwitting “participants” of Boise Naval Base’s Election (2004) did not see the action as spectacle, as artistic—they were caught up in a goofy action on their way to dinner that may (or may not) have caused them to think a little differently about democracy in America.

BNB members Russ Wood and Flint Weisser campaign on the sidewalk during Election.

When something like this comes into the gallery, it becomes spectacle—it becomes theater.  My favorite example of this did not actually occur in a gallery, but within the frame of “art” nonetheless.  Guillermo Gómez-Peña and Roberto Sifuentes staged “The Cruci-Fiction Project” in 1994, attaching themselves to 16-foot crosses at Marin Headlands Park in front of 300 invited guests and members of the press.  The performance was meant to be a critique of the state-sponsored enmity toward people of color, specifically Latinos, in California.  Even though fliers were distributed through the crowd, asking people to free the artists from “their martyrdom and take us down from the crosses as a gesture of political commitment,” it took the audience over three hours to realize that the artists’ lives were actually in danger and to organize themselves to get the two down.  In that time, Sifuentes had nearly fallen unconscious, and Gómez-Peña had dislocated his shoulder.  The internal injuries were such that, the next day, a doctor informed them that in another half-hour they would have died.

Guillermo Gómez-Peña during "The Cruci-Fiction Project"

The invited guests knew they were there for art, and the presumption was that the performance was like theater–that nobody really dies and nobody really gets hurt.  The reality of the action was subsumed into the assumption of the artifice of art.  The act as art fell short of its stated goal.

Could this have made a greater impact had the frame of art not been placed over the action?  I, for one, do not have an answer.  Perhaps, outside of the frame, a visitor to the park would have asked if the artists were okay and found help to take them down sooner.  And perhaps that person would have thought about his action of helping someone who looked so over-stereotypically like the vilified Latinos of the news stories.  And perhaps he wouldn’t have.  Photos taken of this event have been reproduced as postcards with no caption, Sifuentes’s name omitted and Gómez-Peña’s misspelled.  It’s unclear what impact, if any, those postcards have had on the people they’ve been sent to with stories of a San Francisco vacation.

When artist friends of mine are hard at work in the studio, I often ask them, “Is it art yet?”  Usually, I’m asking them if they think the work is done, but I am also asking if it’s something that is ready to be set aside as art in its own special place, to be viewed at its own special time.  But what happens when the answer is “no?”  Not that it isn’t finished, but that it shouldn’t be viewed in a special place or at a special time.  What if it’s something that isn’t “art?”  Where can we go from there?

(Information on “The Cruci-Fiction Project” came from:  Gómez-Peña, Guillermo.  “When Our Performance Personas Walk Out of the Museum.”  Dangerous Border Crossers:  The Artist Talks Back.  New York:  Routledge, 2000.  pp. 62-72)