Failure is Always an Option

15 07 2012

I stole this slogan from Mythbusters, and it applies to pretty much the entirety of life, not just science and not just art.  Sometimes, the best plans and the most professional presentation you can muster just aren’t enough.  Nobody shows up to your event.  Your artwork does not sell.  A judge gives you a 1.3 for your poem.  1.3!  That happened to me once.  The fact that failure is a possibility should not dissuade you from attempting something.  Nobody ever did anything truly great without the very real possibility of falling flat on their face.

On a small scale, this applies to making changes to a given artwork.  If you are working on a drawing and don’t want to make a needed change because you are afraid you might mess up the whole thing, the whole drawing will suffer as a result.  A poem that you can’t bear to edit, even though it is too long or doesn’t communicate your idea clearly, won’t do anything but stay mediocre unless you do something to change it.

This guy does not take a lot of artistic risks.

Great artists take risks and great artists fail.  It’s a fact of progress, and there’s no use being afraid of it.  In my experience the anticipation of failure is more gut-wrenching than the failure itself.

Of course, sometimes failures scuttle careers.  In April, I wrote a blog entry about how Daniel Tosh is Important.  I argued that his satire is more cutting and critical than the dick-jokes and racism it seems to be perpetuating, and I stand by what I wrote.  Tosh now finds himself on the wrong end of the ire of many, especially feminists, after responding to a heckler during a comedy show with a “joke” about the heckler being gang-raped.

From what I understand of the incident, Tosh had been making a point about how there are terrible things in the world, but that doesn’t mean nobody should make jokes about them.  When the woman called out that “rape jokes are never funny,” he responded in a satirical attempt to exaggerate his own stance by cracking that it would be funny if she were raped by five members of the audience right then and there.

His response was a failure.  It did not effectively satirize mindless rape jokes, nor did it satirize knee-jerk indignation regarding humor with violence as its genesis.  Because this one response failed, the entirety of Tosh’s body of work comes into question—is he really just as bad as the horrible “comics” who respond to the Tosh.0 blog posts?

A similar thing happened to Michael Richards in 2006 and public opinion of him still hasn’t recovered.  In 2011, I posted a vitriolic critique of university art education on Facebook.  I am no longer a professor.

My purpose is to illustrate that even big-time celebrities fail.  Whether I defend or vilify Daniel Tosh, he is still important.  What more are we seeking as artists?  Whatever risks you may take as an artist, the fear of failure shouldn’t stop you from taking them.  Public opinion is something to pay attention to and try to manage as a professional artist, but attempting to cater to it is not the answer.  After all, if what you’re saying doesn’t make your voice shake, is it really worth saying?

For a response to the Daniel Tosh incident, please read this remarkable post by Lindy West:  How to Make a Rape Joke.





Thomas Kinkade is Dead. Long Live Thomas Kinkade.

15 04 2012

When Thomas Kinkade died last week, I got a few emails and Facebook wall posts from former students.  “I don’t know why, but I feel compelled to inform you about this.”  I have had a long and complex relationship with the art of Thomas Kinkade, and his death brought him to the forefront of the greater consciousness (however briefly) once again.

This is an example of a Thomas Kinkade painting. There's no need to look at any other ones, they're all pretty much this.

Thomas Kinkade was a painter.  He was a very well-known painter who produced thousands of paintings of quaint cottages in idyllic settings.  Thomas Kinkade was a businessman.  His galleries are in malls all across America.  There are calendars, coffee mugs, prints of various qualities and price ranges, “original” paintings, and even a Kinkade-themed housing development in Northern California.  For most in the art world, this places Kinkade firmly in the category of kitsch, with his reliance on mass reproductions of artworks and an easy appeal to a general populace.

When I was a graduate teaching assistant, instructing introductory-level studio classes, our first-day activity was to have the students fill out a form answering various questions about themselves.  What kind of music did they like?  What other art classes had they taken?  And, of course, who was their favorite artist.  When the TAs would get together after the first day, the conversation always turned to that last question.  How many Van Goghs did you get?  How many Dalís?  Any late, great, unknown uncles?  And the kicker—the question that would make us howl with laughter and retch with disgust—how many Kinkades?  We were snobs.  We were art snobs.  We were educated art snobs, and we were going to educate these uninitiated undergrads about what was good and what was bad art, and Thomas Kinkade was bad art.

Yes, this is exactly what we looked like. This is what all art snobs look like.

There are several factors that go into dismissing Kinkade outright, and the kitsch argument is only one of them.  His galleries, even if they were not franchised McDonald’s of paintings scattered across America and found next to the Dillard’s north wing of the mall, were vanity galleries.  A vanity gallery is an art gallery that is owned and operated by the artist himself or herself.  While academic art instruction sees itself as operating outside of the art market, the market’s peculiar institutions of legitimation are sacrosanct.

A gallery, in the operation of the art market, is a proprietorship of a connoisseur who gathers the work of a group of artists, legitimized by their inclusion in this stable (yes, this is how the collection of artists represented by a gallery is referred to).  The connoisseur in the form of the art dealer then sells the work to connoisseurs in the form of the buyers.  The connections and collecting history of the connoisseurs provide the provenance for the work, and the connection with that provenance further legitimizes the artist.  The artists aren’t just making great work; the work is great because of who owns it and because of what else they own or have owned.  A Jenny Saville isn’t just important because it’s a monumental painting, but because it was purchased by Charles Saatchi, and Saatchi also purchased work by Damien Hirst and Sam Taylor-Wood.  Good connections in the primary market lead to even better connections in the secondary (auction) market, which lead to collection by or donation to museums, which are the ultimate arbiters of what is important.  What is important to museums is what ends up in art history textbooks, and that is what is taught to students as high art.  This process, as convoluted as it is, begins in the person of the art dealer.

In a vanity gallery, the artist circumvents the dealer in order to get his or her work to the primary market.  The work is sold, yes, but in the view of the institutions of legitimation, a necessary step in gaining legitimacy has been skipped.  How can these primary consumers know what they like if they don’t have a connoisseur to tell them what is important?  Thomas Kinkade made the vanity gallery into Wal-Mart, selling directly to consumers, legitimation be damned.

Yes, this gallery is in a mall.

Aside from the inconsequential, saccharine-sweet subject matter of Kinkade’s paintings, his primary sin in the eyes of the art world is this crucial skipped step.  Other popular kitsch artists are simply ignored:  Maxfield Parrish, Norman Rockwell, Anne Geddes, whoever took those photographs of children in adult clothes in the 1980s.  For those who hate Kinkade, he is more than ignored; he is reviled, and other “sins” are held up in support of this judgment, sins that are allowed to pass with other artists, even artists widely recognized by the institutions as important.

Even as a student myself, I lambasted Kinkade’s use of employees in creating his works.  “They aren’t even his!” I would argue, “He’s just the financier!  He’s a businessman.  Not an artist.”  Many people share the expectation that the genius artist’s hands are the only hands that work toward creating the final object.  The image of the artist’s studio in the heads of art students and the general public alike is one of a lone artist, toiling away at his massive projects.

Art is rarely made this way.  For an artist who needs to make money, it is rarely even a possibility.  In pre-Modern art eras, the sole-genius-production ideal was not as closely held.  Renaissance artists like da Vinci and Michelangelo were part of a guild system, where apprentices would do basic work like backgrounds in paintings or rough out the major forms for a sculpture.  The master was the boss in this situation, but the workload was shared.  Four hundred years later, Monet used apprentices and employees to crank out painting after painting of water lilies and haystacks.  In the 1960s, Andy Warhol went so far as to refer to his studio as The Factory, with artists, actors, photographers, and even lackeys all contributing to its output.  Jeff Koons and Takashi Murakami do not lay a finger on the massive sculptures and paintings produced and exhibited under their names.  These artists occupy the highest tiers of the art-historical hierarchy (Koons is certainly up for debate—another blog on him later), and their output is directly related to the use of employee artisans to physically create the works.

A painter at work in Murakami's Kaikai Kiki studio in New York.

I have written at length about the dangers of high art alienating itself from the tastes and opinions of culture at large.  The disdain for items produced for a consumer, mass, or popular market is self-defeating.  How differently would high art be perceived if Allan Kaprow’s Art Store had been in malls all across America?  What if connoisseurship were permanently circumvented and every person’s opinion had equal validity in the market?  What if legitimation depended more on quality of communication than quality of provenance and connections?  Would Kinkade have died an art star?  Would his paintings be in the PowerPoints of 100-level Art History survey courses?

In many ways, Thomas Kinkade fit the mold of the superstar hero artist.  He had ambition and ego, and he is even rumored to have died due to alcoholism.  In the pantheon of art gods, those qualities have eclipsed any technical talent since 1956.  Personally, I can’t stand the work of Thomas Kinkade.  I also hate the Rococo.  But the Rococo has a place in art history textbooks.  Maybe Thomas Kinkade should, too.

Jean-Honoré Fragonard, "The Happy Accidents of the Swing," c. 1767.
Just... ew.





Daniel Tosh is Important

1 04 2012

Daniel Tosh is a stand-up comedian and television host.  I doubt many people would describe him as particularly socially conscious in either of those roles.  His show, Tosh.0, is a hybrid of stand-up, sketch comedy, and internet video commentary and includes potentially offensive material in bits such as “Is It Racist?” and “Rico’s Black Clip of the Week.”  I think that Daniel Tosh, and Tosh.0 in particular, is a prime example of postmodern entertainment that pushes the boundaries of social issues in a way that results in elevated discourse rather than crass exploitation.

Tosh.0 is Postmodern

The television show is nowhere near original.  Despite my description above, it is inherently a clip show.  Its reliance on home videos posted on the internet makes it the America’s Funniest Home Videos of the 21st Century.  The format of a host in front of a green screen commenting on clips owes its existence to Talk Soup (later renamed The Soup), originally hosted by Greg Kinnear.

Of course, something doesn’t have to be original to be entertaining.  Tosh’s style in delivery and class clown grin make the show engaging and somehow personal, and the adult content of both the videos and the commentary give the show a bite not found in either television predecessor.  The show plays like a highlight reel of internet comment posts, weeding out the merely shocking, racist, or pithy and showcasing the truly snarky and hilariously cynical.

The unoriginality of the show seems to categorize it as mere pastiche, but Tosh.0 is unabashedly self-aware.  From the inclusion of the writing and production crew in sketches to the mockingly prophetic sign-offs before the final commercial break of each episode (Tosh signs off with a reference to a cancelled Comedy Central show:  “We’ll be right back with more Sarah Silverman Program!”), Tosh highlights not only the mechanisms of the show’s production, but also the reality that the lifespan of the show itself is limited.  The sign-off was perhaps more prescient in the early days of the show.  As with many Comedy Central shows, its low production costs come with low expectations from the network—cancellation of a Comedy Central show is a foregone conclusion.  That is, of course, until it catches fire like South Park did, or Chappelle’s Show, or even The Daily Show.

Tosh has also made reference to his predecessors on air.  “Hey, I heard there’s some show called The Soup that totally ripped off our format!  The idea for this show came to me in a dream!  With Greg Kinnear, except it really wasn’t Greg Kinnear…”  In this season’s Web Redemption of a horrible sketch comedy trio, Tosh led the segment saying, “Hey, sketch comedy is hard.  If someone brilliant like Dave Chappelle can go crazy doing it, what makes you think you’ll be any good?”

Tosh.0 is Socially Conscious

The fraternity with Chappelle is based on more than that of hosting popular Comedy Central programs.  Richard Pryor paved the way for Dave Chappelle, and Dave Chappelle paved the way for Daniel Tosh.

Chappelle is credited with approaching issues of race in a comedic way on television unflinchingly and uncompromisingly.  He made fun of racism—not just white racism toward blacks, but also black racism toward whites and Asians, and even other blacks.  It can be cynically concluded that Chappelle and Pryor (who did the same thing thirty years earlier in stand-up comedy) could get away with calling out black racism because they themselves were (are) black.  Daniel Tosh proves that the race of the commentator is not the determining factor for this kind of statement.

The clip that spawned the recurring bit “Is It Racist?” was a video of an Asian toddler in a pool, held afloat by a plastic floating tube suspending his or her head.  Among many jokes, Tosh cracked, “Is it racist if I can’t tell if her eyes are open or not?”  After a brief pause, he said indignantly, “I’m saying ‘Is it?’  Yes… yes, I’m being told by the audience that yes, it is racist.”

Jokes about racism regarding African Americans, Latinos, Asians, Jewish people, and even white people are all approached with a level of honesty and self-effacement that makes them engaging rather than mean.  In a Web Redemption from this season, Tosh interviews a couple whose wedding was ruined by a sandstorm.  The groom was Mexican and the bride was white.  Rather than shy away from racial comments when in the actual presence of a minority, Tosh addresses them head-on.  Any menace in this line of questioning is deadened by the fact that Tosh is conducting the interview in a heart-shaped hot tub.  He often uses the physical appearance of his own nearly-nude body to neutralize potentially heated or offensive confrontations.  It also helps that during these interviews he is unabashedly positive, which is unexpected given the bite of the rest of the show.

Context is key for Tosh’s approach to topics like race, sexuality, abortion, and religion.  He is making jokes, yes.  But his delivery and his appearance, as well as the jokes themselves, communicate an awareness of his own place in the larger issue underlying the comedic bit.  In comparison, it is much harder to see positivity in the comments by viewers on Tosh.0 blog posts.  Many comments come across as simply racist, rather than as addressing racism.

Below is the clip of the Asian “Neck Tube Baby” bit from Tosh.0.  Not only is it an example of Tosh’s approach to race, it also includes the show’s characteristic reflexivity, acknowledging the production of the bit itself.

Daniel Tosh is Uplifting

I’ll be honest.  For the first two seasons of Tosh.0, I changed the channel or left the room during the “Web Redemption” segment.  I’ve never been a fan of cringe-inducing comedy, and the idea of taking someone’s most embarrassing moment, already broadcast to the entire internet, and making a seven-minute television segment based entirely on that moment, seemed too mean-spirited and too awkward for me to watch comfortably.  My fears were unfounded.

Tosh brings the people in question to Los Angeles and interviews them to begin the segment.  The interview includes the cracking of jokes, of course, but Tosh is truly laughing with the interviewee.  The redemption part of the segment is typically cheesy.  The person gets a second chance to complete whatever task went awry and made them internet famous in the first place.  A girl gets a chance to walk down stairs in a prom dress without tripping.  A guy gets a chance to park a Ford Mustang in a garage without running it through the wall.  Typically, in these bits, Tosh is the main point of comedy—often through the use of a goofy costume, such as the Pope outfit worn for the redemption of the married couple mentioned earlier.  Most of the time, the person succeeds in the attempt at redemption, even if that redemption is a little light on payoff.  The internet embarrassment is still out there, though by now they’ve probably come to terms with it.  Heck, they did agree to be on a show knowing full well that the embarrassing moment was the reason for their appearance.

In some cases, however, the person fails in their comedic-sketch attempt at redemption.  Tosh uses this to aim the humor away from the person involved, however.  An appearance by Ron Jeremy after a girl falls down the stairs in a prom dress for a second time becomes a joke about Ron Jeremy (Ron Jeremy is his own joke about himself).  Dennis Rodman appears from nowhere to block a man’s attempted trick basketball shot.  That was perhaps my favorite save.  On returning to the set (these bits are shot on location and shown as clips during the hosted show), Tosh points out that for $5,000, you can have Dennis Rodman show up at your house and do whatever you want… for about five minutes, which mocks the show for paying that much for the cameo and Rodman for shilling himself out so shamelessly.

Daniel Tosh is Important

Daniel Tosh is not what I would consider an activist comedian.  He’s not out to make some great social change in the world.  He’s out to make people laugh and, if you believe his shtick, make a lot of money doing it.  But performers don’t necessarily have to be performing ABOUT an issue to make a difference regarding an issue.  It’s often a matter of bringing the conversation up.  If that approach is comedic, the conversation is that much easier to start.  Tosh’s approach is more high-brow than it may seem at first glance, and for that, we thank you.





Neil deGrasse Tyson is Wrong

4 03 2012

I like Neil deGrasse Tyson.  I think he is a warm and engaging face for science on television.  He’s no Adam Savage or Jamie Hyneman—I have yet to see him blow up anything.  To my eyes, he’s no Bill Nye.  That is one titanic bowtie to try to fill.  But, as celebrities of the hard sciences go, Neil deGrasse Tyson is a shining example.

As host of Nova scienceNOW on PBS, he has proven to be engaging and photogenic.  He makes astrophysics something that at least seems accessible to a large audience.  He is the director of the Hayden Planetarium and a research associate in astrophysics at the American Museum of Natural History.  When it comes to astrophysics, Neil deGrasse Tyson knows his stuff.  However, when it comes to the cultural mindsets of the Twentieth and Twenty-first Centuries, he is mistaken.

Clip of Feb. 27 Interview on The Daily Show

I am basing my criticism on an interview he gave last week with Jon Stewart of The Daily Show, promoting his book, Space Chronicles:  Facing the Ultimate Frontier.  Stewart characterizes the book as lamenting the fact that the United States, as a culture, no longer prioritizes space exploration.  Tyson acknowledges that the Cold War, fear, and the military industrial complex were the driving force behind the rapid advancements in space exploration from the 1960s until 1972, the last manned mission to the moon.  I will add that moon missions stopped around the same time the Vietnam War ended, drawing to a close the hot part of the Cold War.

Tyson claims that it was the space race that inspired society to “think about ‘Tomorrow’—the Homes of Tomorrow, the Cities of Tomorrow… all of this was focused on enabling people to make Tomorrow come.”  This is where he is wrong.  The space race was a symptom of this mindset, but it is the mindset of modernism he is talking about, not just that of the space age.  A focus on technological progress is one of the most rudimentary tenets of modernism, with its roots in the Enlightenment.  We see it in the Industrial Revolution, we see it in the advancement of movements in Modern Art, and we see it in the development of technology for war, transportation, and communication before, during, and after the space race:  from airplanes to telephones to iPods.  Tyson even cites the World’s Fair as an example of an event geared around the space race.  While the World’s Fairs of the 1960s certainly reflected the interest in space exploration in particular, the institution itself has roots in early modernism—in the Nineteenth Century.

Chicago World's Fair, 1893--long before the space race

Despite being incorrect about its origins, Tyson is correct in pointing out that the drive for progress was the great economic engine of the Twentieth Century, and that careers in science and technology were essential for that progress.  The combined factors of fear, war, and modernist pursuit of progress meant that those careers were celebrated as important for the betterment of society.  Little Jimmy wanted to be an astronaut or a rocket scientist because it was a glamorous and important part of society, an attitude that was reflected in films, news broadcasts, and federal funding.

Stewart assumes that the diminished interest in space exploration had to do with expectations of achievement not matching the pace of execution—that we expected to be on Mars by 1970, and since we weren’t there, we got tired of waiting.  Tyson augments his assumption, saying that the diminished interest came from no longer advancing a frontier.  “The Space Shuttle boldly went where hundreds had gone before.”  This is not the frontier exploration that gains headlines in a world looking for better, faster, stronger, bolder, and further.

Aside from being wrong about the societal motivation behind the space race and the connected advancements in technology, Neil deGrasse Tyson clings to that modernist mindset.  His solution for society is to increase funding for NASA in order to mount a manned mission to Mars, which he believes will excite the populace to value the activity of scientists and technologists, thus fueling the economies of the Twenty-first Century.

Maybe Tyson just wants to revive the careers of Gary Sinise and Tim Robbins. It does promise to be thrilling and exhilarating.

As I have written before, I am skeptical about the notion that we are in an era outside of modernist influence.  While originality in art or even in invention is not necessarily the hallmark of progress that it used to be, advancement is nonetheless necessary for success in our creative, corporate, and governmental evaluations.  A person only needs to look at one very celebrated company—Apple—to understand that advancement and progress are still very much parts of our ideology, and that is the second instance where Tyson is wrong.

Contemporary society does value the activity of scientists.  It might not value the same kinds of scientists that made big, physical advancements like space exploration or the atom bomb, but it does value the kinds of scientific advancements that power the new economic driver: information.  According to postmodern theorist Jean-François Lyotard, the purpose of science is no longer the “pure” goal of its Enlightenment origins. “Instead of hovering above, legitimation descends to the level of practice and becomes immanent in it.”  For Lyotard, scientists are no longer trying to find an absolute “Truth” about the universe (that might come from the exploration of, say, space), but seeking to advance the commoditization of knowledge—the consumption of information.

In a way, Tyson one-ups Lyotard.  By acknowledging the driving force of fear in the space race, he acknowledges that the societal motivation for scientific advancement was outcome-based (winning the Cold War), rather than ideologically-based Truth-seeking.  Even at the height of modernism, pure science was a myth.  Nonetheless, the ideas of Lyotard underlie the entire undertaking of contemporary science.  It isn’t about an authoritative Truth, it’s about consumable truths. For scientists, those consumable truths are technological advancements—however minute, however arbitrary. We do value scientists, as long as they are working toward something we can consume.

The fact that, in this photo, the iPhone resembles the monolith from 2001: A Space Odyssey is pure coincidence.

The space race did produce consumables—Tang, Velcro, the Tempur-Pedic bed—but those were indirect in reaching the consumer market.  Today’s advancements are aimed directly at consumers:  tablet computers, smart phones, and cars that park themselves.  These advancements aren’t a byproduct of some high-minded pursuit of pure scientific exploration; they are directly researched, experimented upon, and produced for us.

I sympathize with Neil deGrasse Tyson.  He wants a modernist society where the pursuit of Truth motivates a populace and advances a culture.  But, as he acknowledges, that pure science may never have been the real motivator at all.  Science is now inextricably linked to product value in technology.  The advancements are more accessible, but they are less tangible.

Works Cited:

Tyson, Neil deGrasse.  Interview by Jon Stewart.  The Daily Show.  Comedy Central.  Comedy Partners, New York.  27 Feb. 2012.  Television.

Fraser, Nancy, and Linda Nicholson.  “Social Criticism Without Philosophy:  An Encounter Between Feminism and Postmodernism.”  Universal Abandon:  The Politics of Postmodernism.  Ed. Andrew Ross.  Minneapolis:  University of Minnesota Press, 1988.  87.





Super PACs and Satire

13 01 2012

Last night (January 12, 2012), Stephen Colbert announced that he is “forming an exploratory committee to lay the groundwork” for his “possible candidacy for the President of the United States of South Carolina.”  Colbert is not the first comedian to make an actual run for president in a primary election.  Pat Paulsen pioneered the tactic, starting in 1968 and running again in 1972, ’80, ’88, ’92, and ’96.  Paulsen took second to Bill Clinton in the 1996 New Hampshire primary, beating out some real politicians.  Pat Paulsen’s campaigns, while legitimate (in that he was really on primary ballots), were, at their core, satire.  He used the platform of the campaign to engage in double-talk and tongue-in-cheek attacks on other candidates, essentially mocking the politicians and the political process.

The idea for Paulsen to run for President was proposed by The Smothers Brothers when he was a cast member of their show.

Satire, as defined by Merriam-Webster, is “trenchant wit, irony or sarcasm used to expose or discredit vice or folly.”  While satire often employs humor, it does not always have to be funny.  A classic example of literary satire is Jonathan Swift’s “A Modest Proposal,” in which he mocks the political stances on Irish poverty of the time by suggesting that the indigent Irish eat their children.  The pamphlet is not exactly a knee-slapper.  He is not truly suggesting this as a solution for starvation; he is proposing this exaggerated solution in order to expose the hypocrisy of how the situation was being treated.  For satire to be effective, as Swift’s was, it needs to be exaggerated enough to not be taken as “real,” and be targeted enough that the viewer or reader knows exactly what is being ridiculed and why.

Colbert is not the first to run for President as satire.  This isn’t even his first time running—he ran in the South Carolina primaries in 2008 as well.  While he and others, like Paulsen, are certainly out to mock the policies and campaign hypocrisy of politicians overall, Colbert’s run this time is more specifically targeting the anonymous and unlimited corporate funding of campaigns made possible by the Supreme Court’s 2010 decision in Citizens United v. Federal Election Commission.  This decision held that a corporation is legally defined as a person, and is therefore entitled to the right to free speech, which it exercises through money.

Since early in this election cycle, Colbert has been satirically exposing the folly of this decision, an effort that culminated in the establishment of a Super PAC (Political Action Committee).  Americans for a Better Tomorrow Tomorrow has already used its considerable funds to run campaign ads encouraging voters in the Ames straw poll to write in Rick Parry (rather than the actual candidate, Rick Perry).  But more than simply purchasing air-time, Colbert has exposed the ease with which these organizations are formed by doing it on air, with the help of legal (and on-air) advice from Trevor Potter, former FEC chair and general counsel for John McCain’s 2008 campaign.

Trevor Potter, Stephen Colbert, and Jon Stewart celebrate after handing over control of Colbert's Super PAC to Stewart on The Colbert Report, January 12, 2012.

In recent days, polling results have shown Stephen Colbert with a higher percentage than actual candidate Jon Huntsman, which has prompted this move to “candidacy.”  It has also provided the opportunity to show the fuzziness of the requirement that Super PACs cannot coordinate with any one candidate.  Last night, before announcing his exploratory committee, Colbert transferred control of Americans for a Better Tomorrow Tomorrow to Jon Stewart, host of The Daily Show (which airs immediately prior to The Colbert Report, both on Comedy Central, and is where Colbert was employed before his current show).  The transfer takes only one form and two signatures, and making the transfer to his friend and business partner, on air, serves to highlight the laughable “lack” of coordination between Super PACs and candidates across the board.

Through exaggeration, sarcasm, irony and humor, satire can expose the folly and hypocrisy that underlie day to day life, whether it is in politics or business or interpersonal relationships.  But, as with deconstruction, it stops after the attitude or behavior in question is dismantled.  Colbert and Stewart can point out the abuses by various politicians and corporations of the Citizens United decision, but as comedians and satirists, they would be equally obliged to mock whatever hidden ideology might be behind any proposed solution.

Of course, this is an idealized view of satire—a definition of what satirists are supposed to be.  But they are also people.  Colbert and Stewart have deeply held beliefs about the direction politics should take, about corporate involvement in campaigns and how it should be regulated.  Moral satirists, like journalists, have a fine line to walk between deconstruction and activism.

At the moment, Colbert seems happy to pick apart.  In the past, he and Stewart have sought to build up.  Their October 2010 “Rally to Restore Sanity/March to Keep Fear Alive” had a fairly direct message after all the satire on stage:  one of civic unity in an era of divisive politics and soundbites.  Jon Stewart’s own behavior at times sets an even stronger example of that sentiment.  On January 11, 2012 he interviewed Republican Senator Jim DeMint and managed to have an exceptionally civil discussion even though the two disagreed on the economic solutions DeMint was proposing.

The power of satire is not self-sustaining.  It needs both behavioral and activist support to make quantifiable and effective changes in the folly it seeks to expose.  If you’re looking for examples of how this can work, you don’t need to look any further than Stephen Colbert and Jon Stewart.

Vote Colbert, South Carolina!

Note:  I did not include any links to video clips from The Colbert Report or The Daily Show mentioned in this blog.  They are available online at www.thedailyshow.com and at www.colbertnation.com.  Be warned, the clips contain advertisements (Comedy Central is, after all, a commercial enterprise).  Contemporary Critique in no way endorses the Jumbaco.





Pure Art Sells Out

6 01 2012

The specialized treatment of art education at the university level separates art from other aspects of life. As I have stated before, a qualification for something to be considered “High” or “Fine” art is that the entire purpose of the object is to be art and art alone.  This is the culmination of the modernist mandate for authority and therefore for purity.  To be an expert in something, one must study it extensively and exclusively.  To become an authority in art, an artist must be entirely focused on art and therefore what is produced is art for art’s sake—a pure art.

Jean Michel Basquiat's studio: a working temple of art

This isn’t an attitude that is limited to art.  Other disciplines follow the pattern:  music, religion, philosophy, science, etc.  It is in science, and in the Enlightenment approach to science that so influenced modern thought, that we see how important specialization is.  I could use Theoretical Physics as an example of a form of science that is almost entirely detached from any aspect of the everyday existence of an average person living on planet Earth.  String Theory and inquiry into the status of light as a particle or a wave have little bearing on the day-to-day work of a plumber.  However, I think the scientific method itself is a prime example of how specialization and singular focus work in science, which we can then see echoed in larger areas of study like art.

The television show Mythbusters is a fantastic pop-culture example of the use of the scientific method.  The cast will start with a myth or bit of urban lore.  Say, for this episode, they are taking a scene from the movie Lethal Weapon 2 where Roger Murtaugh (Danny Glover) discovers explosives under his toilet, knowing that if he stands up, his house will be blown to bits.  The solution, in the film, is for Murtaugh and Martin Riggs (Mel Gibson) to jump into the adjacent cast-iron bathtub, which will shield them from the force of the explosion.  The question the Mythbusters pose is, “Will a cast-iron bathtub shield a person from the force of a C-4 explosion like it did in the movie?”

The scientific method requires focused inquiry.  Adam Savage and Jamie Hyneman aren’t looking at the plausibility of Murtaugh and Riggs’ car chase which leads to the discovery of South African Krugerrands and the subsequent attacks by “South African thugs,” or into any of the other spectacular stunts depicted in the film.  The scene is picked apart, with one specific aspect tested after another.  They test how easily one man can pull another into a bathtub from a toilet if the man on the toilet is unable to use his legs due to numbness.  For the show-finishing test, they focus on the shock protection of a cast-iron bathtub.  After determining what variables are acceptable in their experiment and which need to be removed (namely, actual people and a real house), they construct a bathroom on a bomb range with pressure-sensors and a ballistics dummy to record how much of the shock wave reaches inside the bathtub.

This photo isn't from the same episode, but it's still badass.

The Mythbusters engage in solid science, and in solid science, each experiment is designed to test one hypothesis.  Whether the results confirm the hypothesis or disconfirm it, the science is still solid.  In fact, one of the defining factors of so-called “hard science” is that 50-50 possibility for failure.  If a hypothesis is tested in a way that could produce a result that neither confirms nor disconfirms it, the science is faulty—there are too many variables that must be eliminated from the experiment in order to make the hypothesis falsifiable.

The results of hard science carry absolute authority:  a hypothesis is either confirmed or disconfirmed; there is no way to argue for one or the other once the experiment has been carried out.  It is the singular focus of science that gives it this authority, and therefore other areas of study echo that kind of inquiry.  The study of art focuses on art itself—to be an authority is to be an expert, and to be an expert is to study something solely and exhaustively.  This is how we have modeled education.  High school specializes by class (1st period, you study Science, 2nd period, you study Latin, etc.), trade schools specialize by, well, trade, and universities specialize by major and therefore department.

In art, an education focused entirely on art produces artists who make art that is, at its core, about art.  Though we think ourselves to be past the “art-for-art’s-sake” mantra of Abstract Expressionism or Minimalism, the work we produce is referencing other works, other periods of art history, and is a product of our focused education.  An artist like me might propose that anything can be considered art, which is true.  In a bizarre paradox, the supposed non-art activities that artists bring into the fold as art are justifiable as such because our sole area of expertise is art.  We are artists, so anything we do is art.

What this produces, as Howard Singerman and others lament, is a circular production of artist-educators.  The non-art activities produced as art—the “Alternative Media,” the “New Genre,” the weird, out-there, crazy stuff like performance and video and installation and earthworks and sound art—do not have much of a place in the art market. These artworks are difficult to quantify and commodify, and are therefore difficult to sell as objects.  Since they can’t really be sold on the primary market, there’s little to sell on the secondary market (auction houses frequented by collectors) and therefore the path to the institutions of legitimation, namely, museums, is obstructed.

With a lack of accessibility to the market, the path to legitimation instead leads through the institutions of education.  Enter the artist-educator.  Enter the visiting artist.  Enter the special lecturer.  The majority of students graduating from MFA programs are qualified to make art, certainly (really—how much qualification do you need?  More in another blog), but they are qualified for little else in a world that requires “employment” in order to have enough money to live.  Since many graduates focus on the ephemeral or the experiential rather than on saleable products, their education seems to limit their job possibilities to education.  MFA graduates become art instructors, teaching a new generation in a manner as focused and limited as the one in which they were taught. They can also become visiting artists, touring the lecture circuit of universities and art schools, earning not only stipends and lecturer fees, but also legitimation and a place in the pantheon of art history.  The most obvious example I can think of is Chris Burden, an artist who produced little in the way of art objects but is nonetheless immortalized in textbooks thanks to his performances and, arguably, even more because he legitimized those performances and installations himself as an instructor and visiting artist.

'Shoot,' by Chris Burden (1971) was entirely experiential. Even the documentation is lacking. Yet it is a seminal work, known by any student studying performance art.

As I can tell you from experience, finding a place in the ivory tower of academia is no easy task.  There are few positions available for the thousands upon thousands graduating with MFAs every spring, and in an economy like this, with budgets slashed and art budgets the first on the chopping block, even those positions are dwindling.  Young graduates and emerging artists are forced to cope with existence in a world where their newly-gained and accredited expertise will not get them very far.  Outside of Graphic Design courses, little mention is made in university art curricula of self-marketing and business practices, even in courses with such promising titles as “Professional Practices.”  Outside of the miracle of gallery representation and excessive sales, how is a given artist expected to make it in a work-a-day world and still have the time, resources and opportunities to both make and exhibit their work?  While the chances of being an institutionally-legitimized “successful” artist are low, how does one still manage to be a success?

It may be that the definitions of success and legitimation for artists need to shift for our current age of art.  I am certain that the qualification for art as something that is only made to be art has to change.  For someone to be successful at making art, one needs the support of both other artists and a community that finds the art both accessible and important.  High-minded artists and aficionados might argue that what I’m suggesting is that artists sell out and dumb-down their work—that they make kitsch in order to survive.  The pugilist in me wants to quote Lars Ulrich of Metallica:  “Yeah, we sell out—every seat in the house.”

Just because something is good business doesn’t make it bad art.  Metallica earned the scorn of purists by suddenly gaining mass-market appeal with their self-titled 1991 album, also known as The Black Album.  It wasn’t “metal” enough if it appealed to people outside the “educated” and the “specialized.”  But Metallica’s music, when looked at over the span of thirty years, is a continually evolving thing—and I argue that the band has always been unafraid to take risks in order to explore a new idea musically.  Sometimes it appealed to a large audience and thus brought more people into the world of heavy music than may have become interested in it otherwise.  Sometimes it failed—I give you St. Anger.  However, the exploration that Metallica engages in, however popular or unpopular, is an example of the kind of thing you’re taught to do in art or in music.  The problem is that it is seen as being less than pure by those more focused specifically on metal.

Remember how upset "purists" were when the members of Metallica cut their hair?

Metallica’s wide success depended upon appealing to listeners outside of the pure focus of metal music.  They eschewed the institutions of metal legitimation (whatever those may be—sweaty sets in dive bars attended by 50 people?) and adopted a new institution, in this case, mass approval (this was a tactic adopted by pop music long ago, moving away from the academic approval implied by classical and even academic jazz).  The success of artists may too depend on appealing to audiences outside of the institutions of legitimation as we know them. This may or may not include “selling out,” and will certainly require an attitude toward producing art that veers from the purity of art as taught in an academic setting.

As a suggestion for a possible route to take in this regard, allow me to relate a conversation I recently had with a friend.  While he was, at one point, an artist, this friend has been involved in business for 8 years.  He was suggesting a way to earn money toward an artistic venture that, initially, seemed too tied to marketing to be acceptable in an art setting. He wanted to use a crowdsourcing site (like Kickstarter) to raise enough money to buy a CNC router.  He proposed using the router to create images on plywood.  Buyers would select from stock images that were provided or would have their own images to be created on the wood.  To me, this sounded like a very basic, kitsch-based business scheme: make images of people’s babies or dogs on plywood and charge them $300.  His business model seemed sound, but it seemed like just that:  business.

Using a computer program, the router bores different sized holes into plywood that has been painted black.

Here you can see both the texture of the holes and the image itself.

“I don’t want to just make crappy kitsch prints for people—where’s the art in that?”  I complained.

“You don’t get the router just for that!”  He explained.  “You need to offer people who are investing on Kickstarter something in return—they aren’t getting dividends for this investment.  You make them the 4’ by 4’ half-tone image of their grandmother and you then have this awesome router that you can make anything you want with and you didn’t have to pay for out of your pocket!  Now that you’ve got it, you can make, like, a topographical map and fill all the lakes with fiberglass resin, or crazy computer-designed three-dimensional sculpture or whatever this tool is capable of.  The kitsch stuff is just what you do to pay for the tool.”

In this model, the artist is engaging in creative production albeit half of it in the realm of the “low,” the “kitsch.”  He or she isn’t becoming lost to art in the world of the work-week, nor is he or she becoming lost to the wider world in the insulated baffles of academia. Is it “selling out?”  From the viewpoint of pure art, yes.  It may also be an option for success as an artist outside of academia and outside of the art market as we know it.

I don’t have a prescription for how to be successful as an artist in an age after art.  It may be a matter of each individual working out a way to continue creative production while at the same time making some sort of a living.  The art market no longer operates in the “traditional” manner of speculative production and sale through the use of a dealer and eventual historical recognition in the hands of a museum.  Likewise, the closed system of academia loses its power of legitimation as artists in so-called “alternative” areas find venues and audiences outside of the ivory tower.  The idea of legitimation is all but ignored, so a question remains as to how history will immortalize what is produced in this age after art.  Although, if we accept that we are in an age after art—where art is no longer something to be isolated and produced in and of itself—it may be that history is in the same boat.  In an age after history, the question of legitimation may be moot.





Heroes

25 11 2011

Cultural figures regarded as heroes often follow a similar path to other, mythical heroic figures.  From Superman to Hercules to Jackson Pollock to Kurt Cobain, there are components that we tend to latch onto in order to label the person as “great.”  Aside from skill in a particular field, chief among these is that the hero must be, in some way, separate from society.  In mythology, the hero must make a trip to the underworld.  “Real world” heroes, it seems to follow, must also take a trip to the underworld, but they don’t end up returning.  “Real world” cultural heroes must be dead.

Even Superman made a trip to the afterlife.

In the classic Western, the man without a name shows up in a seemingly sleepy town that is overrun by a criminal cattle-rustling gang (Tombstone), a corrupt mayor (Unforgiven), or two families vying for its control (A Fistful of Dollars).  The hero is a symbol of something from outside of society, as represented in the town.  Superman is outside of the society of Earth, as Superman is from Krypton.  Spiderman is a little trickier to fit into this mold—Spiderman is a teenage boy, not necessarily something outside of the society of New York.  However, Stan Lee purposely created Spiderman (and many of his heroes) to be a teenager—teenagers, almost without fail, feel alienated from the society of which they are a part.  Since they feel themselves to be outside of society, they see society as an outsider—even if they really aren’t.

With real-world cultural heroes, it is a similar stretch to see how a given person may exist outside of society.  However, it is often what is glamorized about the person.  Take Vincent Van Gogh for example.  If a person on the street knows nothing else about Van Gogh, they will know that he was in some way crazy and they will certainly know the story about his cutting off of his own ear—which is a crazy thing to do.  A person afflicted with mental illness is outside of the normal boundaries of societal expectations.  This also shows up in the chemical dependency of many cultural heroes.  Ernest Hemingway was an alcoholic.  So was Jackson Pollock.  Sigmund Freud was hooked on cocaine.  For Elvis Presley it was pills, for Kurt Cobain it was heroin, for Hunter S. Thompson it was every drug under the sun.

Proof of the cultural influence of the counter-cultural.

For each of these, and for many more, we see the figure as being outside of the normal confines of expected social behavior.  They are, in some way, “other” than us.  Hunter S. Thompson might be close to the perfect example because, not only did he exist outside of society, he did it in a purposeful manner.  He plunged headfirst into Gonzo journalism and brought the rest of us along for the ride—to see the seedy underbelly of Las Vegas not as a participant, but as a mentally altered, “objective” observer.  His writing is from the point of view of alienation, and through that, we can put ourselves in the position of the hero, if only for a short while.

The real world cultural heroes I have listed here have something in common other than substance abuse.  They are all dead.  Classical Greek heroes make a trip to the underworld.  So did the Roman copy of the Greek hero, Aeneas.  So did the American version of Hercules:  Superman.  So did the basis for the Christian faith:  Jesus.

Non-mythical and non-religious figures have a difficult time returning from the dead, but figures who leave some sort of artifacts have a way to continue “existing” after they have died, even if they are not technically alive.  Van Gogh’s paintings draw crowds and high prices well into the 21st Century.  The songs of Presley and Cobain continue to get airplay or to be downloaded onto iPods.  Even the work of Sigmund Freud, largely abandoned in professional psychology, finds its way into literary, artistic, and academic production.

The longevity of the work of these individuals is the indication of their heroic impact. However, the impact of the works themselves is largely dependent on the fact that they are dead.  Once an artist is no longer capable of creating new work, their oeuvre is complete.  They won’t be around to create new work—so the supply is fixed (hence, with increased demand, prices can go up—see sales figures for Van Gogh’s Sunflowers or Warhol’s collection of kitsch cookie jars).  Also, the work is static—unchanging. We can think of Jackson Pollock’s work as the drip action-paintings of the 1950s and not have to worry that he may have been influenced by Minimalism or Pop or some postmodern abhorrence later on in life.  He wasn’t around to be affected by those.  His work can remain pure in his death.

In poetry, art, and music especially (and certainly in other professions that heroize historical figures), the pattern of substance abuse and death influences the behavioral patterns of students and young professionals in the field.  In ways, it seems that art students want to find some sort of chemical dependence in order to be like the artists they are taught to revere.  On the flip side, one might argue that the “creative mind” is already inclined toward such behavior, since to be truly creative requires an ability to think outside of the accepted confines of societal thought—to exist outside of society.

Personally, I am wary of any broad generalizations made about “creative minds,” as if they are sentenced to be artists and addicts and have no way to behave as, say, an engineer or someone with a “scientific mind.”  While some truly creative people are truly troubled mentally or chemically, many, many more are wannabe hipsters who think that if they drink enough or take enough drugs they’ll be able to be like their heroes—addicted, then dead.

To that end, I am reminded of Sid Vicious.  Sid was no great bass player and really didn’t have an ounce of musical or poetic talent in him.  He was recruited to be in the Sex Pistols because he had the punk look—he seemed to embody the attitude of a group desperately rebelling against society. Maybe that’s all that punk truly was (or is)—an all-encompassing, willful effort to exist outside of society, not necessarily to change it in any way or to contribute some “great” work of art to make general progress.  If that was the goal, Sid Vicious can certainly be seen as punk’s patron saint.

Sid Vicious: A whole lot of style, very little substance

This attitude of nihilism, however, doesn’t line up with the notion of the heroic cultural figure.  Heroes, in existing outside of society, in some way progress or protect society as a whole.  The good guy in the Western chases the corrupt officials out and the city can be civilized again.  Superman fights for “Truth, Justice, and The American Way.”  Jackson Pollock influences the direction of abstraction in art, and the reaction against abstraction, to this very day.

Kurt Cobain existed at the intersection of the outsider and the cultural paragon.  He wanted so much to be outside of the popular culture he was so much an influence on that, in the end, it killed him.  Rather, he killed himself.  True cultural heroes, whether they want it or not, are as much a part of the greater culture as anything they project themselves to be apart from.  Perhaps it is that paradox that drives them further away.  Perhaps it is the paradox itself we end up elevating as heroic.







