Tuesday, January 26, 2016
Comedy Changes the World (2006)
I welcomed David Schimke’s essay “Want To Know What’s Really Going On? Ask a Comic” (Utne Sept-Oct ’06), particularly the necessary connection and comparison between today’s satirists and the suffering of Lenny Bruce. But Schimke’s analysis manages to completely miss the major distinction of artists like Jon Stewart, Stephen Colbert, and the writing and research teams that are essential to their success. Certainly Stewart and Colbert, like Maher and Rock, directly satirize the outrageous abuses of power that make reading headlines an irony-rich exercise. But politics and politicians are not the central subject of The Daily Show and The Colbert Report: their main target has always been (and continues to be) the sorry state of journalism, particularly television news.
Because they are themselves part of the empire of corporate media that they satirize, these comedians must include a level of self-satire and deliberate irony that – while learned from Letterman and company – take political satire and comedy itself into a new dimension of reflection, responsibility, and intelligence. Schimke overlooks how Colbert and Stewart must themselves practice responsible journalism in order to satirize the abundant examples of irresponsible journalism. Perhaps this is one way (among many) that these “jokesters” can in fact change the world, through fake news that provides truth that so-called “real” news won’t.
Nor, Mr. Schimke, would this be the first time that comedians and satirists have changed the world – as the examples of Lenny Bruce, Jonathan Swift, R. B. Sheridan, and George Bernard Shaw should attest.
Letter to the Editor of Utne Reader, Fall 2006.
Lost Formulae (2005)
Written/posted 17 October 2005, shortly after I moved to Texas, and not long after I discovered Lost.
Television drama, like most forms of drama (if not most forms of fiction), tends to thrive in mass production once artists hit on a successful formula. Happily, thanks to the contributions of deconstruction and structuralism, academic criticism seems to have moved past the knee-jerk rejection of formulae as inherently lowbrow or unoriginal. Roland Barthes managed to demonstrate how narrative formulae are at work in cultural forms one might not otherwise describe as narrative (professional wrestling, for instance, or portrait photography); Steven Johnson (in his book Everything Bad Is Good For You) demonstrates the potential utility of formulae and variations from formulae as a deliberate and constructed game artists play with their audiences. Kennings and riddle-games, as I've talked about here before, reveal the long history of formulae in texts and hint at their potential cultural utility. We might be able to take a step beyond commercial interest - knowing what formula makes a program successful, sustainable, and lucrative - and examine how formulae and their variations (in TV drama, as an example) enrich a text's potential and empower its audience.
Take Lost, for example, ABC's quasi-SF hit that helped open the floodgates for a new breed of SF/horror programs this season. At first glance, on first description, Lost appears to fit quite nicely within existing forms and previous parallels. The most apparent, of course, is Gilligan's Island, which also told stories about a group of strangers isolated on a deserted island. The castaways provide a pseudo-microcosm of society in the form of familiar character types (leaders/heroes, ingenues, bungling comedians, sage advisers, mercantile opportunists, romantic interests) and undergo trials both mundane and fantastical, particularly conflicts that arise from petty or personal agendas, but never forgetting the demands of survival or the aspiration for future rescue. Of course, one is a half-hour comedy, the other an hour-long drama, but there are parallels - at least, when we describe the formula this way.
This is hardly a formula, however; it's really more of a description of results. That society should seem to be represented by a microcosm is hardly surprising, since character types are not only ancient models based on observations of society, but cultural paradigms that we use to perceive and program our behavior in society. How the microcosmic relation is constructed is far more interesting than simply noting that one seems to exist. And this is one way in which Lost clearly varies from Gilligan's Island on a generic level: the latter uses character as a given, unquestioned and dependably predictable, while the former uses character as the point of departure, the starting point for constructing plot.
Conventionally, one-hour TV drama has long employed A-plots and B-plots (a better description than "plot" and "subplot" in that "subplots" can often take the foreground for the better part of a one-hour episode), in which two stories are told simultaneously. The best-known structure is that of drama and comic relief - a standby for programs like L.A. Law, for instance, just as it had been in melodrama, American musicals, and variety shows. This is not the only way to deploy A-plots and B-plots, however, and their utility has recently expanded with the rise of series arcs, or long-term plots that last for longer than a single episode. Lost uses A-plots and B-plots to sustain suspense over the long haul of the entire series, while rewarding the audience with smaller versions of the larger arc in each one-hour episode.
What makes this structure work most effectively, however, is the choice to reward long-term investment in the B-plots (installments of the larger series arc) with short-term narratives that expand interest in the series as a whole while remaining somewhat self-contained. The key to this mechanism is character. By means of flashbacks and retrospectives, character histories are gradually revealed - but always in terms of a current, ongoing action or development (typically framed as a series of choices). This process forms the A-plot: we will learn more about these characters, as each story is ultimately about them; but these characters will grow, change, and often confound our expectations by means of their interaction with each other and the situation as a whole as it develops (i.e., the B-plot). One might look at it as Aristotle's poetic principles in action: plot is the soul of the drama, but the windows on that soul are the characters' actions as they overlap with and shape that plot, and through their collective choices enact it.
To put this another way - to abbreviate the formula - Lost lays before its audience an assortment of mysteries: strangers. At first we (as the characters do) interpret these mysteries according to familiar patterns - we think of these characters as particular types. But each plot, each episode, subverts and challenges these expectations by exploring characters in detail, and revealing not only details about these characters' pasts, but an ongoing series of actions in which these characters actually decide what kind of people they are going to be. Our suspense is engaged: we want to find out who these people are, and the more we find out, the more mysterious they become. But the focus on individual characters (or individual character relationships) allows us to be satisfied with a single installment, and feel as if a whole story has been told - while drawing out our interest, and magnifying the allure of the series as a whole, through the interweaving of these characters' stories in the larger fabric (revealed in glimpses only). Each episode satisfies; each episode increases our interest in the next.
How does this empower the audience? Insofar as it demands that the audience do the work of seeing how this interweaving of characters' private tales with the development of the whole manages to embody - not some symbolic set of icons representing an ideal society - but the process of constructing a society itself. The show manages through its structure to focus not only on individuals, but on the nature of community itself. A template is not simply presented to the audience as a product to be consumed; rather, a system is devised through which the audience can (through imaginative engagement) play out various scenarios that both challenge and construct community.
It might be obvious, but it seems worth noting: the irony of examining community through isolation - not only the isolation of becoming "Lost" on a mysterious island, but also the "isolation" that television's so-called "wasteland" is supposed to inflict on passive viewers.
American Dreamtime (2004)
The ancients lived their lives steeped in myth, living alongside and within it. Medieval consciousness imagined itself as existing simultaneously with the past: while there was an admission of historical time, history co-existed with a way of seeing the world guided and focused by constant symbolism. Similarly, Thomas Mann constructed his novels around the principle that it was possible to tell the story of one’s own life by following the shape of myths; his metaphor was that we walk in the footsteps of heroes, by re-living and embodying their quests in our own lives. Meaning is seldom absent from such a world view; indeed, meaning is literally everywhere. The first Australians call this the Dreamtime: a world of story that is neither long ago nor far away, but mythic nonetheless.

Frances Yates described such a way of seeing in her construction of the ars memoriae – the art of memory. In the art of memory, one builds imaginary cathedrals in the hollows of one’s mind, modeled after physical cathedrals. The individual’s inner cathedral was built to contain the story of one’s life, translated into symbols and narratives enclosed within familiar structures, vividly imagined and meticulously practiced. (It is not so impossible an art, I suspect, as it sounds at first.) While the contents of every such imagined cathedral were individual, they were made out of and housed in structures shared by many; every individual house of memory was unique, but the architecture and building blocks were collectively the same. Those who practiced the art of memory built inner worlds – we might call them private infospheres – shaped individually out of shared materials, whose design and components resembled and coincided with the outer world.
We might imagine the infosphere as a postmodern way to perceive mythic consciousness: a world made of shared information combined in infinite variations. We, too, walk inside myth. The American Dream is most often invoked as a description of a common aspiration, but it also captures, in a layered metaphor, a state of mind: a shared dream, a people defined by a commonly constructed idealism. But today’s American Dream – like the consumption-driven infosphere of today, dominated by a crazed rush to control and brand images and ideas so as to profit from their exchange – lies outside our individual control, and perhaps our collective will, too. We sleep but think we are awake; we believe we perceive, pragmatically but optimistically, what is truly the real world. But we ourselves have shaped our gaze, like sleepwalkers, to avoid the unpleasant cracks that shiver across our collective façade. We are like prisoners who eagerly keep watch on walls of our own construction, unaware even that we are afraid of that which lies beyond those walls. Cognitive dissonance begins to describe this – that state of mind wherein we reconstruct our understanding of the world to avoid and revise those data which contradict or endanger the fantasy, the Dream. I wouldn’t hesitate to claim that America today lives in a constant state of cognitive dissonance – constantly fantasizing, but – and this is the key, since fantasy itself is not the problem – unaware that we are fantasizing. Indeed, we adamantly insist that our Dream is the only Dream.
An early Christian (and apocryphal) myth called the Hymn of the Pearl – a beautiful text whose imagery and architecture can be found rippling through other traditions and times as well, and whose influence can be charted in many influential artists (such as August Strindberg, for instance, in A Dream Play) – tells a similar story. It imagines a hero who disguises herself (or himself) in order to seek out a precious stone she has accidentally lost. In the process of her quest, she loses herself; she forgets who she is, where she came from, and why she seeks the Pearl. The story hinges on the moment of anagnorisis, the opposite of forgetting (and a critical structural element in drama); some early Christians called it gnosis, a remembering of the forgotten divine. The Pearl can be used as a metaphor for this; as is the life of the princess (or prince) who descends into the world and forgets she has descended. The American Dream is a dream without gnosis, addicted to amnesia (because history, like myth, reminds us of ways of seeing that we might prefer to forget or ignore). We are like Oedipus in Sophocles’ tragedy; we look for the American Character, for a sense of collectively shared identity (perhaps condensed in a cultural crucible euphemistically called The Melting Pot), but blind ourselves in the moment that we grasp what we seek. We have fallen into dream and out of balance; we have forgotten who we are; and we turn our resentment on those very myths that drive and shape our desire, as though the Pearl itself is at fault for our willful, shame-driven self-forgetting.
What would it take for us to awaken? Or, perhaps: what would it take for us to re-claim (collectively and individually) the authorship of our own myths, and thereby of our own lives?
First written/posted 11 December 2004
Five Responses in Ten Days (2004)
When I worked in Vermont, I was very active with KC/ACTF as a respondent. I wrote this after a memorable jaunt to five different college performances in November, 2004.
... On Monday I completed a brief, ACTF-sponsored tour that invited me to drive more than 1500 miles to five different colleges over the course of ten days.
As I see it, one of the most important services provided by KC/ACTF (a national organization for the advancement of college-level theatre) is that of the “response session,” in which a colleague well-versed in academic theatre is invited to respond to a college production. While I volunteered to respond to these shows, I feel honored by each of my hosts to have been invited to share my comments. I’d like to thank them for giving me an opportunity to see their engaging, challenging, and excellently entertaining work:
Thanks to Colby College for their beautiful, surprising, and reflective production of The Tempest. Those spirits (and their plays within plays) still haunt me.
Thanks to the University of Maine at Machias for provocative social engagement and bold silence speaking volumes in their production of The Moonlight Room.
Thanks to Franklin Pierce College (and particularly to Bob Lawson, for his hospitality), where I found myself transported and challenged by an intellectual, emotional fantasia on Edgar Allan Poe in Dark Cathedrals of the Heart.
Thanks to Eastern Connecticut State University (and guest artist Larry Hunt) for fascinating me with poignant and compelling masks and faces in their utterly engaging rendition of Plautus’ Roman comedy in The Brothers M.
Thanks to Dean College for bringing me Plautus, too, (his play, of course, not the man himself), in a Menaechmi both bold and blatant. This Saturday evening was sexy without being cheap, brave without being rash, and kept me laughing all the way home.
Many thanks to Jim Beauregard for making the whole trip possible.
And an additional thank-you to Ashleigh Ward (Saint Michael’s College ’04) for inviting me to Newburyport, Massachusetts, for an excellent cap to the entire “tour” via a fully professional production of a new play, Cannibals, whose irony-soaked investigation of the lives of frustrated actresses gave me cause to reflect on why I chose a profession in academic theatre over the performance industry.
After This Fall (2004)
Written after the Red Sox won the World Series in 2004:
A few years ago, Sports Illustrated imagined New England after a World Series victory by the Boston Red Sox. Months after the celebrations ended, an inexplicable malaise would creep through the fans. Without defeat, they would lose direction; the Boston Red Sox, World Champions, would lose the haunting “almost” that made them distinctive.
But when the Red Sox finally won a World Series this year, the story of their victory was more amazing than anything fans could imagine – and yet the story we most hoped for. After lying prostrate before the New York Yankees, these “happy idiots” produced one of the greatest comebacks in sports history. How beautiful that a team driven by impossible standards should meet defeat at the hands of a team driven by sheer joy; how poignant that New York’s impossible perfectionism should fall to Boston’s practical idealism.
Can Red Sox Nation survive now that “long-suffering” must be stricken from the phrase “Red Sox fan”? Those who think it can’t don’t understand. Our passion was never about our pain; it was always about hope. The Red Sox finished second to the Yankees seven years in a row. For a Yankees fan, that level of defeat would be intolerable. For a Red Sox fan, this was a fountain of hope. It hurt – it definitely hurt. But I could always count on the Red Sox to make things interesting. Like many other fans, I never loved the Red Sox because they lost. I loved – and love – the team because every game was a thing of beauty. That’s what loving the Red Sox, for me, has always been about: blind, passionate, unfathomable but unquenchable hope – and a beautiful story to make the game worth watching.
Some have said that the Red Sox, those “lovable losers,” will by winning lose their lovability. But no Red Sox fan loves losing. What we love – and what defines that land without borders, Red Sox Nation – is possibility itself. Possibility is what makes every game new, every season “next” season, and every struggle an epic. I can’t thank these Red Sox enough for telling a story, time and again, that kept me coming back for more. If you think this year’s story was amazing … just wait till next year.
Out of Your Head (2004) or, Life in a Bubble
A soapbox rant from 26 January 2004, prompted by Bush the Second and Secretary Rumsfeld. The names and referents have changed. References to "life in a bubble" in politics today often refer to those figures who have begun drinking their own Kool-Aid.
It's time for a brief rant; please excuse the soapbox.
I think it was within the last couple of weeks when Secretary of Defense Donald Rumsfeld, speaking to the national press corps, repeatedly confused Osama Bin Laden with Saddam Hussein. The Daily Show has already enjoyed the comic potential of this sadly frightening episode - in which one of the most powerful men in the world demonstrated an alarming level of hysteria as well as confusion - but I found it reminding me of a conversation I had a few months ago with a family friend.
She asked me to explain why I felt that President Bush wasn't as dumb as he sometimes seemed to be. Rumsfeld's frightening gaffe, I think, helps explain my position. Rumsfeld himself is a veteran of politics in the White House, the Pentagon, and Capitol Hill, and has demonstrated ruthless cleverness. How could he possibly confuse Bin Laden and Hussein? While I enjoy entertaining conspiracy theory - it is possible that this was yet another deliberate effort by the Bush administration to juxtapose two independent concepts (like Saddam Hussein and 9/11) through relentless talking points - his almost manic delivery suggests it was an accident.
I suspect that the tactics deployed by the Bush administration are so compelling that the politicians now believe their own hype. Logical contradictions and contrary facts have no place in this world-view, which is built on a bedrock of unquestioning faith. As the authors of All The President's Spin carefully demonstrate, misdirection is one of the most effective tactics the Bush PR machine uses. As Michael Gazzaniga observes in his discussion of how humanity thinks, we are more susceptible to such techniques than we imagine; he suggests that it's very much a part of how we see the world.
Gazzaniga describes a famous trick ("Out of Your Hat") performed by Harry Blackstone, Sr., in which the magician created the illusion of pulling a full-sized donkey out of his top hat. Magicians "use the simple device of redirecting our attention to make objects that are in our full view, that we know our retina transmitted to our brain, go unnoticed." (The Mind's Past, 1998.) By carefully directing the audience's attention towards a lovely assistant and some elaborate gestures - using speech to focus that attention - the magician ensured that the audience completely overlooked another assistant simply walking on stage in full view with the donkey. It was only when Blackstone re-focused his audience's gaze towards the donkey that it "appeared" to them.
Now, imagine a situation in which the trick goes on as planned, but Blackstone has begun believing his own powers of conjuration. Perhaps he startles himself when the donkey appears. This is not unlike the situation we now face with the current administration. Through repetition, constant misdirection, relentless adherence to talking points, and fanatical secrecy, our current leadership has managed to convince itself that the illusion it's been selling to the American public is real.
Rumsfeld and Bush probably don't deserve to be singled out for this; self-delusion is part of how our minds function. Not only do we revise information as it comes in, we constantly re-write our own histories throughout the course of our lives. (Ronald Reagan, for instance, confounded his staff and the press corps with stories from his life that clearly never took place - and no one could ever really tell if Reagan himself was aware of the discrepancies.)
But this is hardly an excuse. Cynically accepting misdirection as a fundamental part of the way we perceive and communicate is insufficient. As I pointed out in my discussion of skaldic poetry (below), this kind of deception is only acceptable when all sides are completely aware of it. There is clearly a large portion of the American public that actively wishes to be deceived - and has little desire to be disabused of their comfortable illusions. Why believe that American forces might be systematically killing more Iraqis than the insurgents themselves? In the case of the Abu Ghraib prison scandal - which shouldn't have come as a surprise to anyone familiar with the documented history of American conduct in Vietnam - the most popular response to the issue, and Bush's rebuttal, was the repeated assertion that, in essence, "Americans don't do such things." It's almost as if we don't need Bush's help to look away from uncomfortable facts, towards attractive illusions of moral superiority.
The problem, however, is that Rumsfeld's conflation (the summation of years of effort to conflate Iraq and Al-Qaeda in the public's imagination) didn't take place during a magic show, and the news from Iraq - while determined by the conventions of narrative - is not a poetic fiction. We are fooling ourselves by our own active misdirection - from the world around us to what we think that world ought to be. Rumsfeld and Bush are not fools; they are devout believers in a lie.
The antidote, I think, is the same as the skaldic poet's, and the magician's - to constantly remind the listeners that they are playing a part in the act. In the case of politics, however, the audience must stop playing a passive role, and realize that it's playing a role in its own deception.
Spinning Stories (2004)
At my brother’s suggestion, I recently devoured All The President’s Spin (2004) and enjoyed its rational, careful analysis of how the aesthetics of campaign rhetoric can operate as a form of political power – and how the Bush administration seems to have demonstrated a quantum leap in spin technique. It occurred to me that “spin” was a term that had crept up on me unnoticed; I don’t remember when I first heard it used, I’m not quite sure how long it’s been around, and for several years I’m quite sure I didn’t know what it meant. Now, like many an infectious meme, the concept of “spin” has proved far too useful to shake.
The Wikipedia’s entry on spin argues that the term signifies “a heavily biased portrayal in one’s own favor of an event or situation” designed to bring the spinner back on top. The label of “spin” implies that a source of information has been “disingenuous, deceptive and/or highly manipulative” in using tactics intended “to sway audiences away from widespread (and often commonsense) perceptions.”
The Online Etymology Dictionary (which notes the appearance of “spin doctor” during the Reagan presidency, in 1984) shows that the word comes from the Old English spinnan (twisting fibers into thread) and also implies stretching. Political spin, likewise, involves stretching facts and twisting them around each other to transform the thread of public discourse. Spin first meant “revolving” or “turning rapidly” in the 17th century; political spin certainly involves not only turning the facts around (making them mean the opposite of what they appear to indicate by shifting their context) and disorienting the press and the public, but also a reversal of fortune, in that negative data is re-worked to serve the argument it initially appeared to refute. The goal of spin, after all, is to come out on top, no matter what the available information seems to indicate.
What fascinates me is how this tactic – and I enjoy how the excellent correspondents at Spinsanity manage to consistently treat “spin” as a tactic, not a condition – is often perceived as a state rather than a methodology. The emphasis, in other words, is on the pre-existing “bias” of the spin doctor, rather than on the feats of misdirection and illusionism that are required to make spin happen. (Note that bias, like spin, stems in part from language about the weaving and cutting of fabric – as does the word text, and, of course, fabrication).
But if one accepts bias as a fact of human nature – not insurmountable, of course, but nevertheless a basic tendency to look at the world from one’s own point of view (go figure) – then we can see a certain amount of “spin” in the stories we tell ourselves about who we are and what we’ve experienced. In the 13th century, Snorri Sturluson’s retelling of Norse myths preserves and re-uses a tactic favored by the Norse poets: the use of misdirection and deception, both as a plot device and a rhetorical strategy. His concept of poetry was deeply informed by this: poetry is a beautiful deception, a vastly creative construction: a distortion at best, and a fabrication at worst.
How is it that we have become such fundamentalists when it comes to story-telling that we expect reporters to be capable of conveying information without bias? Do we believe that their words can somehow provide unadulterated access to the truth? Nor do I suggest that the existence of bias invalidates the concept of truth. Aristotle, in defending the value of poetry, argued that many imitations of the truth can provide greater access to truth – through their compilation and comparison – than any one “authoritative” account might provide. I mean by all of this to suggest that our culture seems addicted to an almost unquestioned belief in pure information, as if information could exist independent of the story that shapes, conveys, and catalyzes it. David Mindich writes:
“… reporters, despite their claims to be ‘objective,’ did not (and do not) operate in a vacuum. This is what makes the information/story dichotomy so untenable: information cannot be conveyed without an organizing narrative, and stories cannot be told without conveying information.” (Just the Facts, 133).
Poets of other times knew this; the Northern skalds told a story that their audiences knew had been twisted and changed, to suit both the occasion and the audience. They told of how the world was created by ancient gods who could spin the world out of the “deceiving gap” that was before creation. They sang that their own words were borrowed or stolen; that their words, like the words of chieftains and even gods, should never be completely trusted. It was only in full knowledge that they were being, in some part, deceived by the poet that the audience could really play the game of meaning with him – and thereby absorb, with a skeptical ear and eye, the poet’s story in such a way that they might glean some truth from it.
We, on the other hand, appear to have lost sight of both the unavoidable deception inherent among reporters – who use a ruthlessly minimal form of poetry, but poetry nonetheless – and their willingness, as well as our own, to be blatantly misled. Many reporters seem blithely unaware of how their claims to objective transparency deny the power that they exert, particularly in constructing and perpetuating a particular bias.
Ultimately the worst sort of poet, it seems to me, is the one that claims that everyone else is doing the spinning. Such liars are dangerous: from their point of view, the world only revolves around them. We must also recall that no story can ever provide the complete and total truth. (If it could, it wouldn’t be a story – it would be reality itself.) By understanding the mechanisms of storytelling we can better perceive how a story has been spun – because all stories spin. It’s just that some poets are more honest about their alterations than others.
Faking the News (2004)
The Daily Show on Comedy Central bills itself – with proud self-mockery – as “just your basic cable fake news show.” It is becoming increasingly recognized, particularly among media critics, as an unexpectedly powerful venue – and a journalistic force to be reckoned with. Congressional guests have remarked often that they hadn’t heard of the show before – and, now that they knew about it, were terrified to appear with Jon Stewart. But the show really isn’t a stealth vehicle for underground investigative journalism, or an entertainment-industry mouthpiece for the left; its only claim is that its purpose is comedy, which is also its only license. This is what gives The Daily Show such efficacy.
Consider the question of journalistic objectivity. To use one popular metaphor, journalists try to open a window on the world; their goal is to be the transparent glass, so that the audience looks through their stories to the events those stories describe. Journalists like to be heeded, but not necessarily looked at (politicians are somewhat similar in this respect). Like a tragedian, a journalist aspires to a clear and direct plot, with compelling emotional force and few distractions, the better to place the content in full relief.
But comedy in general is more about the frame of a window than the glass. The urge in comedy is to be in on the joke; that is, part of that external frame of reference that looks at a thing and finds it funny. Arthur Koestler once suggested that laughter comes from an explosive experience of creativity: suddenly, two otherwise unrelated fields of experience unexpectedly collide – we put the pieces together – and we laugh. Comedy as a form demands the audience's awareness that jokes are being made. A comedian needs to be looked at in order to be heeded; the audience must be given permission to laugh, and the comedian must be given license to be funny.
Despite the importance in comedy of identifying with the subject, laughter depends in some part on distance, on stepping back from a moment in a very gripping physical sense (think about it - your body literally stops and has a seizure). “Someday we’ll all look back on this and laugh.” We speak of comedians as providing a means of getting ourselves to look at the world in the proper perspective, and of laughter as medicine.
In an interview with Del Close, one of the authors of Truth in Comedy, the playwright John Guare asked what Close thought the purpose of political satire was. Close didn’t hesitate: “Death,” he replied. The purpose of the satirical comic was to brutally and ruthlessly reveal the truth, Close maintained; “knock ‘em dead” is a goal, not just a metaphor. The truth as revealed in comedy is so toxic – indeed, to the comic perspective, everything and everyone is perfectly defective – that when confronted with it, our bodies have no choice but to respond with laughter. Laughter provides catharsis and relieves the tension that builds as a reaction to unpleasant realities. The comic, Close felt, provides a means of shocking ourselves back to health.
That moment of laughter is a measure of objectivity; in it, we viscerally separate ourselves from our experience. We see the joke; we get it. Sometimes we can even anticipate the punchline – and still the laughter can carry us away. This is because the comic announces herself; she reminds us, “I’m just being funny,” almost as a means of saying, “Hey, I’m just as much a mess as everyone else.” The frame is obvious and available to view – on the Daily Show, there is an almost celebratory shoddiness to their approach. At any rate, they never fail to exaggerate their own failures, or inflate their importance. And while denigrating themselves, they also satirize their models, the journalists. The Daily Show looks at the process of looking through a journalistic window. Journalism is critiqued; but it also must be practiced, or the jokes won’t work – the audience won’t get the joke if the story isn’t told.
I might even go so far as to suggest that “fake” news – if its purpose is satire – bears an even greater burden of truth. Audiences are always tougher on comedy, after all – and they can tell when a performer is faking it.
First posted/written 16 September 2004, on the subject of The Daily Show with Jon Stewart.
Making the News (2004)
Every story is told through transformation; storytelling changes stories. It’s perhaps a truism that stories change people; I often encounter theatre students who tell me of seeing one show that changed their lives forever. People can easily change stories, without even meaning to; anyone who’s ever played “Telephone” understands how quickly and drastically a message can be changed by its repeated transmission. But such changes are not simply interference; the complete process of telling a story involves several steps, each of which requires transformation in order to function.
A storyteller hears or gathers the material for her story; even her imagination is made of memories and familiar patterns. Taking in any information at all requires relating that information to previous experience; because the storyteller’s experience is uniquely her own, she perceives the raw material of her story in an absolutely individual way. Then she shapes the story – perhaps in different ways for different audiences. The story is changed to fit the specific occasion or purpose. And then the story is told – it is fundamentally shaped by the moment of its performance, or in the case of written stories, both the form of its publication and how it is ultimately read.
Telling stories of any kind demands changing the story – even if only changing the teller and the told. Why, then, does modern journalism cling to the notion that a storyteller can and should be passive in relation to the story? It is a peculiar – and perhaps suspicious – medium that tries to render itself completely invisible; it’s odd for a storyteller to exert the power of telling a story, but claim to be passively and dispassionately transmitting reality.
David Mindich notes as much in his book Just the Facts: How “Objectivity” Came to Define American Journalism (NY: NYU Press, 1998), where he traces the development of current journalistic “objectivity” in the 19th century press. He objects to
“the idea that somehow journalism is an ‘objective’ craft and that journalists are engaged in a basically passive endeavor. … journalists, the story goes, are not active constructors of a story. Even when more active verbs are used to describe reportage, as when journalists ‘gather’ the facts or ‘uncover’ the story, they are still basically observers, poking their noses into an area where others have not yet gone. … One of the reasons no one has written a history of ‘objectivity’ is that it’s difficult to discuss an ethic that is defined by its practitioners’ lack of perspective, bias, and even action” (7).
He continues:
“But journalists do do things. … To say that journalists make the news does not mean that they fake the news. Nor is it to say, as some sociologists have suggested, that the news never reflects the outside world. It simply means that journalists do and must construct stories, because of their membership in the world of humanity” (8).
Mindich certainly doesn’t make the error of presuming that his own perspective and his own voice don’t exist; the story he tells seems historically accurate, but also quite clearly his own. It seems that an almost Brechtian perspective must be brought to bear in the news: a distinct and critical awareness, in both the audience and the storytellers, that a transformative transaction is taking place through the intoxicating power of narrative.
First written/posted 8 September 2004.
Poets' Mead and Culture (2004)
There are countless issues and examples that Lawrence Lessig raises that deserve broad and open public discussion. I’ll probably hit a few of these in future posts. But for the moment, consider the Norse concept of “content” as articulated in myths about the wisdom of Kvasir and the mead of poetry (described in previous posts).
Like digital content, the mead of poetry is defined as much by what is changed as what is preserved. Scandinavian poets knew that they were clothed in borrowed robes; of course, they were operating in a feudal culture, but even so, the kings and lords who might lay claim to the person of a poet could never claim ownership of a poem. Poems were often dedicated to royalty and chieftains (they still are); but the purpose of such dedications was never to say “this poem belongs to King Harald” but “this poem was devised to honor King Harald,” presumably by ensuring that his memory would be preserved in the transformative matrix of culture that is sometimes called “posterity.”
More to the point, even the gods themselves were said to have stolen poetry (the poet’s mead) and the wisdom it enables (Kvasir’s wisdom, accessed through poetry). Rather than evading the reality that art and culture are always crafted out of what precedes them, and claiming sole ownership of a cultural work, the myth of Kvasir and the metaphor of the poet’s mead declare up front that what the artist creates is borrowed, if not stolen. In fact, by uttering the words of poetry – which, I must point out, was primarily a performed rather than a published medium in the 12th century – the poet was literally spilling his mead out into the listening audience, where they might taste of it themselves – indeed, drink it with the poet by internalizing the words and (like the poet) speaking them aloud as their own.
For a contemporary example, consider your favorite song. Perhaps the lyrics are dear to you, because the song marks a critical time in your life. If you wish to sing the words, are you stealing from the artist? After all, the singer could never have anticipated what those words might mean to you, could she?
And the skaldic poet – who announced that his poetry was itself stolen or borrowed from the gods – gave shared ownership of his work to his community, quite literally praying that the audience would take that work and make it their own – that they might all drink deeply of Kvasir’s wisdom together.
First written/posted 5 September 2004
Mead of Poetry (2004)
Odin is sometimes equated with Jove because (like Jove) he appears in the myths as the “High One,” the lord of his fellow gods. But when the Romans described the gods of the Germanic tribes, they equated Odin/Woden with Mercury, a.k.a. Hermes. Hermes and his Egyptian antecedent, Thoth, are both credited with the invention of writing. As the maker of runes and the god of poetry and poets, Odin plays a similar role; like the other two, he is also associated with the magical properties of speech and spell-casting.
Poetry, as the shaping of language through memory, imagination, and speech, was regarded as a mystery, a secret craft not unlike magic. Such power required considerable training, which came at a price. Curiously, at every stage in the myths, poetry is considered so powerful that no single individual can be said to contain or own it. The tales of how Odin claimed power over poets are full of duplicity, theft, and lying, but poetry itself escapes any attempt to utterly control it. It seems entirely appropriate that a common kenning for poetry was mead – for example, “Kvasir’s mead” or “the dwarfs’ mead”. In this tradition, one can be said to literally become drunk with poetry.
~ Kvasir was one of the wisest of the gods, and traveled the world teaching everyone he met. Two dwarves, named Fjalar and Galar, thought to profit by Kvasir’s death; they killed him and reduced his remains in a pot called Odrerir, where they mixed Kvasir’s blood with honey to create a powerful mead. Anyone who drank this mead would gain access to Kvasir’s wisdom: the mead, itself, was poetry.
~ Suttung the giant had a different axe to grind with this murderous pair of dwarves; they had killed his mother. Upon capturing them, he demanded the mead of poetry as a ransom for their lives. Hoping to keep its power to himself, he kept the mead in a deep cave, guarded by his daughter, Gunnlod. Everyone coveted the mead – particularly Odin, who was known to steal that which caught his eye.
~ Through a series of disguises and schemes, Odin managed to break into the cave and seduce the giant’s daughter. For three nights, he drank the mead; on the third night, he changed into an eagle in order to escape. Suttung discovered the theft, and changed into an eagle as well to give chase. Some of the mead escaped Odin’s mouth as he flew; some he allowed to drop, in order to distract the giant close behind him. When he finally made it over the walls of Asgard, he spat out the bulk of the mead into vessels the gods had prepared, making his plan complete.
But as the story ends, the gods cannot claim all of the precious mead for themselves. Both deliberately and by accident, some of the mead fell on the earth, where it touched some of the living; those whom the spilled mead has touched become poets. While this is a somewhat visceral and queasy metaphor for divine inspiration, several points in the metaphor are important:
- poetry is regurgitated and re-consumed. It travels through many stages and transformations (like mead); it does not belong to any one individual.
- It is likewise an important point that poetry – the distillation of the mind of the wisest of gods, and providing access to that knowledge – is constantly shadowed by deception and distortion.
- Finally, the poets themselves are the media (the vessels, the transformers) of poetry. In Norse tradition, poets are powerful, but one should never entirely trust them – no more than one should ever entirely trust Odin.
[First written/posted 2 September 2004]
Odin's Runes (from the Edda)
The Norse god Odin (a.k.a. Odinn, Woden, and Wotan) sings of how he took up the secret craft of runes and rune-making in an ancient Scandinavian lay, "The Song of the High One" (Hávamal). Here is a key segment, translated from medieval Icelandic by Patricia Terry (in Poems of the Elder Edda, Philadelphia: University of Pennsylvania Press, 1990):
------------------
Odin said:
I know that I hung on a high windy tree
for nine long nights;
pierced by a spear - Odin's pledge -
given myself to myself.
No one can tell about that tree,
from what deep roots it rises.
They brought me no bread, no horn to drink from,
I gazed toward the ground.
Crying aloud, I caught up runes;
finally I fell.
Nine mighty songs I learned from the son
of Bölthorn, Bestla's father,
and I came to drink of that costly mead
the holy vessel held.
Thus I learned the secret lore,
prospered and waxed in wisdom;
I won words from the words I sought,
verses multiplied where I sought verse.
------------------
(First posted 30 August 2004)
Digital Runes (2004)
Written as preamble for the original version of my first blog, Digital Runes, and posted 30 August 2004.
Why “Digital Runes”? Let me clarify that my purpose is not to create a system for psychic prognostication via the internet. Neither do I propose to examine the history of alphabets, at least in any precise scientific sense. Between these two accepted meanings of “rune” – (a) characters inscribed in wood or stone used for fortune-telling and (b) a specific alphabetic system of writing used by ancient proto-literate societies in Northern Europe – are more provocative meanings for the term. A rune is also a riddle: a specific figure whose meaning is activated through the imagination of the “reader” – who might also be described as “listener” or even “singer.” Digital Runes, then, signifies a collection of images, figures, reference points, and so on, that are intended to provoke a creative response in the reader – as well as serve as a system for remembering the content that is generated.
The great Norwegian playwright Henrik Ibsen described one of his contemporaries as understanding how runic symbols differ from more conventional symbolism:
“[He] has allowed the symbolism to stand there without commentary like a runic inscription, leaving it to each member of the audience to interpret it according to his or her individual needs … And the play does not end at the fall of the curtain on the fifth act. The true end lies beyond; the poet indicates the direction in which we may seek; it is now up to each one of us to find his or her own way there.”
Quoted in Michael Meyer, Ibsen: A Biography (1971: 148).
An author who uses runes is not interested in concealing a specific, concrete solution or hidden meaning; rather, what is hidden is the exact potential of the author's reference, which the reader (or listener, or performer) must unlock through interpretation.
Although semiotics and deconstruction have complicated the referential stability of language and representation, the basic structure of the sign remains that of an indicator: a sign points to something. The meaning of any sign may unfold in the interactions among signifier, signified, and interpreter, but the shape of its function is based on periodic and definitive resolutions (as if one were walking a path marked with clear road signs) – through icons that narrow the field of possible meaning and achieve communicative closure. To put that another way – signs direct the reader on a specific journey with a defined path and a definite end.
Runes are not signs, in this sense. Runes mark out not a road, but a riddle – they don’t so much direct the interpreter on a specific path as inscribe the boundaries for a game. Runes point away from answers and direct referents. Instead of closing meaning (arguably the basic purpose of communicating with someone – being understood), runes use expanding layers and fields of reference to open meaning. The purpose of a rune is not to transparently lead the interpreter to a defined meaning; it is to deliberately obscure a specific meaning in order to open up a larger field of possible interpretations. Runes attach to contexts, not precise definitions; and the ancient riddle-game that they invoke is not the sort where the answers are printed upside-down at the bottom of the page. To put it another way, runes function as acts, as an activity, rather than establishing facts or set conditions. A rune never means one thing; it stimulates a process of making meanings.
The history of the word “rune” itself gives some indication of this, as well as how flexible the concept of runes can be. In the 12th century, the Middle English word rune meant an utterance, whisper, or murmur; by 1200 it could also mean speech, language, and even a song or poem. This usage descended from the Old English run, which meant “secret” or “mystery” – suggesting that the later usage carried with it the artistry of evasion. All of these later forms (and there were many in Northern European languages) are cognates of the Old Saxon runa, which means a secret or mystery, but also “counsel,” as in sage advice. Later descendants include the German raunen (“whisper”) and the English rumor. (For more information, see The Barnhart Dictionary of Etymology, NY: H.W. Wilson, 1988.)
The relationship between these carved figures in wood or stone and the magic associated with them was more than denotative, as a specific rune could be used for varying and diverse purposes. But as an artifact, a rune not only concealed a secret, its use was also a craft – a mystery that had to be learned through practice. Runes had to be enacted (activated by the “reader”) in order to be useful. The value of these devices was not simply as alphabetic characters, defining a coherent sound; they served as the marker for secrets, whispers, rumors, songs, and poems – all activities involving the multiplication of meaning, rather than its limitation. As an idea, then, “rune” describes a fairly coherent range of related concepts: something hidden has been marked, but requires the active participation of the user in order to be used. Moreover, just as a rumor tends to escape the event or story from which it originates – expanding into a broad sphere of unanticipated interpretation and transformation – a rune likewise begins a process of meaning-making, rather than simply bringing it to rest.
“Digital Runes,” then, describes an experiment in using kernels of data – inscribed figures of text, imagery, or events – that I encounter and reflect upon. My purpose in each is not to exhaust a particular subject but to open it for further speculation. Each essay – each post – comes from an impromptu attempt to activate the potential significance of a starting point of discrete data. In the case of this particular post, today’s musing stems from a passage on runes in my doctoral dissertation, The House of Memory (2000). But in combination with the subjects of the other posts, a new range of possible meanings and associations can be opened up. Indeed, the authorship – as is the case on blogs open to reader commentary – is shared by all who participate in it. All it takes is active engagement and creative reflection to share in the craft of reading runes.
Kenning "Grok" (2004)
A variant meaning of the term kenning derives from an older Anglo-Saxon verb for “knowing” or “recognizing,” ken. Various definitions associate this kind of kenning with sight and seeing, which dovetails curiously with the poetic device and its use of imagery (as well as the powerful use of vision in ancient memory arts). In this light, this blog – an experiment with latter-day kennings – also becomes a verb: an exploration of knowing, particularly through multimedia experiences.
That said, it seems a bright idea to help curious readers ken my more obscure references, such as the term “grokking” in the heading of my last post. For those of you unfamiliar with Robert Heinlein’s science fiction novel Stranger in a Strange Land (1961), “grok” is a verb invented by Heinlein and later popularized among SF fans and neopagans. It’s a useful description for a depth of understanding and communication that transcends language, and it’s one of the more versatile slang terms you’ll ever come across.
In brief, “to grok” is to understand something so completely that one internalizes its meaning.
In Heinlein’s novel, grok literally means “to drink.” Because the word comes from a culture where water is so scarce that it assumes religious significance, this kind of drinking carries with it huge metaphorical associations. To drink water with another is to share life with another; sharing water is to share existence as well as sustenance. Grokking, then, is itself a metaphor for sharing meaning on so profound a level that the meaning shared (like water consumed) becomes part of each drinker.
Let’s take this from another angle. Imagine that we’re watching a particularly spectacular sunset over the waters of the Mississippi River. You say to me, “What a spectacular sunset!” I might reply: “I hear you,” which suggests that I’ve heard your opinion (without necessarily agreeing with it). I might say “I understand,” which implies that I have some concept of how and why you find this sunset so spectacular. I might reply, “I know,” which hints that I concur with your view. But if I were to say, “I grok,” what would that mean?
It would mean that – in a gesture bordering on faith – I have not only heard and understood your view, I share it to the point where I have effectively made it my own. I’ve internalized your statement, the moment, the sunset itself, and the memory that all of these have made. Insofar as it’s humanly possible, I’ve tried to make what you see my own – and, by reflecting your view, made my view yours. We have shared understanding, just as one might share a drink of water. One cannot grok without sharing.
What distinguishes grokking from other forms of understanding and knowing, I think, is the extent to which it must be shared – to grok, one must partake of a shared experience. And if we may return to kennings for a moment, the kind of knowing suggested by the ancient words “ken” and “kenning” also involves a shared experience – a shared vision or story, whether or not that experience is actually present. The neologism “grok” meets the ancient logic of kenning through the sharing of experience.
This, too, is one of the guiding principles behind this blog, and I devoutly hope that if you’re reading this, you feel invited to respond and participate in a digital conversation, this digital experience of explored meaning. I see the internet in general, and blogs in particular, as an exceptional opportunity for strangers (and friends) to share their perspectives and ideas through a new and unfamiliar language of experience – a language, by the way, that increasingly shapes the world we live in.
Do you grok?
First written/posted: 27 August 2004
That said, it seems a bright idea to help curious readers ken my more obscure references, such as the term “grokking” in the heading of my last post. For those of you unfamiliar with Robert Heinlein’s science fiction novel Stranger in a Strange Land (1961) “grok” is a verb invented by Heinlein and later popularized among SF fans and neopagans. It’s a useful description for a depth of understanding and communication that transcends language, and it’s one of the more useful slang terms you’ll ever come across.
In brief, “to grok” is to understand something so completely that one internalizes its meaning.
In Heinlein’s novel, grok literally means “to drink.” Because the word comes from a culture where water is so scarce that it assumes religious significance, this kind of drinking carries with it huge metaphorical associations. To drink water with another is to share life with another; sharing water is to share existence as well as sustenance. Grokking, then, is itself a metaphor for sharing meaning on so profound a level that the meaning shared (like water consumed) becomes part of each drinker.
Let’s take this from another angle. Imagine that we’re watching a particularly spectacular sunset over the waters of the Mississippi River. You say to me, “What a spectacular sunset!” I might reply: “I hear you,” which suggests that I’ve heard your opinion (without necessarily agreeing with it). I might say “I understand,” which implies that I have some concept of how and why you find this sunset so spectacular. I might reply, “I know,” which hints that I concur with your view. But if I were to say, “I grok,” what would that mean?
It would mean that – in a gesture bordering on faith – I have not only heard and understood your view, I share it to the point where I have effectively made it my own. I’ve internalized your statement, the moment, the sunset itself, and the memory that all of these have made. Insofar as it’s humanly possible, I’ve tried to make what you see my own – and, by reflecting your view, made my view yours. We have shared understanding, just as one might share a drink of water. One cannot grok without sharing.
What distinguishes grokking from other forms of understanding and knowing, I think, is the extent to which it must be shared – to grok, one must partake of a shared experience. And if we may return to kennings for a moment, the kind of knowing suggested by the ancient words “ken” and “kenning” also involves a shared experience – a shared vision, story, or experience, whether or not that shared experience is actually present. The neologism “grok” meets the ancient logic of kenning through the sharing of experience.
This, too, is one of the guiding principles behind this blog, and I devoutly hope that if you’re reading this, you feel invited to respond and participate in a digital conversation, this digital experience of explored meaning. I see the internet in general, and blogs in particular, as an exceptional opportunity for strangers (and friends) to share their perspectives and ideas through a new and unfamiliar language of experience – a language, by the way, that increasingly shapes the world we live in.
Do you grok?
First written/posted: 27 August 2004
Grokking Kennings (2004)
In oral cultures, we are sometimes told, memory served those needs that written text fulfills in literate societies. Earlier cultural historians tended to argue that writing brought a quantum leap in the preservation and perpetuation of knowledge (which is almost certainly true). However, their reasoning was based on an inadequate understanding of memory – in particular, of the extent to which memory can be trained by technique and shaped by art. One brief example can indicate how flexible and creative the memory arts – “mnemotechnics” – of ancient societies were, as well as how pervasive those techniques remain in modern societies.
The kenning, after which this blog is named, refers to a device used throughout Germanic (particularly Norse) poetry and mythology. Simply put, a kenning is a story in miniature: an evocative phrase, sometimes associated with imagery, that alludes to another story without actually mentioning or summarizing it. For a familiar example, consider:
Achilles’ heel
In conversation or narrative, this simply means “vulnerable spot” or “weak point.” But why does it mean this? Nothing in “Achilles” or “heel” specifically evokes weakness or vulnerability. The phrase acquires its meaning from an absent story – moreover, a story that many who use the phrase might not even know in full. Achilles was the greatest of Greek heroes, in part because his mother immersed him in the River Styx shortly after he was born, making him invulnerable to harm. But she had to hold on to his heel to do so, which meant that this single spot on his body would remain unprotected. When Achilles finally fell in battle, it was due to this minor flaw in his otherwise unassailable person.
All of this, of course, is far too big a mouthful for anyone wishing to refer to it in detail; but if a poet, say, wishes to evoke the powerful image of a single weak spot in an otherwise perfect defense, the phrase itself can be used as an anchor for memory. Without having to re-tell Achilles’ story, the artist can refer to it through a kenning, a kind of literary metaphor made of memory. For example:
Kryptonite is Superman’s Achilles’ heel, because exposure to this green ore can remove his powers and eventually kill him.
Without telling Achilles’ story, the speaker can make a connection between it and the related (modern) tale of Superman, a similarly invulnerable superhero. The basic principle of kennings, then, is to allow speakers to refer to entire narratives through powerfully economical metaphors, linking not just images and ideas, but entire stories to each other.
What distinguishes kennings – which can function just as effectively in literate societies – is that they involve the audience in a remembered absence. By evoking the absent story without describing it, a kenning opens a host of possible connections and parallels for the audience to consider; unlike an allusion, however, a kenning typically does not prescribe or specify what connections might exist. The audience becomes co-creator, through the medium of memory, opening the narrative to possible meanings and associations that the artist never specifically imagined.
In a sense, a kenning functions very much like a hyperlink; in a hypertext, one can pass over linked phrases or words, or re-trace the author’s steps to sources or related material by following the link. The key to understanding kennings – and by extension, the power of the arts of memory even in literate societies – is that they provide the power of hyperlinks without having to be specifically anchored to another site. Imagine, for instance, a hyperlink that provided a randomly selected destination from a short list of choices. Imagine if, in a hypertext describing Superman, clicking on “kryptonite” brought up an entire assortment of literary and artistic precedents and parallels (such as Achilles’ heel, Smaug’s missing scale, Siegfried’s back, and so on). Through memory – and particularly a memory trained by exposure to the narratives, myths, and imagery of an entire culture – the written word catalyzes an interpretive activity.
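That random-link thought experiment can even be sketched in a few lines of Python – a purely illustrative toy, not anything from the original post; the index and the function name are my own invention, and the parallels listed are simply the examples mentioned above:

```python
import random

# A toy "kenning index": each motif points to a short list of
# parallel stories that an audience's memory might supply.
KENNING_INDEX = {
    "kryptonite": [
        "Achilles' heel",
        "Smaug's missing scale",
        "Siegfried's back",
    ],
}

def follow_kenning(motif):
    """Resolve a motif to one remembered parallel, chosen at random,
    much as any one reader's memory might surface any one association."""
    return random.choice(KENNING_INDEX[motif])

print(follow_kenning("kryptonite"))
```

Each “click” on the same motif may land somewhere different, which is precisely what an ordinary, fixed hyperlink cannot do.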
I’ve named this site after kennings, then, to evoke that kind of activity and associate it with this blog. My purpose here will be to address texts and experiences, narratives and images, through a language of connection, and place these brief essays in a context where other readers might expand their meaning and their potential further than I might imagine. I invite readers to browse these essays through the binocular focus not only of their creativity, but their memories, providing associations, connections, commentary, and arguments that the essays themselves could never contain.
I dedicate this work at the outset to a high school teacher named J. D. Soley, who taught me how to write critically (“analyze – avoid mere summary – and be concrete”) but also rapidly – through the countless impromptu essays he would assign in the classes I took from him. This site marks my effort to return to the basic principles of the writer’s craft that he revealed to me over fifteen years ago.
Written/Posted: 27 August 2004
Reposting Old Posts
I've discovered that Blogger, thanks to the ubiquitous changes at Google, has rendered older materials on defunct blogs inaccessible. I'm transferring posts from Digital Runes (kenning.blogspot.com) to here as part of an experiment to
- recover a dead blog and revive it
- figure out if Blogger is still worth the trouble
- revive a semi-public space for experiments in writing
The forms change so quickly, it's easy to forget the content. (Assuming, of course, that these are somehow distinguishable from each other ...)
Reprint
(composed at Starwood, July 2007, at a writing workshop)
(from a slip of paper drawn from a hat reading "guzzle down more emptyness")
"guzzle down more emptyness"
drink deep of forgetfulness
harvest longing
from a farmed crop of planted seeds
of oblivion
of sleep
take hold of the goblet of metaphor
grasp it by handles like similes
tip the brim towards your mouth
don't wait
and guzzle down more emptyness
feast on forgotten rhymes
sit at a table brimming to bursting
with discarded language
dead words
cooked - with spices
having slain them in the hunt
through a forest a woods a wilderness
of longing
with a sideplatter loaded to bursting
with for of to too many prepositions
wipe your mouth (carefully) with an adverb
and taste the salt of a dash of epic
that casts its savor into the flavor of everything
you've remembered
and all that which slips your mind
be careful not to glut yourself
when you become too full
and keep the edge of hunger to your lips
it keeps the flavors fresh
if need be
excrete an adjective
vomit forth a poem
knowing full well that time is a napkin
with 100% absorbency
guaranteed by entropy
and that the feast itself
is all that lasts
eat the words
let them become you and you become them
let emptiness change the constitution of your nature
so that you can know what fullness is
and can taste the difference
between