Category: Literary studies

Disparate Economies

In the course of my reading group’s discussion of Pride and Prejudice the other night, I commented that there are always two economies: one of wealth, the other of status.  The rankings that competition for preeminence on those scales produces do not (in many cases) coincide.  This is particularly obvious in Austen’s world, where fortunes made from “trade” do not secure the kind of social status that gentry like the Bennets enjoy, despite their fairly modest wealth.

But money and status are not entirely disconnected.  Bingley (who eventually marries the eldest Bennet daughter, Jane) is the beneficiary of his father’s success in trade—and is in the process of “laundering” the substantial wealth that he has inherited.  He will marry a daughter from the gentry (basically, people whose money derives from land and who have the financial wherewithal to not have to work) and is looking around to purchase an “estate.”  His family will move from the world of trade to the status of “landed” in one generation.  The novel makes it clear that Darcy’s family made a similar move a generation or two back.  Lady Catherine (Darcy’s obnoxious aunt) is not “old” aristocracy; her title only goes back two generations.

[An important sidenote: the supposed firewall between money gained through “trade” and the “old money” of the landed aristocracy was more fiction than fact. The safe investments returning five percent on which the gentry lived only partially derived from their English estates, with their rent-paying tenants and agricultural products. Their money was also invested–as Austen registers in Mansfield Park–in the plantations worked by slaves to produce sugar, cotton, and tobacco in the Americas. Similarly, of course, the great textile factories of the Industrial Revolution depended on cotton produced by slaves. On top of all that, there were the incomes and profits generated directly by the slave trade itself, which Britain abolished only in 1807, a decade before Austen’s death.]

The relative openness of British society, especially during Austen’s lifetime (1775-1817), to such status enhancement is often cited as one reason the British never suffered the kind of revolution that unfolded in France.  The new wealth generated by the Industrial Revolution could buy status as well as all the other goodies money can buy.  The aristocracy was not closed (as it was in France).  The novel’s example is the Lucas family.  The father is knighted for giving a pretty speech when the king comes to town.  And the novel pokes (fairly gentle) fun at the newly minted Sir William’s pretensions to status—especially since his household is dirt poor.

The Lucas sub-plot indicates (as do Lady Catherine’s efforts to prevent the marriage of Elizabeth and Darcy) that snobbery is rife.  Social climbing (the use of money and marriage to launder one’s déclassé origins) is always vulnerable to those who will sneer at the pretensions of the newcomers.  The charge of vulgarity always lurks.  Which is why “manners” are so crucial in the novel—and in its assessment of the character of its characters.  Elizabeth may be technically right when she insists to Lady Catherine that she is Darcy’s social equal in every respect.  But the behavior of her mother and of her sisters, of which Elizabeth is deeply ashamed, puts the lie to that courageous assertion.  One has to act one’s status—or the game is lost.  That’s why the real sting in Elizabeth’s refusal of Darcy’s first proposal comes when she tells him his manner has been ungentlemanly.  Just as Elizabeth cannot gainsay what Darcy has to say about her family in that proposal, he cannot deny, upon reflection, that she is right to say he has not acted like a gentleman.

All of this is to say that for Austen, status has substance.  She is not blind to the absurdities of status—both of the naked efforts to attain it and the pufferies practiced by those who think they have it (pufferies, as in Lady Catherine’s and Sir William’s cases, that stem from insecurity about actually possessing the status they nominally possess.  After all, why would Lady Catherine suffer the obsequiousness of Mr. Collins unless she needed to be constantly assured of her eminence?)

Still, Austen also respects status even as she mocks how people strive for it—and inhabit it once attained.  She believes in the codes of the gentleman, in the codes of what the French call politesse, because they enable social intercourse along lines that she desires.  She is well aware that civility often masks indifference and even outright hostility (as in the case of Bingley’s sisters and their behavior toward Jane and Elizabeth Bennet), but she greatly prefers that hypocrisy (the tribute vice pays to virtue) to the outright vulgarity of Mrs. Bennet.  A world in which hostility and hatred must be veiled is a better world than one of direct (to the mattresses) competition.

Everything is doubled in Austen.  She sees the utility (to use a vulgar word she would never use) of the “ways” of her world even as she satirizes the deficiencies of those “ways” and is keenly aware of how people use them to serve selfish, even nefarious, ends.  Thus the novel warns us against taking “manners” at face value.  They can be a mask, as they are in the case of Wickham.  The substance of status can be a lie as well.  Judgment of others is the primary—and incredibly difficult—task for everyone in Austen’s novels, but for no one more than her heroines.  To make a mistake in whom one marries is, particularly for women (but also for men, as Mr. Bennet’s case shows), an utter disaster.  But it is incredibly difficult to know who another truly is.  Status and manners are only partial clues and can be deceiving.  Austen is very severe on characters she deems “stupid.”  Stupidity, in her novels, it seems to me, is evidenced most directly in either lacking any interest in judging/interpreting the character of others (Mr. Collins is too self-involved in putting forward his own pretensions to ever see another person) or in blindly accepting at face value worldly markers of character (Mrs. Bennet, for whom a man’s fortune is all you need to know).  To take either money or social status as an accurate marker of character, of someone’s true worth, is a grievous mistake.

How does this translate to today’s world?  Not very directly, but it’s not irrelevant either.  The competition for money is more direct today—and there is not much social stigma attached to the source of one’s wealth or to engagement in direct, undisguised, efforts to accumulate money.  It would be tempting to say that money and status are more directly aligned in today’s America than they were in Austen’s England. That is, because gaining money is not stigmatized, to become wealthy is also to achieve status. To some extent that is true.  But it is still complicated.  There is not a one-to-one correspondence.  We still utilize a concept of “vulgarity”—the obverse of which might be captured in terms like “esteem” or “respect.” And we have our own laundering system, primarily in our prestige-obsessed system of higher education. The newly rich want to send their children to the Ivies or other prestigious private universities (with maybe three or four public flagships also acceptable) as markers of having “made it.”

The obvious case for the still incomplete alignment of money and status is Donald Trump.  Long before he got involved in politics, Trump was a byword for vulgarity.  And there is a decent case to be made that he only got involved in politics out of resentment at being laughed at by Barack Obama.  Certainly, resentment against Obama (a “class act” if there ever was one) is a major motivator for Trump.  Another, somewhat different case, would be Britney Spears.  If the notion of “nouveau riche” or parvenu haunts Trump, the specter of “white trash” hovers over Spears.  And it seems pretty obvious that philanthropy to prestigious cultural institutions—the Ivy League universities, the operas in NYC and San Francisco, art museums and the like—is a contemporary way to launder money, to use it to attain status, entrance into the right social circles.

I am always befuddled when I read all those “social” novels—by Thackeray, Proust, Edith Wharton among others—where social climbing is the dominant motive driving the characters’ actions.  In the worlds I inhabit, such ambitions seem utterly absent.  In contemporary America, where is “society” of that sort even to be found? If you wanted to “climb,” where would you go and what would you do?  Who (like Proust’s Verdurins and Guermantes) are today’s social arbiters?  Outside of NYC and San Francisco, are there really social hierarchies, exclusive events/salons/balls that outsiders fervently dream of getting access to—and people who do anything and everything to gain that access?  It just doesn’t seem the way life in present day USA is organized.  I have no doubt that some philanthropy is driven by the desire to be associated with other donors whom one wants to hang with, but I have also known and worked with other philanthropists for whom attaining some increase in social status holds no interest.

So I am left with the puzzle of how the economies of wealth and status work today.  What are the terms of competition for these goods?  I won’t talk about competition for wealth here today, although that’s an interesting topic to which I would like to return, partly because I think some roads to wealth today rely on the kinds of media that have also greatly altered the forms status now takes and the ways to gain it.  (What I have in mind is the competition for venture capital—and the ways in which style over substance can win the day, as in the cases of WeWork, FTX, and Theranos.)

Anyway, here’s my suspicion. The economy of status has been altered drastically by the nature of publicity.  Let’s assume that the desire for status is a desire to be seen, to be known, and to be esteemed.  One wants to be recognized as a member in good standing of a certain social set.  My skepticism that the kinds of social climbing found in the classic novels exist today stems from the difficulty of identifying social sets in today’s world.  Where is this “society” that you are trying to attain status in?

One answer to that question is the “set” established by your profession.  An artist strives for respect and standing in the “art world”; a university professor wants standing in her “field”; and business people want esteem among their peers.  There are, in other words, professional hierarchies—and these hierarchies are not primarily tracked by money.  As a business person once told me, the money’s not primary, but it is a way of keeping score.  So money is not unrelated to the rankings in the hierarchy, but non-monetary achievements are (ideally) the “real” determinant of status.  The Beatles and Bruce Springsteen have a standing superior to Neil Diamond’s, irrespective of the fortunes accumulated by each. We don’t reference how much money they each have when ranking them.

Here, however, is where I think the distortions of the media come into play.  In the classic social novels, no one is pursuing fame or celebrity.  Modern media mean that you can play for standing in society as a whole, not in some particular subset.  Everyone knows who Donald Trump is (even long before he ran for president) just as everyone knows who Michael Jackson is.  Competition for standing in that amorphous, but all-encompassing, world is competition for attention.  It has become a cliché, but still true, that we now live in an attention economy.  What is disturbing to old-liners like me is that attention seems substance-free.  No such thing as bad publicity.  Celebrity is being famous for being famous.  The celebrity doesn’t have to bring any goods to market (think of Elizabeth Holmes); she just has to be good at attracting eyes—and in Holmes’ case (as in many others) the money will follow the eyes.

How long can you get away with this?  Crypto is indicating you can get away with it for quite some time.  On the other hand, there are still some (even if feeble) quality controls.  Bruce Springsteen, for the most part, manages to sidestep the attention economy.  He has never descended into tabloid hell the way Britney Spears has—and the almost universal respect he has garnered remains tied to his achievements, not to his being a celebrity.  Or think of Dolly Parton as contrasted to Tammy Wynette (despite some recent attempts to recalibrate our understanding of Tammy).  Dolly has slowly but surely moved from being a cartoonish character to a revered one.  That’s partly because of her ability to make fun of her white trash look (“it takes a lot of money to look this cheap”), and her avoiding the tabloid fodder of Wynette’s drug problems and multiple divorces, along with the rise in the cultural status of country music over the past forty years.

Taylor Swift is an interesting case along these lines.  There’s a substantial body of work there (even if this old fogey can’t judge the quality of it), but her fame has now thrown her into the media frenzy where her actual music is mostly irrelevant.  Will she be able to avoid descending into tabloid hell?  Will she continue to produce her music?  When you think of it, it is a miracle that the Beatles, once Beatlemania hit, actually continued to develop musically and produced work in 1967 that was superior to the work that gained them fame in 1964.  It’s only worse now in terms of how the attention world will eat its young.  Maybe Taylor Swift will manage not to get swallowed up because this fame came to her (unlike the Beatles) relatively late.  She was known before of course, but not “known” like this—and let’s hope the ballast of being 33, not 17 like Britney Spears, sees her through.

In sum, “status” seems to have exploded in today’s world, having to a large extent collapsed into something better described as “fame” or “celebrity.”  Yet, there are still circumscribed social sets in which people strive for status, in which there are fairly well defined markers for garnering respect.  But there’s now another game in town, one where a person becomes famous not relative to a defined set, but for society in general.  Donald Trump, we might justly say, failed to garner any respect in the closed sets of NYC society or the business world (his skills as a businessman are laughable, non-existent; he fooled no one in that world).  But he was a winner in the other (larger?) game of becoming known, if not quite respected, in society at large.  And you can cash in that kind of success, not just in dollars but in other perks as well.

You can’t have that larger game without the media through which one’s image is offered to millions.  We have multiple media of that type now (not just the newspapers of the 19th century) and the frenzied effort to garner attention feels like the defining characteristic of our era.  That so much of that effort is also light on content (to put it charitably) is deeply disturbing to old fogeys like myself.

I am generally skeptical of claims that our times are radically different from times past—and hate positions that rely on claiming our times are much worse than times past.  So I want to register a caveat at the end of this post—and a promissory note.  The desire for fame as contrasted to status is not a new phenomenon, so I need to think about how fame was understood and pursued prior to the media tools currently deployed in seeking it.  And as long as I am trying to track “economies,” there are at least two other competitive spheres that should be considered:  struggles for power and the competition for sexual success (this last returning us to an Austen-focused interest in the marriage market, but influenced now by the Darwinian concept of sexual selection.)  But enough for today.

Novak Djokovic and George Eliot: On Great Books (3)

I find myself compelled to return to the topic of great books as a result of reading Middlemarch with one of my reading groups.  To recap: I have argued 1) that our judgments of books change over time and are context-sensitive (cultural standards and sensibilities change); 2) that institutional inertia and imprimatur mean that a canon gets established and remains stable over long periods for “elite” or institutionally embedded opinion; revolutions in taste happen suddenly after long resistance to the revolution (akin to the idea of a “tipping point” or Thomas Kuhn’s notion of a “paradigm shift”); and 3) that it hardly makes sense to rank order a set consisting of all novels since there is such variety within the set (what could it mean to compare Moby Dick to one of the Jeeves novels?) and that what is deemed “best” in any context is relative to the purposes that drive the judgment or choice.  Wodehouse is better than Melville on some occasions and for certain purposes.  In short, variety (diversity) reigns—both in the objects being judged and in the purposes that would underlie any specific act of judgment.

Then I started reading Middlemarch and wondered if I simply was wrong.  That there are some achievements of human agents that simply make one shake one’s head in wonder: how could a human being be capable of that?  The breadth of vision in Middlemarch, the ability to imagine a whole world with an astounding cast of characters, startles—and humbles.  It seems a feat only one person in a million could pull off.  It is, in short, a masterpiece.  And masterpieces are all too rare.

Now it is possible to say Wodehouse also wrote masterpieces—given his aims and the genre in which he was working.  And it is certainly reasonable to prefer reading Wodehouse to Middlemarch on many occasions.  We get to one sticky issue here, the one best represented by Matthew Arnold insisting that Chaucer was not top drawer because his work lacked “high seriousness.”  One prejudice in the “great books” canon-making is some notion (vague enough) of profundity.  This is why tragedy has always been ranked above comedy, why King Lear is generally deemed greater than Twelfth Night despite each being of high quality in its chosen genre. 

I don’t have anything that strikes me as worth saying about this profundity issue.  I only think it should be acknowledged as a standard of judgment—and that it should be acknowledged that it is only one among many standards.  And I don’t think it should be a standard that trumps all the others.  Let’s discuss King Lear’s greatness in a way that specifies the standards by which we deem it great—and not indulge in meaningless comparisons to Twelfth Night, a play whose greatness is best understood in relation to other standards.

But—and here’s the rub, the reason for this blog post—I still find myself wanting to talk about the greatness of these Shakespeare plays.  Just sticking to Shakespeare, comparing apples to apples, I am going to say Twelfth Night is better than Two Gentlemen of Verona; and that King Lear is better than Coriolanus.  There are cases where the things to be compared are within the same domain—and one can be judged better than the other.  In the realm of realistic novels that aspire to a totalizing view of a certain social scene, Middlemarch is better than Sybil.  Of course, one is called upon to provide the reasons that undergird these judgments.

All of this brings me to Novak Djokovic—and the core doubt that drives this post (and this re-vision of my two earlier posts on “great books”).  What is astounding about Djokovic is the gap between him and most of the incredibly talented men’s tennis players in the world.  The twentieth-best tennis player in the world has almost no chance of beating Djokovic (especially in a five set match).  There are, in fact, only (at absolute most) ten players in the world who could beat him—and even in that case he would win the match against them well over half the time.

My point is the extreme pyramid of talent.  That the gap between the tenth best player in the world and the absolutely best player is so wide defies explanation and belief.  It seems much more plausible to expect that the top rung of talent would be occupied by a group, not a single towering figure.   There are, after all, many aspiring tennis players and novelists who have put in the hours (Malcolm Gladwell’s famous “ten thousand hours”), yet do not reach the pinnacle.  Why is extreme talent, extreme achievement, so rare? I have no answer here. Talent is a mystery, what Shakespeare would have called “fortune”–unearned, simply implanted in the person. Of course, it is possible to waste one’s talents, just as it is crucial to nurture and hone one’s talents. But it is nonsense to think Djokovic’s supremacy is the product of his working harder and being more monomaniacally dedicated to being the best tennis player in the world than his competitors. There are plenty of people just as dedicated to that goal as he is. They just lack his talent.

Could it be, then, that the term “great book” makes sense if used to designate those instances of extreme achievement, those cases where we encounter a work of human hands that awes us because it is so far beyond what most humans, even ones dedicated and talented in that specific field of endeavor, ever manage to accomplish? 

Great Books 2: Institutions, Curators, and Partisans

I find that I have a bit more to say on the topic of “great books.” 

The scare quotes are not ironic—or even really scare quotes.  Instead, they are the proper punctuation when referring to a word as a word or a phrase as a phrase.  As in, the word “Pope” refers to the head of the Catholic Church.  The phrase “great books” enters into common parlance with University of Chicago President Robert Maynard Hutchins’s establishment of a great books centered curriculum there in the 1930s.  From the Wikipedia page on Hutchins: “His most far-reaching academic reforms involved the undergraduate College of the University of Chicago, which was retooled into a novel pedagogical system built on Great Books, Socratic dialogue, comprehensive examinations and early entrance to college.”  The University of Chicago dropped that curriculum shortly after Hutchins stepped down in the early 1950s, with St. John’s College now the only undergraduate institution in the country with a full-bore great books curriculum.  Stanford and Columbia had general education requirements for first- and second-year undergraduates that were heavily slanted toward great books well into the 1990s, but have greatly modified that curriculum in the 21st century.

These curricular issues are central to what I want to write about today.  “Literature,” Roland Barthes once said, “is what gets taught.”  It is very hard to even have a concept of “great books” apart from educational institutions, from what students are required to read, from what a “well-educated” person is expected to be familiar with.  As I wrote a few posts back (https://jzmcgowan.com/2023/07/31/americans-are-down-on-college/), we in the United States seem now to have lost any notion of what a “well-educated” person is or should be.  The grace notes of a passing familiarity with Shakespeare or Robert Frost are now meaningless.  The “social capital” accruing to being “cultured” (as outlined in Pierre Bourdieu’s work) has absolutely no value in contemporary America (apart, perhaps, from some very rarified circles in New York). 

I am not here to mourn that loss.  As I said in my last post, aesthetic artefacts are only “alive” if they are important to some people in a culture.  Only if some people find that consuming (apologies for the philistine word) an artistic work is fulfilling, even essential to their well-being, will that work avoid falling into oblivion, totally forgotten as most work of human hands (artistic or otherwise) is. 

Today, instead, I want to consider how it is that some works do survive.  I think, despite a desire running from Hume to the present to believe otherwise, that the intrinsic greatness of the works that survive is not a satisfactory explanation.  More striking to me is that the same small group of works (easily read within a four-year education) gets called “great”—and how hard it is for newcomers to break into the list.  For all the talk of “opening up the canon,” what gets taught in America’s schools (from grade school all the way up through college) has remained remarkably stable.  People teach what they were taught.

Yes, Toni Morrison, Salman Rushdie, and Gabriel Garcia Marquez have become “classics”—and are widely taught.  But how many pre-1970 works have been added to the list of “greats” since 1970?  Ralph Ellison and Frederick Douglass certainly.  James Baldwin is an interesting case because he has become an increasingly important figure while none of his works of fiction has become a “classic.”  On the English side of the Atlantic, Trollope has become more important while Shelley has been drastically demoted and Tennyson’s star is dimming fast.  But no other novelist has challenged the hegemony of Austen, the Brontës, Dickens, and Eliot among the Victorians, or Conrad, Joyce, Hardy, and Woolf among the modernists.  The kinds of wide-scale revaluations of writers that happened in the early years of the 20th century (the elevations of Melville and Donne, for example) have not happened again in the 100 years since.  There really hasn’t been any significant addition to the list (apart from Douglass and barring the handful of new works that get anointed) since 1930.

I don’t deny that literary scholars for the most part read more widely and write about a larger range of texts than such scholars did in the 1950s and 1960s.  (Even that assumption deserves caution.  Victorian studies is the field I know best, and the older scholars in that field certainly read more of the “minor” poets of the era than anyone who got a PhD in the 1980s or later ever does.)  But the wider canon of scholars has not trickled down very much into the undergraduate curriculum.  Survey courses in both British and American literature prior to 1945 are still pretty much the same as they were fifty years ago, with perhaps one or two token non-canonical works.  More specialized upper-level courses and grad courses are sometimes more wide-ranging.  Most significantly, the widening academic canon has not moved into general literate culture (if that mythical beast even exists) at all.

The one place where all bets are off is in courses on post 1945 literature.  No canon (aside from Ellison, Morrison, Rushdie, Baldwin) has been established there, so you will find Nabokov read here and Roth read there, while the growth of “genre courses” means Shirley Jackson and Philip K. Dick are probably now taught more frequently than Mailer or Updike or Bellow.  Things are not as unstable on the British side, although the slide has been toward works written in English by non-English authors (Heaney, Coetzee, various Indian novelists alongside Rushdie, Ondaatje, Ishiguro).

Much of the stability of the pre-1945 canon is institutional.  Institutions curate the art of the past—and curators are mostly conservative.  A good example is the way that the changing standards brought in by Henry James and T. S. Eliot were not allowed (finally) to lead to a wide-scale revision of the “list.”  Unity and a tight control over narrative point of view formed the basis of James’s complaints against the Victorians.  The rather comical result was that academic critics for a good thirty years (roughly 1945 to 1975) went through somersaults to show how the novels of Dickens and Melville were unified—a perverse, if delightful to witness, flying in the face of the facts.  Such critics knew that Dickens and Melville were “great,” and if unity was one feature of greatness, then, ipso facto, their novels must be unified.  Of course, the need to prove those novels were unified showed there was some sub rosa recognition that they were not.  Only F. R. Leavis had the courage of his convictions—and the consistency of thought—to try to drum Dickens out of the list of the greats.  And even Leavis eventually repented.

The curators keep chosen works in public view.  They fuss over those works, attend to their needs, keep bringing them before the public (or, at least, students).  Curators are dutiful servants—and only rarely dare to try to be taste-makers in their own right. 

I don’t think curators are enough.  The dutiful, mostly bored and certainly non-passionate, teacher is a stock figure in every Hollywood high school movie.  Such people cannot bring the works of the past alive.  For that you need partisans.  Some curators, of course, are passionate partisans.  What partisanship needs, among lots of other things, is a sense of opposition.  The partisan’s passion is engendered by the sense of others who do not care—or, even more thrilling, others who would deny the value of the work that the partisan finds essential and transcendentally good.  Yes, there are figures like Shakespeare who are beyond the need of partisans.  There is a complacent consensus about their greatness—and that’s enough.  But more marginal figures (marginal, let me emphasize, in terms of their institutional standing—how much institutional attention and time is devoted to them—not in terms of some kind of intrinsic greatness) like Laurence Sterne or Tennyson need their champions.

In short, works of art are kept alive by some people publicly, enthusiastically, and loudly displaying how their lives are enlivened by their interaction with those works.  So it is a public sphere, a communal thing—and depends heavily on admiration for the effects displayed by the partisan.  I want to have what she is having—a joyous, enlivening aesthetic experience.  Hume, then, was not totally wrong; works are deemed great because of the pleasures (multiple and many-faceted) they yield—and those pleasures are manifested by aesthetic consumers.  But there is no reason to appeal to “experts” or “connoisseurs.” Anyone can play the role of making us think a work is worth a look, anyone whose visible pleasure has been generated by an encounter with that work.

The final point, I guess, is that aesthetic pleasure very often generates this desire to be shared.  I want others to experience my reaction to a work (to appreciate my appreciation of it.)  And aesthetic pleasure can be enhanced by sharing.  That’s why seeing a movie in the theater is different from streaming it at home.  That’s why a book group or classroom discussion can deepen my appreciation of a book, my sense of its relative strengths and weaknesses, my apprehension of its various dimensions.

So long as those communal encounters with a work are happening, the work “lives.”  When there is no longer an audience for the work, it dies.  Getting labeled “great” dramatically increases the chances of a work staying alive, in large part because then the institutional artillery is rolled into place to maintain it.  But if the work no longer engages an audience in ways close to their vital concerns, no institutional effort can keep it from oblivion.

Kenneth Burke

I am working my way through old thumb drives and came across this introduction to Kenneth Burke’s work that I wrote for some sort of Blackwell Companion. I do not know if it ever was published. But the essay strikes me as a useful overview of Burke’s work. So I offer it here for what it’s worth.

     Kenneth Burke is an American polymath whose work offered an alternative to the New Criticism by focusing on the pragmatic ways that literature serves as “equipment for living.”  His resolute refusal to understand the literary as a distinctive use of language or literary criticism as a discipline separate from wider sociological analyses anticipated the move away from formalism and return to context characteristic of literary theory in the 1980s and 1990s. 

     Ever the maverick, Burke never graduated from college.  He quit Columbia after a year and took up residence in Greenwich Village in the early 1920s, where he associated with American modernists such as William Carlos Williams.  He served as editor of the important modernist little magazine The Dial, wrote poetry, novels, and criticism, and also took up various social science research jobs to pay the rent.  His work is influenced by such an eclectic assortment of figures—from medieval theologians like Duns Scotus through to Nietzsche and the social psychologist G. H. Mead—that it comes as no surprise that he has proved uncategorizable.  He belongs to no discipline and founded no school, even though his books are endlessly suggestive and have proved particularly important to academics in rhetorical and communication studies.  Burke never held a formal academic position, although he did teach for many years at Bennington College, and he lived long enough to bask in the acclaim when a new generation of literary theorists discovered him at the end of the twentieth century.  

     Burke’s work relevant to literary theory is best divided into three phases.  This division is somewhat artificial, but it helps to organize an overview of his long career.  The first phase encompasses his work during the 1920s and 1930s, particularly the key texts Permanence and Change (1935) and Attitudes Toward History (1938), as well as the essays collected under the title The Philosophy of Literary Form (1941).  During these years, Burke did not offer a handy name for the kind of work he was doing, but he can be seen groping toward a dynamic account of literature that can do justice to its expressive and social power.  On the expressive side, Burke argues that literature allows for the hypothetical examination of “attitudes,” of possible ways of relating to the self, to others, and to the world.  Attitudes, Burke insists, are “incipient actions.”  To take a stance toward the world is to relate to it in a particular way and, subsequently, to act on the premises embedded in that relation.  Literature offers the fullest possible play for an imagination of possible actions and their potential consequences.  What particularly catches Burke’s attention—and defines his genius as a literary critic—is the way literary texts “convert upwards and downwards” by changing names and contexts.  Hence, for example, by conversion downward, Aschenbach’s desires in Death in Venice can be rendered as the lust of an old man for a young boy.  But conversion upward would read his desire as a love that opens up to him realms of insight previously unavailable.   

     Crucially, the “logic” of literary texts is never straightforward, but rather tied to the development of tropes, doubling of fictional characters, associations triggered by puns, and flights of fancy that often defy explanation. Thus, literature illustrates the ways humans create values and “reasons” (motives) for action, while also providing the means for personal and social transformation. The various metamorphoses and associational pairings in texts extend outward from the author or the protagonist to include the audience and, through them, the social.  Partly through his affiliations with “Popular Front” leftists who were trying to forge mass political movements in the 1930s, Burke becomes interested in “rhetoric,” in the ways that artistic works can serve to constitute communities.  Literature has real-world impacts both by priming selves to act and by creating groups that cohere through “identification” with the same goals, same leader, or same overarching vision (ideology).  By 1941, partly through his famous essay on Mein Kampf, “The Rhetoric of Hitler’s Battle,” Burke had become fascinated by the plot and figurative dynamics through which a text identifies (produces) a “foreign” element, a scapegoat, and sets about to purge it.  This interest in scapegoating persists throughout the rest of Burke’s career.

     The second phase of Burke’s career sees him attempting to systematize his insistence that “literature is symbolic action.”  Following in the footsteps of pragmatist social theorist George Herbert Mead, Burke tries to develop a full-scale philosophy of the act.  (The parallels to the work of Russian literary theorist Mikhail Bakhtin are striking, but Burke, like others in the West at the time, did not know Bakhtin’s work.)  Burke calls his theory “dramatism” and plans to expound it in a trilogy: A Grammar of Motives (1945), A Rhetoric of Motives (1950), and A Symbolic of Motives.  This last work was never completed, although pieces of it were published in a volume entitled Essays Toward a Symbolic of Motives (2006) after Burke’s death.  By “motives,” Burke means the attitudes, values, and beliefs that move a person to act.  His “grammar” attempts to identify the necessary conditions of any action, of which there are, he says, five: the act, the agent, the scene of action, agency (means), and purpose.  A Grammar of Motives offers what amounts to a history of philosophy in terms of which of the five elements a particular philosophy emphasizes.  To take “the scene” as most crucial, for example, leads to naturalism and other kinds of determinism that view the environment as dictating what actors do.  To place the greatest emphasis on the agent would mean the kind of voluntarism we associate with certain extreme versions of existentialism.  The “ratios” that try to weight the different roles played by the five elements can be quite complex, and Burke traces out these intricacies through commentaries on a dizzying array of figures from the history of Western thought. 

     Presiding over the whole enterprise, although this is never explicitly acknowledged, is Hegel, partly because Burke in this middle phase aspires to the kind of all-encompassing system that Hegel also strives to produce, but most importantly because the mode of thought is relentlessly dialectical.  For Burke, any philosophy that highlights one element of the pentad at the expense of another will inevitably produce a reaction, a new theory or philosophy that picks up the neglected item.  His philosophy, by way of contrast, will try to be inclusive, to do justice to the roles played by all five elements.  I think it fair to say that Burke does not realize his systematic ambitions.  A Grammar of Motives is usually accounted Burke’s masterpiece, but that is for the wealth of insights it offers on an astounding range of topics and figures, not because he constructs a grand synthesis.  In fact, despite his aspirations, Burke is not a systematic thinker.  He is constantly chasing side thoughts.  His digressions are famous, and his distinctive style—full of italics, scare quotes, and parentheses—reflects the almost manic quality of his thinking, always on the edge of skittering completely out of control.

     A Rhetoric of Motives then considers “the use of words by human agents to form attitudes or induce actions in other human agents.”  Rhetoric is the social component of language, focusing on its use to form communities and foster action in concert.  It involves “the use of language as a symbolic means of inducing cooperation in beings that by nature respond to symbols.”  Burke especially emphasizes “identification” of a recognizably Freudian sort.  The rhetor aims to get his audience to identify with, to feel themselves “consubstantial” with a group or an ideal.  One effective way to achieve this goal is by processes of association that link the group or ideal the writer wishes to promote to already cherished values.  So, for example, I might try to liken the effort to combat global warming to the program that sent human beings to the moon.  I would try to transfer the positive feelings about the mission to the moon to a willingness to get enthusiastically involved in this new effort. 

     Presumably, the final volume of the trilogy was going to examine the specific symbols that language utilizes as human agents form their motives.  The reasons Burke failed to complete that third volume are unknown.  Obviously, he struggled with it, since more than ten years passed before he published his next book, The Rhetoric of Religion (1961).  And that new book introduced another shift in his work, from “dramatism” to “logology,” Burke’s third phase.  This new vision picks up a major theme in A Rhetoric of Motives and pushes it to its logical conclusion.  Burke argues that any linguistic account that aims to describe a scene comprehensively will inevitably produce a hierarchy of terms that leads from the smallest particular up to the highest, most inclusive term, which Burke labels a “god-term.”  For example, physics moves from sub-atomic particles up through atoms and molecules to something called “matter.”  For Burke, “matter” is physics’ god-term, which functions, crucially, both as the motive of the whole enterprise (to offer an explanation of matter) and to exclude certain considerations (physicists do not acknowledge spiritual causes).  “Logology,” then, would be the analysis of any system of linguistic ordering that details its hierarchy and thus understands what it aims to achieve and what it serves to exclude.  The problematic claim is that every use of language, no matter what the field or the occasion, has precisely the same structure.  Burke appears, in his final works, to adopt a tragic determinism.  Humans are always and everywhere addicted to hierarchy and to monistic, mono-theological modes of thought that always produce excluded victims, punitive orthodoxies, and the conflicts generated by various heresies.  The essays collected in Language as Symbolic Action (1966) reinforce this tragic vision by offering sweeping definitions of “Man” and of “Language.”

     Paradoxically, Burke’s vision narrows as a result of his attempt to be all-encompassing.  The universalism of the claims made during his “logology” phase makes everything look the same—and this from a writer whose greatest strength was his unsystematic, even chaotic, enchantment with particular cases.  Seen this way, Attitudes Toward History becomes Burke’s strongest book since it focuses on there being plural possibilities, a variety of different attitudes (with an “s”) that humans might adopt as they face the world and decide how to act, how to live, within it.  Similarly, the resources upon which humans can call as they take up this task are many.  The book offers a catalogue of those resources without ever claiming that any must be chosen or that any choice has inevitable consequences.  Not surprisingly, in surveying this open field, Burke comes to announce that his own perspective is “comic,” a perspective, he claims, that “by astutely gauging situation and personal resources . . . promotes the realistic sense of one’s limitations” yet does not succumb to a “passive” fatalism.  Human action cannot carry all before it, but neither is it utterly futile.  Learning to roll with the punches is the great comic virtue, an adaptation of attitude to circumstance.  Burke at his most magnificent awakens us to the full glory of human resourcefulness—and highlights how literature especially puts that ingenuity on display while also putting it through its paces.