Category: aesthetics

Gaita 3: Examples and Conversation

I want now to discuss the other two ways that Gaita thinks one might achieve “moral understanding,” i.e. move from blindness concerning the infinite worth of each individual human being to an intense awareness of that “absolute” fact.  That awareness would then be the most fundamental determinant of how one acts in the world, in how one orients one’s being-in-the-world (to use Heideggerian language that Gaita does not deploy).

What we might call Gaita's "a-rationalism" when it comes to "moral understanding" underwrites his turn to examples.  "[D]eepened moral understanding is a movement towards necessity, of the world becoming, as Iris Murdoch puts it, 'compulsively present to the will.' The example reveals that a deepened understanding of the nature and reality of evil is not always a deepened understanding of the reasons for not doing it, and why it is a mistake to believe that reflection on the nature of good and evil is always, or even most importantly, reflection on a certain class of reasons for action, of considerations which may have a legitimate speaking-voice in a piece of practical reasoning" (234). (I will want to contest the appeal to "necessity" here in a future post.)

I assume the hedge of "not always" in the passage just quoted is to guard against "performative contradiction."  After all, Gaita's book is an extensive, very reason-dependent argument about the limits of reason.  So he has to acknowledge some role reason might play in moral deliberation.

Still, he wants to claim that examples—seeing someone act in ways that display their care for another human being in ways that inspire admiration and emulation—are central to developing moral understanding.  What it means to care for someone, to enact one's valuing of them qua human being, has more to teach us about, to lead us to, goodness than all the generalizing treatises of the intellectuals.   "We do not discover the full humanity of a racially denigrated people in books by social scientists, not, at any rate, if those books merely contain knowledge of the kind that might be included in encyclopedias.  If we discover it by reading, then it is in plays, novels, and poetry—not in science but in art" (335).  [The touching faith of certain philosophers—Cavell, Nussbaum—in the efficacy of art stands in stark contrast to the despair so many artists feel as they accept Auden's resigned conclusion that "poetry makes nothing happen."]

The example is concrete, individual, and has a real presence in the world in ways that generalized statements do not.  There is a kind of ontological nominalism here; only the particular is real, is actually instantiated—and thus it has the potential to impact us in ways that mere words (or mere reasons made up of words) cannot. 

Like many others, Gaita follows Kant here—and suggests that the Critique of Judgment, ostensibly about aesthetic judgments, actually also offers a better account of morality than Kant's rationalist account of practical reason does.  When it comes to ethical judgment:

"[T]here is . . . discussion and argument, but it should be argument informed by the realization that it cannot, discursively, yield a standard, or set of standards, in the light of which all examples are to be judged.  No example is self-authenticating, but it does not follow that their place in our judgments is merely to guide us to discursively established principles of which they are intuited instances.  Nor can any example play a role akin to that of the standard metre, for that would distort the necessarily provisional place they have for those whose judgments they have inspired and shaped.  That is reasonably evident in aesthetic cases, and I think it is the same in ethical ones.  When I speak of examples, I am thinking primarily of what has moved us in the speech and actions of others and because of which we stand in certain judgments and reject others.  Philosophy has been suspicious of the fact that we learn by being moved because of a mistaken conception of thought that judges this [i.e. being moved] as its [i.e. thought's] desertion" (270, Gaita's italics).  "I acknowledge that [the] acceptance of [such] judgments as judgments depends upon a richer conception of critical thinking and of the relation between thought and feeling than is presently available in the mainstream philosophical tradition" (41).

It is but a small step from this claim that we are more likely to be moved, to learn, from examples (presumably both positive and negative ones) to coming down on the Humean side of viewing "sensibility" as more crucial to one's ethical posture in the world than any kind of Kantian rational procedure.  "The corruptions of Raskolnikov's [main character in Dostoyevsky's Crime and Punishment] remorse were not merely a result of his failure to understand properly what he had done, nor were they merely in self-deceiving service to such a failure of understanding.  They were a form of his failure to understand [i.e. his reflections and remorse did not focus on the humanity of his victim].  Such interdependence of understanding and response is what I want to stress . . . It is sometimes conveyed by the word 'sensibility.' Most forms of moral corruption are corruptions of sensibility" (35).  I take it that this claim means that it is not reasoning poorly or in faulty ways that makes one morally corrupt, but having the wrong dispositions, the wrong orientation to the "condition" of being a human who occupies a world with other humans.

And it is in shaping such a sensibility that Gaita places the efficacy of examples.  The love displayed by “saints” in their actions in the world “has a revelatory role.  Sometimes we see that something is precious only in the light of someone’s love for it.  Love’s capacity to reveal is, in part, a function of the authority of the lover.  It also depends on our openness to this kind of authority. . . . The love of saints depends on, builds on and transforms, [a] sense of individuality.  It deepens the language of love, which nourishes and is nourished by our sense that human beings are irreplaceable and, because of that transformation, it compels some people to affirm that even those who suffer affliction so severe that they have irrecoverably lost everything that gives sense to our lives, and even the most radical of evildoers, are fully our fellow human beings.  As with the love it transforms, the love of saints plays a constitutive and revelatory role” (xxiv). 

The educative role of the example—and its relation to "feeling and character"—is stressed when Gaita writes (again, the italics are his): "Aristotle was closer to the truth when he said if we want to know what justice is then we should turn to the example of the just man—but we must have eyes to see.  For Aristotle, the education of feeling and character was an epistemic condition of right judgment on what could only be discussed as authoritative example" (46).  From Wittgenstein, Gaita derives the conviction that "[k]nowledge that another person is in pain is not an achievement that can be characterized independently of certain affective dispositions" (176).

To place such a strong emphasis on "sensibility" and "affective dispositions" and "feeling and character" is to end up with 1) fairly bald assertions when it comes to trying to see why some people have "moral understanding" while others do not and 2) a search for the mechanisms (remorse, examples) that might move people toward moral understanding (i.e. the topic of this post and my previous one).

Here's the bald assertion: "Moral understanding requires that those who would claim to have it should be serious respondents to morality's demands.  Someone who cannot be responsive to morality's demands is one for whom morality has no reality.  The 'reality' of moral values is inseparable from the reality of it as a claim on us, and serious responsiveness to that claim is internal to the recognition of its reality" (59).  [I will have much to say about the ways "seriousness" is deployed by Gaita in his book in subsequent posts.  The term is close to a tic in his writing, trotted out every time his argument hits a nodal point where sheer assertion is offered.]

The example does something reason, as the philosophers understand it, cannot do.  It inspires emulation.  I think, in fact, that Gaita often verges on saying that the example compels emulation.  Certainly, that explains his concern with "authoritative" examples. But he hasn't much to offer as to what would actually lead someone to be properly "responsive" to the example, to accept (through its offices) the "authority," the "reality," of "moral values" and their "claims" upon him.  It seems obviously, trivially, true that a moral person takes moral values seriously.  And it seems at least plausible to say that examples can work to move a person toward taking moral values seriously.  But there is still the mystery of why examples "move" some people, but fail to move others.

The final means toward moving someone to moral understanding that Gaita offers is "conversation."  The transformation conversation offers is not (emphatically) our "need to learn from others only because of our limited epistemic and logical powers" (275).  Rather, what conversation can open our eyes to is "the reality of other human beings" (277), that is, to the fundamental truth that Gaita has hammered on as the most important plank for a human morality.  Here is the full description of how conversation is to effect this realization.  (Thus conversation, like love and the examples of saintly and dastardly action, is revelatory.)

"Conversation promises and threatens surprise.  Martin Buber said that 'talking to oneself' is utterly different from talking to someone else, and that the difference is marked by the fact that one cannot be a surprise to oneself in the way that another can be. [Here we get a long passage of Buber's.]  The surprise Buber speaks of is not conditional upon routine or ignorance.  It is a kind of shock at the realization of how other than, and other to, oneself another human being can be.  It is the shock of the reality of other human beings, and the strange and unique kind of individuality of their presence. . . . It is in connection with such a sense of reality that we should understand Socrates' insistence on conversation and the kind of presence he required of himself and his partners" (277).

Conversation, then, stands for a full encounter with the other, the kind of encounter which brings home forcefully the other’s reality as other.  That, of course, does not guarantee that I will then value that other (although Gaita seems to assume some kind of equivalence between recognition of otherness and valuing the other’s irreplaceable individuality.) 

But I don't mean to sneer here.  One of the dilemmas in current-day America is how to communicate across divides that have become entrenched, how to have any communication take place at all when everyone is locked into their own echo chambers.  The inefficacy of general (broadcast) media to shift hearts and minds is all too obvious (even accepting the influence of Fox News).  Gaita's discussion of conversation is still too abstract—we want the dialogue to lead to more nuanced, particular, convictions than some general affirmation of the other's otherness.  On the other hand, even getting that far would be very welcome.  And it certainly does seem that face-to-face encounters are more likely to "move" people from entrenched stances than anything they are going to get from the non-face-to-face flows of opinion and information from the news media or from social media.  How to enable potentially transformative conversations does seem to me a vital question for our times. To pooh-pooh in advance the possible effectiveness of such interactions is to throw in the towel before even making any attempt at betterment.

Enough for today.  I want to make a detour into talking more directly about hatred and violence in the next post—before returning to Gaita’s Socrates-inspired understanding of a meaningful life.

Novak Djokovic and George Eliot: On Great Books (3)

I find myself compelled to return to the topic of great books as a result of reading Middlemarch with one of my reading groups.  To recap: I have argued that 1) our judgments of books change over time and are context-sensitive (cultural standards and sensibilities change); 2) that institutional inertia and imprimatur mean that a canon gets established and remains stable over long periods for "elite" or institutionally embedded opinion; revolutions in taste happen suddenly after long resistance to the revolution (akin to the idea of a "tipping point" or Thomas Kuhn's notion of a "paradigm shift"); and 3) it hardly makes sense to rank order a set consisting of all novels since there is such variety within the set (what could it mean to compare Moby Dick to one of the Jeeves novels?) and that what is deemed "best" in any context is relative to the purposes that drive the judgment or choice.  Wodehouse is better than Melville on some occasions and for certain purposes.  In short, variety (diversity) reigns—both in the objects being judged and in the purposes that would underlie any specific act of judgment.

Then I started reading Middlemarch and wondered if I simply was wrong.  That there are some achievements of human agents that simply make one shake one’s head in wonder: how could a human being be capable of that?  The breadth of vision in Middlemarch, the ability to imagine a whole world with an astounding cast of characters, startles—and humbles.  It seems a feat only one person in a million could pull off.  It is, in short, a masterpiece.  And masterpieces are all too rare.

Now it is possible to say Wodehouse also wrote masterpieces—given his aims and the genre in which he was working.  And it is certainly reasonable to prefer reading Wodehouse to Middlemarch on many occasions.  We get to one sticky issue here, the one best represented by Matthew Arnold insisting that Chaucer was not top drawer because his work lacked “high seriousness.”  One prejudice in the “great books” canon-making is some notion (vague enough) of profundity.  This is why tragedy has always been ranked above comedy, why King Lear is generally deemed greater than Twelfth Night despite each being of high quality in its chosen genre. 

I don’t have anything that strikes me as worth saying about this profundity issue.  I only think it should be acknowledged as a standard of judgment—and that it should be acknowledged that it is only one among many standards.  And I don’t think it should be a standard that trumps all the others.  Let’s discuss King Lear’s greatness in a way that specifies the standards by which we deem it great—and not indulge in meaningless comparisons to Twelfth Night, a play whose greatness is best understood in relation to other standards.

But—and here’s the rub, the reason for this blog post—I still find myself wanting to talk about the greatness of these Shakespeare plays.  Just sticking to Shakespeare, comparing apples to apples, I am going to say Twelfth Night is better than Two Gentlemen of Verona; and that King Lear is better than Coriolanus.  There are cases where the things to be compared are within the same domain—and one can be judged better than the other.  In the realm of realistic novels that aspire to a totalizing view of a certain social scene, Middlemarch is better than Sybil.  Of course, one is called upon to provide the reasons that undergird these judgments.

All of this brings me to Novak Djokovic—and the core doubt that drives this post (and this re-vision of my two earlier posts on "great books").  What is astounding about Djokovic is the gap between him and most of the incredibly talented men's tennis players in the world.  The twentieth best tennis player in the world has almost no chance of beating Djokovic (especially in a five-set match).  There are, in fact, only (at absolute most) ten players in the world who could beat him—and even in that case he would win the match against them well over half the time. 

My point is the extreme pyramid of talent.  That the gap between the tenth best player in the world and the absolutely best player is so wide defies explanation and belief.  It seems much more plausible to expect that the top rung of talent would be occupied by a group rather than a single dominant figure.   There are, after all, many aspiring tennis players and novelists who have put in the hours (Malcolm Gladwell's famous "ten thousand hours"), yet do not reach the pinnacle.  Why is extreme talent, extreme achievement, so rare? I have no answer here. Talent is a mystery, what Shakespeare would have called "fortune"—unearned, simply implanted in the person. Of course, it is possible to waste one's talents, just as it is crucial to nurture and hone one's talents. But it is nonsense to think Djokovic's supremacy is the product of his working harder and being more monomaniacally dedicated to being the best tennis player in the world than his competitors. There are plenty of people just as dedicated to that goal as he is. They just lack his talent.

Could it be, then, that the term “great book” makes sense if used to designate those instances of extreme achievement, those cases where we encounter a work of human hands that awes us because it is so far beyond what most humans, even ones dedicated and talented in that specific field of endeavor, ever manage to accomplish? 

Great Books 2: Institutions, Curators, and Partisans

I find that I have a bit more to say on the topic of “great books.” 

The scare quotes are not ironic—or even really scare quotes.  Instead, they are the proper punctuation when referring to a word as a word or a phrase as a phrase.  As in, the word "Pope" refers to the head of the Catholic Church.  The phrase "great books" enters into common parlance with University of Chicago President Robert Maynard Hutchins's establishment of a great books centered curriculum there in the 1930s.  From the Wikipedia page on Hutchins: "His most far-reaching academic reforms involved the undergraduate College of the University of Chicago, which was retooled into a novel pedagogical system built on Great Books, Socratic dialogue, comprehensive examinations and early entrance to college."  The University of Chicago dropped that curriculum shortly after Hutchins stepped down in the early 1950s, with St. John's College now the only undergraduate institution in the country with a full-bore great books curriculum.  Stanford and Columbia had general education requirements for first- and second-year undergraduates heavily slanted toward great books well into the 1990s, but have greatly modified those requirements in the 21st century.

These curricular issues are central to what I want to write about today.  “Literature,” Roland Barthes once said, “is what gets taught.”  It is very hard to even have a concept of “great books” apart from educational institutions, from what students are required to read, from what a “well-educated” person is expected to be familiar with.  As I wrote a few posts back (https://jzmcgowan.com/2023/07/31/americans-are-down-on-college/), we in the United States seem now to have lost any notion of what a “well-educated” person is or should be.  The grace notes of a passing familiarity with Shakespeare or Robert Frost are now meaningless.  The “social capital” accruing to being “cultured” (as outlined in Pierre Bourdieu’s work) has absolutely no value in contemporary America (apart, perhaps, from some very rarified circles in New York). 

I am not here to mourn that loss.  As I said in my last post, aesthetic artefacts are only "alive" if they are important to some people in a culture.  Only if some people find that consuming (apologies for the philistine word) an artistic work is fulfilling, even essential to their well-being, will that work avoid falling into oblivion, totally forgotten as most works of human hands (artistic or otherwise) are. 

Today, instead, I want to consider how it is that some works do survive.  I think, despite a persistent desire from Hume until the present to believe otherwise, that the intrinsic greatness of the works that survive is not a satisfactory explanation.  More striking to me is that the same small group of works (easily read within a four year education) gets called "great"—and how hard it is for newcomers to break into the list.  For all the talk of "opening up the canon," what gets taught in America's schools (from grade school all the way up through college) has remained remarkably stable.  People teach what they were taught.

Yes, Toni Morrison, Salman Rushdie, and Gabriel Garcia Marquez have become "classics"—and are widely taught.  But how many pre-1970 works have been added to the list of "greats" since 1970?  Ralph Ellison and Frederick Douglass certainly.  James Baldwin is an interesting case because he has become an increasingly important figure while none of his works of fiction has become a "classic."  On the English side of the Atlantic, Trollope has become more important while Shelley has been drastically demoted and Tennyson's star is dimming fast.  But no other novelist has challenged the hegemony of Austen, the Brontës, Dickens, and Eliot among the Victorians, or Conrad, Joyce, Hardy, and Woolf among the modernists.  The kinds of wide-scale revaluations of writers that happened in the early years of the 20th century (the elevations of Melville and Donne, for example) have not happened again in the 100 years since.  There really hasn't been any significant addition to the list (apart from Douglass and barring the handful of new works that get anointed) since 1930.

I don't deny that literary scholars for the most part read more widely and write about a larger range of texts than such scholars did in the 1950s and 1960s.  (Even that assumption should be made cautiously.  Victorian studies is the field I know best and the older scholars in that field certainly read more of the "minor" poets of the era than anyone who got a PhD in the 1980s or later ever does.)  But the wider canon of scholars has not trickled down very much into the undergraduate curriculum.  Survey courses in both British and American literature prior to 1945 are still pretty much the same as they were fifty years ago, with perhaps one or two token non-canonical works.  More specialized upper-level courses and grad courses are sometimes more wide-ranging.  Most significantly, the widening academic canon has not moved into general literate culture (if that mythical beast even exists) at all.

The one place where all bets are off is in courses on post-1945 literature.  No canon (aside from Ellison, Morrison, Rushdie, Baldwin) has been established there, so you will find Nabokov read here and Roth read there, while the growth of "genre courses" means Shirley Jackson and Philip K. Dick are probably now taught more frequently than Mailer or Updike or Bellow.  Things are not as unstable on the British side, although the slide has been toward works written in English by non-English authors (Heaney, Coetzee, various Indian novelists alongside Rushdie, Ondaatje, Ishiguro).

Much of the stability of the pre-1945 canon is institutional.  Institutions curate the art of the past—and curators are mostly conservative.  A good example is the way that the changing standards brought in by Henry James and T. S. Eliot were not allowed (finally) to lead to a wide-scale revision of the “list.”  Unity and a tight control over narrative point of view formed the basis of James’s complaints against the Victorians.  The rather comical result was that academic critics for a good thirty years (roughly 1945 to 1975) went through somersaults to show how the novels of Dickens and Melville were unified—a perverse, if delightful to witness, flying in the face of the facts.  Such critics knew that Dickens and Melville were “great,” and if unity was one feature of greatness, then, ipso facto, their novels must be unified.  Of course, the need to prove those novels were unified showed there was some sub rosa recognition that they were not.  Only F. R. Leavis had the courage of his convictions—and the consistency of thought—to try to drum Dickens out of the list of the greats.  And even Leavis eventually repented.

The curators keep chosen works in public view.  They fuss over those works, attend to their needs, keep bringing them before the public (or, at least, students).  Curators are dutiful servants—and only rarely dare to try to be taste-makers in their own right. 

I don’t think curators are enough.  The dutiful, mostly bored and certainly non-passionate, teacher is a stock figure in every Hollywood high school movie.  Such people cannot bring the works of the past alive.  For that you need partisans.  Some curators, of course, are passionate partisans.  What partisanship needs, among lots of other things, is a sense of opposition.  The partisan’s passion is engendered by the sense of others who do not care—or, even more thrilling, others who would deny the value of the work that the partisan finds essential and transcendentally good.  Yes, there are figures like Shakespeare who are beyond the need of partisans.  There is a complacent consensus about their greatness—and that’s enough.  But more marginal figures (marginal, let me emphasize, in terms of their institutional standing—how much institutional attention and time is devoted to them—not in terms of some kind of intrinsic greatness) like Laurence Sterne or Tennyson need their champions.

In short, works of art are kept alive by some people publicly, enthusiastically, and loudly displaying how their lives are enlivened by their interaction with those works.  So it is a public, communal thing—and depends heavily on admiration for the effects displayed by the partisan.  I want to have what she is having—a joyous, enlivening aesthetic experience.  Hume, then, was not totally wrong; works are deemed great because of the pleasures (multiple and many-faceted) they yield—and those pleasures are manifested by aesthetic consumers.  But there is no reason to appeal to "experts" or "connoisseurs." Anyone can play the role of making us think a work is worth a look, anyone whose visible pleasure has been generated by an encounter with that work.

The final point, I guess, is that aesthetic pleasure very often generates this desire to be shared.  I want others to experience my reaction to a work (to appreciate my appreciation of it.)  And aesthetic pleasure can be enhanced by sharing.  That's why seeing a movie in the theater is different from streaming it at home.  That's why a book group or classroom discussion can deepen my appreciation of a book, my sense of its relative strengths and weaknesses, my apprehension of its various dimensions. 

So long as those communal encounters with a work are happening, the work “lives.”  When there is no longer an audience for the work, it dies.  Getting labeled “great” dramatically increases the chances of a work staying alive, in large part because then the institutional artillery is rolled into place to maintain it.  But if the work no longer engages an audience in ways close to their vital concerns, no institutional effort can keep it from oblivion.

Great Books?

I am currently facilitating a reading group that began with the goal of revisiting the literary works the group members read in a “great books” course forty years ago.  The original (year long) syllabus will be familiar to anyone who knows the traditional canon of the Western literary tradition: Homer, the Bible, Sophocles, Virgil, Augustine, Dante, Shakespeare, Milton, Goethe, Dostoyevsky, Joyce.  Forty years ago, there was not even a token attempt to include a non-white, non-male author—and the absence of such authors went entirely unmarked, was not on the radar screen as it were, and thus was not even considered something worth noticing or contemplating.

In the course of revisiting the class all these years later, it is not surprising that the group has felt the need to supplement the original list with works by Virginia Woolf, Toni Morrison, Jesmyn Ward, Sandra Cisneros, and Salman Rushdie.

In reading those works, some members of the group have declared (in certain, not all, instances) that the books don’t meet the standard of a “great book”; some of the books have been deemed interesting, informative, worth reading perhaps, but not “great.”

Which raises the vexed question of the standard for greatness—and that’s the topic of this post.

My first—and biggest—point is that I find the whole enterprise of deciding whether something is great or not unproductive.  It rests on the notion of a one-size-fits-all, absolute standard that is more detrimental to appreciation of an aesthetic (or any other kind of) experience than helpful.

Is Italian cuisine “better” than Chinese cuisine?  I trust you see the absurdity of the question.  You certainly can’t appreciate the Italian meal you are eating if you are comparing it to a Chinese meal.  And, in the abstract, the general question of which cuisine is “better” is nonsense.  There is no proper answer to the question because it lacks all specificity.

Judgments of better or worse are always in relation to some standard, some criteria, of judgment.  In his book A Defense of Judgment (University of Chicago Press, 2021), Michael Clune keeps scoring cheap points by telling us that Moby Dick is better than The Apprentice.  The examples hide the absurdity of the claim.  If he insisted instead that Moby Dick is better than The Sopranos, he would almost certainly generate the kind of objection that could force him to justify his claim.  According to what criteria is Moby Dick superior, and in relation to what purposes?  Are there no contexts at all where I would prefer to watch The Sopranos to reading Moby Dick? Are there specific things The Sopranos does better than Moby Dick? Am I always choosing the lesser (thus revealing my debased tastes) when I watch the show?  Would the world be a richer and "better" (that word again!) place if it only had Moby Dick in it and not The Sopranos? 

I hope that makes it clear that the rank ordering of various aesthetic works is not just unhelpful, but needlessly restrictive, tending toward the puritanical.  Furthermore, it is a category error.  To respond to diversity (that there are multiple cuisines, that there are many aesthetic objects, and that they come in different genres and employ different media) by ranking all the instances it offers on one scale is to miss the pluralistic plenitude of the world.

So, the standard bearer always cries at this point, does that mean anything goes?  Are we doomed to drown in the sea of relativism? The bugbear of relativism, the contortions writers who long to be considered "serious" go through to avoid being accused of relativism, never fails to astound me.  I hope to address these fears—akin to a "moral panic" in their intensity—in a future post.  Suffice it for now to say that relativism is trivially true.  You cannot aspire to be the world's greatest baseball player if you grow up in first century CE Rome or in contemporary Malawi.  Your aspirations are relative to context.

What does that say about aesthetic standards?  First (again, trivially true) is that such standards shift over time.  Until 1920, general opinion was that Uncle Tom's Cabin was a better book than Moby Dick.  From 1920 to 1980, you would have been considered a complete philistine to prefer Stowe's novel to Melville's.  Currently, a more pluralistic ethos prevails.  If you are considering a novel that successfully moves an audience to tears and outrage about a social injustice, then Uncle Tom's Cabin is the ticket.  For more abstract musings on the meaning of life, Moby Dick is a better bet.  If you want a "tight," well-structured, gem of a more minimalist nature, not one of the "loose baggy monsters" that Henry James disparaged, then neither Stowe nor Melville is going to fit the bill.

Judgment, then, of a work’s quality will be relative to the standard you are applying to the work.  It is also relative to the purpose for which the work was written and the purpose for which the consumer comes to the work.  When making up a syllabus of 19th-century American literature, excluding Stowe (and, for that matter, Frederick Douglass), as was standard practice for well over fifty years, is to offer a very truncated vision of the American scene from 1840 to 1870.  Allowing some vague, unspecified notion of “better” to justify the inclusion of Emerson, Thoreau, Hawthorne, Whitman, Dickinson, and Melville, along with the exclusion of Stowe and Douglass, is not only to miss important cultural works, but also to renege on the intellectual responsibility to be self-conscious about the standards that govern one’s judgments (and the choices that follow from those judgments, along with the consequences of those choices).

OK.  Let me try to get concrete.  The whole “great books” thing, with its (most likely inevitably futile) attempt to impose “standards” on the benighted tastes of one’s contemporaries, always arises in moments of what we nowadays call “culture wars.”  One famous instance is the quarrel between “the ancients and the moderns” of the late 17th and early 18th century.  More relevant to us today is the modernist revolt against the Romantics and the Victorians.  T. S. Eliot was a central figure here, promulgating a “classicist” aesthetic standard that valued austere, non-sentimental, tightly formed, stringently intellectual (and hence non-emotional and non-personal) works over what he deemed the sloppy, sentimental, and overly rhetorical (i.e. trying to persuade the audience of some moral or political or otherwise sententious “truth”) works of the 19th century.  That the works Eliot championed were “difficult” was a feature, not a bug.  The world was awash in easy, popular art—and “high art” had to be protected from the danger of being dragged into that swamp.

What Eliot was trying to produce was nothing less than a sea change in sensibility.  He wanted to change what audiences liked, how they responded to aesthetic objects.  Henry James (as we have already seen) was engaged in the same enterprise.  The modernist painters offer a particularly clear case of this enterprise.  Works that in 1870 were deemed “barbarous” were declared masterpieces by 1910.  (Van Gogh, who sold only one painting in his lifetime, unhappily did not live to bask in this radical revaluation, this shift in criteria of judgment, in the world of visual art.  Cezanne, to a somewhat lesser extent, also died a few years too early.)

The shift in sensibility was wonderfully summed up (in his usual pithy manner) by Oscar Wilde when he said, “One must have a heart of stone to read the death of little Nell without laughing.” (The reference: the death of little Nell in Dickens’s The Old Curiosity Shop, which gets dragged out over numerous pages, famously moved readers to tears on both sides of the Atlantic.)

So, in short, we do have a set of aesthetic standards promulgated by the modernists that led to the elevation of Melville over Stowe (among many other revaluations) and that can be specified (especially when considered as negations of some of the prevailing features of “popular art”—works which, like loose women, were castigated for being “too easy”).

A list of great books, then, can be a destroyer of diversity.  (“Eleanor Rigby,” judged by criteria of profundity and musical complexity, is a “greater” song than “When I’m 64,” but don’t we want a world in which both exist and in which we listen to both?)  And such a list relies on a fairly one-dimensional set of criteria that belies the imaginative plenitude that the arts provide.  When this narrowing work is combined with the notion that all judges of any taste know instinctively what a great work looks and tastes like, without any need to spell out the grounds for their judgment, we have a specific sensibility parading as universal.  (Which, not surprisingly, mirrors the objection of women and people of color about their experiences in “unmarked,” male-shaped spaces.  There are unwritten, even unconscious, norms of behavior in such spaces that are seen not as one alternative manner among others, but as universal.)

Does this mean all judgments of “better” or “worse” are off the table? No.  It simply means that an aesthetic work (or a meal in a Chinese restaurant for that matter) should be judged according to the criteria that guided its making.  I will admit that I find much politically motivated visual art deeply flawed.  But that is not because I have some aestheticist notion that art is always ruined by being political (another of the modernist shibboleths).  I reject any such absolute, universalist standard that says art can only do this and not that.  Rather, I think it is particularly difficult for the visual arts to make statements; they don’t have the same resources for statement-making available to novelists, poets, and film-makers (to name only three). 

Does this mean that visual artists should all eschew making works that aim at some political point? No.  Successfully doing something that is very difficult is often the hallmark of an important artist, one worth paying attention to.  The role of the audience is, in this view, to grasp what the artist is trying to accomplish—and to judge how successfully the artist accomplished that goal. Given similar goals, some artists do better work than other artists–relative to that goal.

Two last points and I am done.

The first relates to acquired taste.  An aesthetic education is always a process of learning how to appreciate, in the best-case scenario to enjoy, aesthetic objects that, at first encounter, are too different, difficult, foreign, or unfamiliar to grasp.  This process of education is midwifed by others (friends, lovers, teachers) who deeply appreciate some works of art and long to convey that appreciation to another.  The means to that sharing is a heightened apprehension of the particular features of the particular work.  The mentor guides the neophyte toward “seeing” what is there.  The one who appreciates illuminates the work for the newcomer, showing what it contains that is to be valued.  People who are especially good at this work of illumination are the truly gifted teachers and critics.

In my ideal English department (for example), the staff would include a medievalist to whom the works of that period are endlessly fascinating and enjoyable—and that professor would be a success if she communicated that enthusiasm, that appreciation, to students who entered college with no idea that there was a vastly rich repository of medieval literature to encounter and learn to love. There would be no need to disparage some works as inferior in order to champion some as deeply pleasurable and worth reading along any number of criterial dimensions. 

And that brings me to my second—and last—point.  There is absolutely no doubt that various works have been aided in the perpetual effort to escape oblivion by institutional support and inertia.  Wordsworth becomes part of the curriculum—and I teach and write about Wordsworth—because of the institutional stamp of value.  Literary institutions, like all assemblages of power, work to sustain themselves.  It takes a long time for values to shift in the academy—a shorter time in the market (as witnessed by the shift in taste in painting between 1870 and 1914).  The larger point is that judgments of value do not occur in a vacuum.  There are institutional hierarchies that protect prevailing judgments and only slowly adopt re-valuations.

Still, institutions are not omnipotent—and they tend toward ossification unless they draw revitalizing energies from some other source.  All of which is to say that “great books” only remain alive to the extent that some people somewhere still find them of interest, of importance, worth devoting some time to.  Here’s the last reappearance of relativism in this discussion.  A book can be as “great” as you want to claim it is, but none of its intrinsic features will ensure its survival, its still being read, its not falling into the oblivion that engulfs 99% of the artistic works ever produced.  It will only command attention while some audience finds it worthy of attention.  And that worthiness rests, in part, on the work having institutional prestige and enthusiastic champions, but also (crucially) on an encounter with it being experienced by at least some people as part of living a full and satisfying life.  The work’s survival is relative to an audience that keeps it alive.