Category: Meaning and Life and the Humanities

Legacy

Some years back, when I was planning to step down as director of UNC’s Institute for the Arts and Humanities, someone asked me what I wanted my legacy to be.  My predecessor in the job (and the founder of the Institute), Ruel Tyson, who died recently at the age of 89, and whose funeral was this week, was very legacy conscious.  He wanted his name associated with the Institute—and cared deeply about the direction the Institute, and the University more generally, took even after his retirement.  He took pains not only to continue being involved with the Institute, but also to put into writing materials relevant to the Institute’s history and to its ongoing evolution.

Ruel’s death has me dwelling on such things, along with Martin Hägglund’s assertion in This Life (Pantheon Books, 2019) that “It is a central feature of our spiritual life that we remember the dead, just as it is a central feature of our spiritual life that we seek to be remembered after our death. This importance of memory—or recollection—is inseparable from the risk of forgetting.  Our fidelity to past generations is animated by the sense that they live only insofar as we sustain their memory, just as we will live on only insofar as future generations sustain the memory of us” (181-82).  Elsewhere he states baldly: there is “no afterlife apart from those who care to remember” us (167).  And continues: “The death of the beloved is irrevocable—it is a loss that cannot be recuperated—since there is no life other than this life” (167-68).

I fully believe that there is no life other than this life.  But I find myself uninterested in, unattached to, the idea of an afterlife in the memories of others.  Why should I care?  I will be beyond caring.  I have never thought of the books I have written as messages sent to some future.  I wrote them to address my contemporaries and desired a response from those contemporaries: to stir up their thoughts, to change their minds, to win their praise.  I wanted to be part of the conversation of my time—not a part of some conversation from which I was absent because dead.  Similarly, in my work at the university, I wanted to enable all kinds of intellectual adventures for my colleagues and students.  I was not at all focused on building the conditions for things that would happen after I was gone.  The here and now was all.

Yes, I care about my children’s lives—and the world they will inherit to live those lives in.  I want to give them the wherewithal to have good lives.  That wherewithal involves money, but plenty of other things, all I hope given as a gift of love.  But my children are not my “legacy.”  They are people with their own lives, albeit people I care deeply about.

I certainly don’t think of them as under any obligation to remember me or (worse) to memorialize me.  I have little filial piety myself (a fact I intend to ponder as I go along), but am made more uneasy by the thought of my children having filial piety than I am by their lacking it.  (Only a coincidence that I am writing this on Father’s Day, a day not celebrated in our household.)  I want my children’s love, not their reverence or piety.  And I want them to take the gifts I have given them (of all sorts) for granted, as the daily and completely unexceptional manifestation of a love that is like the air they breathe, simply an unquestioned fact of daily existence, sustaining but unremarkable.

It is not simply that I will be dead—and thus in no position to know that I am being remembered or to care.  It is also that memory is abstract.  It is leagues away from the full experience of being alive, in all its blooming, buzzing confusion, its welter of emotions, desires, hopes, and activities.  Those who knew you while you were alive know some of that concrete you, but soon enough you are just a name on a family tree, with only the slightest hints, some bare facts, maybe a photograph, maybe some letters, suggesting the actual person.  Such attenuated selfhood is nothing like life—and holds no appeal for me.  It seems a mug’s game to care about that time to come—just another way of not attending to the present, of failing to focus on “this life.”

Jane and I went to another funeral yesterday (after Ruel’s on Tuesday).  This memorial service was for a lovely man who taught both of our children at the Quaker high school they attended.  The service was conducted as a Quaker meeting: ninety minutes of people sharing their memories of Jamie—and those memories did capture him to a remarkable degree.  He was concretely witnessed and imagined in what people had to say.  His lived reality, his personality, was caught and conveyed.

Now that is a memory process I can endorse; we were all filled with his spirit during and after those ninety minutes.  But I don’t think it helps—and don’t find myself wishing—to claim that kind of specific memory will last for more than ten or so years, or to think that even the fullest kind of memory in any way counts as a satisfactory afterlife.  To paraphrase Woody Allen (this catches the spirit, though nowhere close to the letter, of his comment): I want the kind of afterlife where I am worrying about the mortgage that is due next Tuesday and what gift to buy for my beloved, whose birthday is next week.  In short, not an afterlife, but this life is what I want.  And I am perfectly happy to let the time after my death take care of itself, without its occupants feeling any obligation to keep me in mind.

The Tree of Life

I have just finished reading Richard Powers’ latest novel, The Overstory (Norton, 2018).  Powers is his own distinctive cross between a sci-fi writer and a realist.  His novels (of which I have read three or four) almost always center around an issue or a problem—and that problem is usually connected to a fairly new technological or scientific presence in our lives: DNA, computers, advanced “financial instruments.”  As with many sci-fi writers, his characters and his dialogue are often stilted, lacking the kind of psychological depth or witty interchanges (“witty” in the sense of clever, off-beat, unexpected rather than funny) that tend to hold my interest as a reader.  I find most sci-fi unreadable because too “thin” in character and language, while too wrapped up in elaborate explanations (that barely interest me) of the scientific/technological “set-up.” David Mitchell’s novels have the same downside for me as Powers’: too much scene setting and explanation, although Mitchell is a better stylist than Powers by far.

So is The Overstory Powers’ best novel?  Who knows?  It actually borrows its structure (somewhat) from Mitchell’s Cloud Atlas, while the characters feel a tad less mechanical to me.  But I suspect that’s because the “big theme” (always the driving force of Powers’s novels) was much more compelling to me in this novel, with only Gain of the earlier ones holding my interest so successfully.

The big theme: how forests think (the title of a book that is clearly situated behind Powers’s work even though he does not acknowledge it, or any other sources).  We are treated to a quasi-mystical panegyric to trees, while being given the recent scientific discoveries that trees communicate with one another; they do not live in accordance with the individualistic struggle for existence imagined by a certain version of Darwinian evolution, but (rather) exist within much larger eco-systems on which their survival and flourishing depend.  The novel’s overall message—hammered home repeatedly—is that humans are also part of that same eco-system—and that competition for the resources to sustain life, as contrasted to cooperation to produce and maintain those resources, can only lead to disaster.  Those disasters are not just ecological (climate change and depletion of things necessary to life), but also psychological.  The competitive, each against each, mentality is no way to live.

I am only fitfully susceptible to mystical calls to experience some kind of unity with nature.  I am perfectly willing to embrace rationalistic arguments that cooperation, rather than competition, is the golden road to flourishing.  And, given Powers’s deficiencies as a writer, I would not have predicted that the mysticism of his book would move me.  But it did.  That we—the human race, the prosperous West and its imitators, the American rugged individualists—are living crazy and crazy-making lives comes through loud and clear in the novel.  That the alternative is some kind of tree-hugging is less obvious to me most days—but seems a much more attractive way to go when reading this novel.

I have said Powers is a realist.  So his tree-huggers in the novel ultimately fail in their efforts to protect forests from logging.  The forces of the crazy world are too strong for the small minority who uphold the holistic vision.  But he does have an ace up his sleeve; after all, it is “life” itself that is dependent on interlocking systems of dependency. So he does seem to believe that, in the long run, the crazies will be defeated, that the forces of life will overwhelm the death-dealers.  Of course, how long that long run will be, and what the life of the planet will look like when the Anthropocene comes to an end (and human life with it?) is impossible to picture.

Life will prevail.  That is Powers’ faith—or assertion.  Is that enough?  I have also read recently an excellent book by Peter J. Woodford: The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (University of Chicago Press, 2018).  Woodford makes the convincing argument that Nietzsche takes from Darwin the idea that “life” is a force that motivates and compels.  Human behavior is driven by “life,” by what life needs.  Humans, like other living creatures, are puppets of life, blindly driven to meet its demands.  “When we speak of values, we speak under the inspiration, under the optic of life; life itself forces us to establish values; when we establish values, life itself values through us” (Nietzsche, Twilight of the Idols).


Here is Woodford’s fullest explanation of Nietzsche’s viewpoint:

“The concept that allows for the connection between the biological world, ethics, aesthetics, and religion is the concept of a teleological drive that defines living activity.  This drive is aimed at its own satisfaction and at obtaining the external conditions of its satisfaction. . . . Tragic drama reenacts the unrestricted, unsuppressed expression of [the] inexhaustible natural eros of life for itself. . . . Nietzsche conceived life as autotelic—that is, directed at itself as the source of its own satisfaction.  It was this autotelic nature of life that allowed Nietzsche to make the key move from description of a natural drive to discussion of the sources and criteria of ethical value and, further, to the project of a ‘revaluation of value’ that characterized his final writings.  Life desires itself, and only life itself is able to satisfy this desire.  So the affirmation of life captures what constitutes the genuine fulfillment, satisfaction, and flourishing of a biological entity.  Nietzsche’s appropriation of Darwinism transformed his recovery of tragedy into a project of recovering nature’s own basic affirmation of itself in a contemporary culture in which this affirmation appeared, to him at least, to be absent.  His project was thus inherently evaluative at the same time that it was a description of a principle that explained the nature and behavior of organic forms” (38).

Here’s my takeaway.  Both Powers and Nietzsche believe that they are describing the way that “life” operates.  Needless to say, they have very different visions of how life does its thing, with Powers seeing human competitiveness as a perverted deviation from the way life really works, while Nietzsche (at least at times) sees life as competition, as the struggle for power, all the way down.  (Cooperative schemes for Nietzsche are just subtle mechanisms to establish dominance—and submission to such schemes generates the sickness of ressentiment.)

What Woodford highlights is that this merger of the descriptive with the evaluative doesn’t really work.  How are we to prove that life is really this way when there are life forms that don’t act in the described way?  Competition and cooperation are both in play in the world.  What makes one “real life,” and the other some form of “perversion”?  Life, in other words, is a normative term, not a descriptive one.  Or, at the very least, there is no clean fact/value divide here; our biological descriptions are shot through and through with evaluation right from the start.  We could say that the most basic evaluative statement is that it is better to be alive than to be dead.  Which in Powers quickly morphs into the statement that it is better to be connected to other living beings within a system that generates a flourishing life, while in Nietzsche it becomes the statement that it is better to assume a way of living that gives fullest expression to life’s vital energies.

[An aside: the Nazis, arguably, were a death cult–and managed to get lots and lots of people to value death over life.  What started with dealing out death to the other guy fairly quickly moved into embracing one’s own death, not–it seems to me–in the mode of sacrifice but in the mode of universal destruction for its own sake.  A general auto-da-fé.]

In short, to say that life will always win out says nothing about how long “perversions” can persist or about what life actually looks like.  And the answer to the second question—what life looks like—will always be infected by evaluative wishes, with what the describer wants life to look like.

That conclusion leaves me with two issues.  The first is pushed hard by Woodford in his book.  “Life” (it would seem) cannot be the determiner of values; we humans (and Powers’ book makes a strong case that other living beings besides humans are in on this game) evaluate different forms of life in terms of other goods: flourishing, pleasure, equality/justice.  This is an argument against “naturalism.”  Life (or nature) is not going to dictate our values; we are going to reserve the right/ability to evaluate what life/nature throws at us.  Cancer and death are, apparently, natural, but that doesn’t mean we have to value them positively.

The second issue is my pragmatist, Promethean one.  To what extent can human activity shape what life is?  Nietzsche has always struck me as a borderline masochist.  For all his hysterical rhetoric of activity, he positions himself to accept whatever life dishes out.  Amor fati and all that.  But humans and other living creatures alter the natural environment all the time to better suit their needs and desires.  So “life” is plastic—and, hence, a moving target.  It may speak with a certain voice, but it is only one voice in an ensemble.  I have no doubt that it is a voice to which humans currently pay too little heed.  But it is not a dictator, not a voice to which we owe blind submission.  That’s because 1) we evaluate what life/nature dishes out and 2) we have powers on our side to shape the forms life takes.

Finally, all of this means that if humans are currently shaping life/nature in destructive, life-threatening ways, we cannot expect life itself to set us on a better course.  The trees may win in the long run—but we all remember what Keynes said about the long run.  In the meantime, the trees are dying and we may not be very far behind them.

The Future of the Humanities

For some time now, I have had a question that I use as a litmus test when speaking with professors of English.  Do you think there will be professors of Victorian literature on American campuses fifty years from now?  There is no discernible pattern, that I can tell, among the responses I get, which run the full gamut from confident pronouncements that “of course there will be” to sharp laughter accompanying the assertion “I give them twenty years to go extinct.”  (For the record: UNC’s English department currently has five medievalists, seven Renaissance scholars, and six professors teaching Romantic and Victorian literature—that is, if I am allowed to count myself a Victorianist, as I sometimes was.)

I have gone through four crises of the humanities in my lifetime, each coinciding with a serious economic downturn (1974, 1981, 1992, and 2008).  The 1981 slump cost me my job when the Humanities Department in which I taught was abolished.  The collapse of the dot-com boom did not generate its corresponding “death of the humanities” moment because, apparently, 9/11 showed us we needed poets.  They were trotted out nation-wide as America tried to come to terms with its grief.

Still, the crisis feels different this time.  Of course, I may just be old and tired and discouraged.  Not “may be.”  Certainly am.  But I think there are also real differences this time around—differences that point to a different future for the humanities.

In part, I am following up my posts about curriculum revision at UNC.  The coverage model is on the wane.  The notion that general education students should gain a familiarity with the whole of English literature is certainly moving toward extinction.  Classes are going to be more focused, more oriented to solving defined problems and imparting designated competencies.  Methods over content.

But, paradoxically, the decline of the professors of Victorian literature is linked to more coverage, not less.  The History Department can be our guide here.  At one time, History departments had two or three specialists in French history (roughly divided by centuries), three or four in English history, along with others who might specialize in Germany or Spain or Italy.  That all began to change (slowly, since it takes some time to turn over a tenured faculty) twenty or so years ago when the Eurocentric world of the American history department was broken open.  Now there needed to be specialists on China, on India, on Latin America, on Africa.  True, in some cases, these non-European specialists were planted in new “area studies” units (Asian Studies, Latin American Studies, Near Eastern Studies etc.).  But usually even those located in area studies would hold a joint appointment in History—and those joint appointments ate up “faculty lines” formerly devoted to the 18th century French specialist.

Art History departments (because relatively small) have always worked on this model: limited numbers of faculty who were supposed, somehow, to cover all art in all places from the beginning of time.  The result was that, while courses covered that whole span, the department only featured scholars of certain periods.  There was no way to have an active scholar in all the possible areas to be studied.  Scholarly “coverage,” in other words, was impossible.

English and Philosophy departments are, in my view, certain to go down this path.  English now has to cover world literatures written in English, as well as the literatures of groups formerly not studied (not part of the “canon”).  Philosophy, as well, must now include non-Western thought, as well as practical, professional, and environmental ethics, along with new interests in cognitive science.

There will still, fifty years from now, be professors of Victorian literature in America.  But there will no longer be the presumption that every self-respecting department of English must have one.  Scholarly coverage will be much spottier—which means, among other things, that someone who wants to become a scholar of Victorian literature will know there are six graduate programs in which to reasonably pursue that ambition, instead of (as is the case now) assuming you can study Victorian literature in any graduate program.  Similarly, if 18th-century English and Scottish empiricism is your heart’s desire, you will have to identify the six philosophy departments in which you can pursue that course of study.

There is, of course, the larger question.  Certainly (or, at least, it seems obvious to me, although hardly to all those to whom I put my litmus test), it is a remarkable thing that our society sees fit to subsidize scholars of Victorian literature.  The prestige of English literature (not our national literature after all) is breath-taking if you reflect upon it for even three seconds.  What made Shakespeare into an American author, an absolute fixture in the American curriculum from seventh grade onwards?  What plausible stake could our society be said to have in subsidizing continued research into the fiction and life of Charles Dickens?  What compelling interest (as a court of law would phrase it) can be identified here?

Another paradox here, it seems to me.  I hate (positively hate, I tell you) the bromides offered (since Matthew Arnold at least) in generalized defenses of the humanities.  When I was (during my years as a director of a humanities center) called upon to speak about the value of the humanities, I always focused on individual examples of the kind of work my center was enabling.  The individual projects were fascinating—and of obvious interest to most halfway-educated and halfway-sympathetic audiences.  The fact that, within the humanities, intellectual inquiry leads to new knowledge and to new perspectives on old knowledge is the lifeblood of the whole enterprise.

But it is much harder to label that good work as necessary.  The world is a better, richer (I choose this word deliberately) place when it is possible for scholars to chase down fascinating ideas and stories because they are fascinating.  And I firmly believe that fascination will mean that people who have the inclination and the leisure will continue to do humanities work come hell or high water.  Yes, they will need the five hundred pounds a year and the room of one’s own that Virginia Woolf identified as the prerequisites, but people of such means are hardly an endangered species at the moment.  And, yes, it is true that society generally (especially after the fact, in the rear-view mirror as it were) likes to be able to point to such achievements, to see them as signs of vitality, culture, high-mindedness and the like.  But that doesn’t say who is to pay.  The state?  The bargain up to now is that the scholars (as well as the poets and the novelists) teach for their crust of bread and for, what is more precious, the time to do their non-teaching work of scholarship and writing.  Philanthropists?  The arts in America are subsidized by private charity—and so is much of higher education (increasingly so as state support dwindles).  The intricacies of this bargain warrant another post.  The market?  Never going to happen.  Poetry and scholarship are never going to pay for themselves, and novels only very rarely do.

The humanities, then, are dependent on charity—or on the weird institution that is American higher education.  The humanities’ place in higher education is precarious—and the more the logic of the market is imposed on education, the more precarious that position becomes.  No surprise there.  But it is no help when my colleagues act as if the value of scholarship on Victorian literature is self-evident.  Just the opposite.  Its value is extremely hard to articulate.  We humanists do not have any knock-down arguments.  And there aren’t any out there just waiting to be discovered.  The ground has been too well covered for there to have been such an oversight.  The humanities are in the tough position of being a luxury, not a necessity, even as they are also a luxury which makes life worth living as contrasted to “bare life” (to appropriate Agamben’s phrase).  The cruelty of our times is that the overlords are perfectly content (hell, it is one of their primary aims) to have the vast majority possess only “bare life.”  Perhaps it was always thus, but that is no consolation.  Not needing the humanities themselves, our overlords are hardly moved to consider how to provide them for others.

More Comments on What We Should Teach at University

My colleague Todd Taylor weighs in—and thinks he also might be the source for my “formula.”  Here, from Todd’s textbook is his version of the three-pronged statement about what we should, as teachers, be aiming to enable our students to do.

  1. To gather the most relevant and persuasive evidence.
  2. To identify a pattern among that evidence.
  3. To articulate a perspective supported by your analysis of the evidence.

And here are Todd’s further thoughts:

“I might have been a source for the ‘neat formula’ you mention, since I’ve been preaching that three-step process as “The Essential Skill for the Information Age” for over a decade now.  I might have added the formula to the Tar Heel Writing Guide.  I am attaching a scan of my textbook Becoming a College Writer where I distill the formula to its simplest form.  I have longer talks on the formula, with notable points being that step #1 sometimes includes generating information beyond just locating someone else’s data.  And step #3, articulating a perspective for others to follow (or call to action or application), is the fulcrum where “content-consumption, passive pedagogy” breaks down and “knowledge-production, active learning” takes off.

“The high-point of my experience preaching this formula was when a senior ENGL 142 student shared with me the news of a job interview that ended successfully at the moment when she recited the three steps in response to the question ‘What is your problem solving process?’

“In my textbook, I also have a potentially provocative definition of a “discipline” as “a method (for gathering evidence) applied to a subject,” which is my soft attempt to introduce epistemology to GenEd students.  What gets interesting for us rhet/discourse types is to consider how a “discipline” goes beyond steps #1 and #2 and includes step #3 so that a complete definition of “discipline” also includes the ways of articulating/communicating that which emerges from the application of a method to a subject.  I will forever hold on to my beloved linguistic determinism.  Of course, this idea is nothing new to critical theorists, especially from Foucault.  What might be new(ish) is to try to explain/integrate such ideas within the institution(s) of GenEd requirements and higher ed.  I expect if I studied Dewey again, I could trace the ideas there, just as I expect other folks have other versions of the ‘neat formula.’”

Todd also raised another issue with me that is (at least to me) of great interest.  The humanities are wedded, we agreed, to “interpretation.”  And it makes sense to think of interpretation as a “method” or “approach” that is distinct from the qualitative/quantitative divide in the social sciences.  Back to Dilthey.  Explanation versus meaning.  Analysis versus the hermeneutic.  But perhaps even more than that, since quantitative/qualitative can be descriptors applied to the data itself, whereas interpretation is about how you understand the data.  So no science, even with all its numbers, without some sort of interpretation.  In other words, quantitative/qualitative doesn’t cover the whole field.  There is much more to be said about how we process information than simply saying sometimes we do it via numbers and sometimes via other means.

Moral Envy and Opportunity Hoarding

One quick addendum to the last post—and to Bertrand Russell’s comment about how the traditionalist is allowed all kinds of indignation that the reformer is not.  What’s with the ubiquity of death threats against anyone who offends the right wing in the United States?  That those who would change an established social practice/pattern, no matter how unjust or absurd, deserve a death sentence is, to all appearances, simply accepted by the radical right.  So, just to give one example, the NC State professor who went public with his memories of drinking heavily with Brett Kavanaugh at Yale immediately got death threats—as did some of his colleagues in the History Department.  Maybe you could say that snobbish contempt for the “deplorables” is the standard left-wing response to right wingers—just as predictable as right wingers making death threats.  But contempt and scorn are not solely the prerogative of the left, whereas death threats do seem to be mobilized only by the right.

Which does segue, somewhat, into today’s topic: David Graeber’s alternative way of explaining the grand canyon between the left and right in today’s America.  His first point concerns what he calls “moral envy.”  “By ‘moral envy,’ I am referring here to feelings of envy and resentment directed at another person, not because that person is wealthy, or gifted, or lucky, but because his or her behavior is seen as upholding a higher moral standard than the envier’s own.  The basic sentiment seems to be ‘How dare that person claim to be better than me (by acting in a way that I do indeed acknowledge is better than me)?’” (Bullshit Jobs: A Theory [Simon and Schuster, 2018], 248).  The most usual form this envy takes, in my experience, is the outraged assertion that someone is a “hypocrite.”  The right wing is particularly addicted to this claim about liberal do-gooders.  The liberals, in their view, claim to be holier than thou, but know which side their bread is buttered on, and do quite well for themselves.  They wouldn’t be sipping lattes and driving Priuses if they weren’t laughing all the way to the bank.  Moral envy, then, is about bringing everyone down to the same low level of behavior—and thus (here I think Graeber is right) entails a covert acknowledgement that the general run of behavior is not up to our publicly stated moral aspirations.  So we don’t like the people who make the everyday, all-too-human fact of the gap between our ideals and our behavior conspicuous.  Especially when their behavior indicates that the gap is not necessary.  It is actually possible to act in a morally admirable manner.

But then Graeber goes on to do something unexpected—and to me convincing—with this speculation about moral envy.  He ties it to jobs.  Basically, the argument goes like this: some people get to have meaningful jobs, ones for which it is fairly easy to make the case that “here is work worth doing.”  Generally, such work involves actually making something or actually providing a needed service to some people.  The farmer and the doctor have built-in job satisfaction insofar as what they devote themselves to doing requires almost no justification—to themselves or to others.  (This, of course, doesn’t preclude all kinds of dissatisfactions with factors that make their jobs needlessly onerous or economically precarious.)

Graeber’s argument in Bullshit Jobs is that there are not enough of the meaningful jobs to go around.  As robots make more of the things that factory workers used to make and as agricultural labor also requires far fewer workers than it once did, we have not (as utopians once predicted and as Graeber still believes is completely possible) rolled back working hours.  Instead, we have generated more and more bullshit jobs—jobs that are make-work in some cases (simply unproductive in ways that those who hold the job can easily see) or, even worse, jobs that are positively anti-productive or harmful (sitting in an office denying people’s welfare or insurance claims; telemarketing; you can expand the list).  In short, lots of people simply don’t have access to jobs that would allow them to do work that they, themselves, morally approve of.

Graeber’s point is that the people who hold these jobs know how worthless the jobs are.  But they rarely have other options—although the people he talks to in his book do often quit these soul-destroying jobs.  The political point is that the number of “good” jobs, i.e. worthwhile, meaningful jobs is limited.  And the people who have those jobs curtail access to them (through professional licensing practices in some cases, through networking in other cases).  There is an inside track to the good jobs that depends, to a very large extent, on being to the manor/manner born.  Especially for the jobs that accord upper-middle-class status (and almost guarantee that one will be a liberal), transmission is generational.  This is the “opportunity hoarding” that Richard Reeves speaks about in his 2017 book, Dream Hoarders.  The liberal professional classes talk a good game about diversity and meritocracy, but they basically keep the spots open for their kids.  Entry into that world from the outside is very difficult and very rare.

To the manner born should also be taken fairly literally.  Access to upper-middle-class jobs still requires the detour of education–and knowing how to survive (and even thrive) at an American university is an inherited trait.  Kids from the upper middle class are completely at home in college, just as non-middle-class kids are so often completely at sea.  Yes, school can make or break: it can be a place where an upper-class kid falls off the rails and a place where a lower-class kid finds a ladder she manages to climb.  But all the statistics, as well as my own experience as a college teacher for thirty years, tell me that the exceptions are relatively rare.  College is a fairly difficult environment to navigate–and close to impossibly difficult for students to whom college’s idiolects are not a native language.

So two conclusions.  1. It is a mixture of class resentment and moral envy that explains the deep animus against liberal elites on the part of non-elites—an animus that, as much as racism does in my opinion, explains why the abandoned working class of our post-industrial cities has turned to the right.  As bad as (or, at least, as much as) their loss of economic and social status has been their loss of access to meaningful work.  Put them into as many training sessions as you want to transition them to the jobs of the post-industrial economy; you are not going to dispel their acute knowledge that these new jobs suck, compared to their old jobs, in terms of basic worth.  So they resent the hell out of those who still hold meaningful jobs—and are well paid for those jobs and also have the gall to preach to them about tolerance and diversity.  2. It is soul-destroying to do work you cannot justify as worth doing.  And what is soul-destroying will lead to aggression, despair, rising suicide rates, drug abuse, and susceptibility to right-wing demagogues.  Pride in one’s work is a sine qua non of a dignified adult life.

Religion, Sect, Party (Part Two)

Having given you Taylor’s definition of religion last time, I now want to move over to Slezkine’s discussion of religion (which then bleeds over into politics) in The House of Government.

He offers a few attempts at defining religion, the first from Steve Bruce: religion “consists of beliefs, actions, and institutions which assume the existence of supernatural entities with powers of action, or impersonal powers or processes possessed of moral purpose.  Such a formulation seems to encompass what ordinary people mean when they talk of religion” (73; all the words in quotes are Bruce’s, not Slezkine’s).  If we go to Durkheim, Slezkine says, we get “another approach.  Religion, according to his [Durkheim’s] definition, is ‘a unified system of beliefs and practices relative to sacred things.’  Sacred things are things that ‘the profane must not and cannot touch with impunity.’  The function of the sacred is to unite humans into moral communities” (74).

Durkheim’s position is functionalist; religion serves human needs, especially the needs of human sociality.  Slezkine continues: “Subsequent elaborations of functionalism describe religion as a process by which humans create a sense of the self and an ‘objective and moral universe of meaning’ [Thomas Luckmann]; a ‘set of symbolic forms and acts that relate man to the ultimate conditions of his existence’ [Robert Bellah]; and, in Clifford Geertz’s much-cited version, ‘a system of symbols which acts to establish powerful, pervasive, and long-lasting moods and motivations in men by formulating conceptions of a general order of existence and clothing these with such an aura of facticity that the moods and motivations seem uniquely realistic’” (74).

In Bruce’s terms, I don’t think I can be considered religious, since I think morality is uniquely human; I don’t think there are impersonal or divine processes/beings that have a moral purpose and are capable of acting to further that moral purpose.

But the Durkheim/functionalist positions seem closer to home. What I have been worrying for months on this blog concerns the “sacredness” of “life.”  Does taking life as sacred, as the ultimate value, as the thing that profane hands (the state, other agents of violence, the lords of capitalism) should not destroy or even render less full, fall within the realm of religion?  It does seem to aim at some of the same ends—certainly at establishing a “moral community” united by its reverence for life; certainly in establishing a “moral universe of meaning” underwritten by the ultimate value of life; and certainly in paying attention to “the ultimate conditions of existence,” i.e. the drama of life and death, of being given a precious thing—life—that can only be possessed for a limited time.

I am never sure what all this (that is, the “formal” consonance of religion with humanism) amounts to.  If it is something as general as saying that the question of meaning inevitably arises for humans, and that the ways they answer that question have inevitable consequences for human sociality/communities, then the resemblance doesn’t seem to me to have much bite.  It is so general, so abstract a similarity that it doesn’t tell us anything of much import.  It is like saying that all animals eat.  Yes, but the devil is in the details.  Some are herbivores, some kill other animals for food, some are omnivores.

All human communities must be organized, in part, around securing enough food to live.  But hunter/gatherers are pretty radically different from agrarians—and all the important stuff seems to lie in the differences, not in the general similarity of needing to secure food.  I suspect it is the same for religion/atheism.  Yes, they must both address questions of meaning and of creating/sustaining livable communities, but the differences in how they go about those tasks are the significant thing.

More interesting to me is how both Taylor and Slezkine use Karl Jaspers’s notion of the “Axial Revolution.”  Taylor leans heavily on Max Weber’s notion of a “disenchanted” world; Slezkine is interested in how the Axial Revolution displaces the transcendent from the here and now into some entirely separate realm.  Or, I guess, we could say that the Axial Revolution creates the transcendent realm.  In animist versions of the world, the sacred is in the here and now, in the spirits that reside in the tree or the stream or the wind.  The sacred doesn’t have its own special place.  But now it is removed from the ordinary world—which is fallen, in need of salvation, and material/mechanical.  Spirit and matter are alienated from one another.  The real and the ideal do not coincide.

For Slezkine, then, every politics (like every post-Axial religion) has to provide a path for moving from here (the fallen real of the world we inhabit day by day) to there (the ideal world of moral and spiritual perfection).  He is particularly interested in millennial versions of that pathway, since he thinks revolutionaries are quintessential millennialists.  And he clearly believes that all millennialists promise much more than they can deliver—and then must deal with the disappointment that inevitably follows from the failure of their predictions to come true.

That’s where I retain a liberal optimism—which is also a moral condemnation of the pessimist.  My position, quite simply, is that some social orders (namely, social democracy as it has been established and lived in various countries, including Sweden, Denmark, and Canada) are demonstrably better than some other social orders if our standard is affording the means for a flourishing life to the largest number of the society’s members.  Measurements such as poverty rates, education levels, and life expectancy can help us make the case for the superiority of these societies to some others.

The point is that the gap between the real and the ideal is actual—even in the best social democracies.  But the point is also that this gap is bridgeable; we have concrete ways to make our societies better, and to move them closer to the ideal of a flourishing life for all.  Pessimists take the easy way out, pronouncing (usually from a fairly comfortable position), that all effort is useless, that our fallen condition is incorrigible.  A humanist politics, then, aims to re-locate the ideal in this world (as opposed to exiling it to a transcendent other-worldly place), while also affirming that movement toward the ideal is possible—and should be the focus of our political efforts.

In these terms, the ideal is, I guess, transcendent in the sense that it is not present in the here and now.  The ordinary does not suffice even within a politics that wants to affirm the ordinary, the basic pleasures and needs of sustaining life.  But there is also the insistence that the ordinary supplies everything we need to improve it—and that such improvements have been achieved in various places at various times, even if we can agree that no society has achieved perfection. There is no need to appeal to outside forces, to something that transcends the human, in order to move toward the ideal.

How a society handles, responds to, the gap between the now (the real) and the ideal seems to me an important way to think about its politics.  Looking at 2018 America, it seems (for starters) that we have a deep division over what the ideal should be.  The liberal ideal is universal flourishing.  It seems very difficult not to caricature the ideal of liberalism’s opponents.  I think it is fair (though they probably would not) to say their view is premised on the notion of scarcity.  There is not enough of the good, life-sustaining stuff to go around—which generates endless competition for the scarce goods.  In that competition, there is nothing wrong with fighting (in fact, it makes emotional and moral sense) to secure the goods for one’s own group (family, ethnicity, nation).  A good (ideal) world would be one in which the scarce goods went to those who truly deserve them (because they are hard workers, or good people, or “one of us”).  But the real world is unfair; all kinds of cheaters and other morally unworthy types get the goods, so politics should be geared to pushing such moochers away from the trough.  That seems to me to be the rightist mindset in this country these days.

But both sides seem to be humanists of my sort, since both seem to think politics can move us to the ideal in this world.  There is not some hope in a transcendent realm—or an orientation toward that realm.

Religion, Sect, Party

Even before quite finishing one behemoth (two chapters to go in Taylor’s A Secular Age), I have started another one, Yuri Slezkine’s The House of Government (Princeton UP, 2017).  Surprisingly, they overlap to a fair extent.  Slezkine pushes hard on his thesis that Bolshevism is a millennial sect and that its understandings of history and society follow time-worn Biblical plots, especially those found in Exodus and the Book of Revelation.  I find his thesis a bit mechanical and over-reductive, an implausible one-size-fits-all scheme.  The strength of his book lies in its details, the multiple stories he can tell about the core figures of the Russian Revolution, not in the explanatory framework into which he squeezes all those details.

But Slezkine does offer some general speculations on the nature of religion, sects, and parties that I want to pursue at the moment.  Taylor defines “religious faith in a strong sense . . . by a double criterion: the belief in transcendent reality, on one hand, and the connected aspiration to a transformation that goes beyond ordinary human flourishing on the other” (510).  A fairly substantial component of Taylor’s argument is that most, if not all, people will feel a pull toward those two things; that settling for mundane reality and ordinary flourishing will leave people with a sense of “lack,” a haunting feeling that there must be more.  He considers, very briefly, the idea that secularism entails people simply becoming indifferent to transcendence and to any kind of transformation beyond the ordinary—and rejects the possibility that such indifference has become, or even could become, common.

He pays more attention to the fact that the existence of a “transcendent reality” has simply become incredible to many people.  But—and this is a major point for him—he insists that the evidence (of science or of anything else) cannot be decisive on this question, and that evidence is not even the prime reason for unbelief in the transcendent.  Rather, unbelief is underwritten by an ethos—one of bravely facing up to the facts, of putting aside the childish things of religious faith (the Freudian critique of the “illusion” that is religion).

I am not convinced.  Am I full of contempt for the evangelicals who claim to be Christians, but who are such noteworthy examples of non-Christian animus, gleefully dishing out harm to all they deem reprobate even as they accommodate themselves to the thuggery and sexual malpractices of Donald Trump?  Of course.  But Taylor has no truck with the fundamentalists either.  His is the most anodyne of liberal Christianities; he has trouble with the whole idea of hell; basically (without ever quite coming out and saying so) Taylor’s God does not consign people to eternal damnation.  Instead, hell for Taylor gets associated with sin—both of them understood as the painful alienation from God that results from turning one’s back on the transcendent.  Taylor, in other words, tiptoes away from judgment and punishment—believers aren’t supposed to be judging other humans or inflicting punishment upon them, and he is clearly uneasy with the image of a judging God.  In fact, moralism (rigid rules of conduct) is one of his main enemies in the book.  In its place, he urges us to Aristotelian phronesis, which insists that judgments always be particular, attending to the novelties of the situation at hand.

But back to me.  Aside from my contempt for the evangelicals and their hypocrisies and petty (and not so petty) cruelties to others, do I harbor a Freudian contempt for the believer?  Does my unbelief, the fact that I find the notion that god exists simply incredible (meaning there is no way that how I understand existence has room for a divine being) rest on a self-congratulatory idea of my “maturity” as contrasted to those childish believers?  It doesn’t feel that way.  I find most Christians harmless, and have no beef with practicing Muslims and Jews.  It’s only the fanatics of all religions, but equally the fanatics of godless capitalism, that I abhor.  And I share that sentiment with Taylor.  So I just don’t see that it’s some basic moralistic distinction I make between believers and unbelievers that drives my adoption of unbelief.  It seems much more obvious that my understanding of the world has no place for a god, makes the very idea of a god, if not quite unthinkable (because so many other humans keep insisting there is one), at least unimaginable.  I might as well try to imagine, believe in, a world that contains unicorns.  My “picture” of the world just can’t accommodate a god.

Taylor several times evokes Wittgenstein’s idea of our being held “captive” by a picture.  But Taylor also eschews the notion that some kind of argument (like the classic ones about god’s existence) or some kind of evidence could change the picture of unbelief to one of belief.  He is very much in William James territory.  Basically, his position is that the facts “underdetermine” the choice between belief and unbelief, that materialist science is not conclusive, and so the materialist, as much as the theist, rests his case, in the final analysis, on a leap of faith.  This is the Jamesian “open space” in which we all exist.  And then Taylor seems (without being explicit enough about this) to say that the deciding factor is going to be “experience” (shades of James’s Varieties of Religious Experience), where what follows (in the ways of feelings, motivations, transformations) from making the leap of faith toward a god stands as the confirmation that belief is the right way to go.  It’s the fruits of the relationship to a transcendent that Taylor wants to harvest, that make religious belief valuable in his eyes.

Here is where I wish Taylor had paid closer attention to James, particularly the essay “The Will to Believe.”  In that essay, James says that choices have three features: they can be “live or dead” choices, “momentous or trivial” ones, or “forced or avoidable” ones.  On this last one, James identifies the “avoidable” path as the result of indifference.  If I say you must choose between the red and the white wine, you can answer “it’s all the same to me” or “I don’t want any wine at all.”  You can, in short, avoid making the decision I am asking you to make.  In the case of “live versus dead,” I can ask you whether you believe in Zeus or Zarathustra, and your reply can be “neither of those options is a true possibility for me; nothing in my way of life or my existing set of beliefs allows the question of believing in Zeus to be a real question for me.”  Finally, “momentous/trivial” relates to what I think hangs on the choice: whether or not to have a child is momentous, with huge implications for my life and the lives of others; what I choose to eat for dinner tonight is much less momentous, although not without some consequences (for my health, for the environment, etc.).

I bring this up because the choice of believing in god is not, at this point in my life, a “live” choice for me.  I have no more substantial grounds or inclination to believe in the Christian god than I do to believe in Zeus.  Furthermore—I am on shakier ground here but think this is true—I don’t find the choice of unbelief momentous.  It is just what I believe: there is no god.  James in that same essay also covers this ground: most of our beliefs are not chosen.  Even though I only have second-hand evidence of the fact (what is reported in books and the historical record), I am not free to believe that Abraham Lincoln never existed or that he was not a President of the US.  I can’t will myself into not believing in his existence.  Well, I feel the same way about god.  I can’t will myself into believing that god exists.  That there is no god is as settled a belief for me as my belief in Abraham Lincoln’s existence.  And I don’t see that very much hangs on those two beliefs.

How can that be, asks the incredulous believer?  But (and, again, I am following James here) I think the believer often has cause and effect backwards.  Pope Francis has just declared capital punishment unacceptable to believing Catholics; Antonin Scalia, a devout Catholic, was an advocate of capital punishment.  So it is hard to see how the belief in god is the source of the conviction about capital punishment.  Something else must motivate the position taken.  Or, at the very least, the fact of believing in god is pretty radically underdeterminative; god’s inscrutability is such that humans have to fill in many (most?) of the details.

It’s the same with Taylor’s revisionist views on hell.  Humans keep tweaking their notion of what god wants in order to fit human ideas of what an acceptable god would look like.  Even if you want to dismiss that kind of debunking statement about humans creating the god they can admire/respect, many believers (obviously not fundamentalists) are still going to accept that god’s ways are mysterious and not easily known.  In relation to that mysteriousness, that under-specificity of actual directives, I want to say that choosing to believe in god or not doesn’t turn out to be very momentous—at least not in terms of giving us clear moral/ethical guidelines.  Believers have disagreed vehemently about what the implications of their religious beliefs are for actual behavior.  Skipping the whole choice, being indifferent to the question of god’s existence (and I think that kind of indifference, not paying much mind to the question of god, is much more common than Taylor thinks it is), doesn’t allow us to escape disagreements about good behavior, but neither does it handicap us in any significant way from participating in such debates.

I don’t, in fact, think Taylor would disagree about this.  He isn’t at all interested in a moralistic religion—and he is also not committed to the notion that atheists can’t be moral, that their moral convictions and commitments rest on air.  Instead, Taylor argues that the choice is momentous because of the experience it opens up and makes possible—an experience of “deeper” meanings (a word he uses again and again without ever really telling us what is entailed in “deepness”) and a “transformed” relationship to life, the world, and others.  Again, the specifics of the transformation are awfully vague.  But the basic idea is clear enough: to those who open themselves up to a relationship to the transcendent, the very terms of life are different—fuller, more satisfying, and more likely to answer to a spiritual hunger that lurks within us.  So I guess Taylor’s advice to me would be: give it a try; see what changes come if you believe in god and try to establish a relationship to him.  I am free, of course, to say “I pass.”  What Taylor finds harder to credit is that my response to his offer could be indifference, a shrug of the shoulders.  He thinks my rejection of his offer must be driven by some animus against the believer and some flattering self-image as a courageous facer of the unpleasant facts of existence.

The funny thing about this is how individualistic it is, how much it hangs on the personal experience that belief generates.  It is one of the key differences between James and John Dewey that James’s vision is pretty relentlessly individualistic, while Dewey is the kind of communitarian critic of liberalism that Taylor has, throughout his long distinguished career, been.  In A Secular Age, however, Taylor is not interested in the community of believers.  Yes, he sees the cultural setting (the “background assumptions” that are a constant in his understanding of how human language and psychology operate) as establishing the very conditions that make unbelief even possible in a “secular age,” but he doesn’t read the consequences of belief/unbelief in a very communal way.  That’s because he has to admit that both believers and unbelievers have committed the same kinds of horrors.  He is very careful not to make the crude Christian argument that unbelievers like Stalin will inevitably kill indiscriminately, as if there wasn’t any blood on Christian hands or as if there have been no secular saints.  So he does not seem to say there is any social pay-off to widespread belief—at least not one we can count on with any kind of assurance.  But he does insist on the personal pay-off.

Here’s where Slezkine’s book comes in.  The kind of millennial religion he ascribes to the Bolsheviks is all about communal pay-off; they are looking toward a “transformation” of the world, not of personal selves and experience.  In fact, they are oriented toward a total sacrifice of the personal in the name of that larger transformation.  So it is to the terms of that kind of belief—in the dawning of a new age—that I will turn in my next post.