Category: Meaning and Life and the Humanities

The Future of the Humanities

For some time now, I have had a question that I use as a litmus test when speaking with professors of English. Do you think there will be professors of Victorian literature on American campuses fifty years from now? There is no discernible pattern, so far as I can tell, among the responses I get, which run the gamut from confident pronouncements that “of course there will be” to sharp laughter accompanying the assertion “I give them twenty years to go extinct.” (For the record: UNC’s English department currently has five medievalists, seven Renaissance scholars, and six professors teaching Romantic and Victorian literature—that is, if I am allowed to count myself a Victorianist, as I sometimes was.)

I have gone through four crises of the humanities in my lifetime, each coinciding with a serious economic downturn (1974, 1981, 1992, and 2008). The 1981 slump cost me my job when the Humanities Department in which I taught was abolished. The collapse of the dot-com boom did not generate its corresponding “death of the humanities” moment because, apparently, 9/11 showed us we needed poets. They were trotted out nationwide as America tried to come to terms with its grief.

Still, the crisis feels different this time.  Of course, I may just be old and tired and discouraged.  Not “may be.”  Certainly am.  But I think there are also real differences this time around—differences that point to a different future for the humanities.

In part, I am following up on my posts about curriculum revision at UNC. The coverage model is on the wane. The notion that general education students should gain familiarity with the whole of English literature is certainly moving toward extinction. Classes are going to be more focused, more oriented toward solving defined problems and imparting designated competencies. Methods over content.

But, paradoxically, the decline of the professors of Victorian literature is linked to more coverage, not less. The History Department can be our guide here. At one time, History departments had two or three specialists in French history (roughly divided by centuries), three or four in English history, along with others who might specialize in Germany or Spain or Italy. That all began to change (slowly, since it takes some time to turn over a tenured faculty) twenty or so years ago, when the Eurocentric world of the American history department was broken open. Now there needed to be specialists on China, on India, on Latin America, on Africa. True, in some cases, these non-European specialists were planted in new “area studies” units (Asian Studies, Latin American Studies, Near Eastern Studies, etc.). But usually even those located in area studies would hold a joint appointment in History—and those joint appointments ate up “faculty lines” formerly devoted to the 18th-century French specialist.

Art History departments (because relatively small) have always worked on this model: limited numbers of faculty who were supposed, somehow, to cover all art in all places from the beginning of time.  The result was that, while courses covered that whole span, the department only featured scholars of certain periods.  There was no way to have an active scholar in all the possible areas to be studied.  Scholarly “coverage,” in other words, was impossible.

English and Philosophy departments are, in my view, certain to go down this path. English now has to cover world literatures written in English, as well as the literatures of groups formerly not studied (not part of the “canon”). Philosophy now includes non-Western thought, along with practical, professional, and environmental ethics, and new interests in cognitive science.

There will not, fifty years from now, be no professors of Victorian literature in America. But there will no longer be the presumption that every self-respecting department of English must have one. Scholarly coverage will be much spottier—which means, among other things, that someone who wants to become a scholar of Victorian literature will know there are six places where that ambition can reasonably be pursued in graduate school, instead of (as is the case now) assuming that Victorian literature can be studied in any graduate program. Similarly, if 18th-century English and Scottish empiricism is your heart’s desire, you will have to identify the six philosophy departments where you can pursue that course of study.

There is, of course, the larger question. Certainly (or, at least, it seems obvious to me, although hardly to all those I subject to my litmus test), it is a remarkable thing that our society sees fit to subsidize scholars of Victorian literature. The prestige of English literature (not our national literature, after all) is breathtaking if you reflect upon it for even three seconds. What made Shakespeare into an American author, an absolute fixture in the American curriculum from seventh grade onwards? What plausible stake could our society be said to have in subsidizing continued research into the fiction and life of Charles Dickens? What compelling interest (as a court of law would phrase it) can be identified here?

Another paradox here, it seems to me.  I hate (positively hate, I tell you) the bromides offered (since Matthew Arnold at least) in generalized defenses of the humanities.  When I was (during my years as a director of a humanities center) called upon to speak about the value of the humanities, I always focused on individual examples of the kind of work my center was enabling.  The individual projects were fascinating—and of obvious interest to most halfway-educated and halfway-sympathetic audiences.  The fact that, within the humanities, intellectual inquiry leads to new knowledge and to new perspectives on old knowledge is the lifeblood of the whole enterprise.

But it is much harder to label that good work as necessary. The world is a better, richer (I choose this word deliberately) place when it is possible for scholars to chase down fascinating ideas and stories because they are fascinating. And I firmly believe that fascination will mean that people who have the inclination and the leisure will continue to do humanities work come hell or high water. Yes, they will need the five hundred pounds a year and the room of one’s own that Virginia Woolf identified as the prerequisites, but people of such means are hardly an endangered species at the moment. And, yes, it is true that society generally (especially after the fact, in the rear-view mirror as it were) likes to be able to point to such achievements, to see them as signs of vitality, culture, high-mindedness and the like. But that doesn’t say who is to pay. The state? The bargain up to now is that the scholars (as well as the poets and the novelists) teach for their crust of bread and for, what is more precious, the time to do their non-teaching work of scholarship and writing. Philanthropists? The arts in America are subsidized by private charity—and so is much of higher education (increasingly so as state support dwindles). The intricacies of this bargain warrant another post. The market? Never going to happen. Poetry and scholarship are never going to pay for themselves, and novels only very rarely do.

The humanities, then, are dependent on charity—or on the weird institution that is American higher education. The humanities’ place in higher education is precarious—and the more the logic of the market is imposed on education, the more precarious that position becomes. No surprise there. But it is no help when my colleagues act as if the value of scholarship on Victorian literature is self-evident. Just the opposite. Its value is extremely hard to articulate. We humanists do not have any knock-down arguments. And there aren’t any out there just waiting to be discovered. The ground has been too well covered for there to have been such an oversight. The humanities are in the tough position of being a luxury, not a necessity, even as they are also a luxury which makes life worth living as contrasted to “bare life” (to appropriate Agamben’s phrase). The cruelty of our times is that the overlords are perfectly content (hell, it is one of their primary aims) to have the vast majority possess only “bare life.” Perhaps it was always thus, but that is no consolation. Not needing the humanities themselves, our overlords are hardly moved to consider how to provide them for others.

More Comments on What We Should Teach at University

My colleague Todd Taylor weighs in—and thinks he also might be the source for my “formula.” Here, from Todd’s textbook, is his version of the three-pronged statement about what we should, as teachers, be aiming to enable our students to do.

  1. To gather the most relevant and persuasive evidence.
  2. To identify a pattern among that evidence.
  3. To articulate a perspective supported by your analysis of the evidence.

And here are Todd’s further thoughts:

“I might have been a source for the ‘neat formula’ you mention, since I’ve been preaching that three-step process as ‘The Essential Skill for the Information Age’ for over a decade now. I might have added the formula to the Tar Heel Writing Guide. I am attaching a scan of my textbook Becoming a College Writer where I distill the formula to its simplest form. I have longer talks on the formula, with notable points being that step #1 sometimes includes generating information beyond just locating someone else’s data. And step #3, articulating a perspective for others to follow (or call to action or application), is the fulcrum where ‘content-consumption, passive pedagogy’ breaks down and ‘knowledge-production, active learning’ takes off.

The high-point of my experience preaching this formula was when a senior ENGL 142 student shared with me the news of a job interview that ended successfully at the moment when she recited the three steps in response to the question ‘What is your problem solving process?’

In my textbook, I also have a potentially provocative definition of a ‘discipline’ as ‘a method (for gathering evidence) applied to a subject,’ which is my soft attempt to introduce epistemology to GenEd students. What gets interesting for us rhet/discourse types is to consider how a ‘discipline’ goes beyond steps #1 and #2 and includes step #3, so that a complete definition of ‘discipline’ also includes the ways of articulating/communicating that which emerges from the application of a method to a subject. I will forever hold onto my beloved linguistic determinism. Of course, this idea is nothing new to critical theorists, especially from Foucault. What might be new(ish) is to try to explain/integrate such ideas within the institution(s) of GenEd requirements and higher ed. I expect if I studied Dewey again, I could trace the ideas there, just as I expect other folks have other versions of the ‘neat formula.’”

Todd also raised another issue with me that is (at least to me) of great interest. The humanities are wedded, we agreed, to “interpretation.” And it makes sense to think of interpretation as a “method” or “approach” that is distinct from the qualitative/quantitative divide in the social sciences. Back to Dilthey. Explanation versus meaning. Analysis versus the hermeneutic. But perhaps it is even more than that, since quantitative/qualitative can be descriptors applied to the data itself, whereas interpretation is about how you understand the data. So there is no science, even with all its numbers, without some sort of interpretation. In other words, quantitative/qualitative doesn’t cover the whole field. There is much more to be said about how we process information than simply saying sometimes we do it via numbers and sometimes via other means.

Moral Envy and Opportunity Hoarding

One quick addendum to the last post—and to Bertrand Russell’s comment about how the traditionalist is allowed all kinds of indignation that the reformer is not. What’s with the ubiquity of death threats against anyone who offends the right wing in the United States? That those who would change an established social practice/pattern, no matter how unjust or absurd, deserve a death sentence is, to all appearances, simply accepted by the radical right. So, to give just one example, the NC State professor who went public with his memories of drinking heavily with Brett Kavanaugh at Yale immediately got death threats—as did some of his colleagues in the History Department. Maybe you could say that snobbish contempt for the “deplorables” is the standard left-wing response to right wingers—just as predictable as right wingers making death threats. But contempt and scorn are not solely the prerogative of the left, whereas death threats do seem to be mobilized only by the right.

Which does segue, somewhat, into today’s topic, which was to take up David Graeber’s alternative way of explaining the grand canyon between the left and right in today’s America. His first point concerns what he calls “moral envy.” “By ‘moral envy,’ I am referring here to feelings of envy and resentment directed at another person, not because that person is wealthy, or gifted, or lucky, but because his or her behavior is seen as upholding a higher moral standard than the envier’s own. The basic sentiment seems to be ‘How dare that person claim to be better than me (by acting in a way that I do indeed acknowledge is better than me)?’” (Bullshit Jobs: A Theory [Simon and Schuster, 2018], 248). The most usual form this envy takes, in my experience, is the outraged assertion that someone is a “hypocrite.” The right wing is particularly addicted to this claim about liberal do-gooders. The liberals, in their view, claim to be holier than thou, but know which side their bread is buttered on, and do quite well for themselves. They wouldn’t be sipping lattes and driving Priuses if they weren’t laughing all the way to the bank. Moral envy, then, is about bringing everyone down to the same low level of behavior—and thus (here I think Graeber is right) entails a covert acknowledgement that the general run of behavior is not up to our publicly stated moral aspirations. So we don’t like the people who make the everyday, all-too-human fact of the gap between our ideals and our behavior conspicuous. Especially when their behavior indicates that the gap is not necessary. It is actually possible to act in a morally admirable manner.

But then Graeber goes on to do something unexpected—and to me convincing—with this speculation about moral envy.  He ties it to jobs.  Basically, the argument goes like this: some people get to have meaningful jobs, ones for which it is fairly easy to make the case that “here is work worth doing.”  Generally, such work involves actually making something or actually providing a needed service to some people.  The farmer and the doctor have built-in job satisfaction insofar as what they devote themselves to doing requires almost no justification—to themselves or to others.  (This, of course, doesn’t preclude all kinds of dissatisfactions with factors that make their jobs needlessly onerous or economically precarious.)

Graeber’s argument in Bullshit Jobs is that there are not enough of the meaningful jobs to go around. As robots make more of the things that factory workers used to make, and as agricultural labor also requires far fewer workers than it once did, we have not (as utopians once predicted and as Graeber still believes is completely possible) rolled back working hours. Instead, we have generated more and more bullshit jobs—jobs that are make-work in some cases (simply unproductive in ways that those who hold the job can easily see) or, even worse, jobs that are positively anti-productive or harmful (sitting in an office denying people’s welfare or insurance claims; telemarketing; you can expand the list). In short, lots of people simply don’t have access to jobs that would allow them to do work that they, themselves, morally approve of.

Graeber’s point is that the people who hold these jobs know how worthless the jobs are. But they rarely have other options—although the people he talks to in his book do often quit these soul-destroying jobs. The political point is that the number of “good” jobs, i.e., worthwhile, meaningful ones, is limited. And the people who have those jobs curtail access to them (through professional licensing practices in some cases, through networking in others). There is an inside track to the good jobs that depends, to a very large extent, on being to the manor/manner born. Especially for the jobs that accord upper-middle-class status (and almost guarantee that one will be a liberal), transmission is generational. This is the “opportunity hoarding” that Richard Reeves speaks about in his 2017 book, Dream Hoarders. The liberal professional classes talk a good game about diversity and meritocracy, but they basically keep the spots open for their kids. Entry into that world from the outside is very difficult and very rare.

To the manner born should also be taken fairly literally. Access to the upper-middle-class jobs still requires the detour of education—and how to survive (and even thrive) at an American university is an inherited trait. Kids from the upper middle class are completely at home in college, just as non-middle-class kids are so often completely at sea. Yes, school can be a make-it or a break-it, a place where an upper-class kid falls off the rails and a place where a lower-class kid finds a ladder she manages to climb. But all the statistics, as well as my own experience as a college teacher for thirty years, tell me that the exceptions are relatively rare. College is a fairly difficult environment to navigate—and close to impossibly difficult for students to whom college’s idiolects are not a native language.

So two conclusions. 1. It is a mixture of class resentment and moral envy that explains the deep animus against liberal elites on the part of non-elites—an animus that, as much as racism does in my opinion, explains why the abandoned working class of our post-industrial cities has turned to the right. As bad as (or, at least, as much as) their loss of economic and social status has been their loss of access to meaningful work. Put them into as many training sessions as you want to transition them to the jobs of the post-industrial economy; you are not going to dispel their acute knowledge that these new jobs suck when compared to their old jobs in terms of basic worth. So they resent the hell out of those who still hold meaningful jobs—and get well paid for those jobs and also have the gall to preach to them about tolerance and diversity. 2. It is soul-destroying to do work you cannot justify as worth doing. And what is soul-destroying will lead to aggression, despair, rising suicide rates, drug abuse, and susceptibility to right-wing demagogues. Pride in one’s work is a sine qua non of a dignified adult life.

Religion, Sect, Party (Part Two)

Having given you Taylor’s definition of religion last time, I now want to move over to Slezkine’s discussion of religion (which then bleeds over into politics) in The House of Government.

He offers a few attempts at defining religion, the first from Steve Bruce: religion “consists of beliefs, actions, and institutions which assume the existence of supernatural entities with powers of action, or impersonal powers or processes possessed of moral purpose. Such a formulation seems to encompass what ordinary people mean when they talk of religion” (73; all the words in quotes are Bruce’s, not Slezkine’s). If we go to Durkheim, Slezkine says, we get “another approach. Religion, according to his [Durkheim’s] definition, is ‘a unified system of beliefs and practices relative to sacred things.’ Sacred things are things that ‘the profane must not and cannot touch with impunity.’ The function of the sacred is to unite humans into moral communities” (74).

Durkheim’s position is functionalist; religion serves human need, especially the needs of human sociality. Slezkine continues: “Subsequent elaborations of functionalism describe religion as a process by which humans create a sense of the self and an ‘objective and moral universe of meaning’ [Thomas Luckmann]; a ‘set of symbolic forms and acts that relate man to the ultimate conditions of his existence’ [Robert Bellah]; and, in Clifford Geertz’s much cited version, ‘a system of symbols which acts to establish powerful, pervasive, and long-lasting moods and motivations in men by formulating conceptions of a general order of existence and clothing these with such an aura of facticity that the moods and motivations seem uniquely realistic’” (74).

In Bruce’s terms, I don’t think I can be considered religious, since I think morality is uniquely human; I don’t think there are impersonal or divine processes/beings that have a moral purpose and are capable of acting to further that moral purpose.

But the Durkheim/functionalist positions seem closer to home. What I have been worrying for months on this blog concerns the “sacredness” of “life.”  Does taking life as sacred, as the ultimate value, as the thing that profane hands (the state, other agents of violence, the lords of capitalism) should not destroy or even render less full, fall within the realm of religion?  It does seem to aim at some of the same ends—certainly at establishing a “moral community” united by its reverence for life; certainly in establishing a “moral universe of meaning” underwritten by the ultimate value of life; and certainly in paying attention to “the ultimate conditions of existence,” i.e. the drama of life and death, of being given a precious thing—life—that can only be possessed for a limited time.

I am never sure what all this (that is, the “formal” consonance of religion with humanism) amounts to. If it is something as general as saying that the question of meaning inevitably arises for humans, and that the ways they answer that question have inevitable consequences for human sociality/communities, then the resemblance doesn’t seem to me to have much bite. It is so general, so abstract, a similarity that it doesn’t tell us anything of much import. It is like saying that all animals eat. Yes, but the devil is in the details. Some are vegetarians, some kill other animals for food, some are omnivores.

All human communities must be organized, in part, around securing enough food to live.  But hunter/gatherers are pretty radically different from agrarians—and all the important stuff seems to lie in the differences, not in the general similarity of needing to secure food.  I suspect it is the same for religion/atheism.  Yes, they must both address questions of meaning and of creating/sustaining livable communities, but the differences in how they go about those tasks are the significant thing.

More interesting to me is how both Taylor and Slezkine use Karl Jaspers’s notion of the “Axial Revolution.” Taylor leans heavily on Max Weber’s notion of a “disenchanted” world; Slezkine is interested in how the Axial Revolution displaces the transcendent from the here and now into some entirely separate realm. Or, I guess, we could say that the Axial Revolution creates the transcendent realm. In animist versions of the world, the sacred is in the here and now, the spirits that reside in the tree or the stream or the wind. The sacred doesn’t have its own special place. But now it is removed from the ordinary world—which is fallen, in need of salvation, and material/mechanical. Spirit and matter are alienated from one another. The real and the ideal do not coincide.

For Slezkine, then, every politics (like every post-Axial religion) has to provide a path for moving from here (the fallen real of the world we inhabit day by day) to there (the ideal world of moral and spiritual perfection). He is particularly interested in millennial versions of that pathway since he thinks revolutionaries are quintessential millennialists. And he clearly believes that all millennialists promise much more than they can deliver—and then must deal with the disappointment that inevitably follows from the failure of their predictions to come true.

That’s where I retain a liberal optimism—which is also a moral condemnation of the pessimist. My position, quite simply, is that some social orders (namely, social democracy as it has been established and lived in various countries, including Sweden, Denmark, and Canada) are demonstrably better than some other social orders if our standard is affording the means for a flourishing life to the largest number of the society’s members. Measurements such as poverty rates, education levels, and life expectancy can help us make the case for the superiority of these societies to some others.

The point is that the gap between the real and the ideal is actual—even in the best social democracies. But the point is also that this gap is bridgeable; we have concrete ways to make our societies better, and to move them closer to the ideal of a flourishing life for all. Pessimists take the easy way out, pronouncing (usually from a fairly comfortable position) that all effort is useless, that our fallen condition is incorrigible. A humanist politics, then, aims to relocate the ideal in this world (as opposed to exiling it to a transcendent other-worldly place), while also affirming that movement toward the ideal is possible—and should be the focus of our political efforts.

In these terms, the ideal is, I guess, transcendent in the sense that it is not present in the here and now.  The ordinary does not suffice even within a politics that wants to affirm the ordinary, the basic pleasures and needs of sustaining life.  But there is also the insistence that the ordinary supplies everything we need to improve it—and that such improvements have been achieved in various places at various times, even if we can agree that no society has achieved perfection. There is no need to appeal to outside forces, to something that transcends the human, in order to move toward the ideal.

How a society handles, responds to, the gap between now (the real) and the ideal seems to me an important way to think about its politics. Looking at 2018 America, it seems (for starters) that we have a deep division over what the ideal should be. The liberal ideal is universal flourishing. It seems very difficult not to caricature the ideal of liberalism’s opponents. I think it is fair (but they probably would not) to say their view is premised on the notion of scarcity. There is not enough of the good, life-sustaining stuff to go around—which generates endless competition for the scarce goods. In that competition, there is nothing wrong (in fact, it makes emotional and moral sense) with fighting to secure the goods for one’s own group (family, ethnicity, nation). A good (ideal) world would be one in which the scarce goods would go to those who truly deserve them (because they are hard workers, or good people, or “one of us”). But the real world is unfair: all kinds of cheaters and other morally unworthy types get the goods, so politics should be geared to pushing such moochers away from the trough. That seems to me to be the rightist mindset in this country these days.

But both sides seem to be humanists of my sort, since both seem to think politics can move us to the ideal in this world.  There is not some hope in a transcendent realm—or an orientation toward that realm.

Religion, Sect, Party

Even before quite finishing one behemoth (two chapters to go in Taylor’s A Secular Age), I have started another one, Yuri Slezkine’s The House of Government (Princeton UP, 2017). Surprisingly, they overlap to a fair extent. Slezkine pushes hard on his thesis that Bolshevism is a millennial sect and that its understandings of history and society follow time-worn Biblical plots, especially those found in Exodus and the Book of Revelation. I find his thesis a bit mechanical and overly reductive, an implausible one-size-fits-all. The strength of his book lies in its details, the multiple stories he can tell about the core figures of the Russian Revolution, not in the explanatory framework that he squeezes all those details into.

But Slezkine does offer some general speculations on the nature of religion, sects, and parties that I want to pursue at the moment. Taylor defines “religious faith in a strong sense . . . by a double criterion: the belief in transcendent reality, on one hand, and the connected aspiration to a transformation that goes beyond ordinary human flourishing on the other” (510). A fairly substantial component of Taylor’s argument is that most, if not all, people will feel a pull toward those two things; that settling for mundane reality and ordinary flourishing will leave people with a sense of “lack,” a haunting feeling that there must be more. He considers, very briefly, the idea that secularism entails people simply becoming indifferent to transcendence and to some kind of transformation beyond the ordinary—and rejects the possibility that such indifference has become, or even could become, common.

He pays more attention to the fact that the existence of a “transcendent reality” has simply become incredible to many people. But—and this is a major point for him—he insists that the evidence (of science or of anything else) cannot be decisive on this question—indeed, that evidence is not even the prime reason for unbelief in the transcendent. Rather, unbelief is underwritten by an ethos—one of bravely facing up to the facts, of putting aside the childish things of religious faith (the Freudian critique of the “illusion” that is religion).

I am not convinced. Am I full of contempt for the evangelicals who claim to be Christians, but are such noteworthy examples of non-Christian animus, gleefully dishing out harm to all they deem reprobate even as they accommodate themselves to the thuggery and sexual malpractices of Donald Trump? Of course. But Taylor has no truck with the fundamentalists either. His is the most anodyne of liberal Christianities; he has trouble with the whole idea of hell; basically (without his ever quite coming out and saying so) Taylor’s God does not consign people to eternal damnation. Instead, hell for Taylor gets associated with sin—both of them understood as the painful alienation from God that results from turning one’s back on the transcendent. Taylor, in other words, tiptoes away from judgment and punishment—believers aren’t supposed to be judging other humans or inflicting punishment upon them, and he is clearly uneasy with the image of a judging God. In fact, moralism (rigid rules of conduct) is one of his main enemies in the book. In its place, he urges us toward Aristotelian phronesis, which insists that judgments always be particular, attending to the novelties of the situation at hand.

But back to me. Aside from my contempt for the evangelicals and their hypocrisies and petty (and not so petty) cruelties to others, do I harbor a Freudian contempt for the believer? Does my unbelief, the fact that I find the notion that god exists simply incredible (meaning there is no way that how I understand existence has room for a divine being), rest on a self-congratulatory idea of my “maturity” as contrasted to those childish believers? It doesn’t feel that way. I find most Christians harmless, and have no beef with practicing Muslims and Jews. It’s only the fanatics of all religions, but equally the fanatics of godless capitalism, that I abhor. And I share that sentiment with Taylor. So I just don’t see that it’s some basic moralistic distinction I make between believers and unbelievers that drives my adoption of unbelief. It seems much more obvious that my understanding of the world has no place for a god, makes the very idea of a god, if not quite unthinkable (because so many other humans keep insisting there is one), at least unimaginable. I might as well try to imagine, believe in, a world that contains unicorns. My “picture” of the world just can’t accommodate a god.

Taylor several times evokes Wittgenstein’s idea of our being held “captive” by a picture. But Taylor also eschews the notion that some kind of argument (like the classic ones about god’s existence) or some kind of evidence could change the picture of unbelief to one of belief. He is very much in William James territory. Basically, his position is that the facts “underdetermine” the choice between belief and unbelief, that materialist science is not conclusive, and so the materialist, as much as the theist, rests his case, in the final analysis, on a leap of faith. This is the Jamesian “open space” in which we all exist. And then Taylor seems (without being explicit enough about this) to say that the deciding factor is going to be “experience” (shades of James’s Varieties of Religious Experience), where what follows (in the way of feelings, motivations, transformations) from making the leap of faith toward a god stands as the confirmation that belief is the right way to go. It’s the fruits of the relationship to a transcendent that Taylor wants to harvest, that make religious belief valuable in his eyes.

Here is where I wish Taylor had paid closer attention to James, particularly the essay “The Will to Believe.” In that essay, James says that choices have three features: they can be “live or dead” choices, “momentous or trivial” ones, or “forced or avoidable” ones. On this last one, James identifies the “avoidable” path as the result of indifference. If I say you must choose between the red and the white wine, you can answer “it’s all the same to me” or “I don’t want any wine at all.” You can, in short, avoid making the decision I am asking you to make. In the case of “live versus dead,” I can ask you whether you believe in Zeus or Zarathustra, and your reply can be “neither of those options is a true possibility for me; nothing in my way of life or my existing set of beliefs allows the question of believing in Zeus to be a real question for me.” Finally, “momentous/trivial” relates to what I think hangs on the choice; whether or not to have a child is momentous, with huge implications for my life and the life of others; what I choose to eat for dinner tonight is much less momentous, although not without some consequences (for my health, for the environment, etc.).

I bring this up because the choice of believing in god is not, at this point in my life, a “live” choice for me.  I have no more substantial grounds or inclination to believe in the Christian god than I do to believe in Zeus.  Furthermore—I am on shakier ground here but think this is true—I don’t find the choice of unbelief momentous.  It is just what I believe: there is no god.  James in that same essay also covers this ground: most of our beliefs are not chosen.  Even though I only have second-hand evidence of the fact (what is reported in books and the historical record), I am not free to believe that Abraham Lincoln never existed or that he was not a President of the US.  I can’t will myself into not believing in his existence.  Well, I feel the same way about god.  I can’t will myself into believing that god exists.  That there is no god is as settled a belief for me as my belief in Abraham Lincoln’s existence.  And I don’t see that very much hangs on those two beliefs.

How can that be, asks the incredulous believer? But (and, again, I am following James here) I think the believer often has cause and effect backwards. Pope Francis has just declared capital punishment unacceptable to believing Catholics; Antonin Scalia, a devout Catholic, was an advocate of capital punishment. So it is hard to see how the belief in god is the source of the conviction about capital punishment. Something else must motivate the position taken. Or, at the very least, the fact of believing in god is pretty radically underdetermining; god’s inscrutability is such that humans have to fill in many (most?) of the details.

It’s the same as Taylor’s revisionist views on hell. Humans keep tweaking their notion of what god wants in order to fit human ideas of what an acceptable god would look like. Even if you want to dismiss that kind of debunking statement about humans creating the god they can admire/respect, many believers (obviously not fundamentalists) are still going to accept that god’s ways are mysterious and not easily known. In relation to that mysteriousness, that under-specificity of actual directives, I want to say that choosing to believe in god or not doesn’t turn out to be very momentous—at least not in terms of giving us clear moral/ethical guidelines. Believers have disagreed vehemently about what the implications of their religious beliefs are for actual behavior. Skipping the whole choice, being indifferent to the question of god’s existence (and I think that kind of indifference, not paying much mind to the question of god, is much more common than Taylor thinks it is), doesn’t allow us to escape disagreements about good behavior, but neither does it handicap us in any significant way in participating in such debates.

I don’t, in fact, think Taylor would disagree about this. He isn’t at all interested in a moralistic religion—and he is also not committed to the notion that atheists can’t be moral, that their moral convictions and commitments rest on air. Instead, Taylor argues that the choice is momentous because of the experience—of “deeper” (a word he uses again and again without ever really telling us what is entailed in “deepness”) meanings and a “transformed” relationship to life, the world, and others—that it opens up, makes possible. Again, the specifics of the transformation are awfully vague. But the basic idea is clear enough: to those who open themselves up to a relationship to the transcendent, the very terms of life are different—and fuller, more satisfying, and more likely to answer to a spiritual hunger that lurks within us. So I guess Taylor’s advice to me would be: give it a try, see what changes come if you believe in god and try to establish a relationship to him. I am free, of course, to say “I pass.” What Taylor finds harder to credit is that my response to his offer could be indifference, a shrug of the shoulders. He thinks my rejection of his offer must be driven by some animus against the believer and by an admiring self-image as a courageous facer of the unpleasant facts of existence.

The funny thing about this is how individualistic it is, how much it hangs on the personal experience that belief generates. It is one of the key differences between James and John Dewey that James’s vision is pretty relentlessly individualistic, while Dewey is the kind of communitarian critic of liberalism that Taylor has, throughout his long, distinguished career, been. In A Secular Age, however, Taylor is not interested in the community of believers. Yes, he sees the cultural setting (the “background assumptions” that are a constant in his understanding of how human language and psychology operate) as establishing the very conditions that make unbelief even possible in a “secular age,” but he doesn’t read the consequences of belief/unbelief in a very communal way. That’s because he has to admit that both believers and unbelievers have committed the same kinds of horrors. He is very careful not to make the crude Christian argument that unbelievers like Stalin will inevitably kill indiscriminately, as if there weren’t any blood on Christian hands or as if there have been no secular saints. So he does not seem to say there is any social pay-off to widespread belief—at least not one we can count on with any kind of assurance. But he does insist on the personal pay-off.

Here’s where Slezkine’s book comes in. The kind of millennial religion he ascribes to the Bolsheviks is all about communal pay-off; they are looking toward a “transformation” of the world, not of personal selves and experience. In fact, they are oriented toward a total sacrifice of the personal in the name of that larger transformation. So it is to the terms of that kind of belief—in the dawning of a new age—that I will turn in my next post.

Materialism, Meaning, and the Humanities

Taylor’s theism is directed, in part, against a reductionist materialism, which would 1) in its utilitarian forms (which include Darwinian accounts) “reduce” human motivations to sustaining life (either that of the individual or of the species) and see all human behavior as driven by the efforts to seek pleasure or avoid pain; or 2) in its biochemical forms claim that all human behavior is a product of chemical reactions in the body.  He is adamant that there must be “something more” than this to explain human aspirations and behavior.

In particular, Taylor says there are three things a reductionist materialism cannot account for: 1. Any sense of there being non-human forces or powers to which we, as humans, can connect.  This, straightforwardly, is the place where “transcendence” makes its appearance.  There is something that transcends the exclusively human—and the experience of or faith in the existence of that transcendent something cannot be accounted for in reductionist materialist ontologies.

2. There is the observable fact that moral motivations play a large (although hardly exclusive) role in what humans do. There are issues of value—of what gives pleasure or what gives pain, what is seen as admirable, and standards apart from desire itself by which any particular desire is deemed endorsable or not. We subject our own desires and behavior, as well as the desires and behaviors of others, to judgment—and the materialist view has a hard time accounting for the standards that are deployed in our making of judgments. This is a version of the fact/value dichotomy—and Taylor (I think) is sympathetic to the pragmatist view (most fully articulated by Hilary Putnam, but clearly already there in William James and wonderfully expressed by Kenneth Burke) that we are always already valuers, that our attention to things (to facts, to what is the case) is driven by what “concerns” us, what we think matters, is significant.

3. Finally, we have aesthetic responses, finding beauty in some things, and turning away in disgust from others, along with desires to produce such artistic objects and to spend time in their contemplation and consumption.  We might say that here we find admiration for work well done—for accomplishments that go beyond just getting the job done, just being “good enough.”  Standards of excellence are applied in all kinds of fields—from artistic endeavors to athletic ones to simply the “style” and competence with which the most ordinary tasks are done.

Taylor does not insist that only faith in a transcendent can underwrite objections to reductionist materialism. But what he does show is that religion (at least in some cases) shares a cause with the humanities: the cause of showing that there is something more than materialist satisfactions (the utility-maximizing rational individual of classical economic theory) that “matters” to human beings. The humanities are also committed to a sense that humans derive (find) meaning in a variety of activities and relationships that are not captured by a single-minded pursuit of utility.

Of course, ever since Matthew Arnold (at least), the humanities’ attempts to describe those sources of non-utilitarian meaning have come across as pretty desperate, a kind of hysterical special pleading. In fact, the humanities seem caught between two antithetical strategies in such presentations of their value. Either they try to demonstrate that the humanities have a utility value, just one that is not reducible to pleasure/pain or straightforward economic gain. Or they try to argue for the uselessness of the aesthetic and of knowledge for its own sake, finding in such non-utility a welcome respite from the obsessions and demands of a consumer culture, where getting and spending rules over all time and effort.

I am more inclined to go the “meaning route.”  That is, I don’t want to focus on what the humanities and the arts “do” for the person who either pursues them actively or consumes them somewhat more passively.  In other words, I am not very attracted to or convinced by the Martha Nussbaum type arguments about how reading the classics (from Lucretius to George Eliot and Henry James) makes us better moral subjects and better democratic citizens.  Perhaps she is right.  But I’d hate to be committed to saying that those who do not do the requisite reading are somehow doomed to be deficient moral subjects and citizens.

Rather, I think it more demonstrably (phenomenologically) true that subjects locate meaning through processes of valuation that prove much more multifarious than any utilitarian or Darwinian calculus can account for.  In particular, I would push the thought that what is found valuable (and hence worth striving to create and working to sustain) is much more the product of a self’s relation to, embeddedness in, others and the non-human world than the utilitarian/Darwinian account would suggest.  Which is to say that, along with Dewey, I believe “morality is social.”  Morality, in this case, covers both what contemporary philosophy (following Bernard Williams) calls morals (rules of conduct mostly directed toward establishing and maintaining optimal relations to others and to the world) and ethics (questions pertaining to what is the “good life,” of what ends I—and others—should pursue).  All the issues pertaining both to morals and ethics are worked out, thought through, acted upon, and subject to the judgment of others within the ensemble of social relations and practices in which the self is embedded.

Does that mean that “society” plays the role of the “transcendent” in my form of humanism?  I am willing to accept that characterization of my position.  The social is the “horizon” (to use that term from phenomenology) within which judgments of meaning and value are made.  The humanities, then, would become the study of how those judgments were/are made by various different people situated in various different societies.  But not just how those judgments were made, but also what those judgments were/are.  The humanities and the arts, as is often said—and Taylor argues that the same is true for religion—proceed by way of exemplars.  There are no hard and fast rules for making judgments—and there is no way to proclaim apodictic truth for any particular judgment.  Which is not to say that there are no reasons one can offer for one’s own judgments.  But we should fully expect that such reasons will prove more convincing to some than to others—and that the extent to which reasons are convincing will depend quite heavily on the social context from within which those reasons are heard and evaluated.

Does this all entail cultural relativism? Yes, to some extent. I will in a subsequent post return to William James’s notion of a “live option.” There are demonstrably judgments and choices that were “live options” in the past that are no longer so. Unlike someone living in the 1845 South, I cannot actively entertain the question of whether I should purchase a slave. This is not simply because no slaves are available to buy. It is also because, situated where I am in history and culture, being a slave owner is unthinkable for me.

But it is only relativism to a certain extent because cultures are not monolithic; they are in dialogue with other cultures (and with the past), as well as internally riven with all kinds of debates about proper judgments concerning morals and ethics.  The person living in the 1845 South could not be unaware that some of his fellow American citizens found slave-owning abominable.  Being within a culture can isolate someone from others who hold contrary views, but it cannot completely shield him from knowing about those who would dispute his views.  The humanities, we might say, are committed to airing all such disputes—opening out toward the historical record, to other cultures, and to the debates within one’s own culture.  The humanities stake a lot on the idea that the pursuit of meaning and values should be undertaken in and through exposure to as wide a set of judgments as possible.

This open-mindedness of the liberal arts (of the humanities) is, of course, anathema to those who wish to ensure the triumph of one particular set of values over another. All tyrannies try to shut down the public sphere, the full and raucous airing of multiple views. Established religions have often been guilty of just such attempts to stifle discussion and debate. Taylor, of course, recognizes that fact. Hence he has to be very tolerant of non-religious humanists. His position seems to be that the humanist is missing out on something, on a good thing, by not opening up to a relation with the transcendent (as contrasted to accusing the humanist of heresy). At issue, I presume, is whether the transcendent of one’s relation to others and to the world is “enough.”

Enough for what?  For fully realizing the potential of life?  It seems like it would have to be something like that.  But I am not sure—and will return to this issue in subsequent posts.

For now, I will finish by considering the relation to the non-human. I am not inclined (as is obvious by now) to find in the non-human—be it God, Nature, or some kind of life force/energy—a source of meaning. Yet that does not entail denying that non-human forces and energies exist. There are natural processes—erosion, earthquakes, weather cycles, etc.—that exist apart from the human; they pre-existed the human and will, most likely, exist after humans are extinct. There are also non-human creatures, some of whom pre-existed us and others of whom (I assume) will outlive the human species.

Moral questions involve, among other things, considering how we value those non-human forces/creatures and what the optimal relations are in which to stand to them. Am I committed to the notion that whatever meaning and value those non-human forces possess are meanings and values that we, as humans, have created? Yes, I am committed to that view. Does that mean that non-human forces can only have meaning/value insofar as they relate to (even serve) human concerns? That’s a tougher one. I’d like to think (but don’t fully know how to make this stick) that we humans can value something with which we share the world (whether that sharer is human or non-human) for its own sake. That is, I can fully acknowledge the other’s right to exist, and to flourish, without seeing the other’s existence as benefiting me in some way. Here is Kant’s “kingdom of ends.” That it is humans who see/designate others as ends-in-themselves does not logically entail that such a view is impossible to achieve.

What would be the reason(s) advanced for such a view?  One could be the reciprocity argument.  I am no more responsible for my presence on earth than is my neighbor or a butterfly.  Since I fully expect others to grant my right to be here, it is consistent that I grant their right to be here as well.  Otherwise, I would have to have some argument that would explain why I have more right to be here than the other creatures and processes that I find in the world about me.  Of course, such arguments for the “special status” of humans are rampant in human history, and most religions offer some version of such arguments.  Hence only humans get to be immortal or made in God’s image in Christianity.  There is also the Darwinian/Nietzschean route of saying we live in a totally amoral universe, where it is eat or be eaten, so it is not a question of “special status” for the human, or even for me and/or my tribe, just a struggle for life and death.  But if we accept that moral considerations do have some force in human motives and actions, then the challenge of justifying the “special status” of all humans or of some sub-set of humans is likely to be taken seriously.

A second set of reasons would be more holistic, more ecological.  The idea here would be that the world is sustained (in part) by a set of natural processes that unfold without human direction, but that can be altered by human action/intervention.  We are slowly discovering that such human actions/interventions often have drastic by-products, ones that threaten the sustainability of the world.  Our presumptions of control over the non-human have had bad consequences.  We would be much better off walking with a much lighter tread, leaving others and the non-human to live in peace, exempt from any interference from us.

Are those natural processes transcendent? In a strict sense, I guess the answer is yes. They are certainly non-human. But they are not transcendent in the more religious sense because they are not, in my view, a source of meaning, or some kind of “personal” entity to which we can have a call-and-response (dialogic) relation. Taylor persistently wants to reject the “impersonal universe” he associates with modern secularism, while I am fully guilty of finding the non-human “impersonal.” We stand in relation to the non-human, and can have a drastic impact on its functioning, but I don’t think we can be in dialogue with it, and I don’t think we can establish a relation to it that generates meanings except insofar as we, as humans, find value in the non-human (something which occurs all the time).

Am I fully satisfied with these formulations?  Far from it.  I am using Taylor to sort through my own commitments/intuitions, even as his book challenges me to offer a coherent (and convincing) account of how I justify/understand the assumptions/claims that must underwrite those commitments.  And I am finding that I stand on very shaky ground.

Perfectionism and Liberalism

Adam Gopnik has become one of the most astute theorists/apologists for liberalism, even though his thoughts on that subject simply come as asides in the occasional pieces he writes for the New Yorker.  In the July 30, 2018 edition, in a review of a book about the utopian fictions of the 1890 to 1910 period, he has this to say: “Liberalism is a perpetual program of reform, intended to alleviate the cruelty we see around us.  The result will not be a utopia but merely another society, with its own unanticipated defects to correct, though with some of the worst injustices—tearing limbs from people or keeping them as perpetual chattel or depriving half the population of the right to speak to their own future—gone, we hope for good.  That is as close as liberalism gets to utopia: a future society that is flawed, like our own, but less cruel as time goes on.”

The complaint of non-liberals is that liberals aim too low, that they timidly rule out as impossible things they should be fighting to accomplish. And surely there is much to be said for the view that liberals are particularly ineffective if they are not constantly pushed by a more radical “left.” On the other hand, liberal timidity, what Judith Shklar memorably called “the liberalism of fear,” is a commitment to minimizing concentrations of power and maximizing the distribution of power in order to prevent tyranny. Power deployed for economic gain and power deployed to bring about a utopian vision of solidarity/common effort are equally to be feared. Pluralism is the byword, also known as liberal “permissiveness.” As much as possible, keep to an absolute minimum the power of any entity (be it state, business, church, or another person) to dictate to me the terms of my life.

Another common critique of liberalism comes from a different direction.  The issue here is not that liberals don’t fight hard enough for the justice they claim to cherish, but that the individualism that liberal permissiveness establishes is unsatisfying.  Left to their own devices, individuals will either (this is the elitist, right-wing critique of liberal individualism) choose “low,” materialist desires that are undignified and recognizably bestial or (the left-wing, “communitarian” critique) be left adrift, exiled from all the kinds of intersubjective associations/relationships that actually make life meaningful.

In short, a straightforward “materialism,” which accepts that our primary motives are for bodily comforts and other basic pleasures—what I called “hedonism” a few posts back—is deemed insufficient for a “full” (now the term is Charles Taylor’s) human life. There must be more, Taylor keeps saying.

Here’s my dilemma—and kudos to Taylor for bringing it home so forcefully.  A certain version of materialism, with its notion that personal interest in securing material goods plus the psychological satisfactions of familial love and social respect are primary and “enough,” reigns among the aggressive right wing in the US today.  The old-line conservative, elitist critics of the Allan Bloom and Harold Bloom sort are just about total dinosaurs now.  The current right wing scorns elites and their fancy views of human dignity and attachment to “higher” things.  “Freedom” for Samuel Alito is complete liberal permissiveness in economic matters, tied to a lingering moralistic attempt to suppress non-economically motivated “vices.”

So I certainly want to combat what Taylor insists is the “reductionism” of a materialist utilitarianism—the notion that all value resides in the extent to which something contributes to well-being, with “well-being” defined in very restrictive, mostly economic, terms.  The humanities, as a whole, have understood this as the battleground: the effort to get the public and the body politic to accept (and to act on that acceptance) the value of non-economically motivated or remunerated activities. (In a future post, I will return to this topic and try to think through what the “more” is that a secularist humanities would offer.)

What path should one take in this effort to combat economistic utilitarianism?  Taylor writes that “the question [that] arises here [is] what ontology can underpin our moral commitments” (607).  Now, of course, Richard Rorty (of whom more in a moment) would argue that we needn’t have any ontology to underwrite our commitments, that the whole (traditional) philosophical game of thinking that “foundations” somehow explain and/or secure our commitments is a misunderstanding of how human psychology works.  (Basically, Rorty is accepting William James’s insistence that we have our commitments first and then invent fancy justifications for them after the fact.)  The critics reply (inevitably) that Rorty thus shows that he has an ontology after all—basically, the ontological description of “human nature” that is James’s psychology.  If, like Rorty, you are committed to the liberal ideal (as expressed by Gopnik, who is, consciously or not, channeling Rorty on this point) of reducing cruelty, then you are going to undertake that work in relation to how you understand human psychology.  In Rorty’s case, that means working on “sensibility” and believing that affective tales of cruelty, which awaken our disgust at such behavior, will be more effective than Kantian arguments about the way cruelty violates the categorical imperative.

The Humean (and Rorty, like Dewey, is a complete Humean when it comes to morality/ethics) gambit is that humans have everything they need in their normal, ordinary equipment to move toward less cruel societies.  We don’t need “grace” or some other kind of leg up to be better than we have been in the past.  Our politics, we might say in this Humean vein, consist of the rhetorical, legal, and extra-legal battles waged between those who would “liberate” the drives toward economic and other sorts of power and accumulation versus those who would engage the “sympathetic” emotions that highlight cooperation and affective ties to our fellow human beings.  The Humean liberal, therefore, will endorse political arrangements that do not stifle ordinary human desires (for sex, companionship, fellowship, material comforts, recognition, the pleasures of work and play) while working against all accumulations of power that would allow someone to interfere in the pursuit of those ordinary desires.

What Taylor argues is that this liberal approach is not enough.  And it is “not enough” in two quite different ways.  First, it is not enough because it still leaves us with a deep deficit of “meaning.”  It is a “shallow” conception of human life, one that does not answer to a felt (and everywhere demonstrated) need for a “fuller” sense of what life is for and about.  Humans want their lives to connect up to something greater than just their own self-generated desires. (I have already, in a prior post, expressed my skepticism that this hankering for a “deeper meaning” is as widespread, even universal, as Taylor presumes.  To put it bluntly, I believe many more people today–July 28, 2018–are suffering from physical hunger than from spiritual hunger.) People, in Taylor’s view, want to experience the connection of their desires to some “higher” or “larger” purpose in things.  So the ontology in question is not just a description of “human nature” but also of the non-human—and a description of how the human “connects” to that non-human.  You can, of course, claim (like the existentialists) that there is no connection, that we are mistaken when we project one and would be better off getting rid of our longing for one, but that is still an ontological claim about the nature of the non-human and about its relation to the human.  In that existentialist case, you are then going to locate “meaning” (à la Camus) in the heroic, if futile, human effort to create meaning within a meaningless universe.

Taylor’s second objection to Humean naturalism is more interesting to me because I find it much more troubling, much more difficult to think through given my own predilections.  Put most bluntly, Taylor says (I paraphrase): “OK, your naturalistic account posits a basic ‘sympathy’ for others within the human self.  But, by the same token, your naturalistic account is going to have to acknowledge the aggressive and violent impulses within the self.  Your liberal polity is going to have to have some strategy for handling or transforming or suppressing those violent tendencies.  In short, there are desires embedded in selves that are not conducive to ‘less cruel’ futures, so what are you going to do about them?”

Taylor’s own position is clear.  He doesn’t use the term “perfectionist” (that, instead, is a recurrent feature of Stanley Cavell’s objections to Deweyean pragmatism), but he is clearly (at least in my view) in perfectionist territory.  Taylor is certainly insistent that what non-religious views (those that adhere to a strictly “immanent frame”—his term) miss is a drive toward “transformation” that is often motivated or underwritten by the desire to connect to some “transcendent.”  Liberal “permissiveness” doesn’t recognize, or provide any space for, this urge to transformation—or for the fact that those who pursue this goal most fervently are often the humans we most admire.  Self-overcoming, we might say, is viewed more favorably than simply “care of the self.”  Taylor is very, very good on how the arguments about all this go—with the liberal proponents of care of the self seeing the self-overcomers as dangerous, with their heroic visions that tend toward utopia-seeking tyranny or a religious denigration of the ordinary, the here and now; and the proponents of transformative striving seeing the liberals as selfish, limited in vision, stuck in the most mundane and least noble/dignified of the possible human ways to live a life, to pursue and achieve meaning.

I am clearly of the non-heroic camp, but the challenge Taylor poses is most difficult for me when he says that even the liberal aims at a transformation of human nature, of built-in human desires, insofar as the liberal seeks to minimize violence and even to banish it entirely.  The conundrum: how do you either transform or (where necessary) suppress desire without being tyrannical?  The easy way out is to say it is not tyrannical to suppress the rapist.  But that just gets us into the business of deciding which desires are so beyond the pale that their suppression is justified, as contrasted with the desires we should let express themselves.  The prevailing liberal answer to that problem remains Mill’s harm principle—which is, admittedly, imperfect but the best we’ve got on hand.

Meanwhile, it would seem that liberals should also be working on another front to transform those violent desires so that the need for suppression wouldn’t arise as often.  Liberals, that is, can’t completely sidestep a “perfectionist” ethics, one that seeks to re-form some basic attributes of human nature–as it has so far manifested itself in history. To put it in the starkest terms: every human society and every moment in human history has manifested some version of war.  Yet the liberal is committed (in utopian fashion) to the idea that war is not inevitable, that we can create a world in which wars would not occur.  But the path to that war-free world must involve a “perfectionist” transformation of what humanity has shown itself to be up to our current point in time.  The issue then becomes: “What is the perfectionist strategy to that end?”  How does the Humean liberal propose to get from here (war) to there (perpetual peace)?

Taylor is not denying that the liberal has possible strategies.  But he thinks those strategies are “excarnated”—divorced from the body and emotion, the opposite of “incarnated.” This is Taylor’s version of the familiar critique that liberalism is “bloodless,” that it disconnects the body from the mind in its celebration of the disengaged, objective spectator view of knowledge at the same time that it extracts individuals (in the name of autonomy) from their embedding in social practices and social communities.  The ideal liberal self stands apart, capable of putting everything to the question, including the most basic constituents of his life (his own desires and his own relations to others).  This is Rorty’s liberal ironist, cultivating a certain distance from everything, even his own beliefs.  The liberal, then, only has “reasons”—the consequentialist argument that life would be more pleasant, less “nasty, brutish, and short”—if we managed to stop war, stop being violent and cruel to one another.  Or, if we go the Humean/Rorty route, the liberal can work to enhance the inbuilt “sympathy” that makes us find cruelty appalling—and mobilize that sentiment against the other sentiments that lead to finding violence thrilling, pleasurable, or ecstatic.

Taylor, instead, favors a non-liberal route that avoids “excarnation,” one that recognizes that “in archaic, pre-Axial forms, ritual in war or sacrifice consecrates violence; it relates violence to the sacred, and gives a numinous depth to killing and the excitement and inebriation of killing; just as it does through the rituals mentioned above for sexual desire and union” (611-612).  The Christian experience/virtue of agape, Taylor insists, is fully bodily and emotional—and affords a sense of connection to non-human, transcendent powers and purposes.  And there can be a similar sense of connection in expressions/experiences of violence.

Of course, Taylor relies here on the “containment” that ritual performs.  A safe space, we might say, is created for the expression of violence—a space that highlights the connection to the transcendent that violence can afford but that also keeps that violence from getting out of hand.  (I continue to be very interested in all the ways violence is “contained.”  Why don’t all wars become “total”?  Why do states (in dealing with criminals), or other authority figures like parents, stop short of total violence, of killing?  Think of spanking: how it is ritualized, how it stops short of doing real physical harm—or how, in other instances, it pushes right through that boundary and does lead to real physical harm.  What keeps the limits intact in one case and not in the other?)

But the ritual is not only “containment” for Taylor; it is also a path toward “transformation.”  Think of how the ritual of marriage transforms the love relation between the two partners.  Do we really want to argue that marriage is meaningless, that it does not change anything between the couple?  The marriage ritual is not, as we all well know, magically efficacious—but that hardly seems to justify claiming it has no effect at all.  What Taylor is pointing toward is some kind of similar ritual(s) to deal with aggressive desires (a complement to marriage’s relation to potentially anarchic and violent sexual desires).

So what Taylor thinks we lose if we are a-religious secularists is this way (habit?) of thinking about the connection between desires found in selves and some kind of larger forces out there in the universe.  And losing that sense of connection means losing any taste (or search) for rituals that take individual desire and place it in relation to those larger, non-human forces.  As a result, we lose an effective strategy for the transformation of those desires into something more “perfect,” more in accord with our (utopian?) visions of what human life could be—where that utopian vision, in Taylor’s case, includes both a more meaningful life on the personal level (since connected to powers and purposes beyond the isolated self) and a more just, less cruel society, because rituals contain the destructive potential of sex and violence.

Rorty’s alternative is instructive if we consider the modesty, the anti-utopianism, of liberalism.  Rorty doesn’t rule out perfectionism (that would violate liberal permissiveness), but he relegates it to the “private” sphere.  Self-overcoming is all well and good—from training for marathons to trying to overcome one’s tendencies to anger—but it is a “project” undertaken by a self, not a path mandated by any other power.  The “public” sphere is devoted (for Rorty) to overcoming cruelty and to something like a minimal social justice (making sure everyone has the means to sustain life).  But any public mandate to “transformation” opens the path to tyranny.  What this Rortyean formula leaves unanswered is whether the public (think of the French Revolutionaries and their festivals) should strive to create rituals for the expression/transformation of basic desires.  These rituals need not be mandatory, but could still be useful in the effort to curb cruelty and heighten (emotional and moral) commitment to social justice.  That is, even a minimalist public sphere (in terms of what it hopes to achieve and in terms of how much it leaves to the discretion of individuals when it comes to where they find meaning and how they spend their time) might still benefit from not being so minimalist in terms of the occasions for public gatherings and rituals that it provides.

Let me end here by saying that I am one of those anti-clerical, anti-religious people (so well described by Taylor) who worry that religion’s focus on the transcendent implies a neglect of, even a contempt for, the ordinary.  I am always troubled by a search for salvation—whether that search takes a religious or a utopian form.  I think we are better off if, as Gopnik puts it, we accept the imperfections of the human condition, and work on improving that condition, without thinking that some kind of “transformation” will change our lot very dramatically or, once and for all, ensure that peace and justice will reign undisturbed from now on.

In my most extreme moments, I want to say not only that we can’t be “saved” from the human condition as we now experience it, but that we don’t need to be “saved.”  What we need is to take up the work at hand, work that is fairly obvious to anyone who looks around and sees the rising temperatures and the homeless people on our streets and the people going bankrupt trying to pay medical bills.  There isn’t a “transformation” of political or religious/ethical reality that is going to address such issues.  It’s the gritty, down-to-earth work of attending to those issues that will lead to some desirable changes, although not to the end of all our cares and worries.  In short, I am a secularist insofar as I don’t think help is coming from elsewhere.  I have no faith that there are non-human powers to which we can connect—and that those powers will enable some kind of “transformation” that will solve our (humanly created) problems.

“Perfectionism” is a fully permissible add-on, but please pursue it on your own time (i.e., I accept the Rortyean notion that it is “private”), while the “public” realm of law and politics will demand that you act decently toward your fellows.  Still—with all that—I acknowledge that Taylor poses a significant challenge when he says that even the liberal (whether a Humean or a Kantian liberal) will look to “transform” certain human desires in the name of a more just and less violent society.