Author: John McGowan

Secular Ethics

I am about one-third of the way through Martin Hägglund’s This Life: Secular Faith and Spiritual Freedom (Pantheon Books, 2019), of which more anon.

But I have been carrying around in my head for over seven months now my own build-it-from-scratch notion of ethics without God.  The impetus was a student pushing me in class last fall to sketch out the position—and then the book on Nietzsche’s “religion of life” that I discussed in my last post (way too long ago; here’s the link).

So here goes.  The starting point is: it is better to be alive than dead.  Ask one hundred people if they would rather live than die and ninety-nine will choose life.

A fundamental value: to be alive.

First Objection:

Various writers have expressed the opinion that it is best not to have been born, since this life is just a constant tale of suffering and woe.  Life’s a bitch and then you die.

Here’s Ecclesiastes, beginning of Chapter 4:

“Next, I turned to look at all the acts of oppression that make people suffer under the sun. Look at the tears of those who suffer! No one can comfort them. Their oppressors have all the power. No one can comfort those who suffer. I congratulate the dead, who have already died, rather than the living, who still have to carry on. But the person who hasn’t been born yet is better off than both of them. He hasn’t seen the evil that is done under the sun.”

Here’s Sophocles’ version of that thought, from Oedipus at Colonus:

“Not to be born is, beyond all estimation, best; but when a man has seen the light of day, this is next best by far, that with utmost speed he should go back from where he came. For when he has seen youth go by, with its easy merry-making, what hard affliction is foreign to him, what suffering does he not know? Envy, factions, strife, battles, and murders. Last of all falls to his lot old age, blamed, weak, unsociable, friendless, wherein dwells every misery among miseries.”

And here is Nietzsche’s version, which he calls the “wisdom of Silenus” in The Birth of Tragedy:

“The best of all things is something entirely outside your grasp: not to be born, not to be, to be nothing. But the second best thing for you is to die soon.”

Second Objection:

As Hägglund argues, many religions are committed to the notion that being alive on earth is not the most fundamental good.  There is a better life elsewhere—a different thought than the claim that non-existence (not to have been born) would be preferable to life.

Response to Objections:

The rejoinder to the first two objections is that few people actually live in such a way that their conduct demonstrates a genuine belief that non-existence or an alternative existence is preferable to life on this earth.  Never say never.  I would not argue that no one has ever preferred an alternative to this life.  But the widespread commitment to life and its continuance on the part of the vast majority seems to me enough to go on.  I certainly don’t see how that commitment can appear a weaker starting plank than belief in a divine prescriptor of moral rules.  I would venture to guess that the number of people who do not believe in such a god is greater than the number who would happily give up this life for some other state.

Third Objection:

There are obvious—and manifold—reasons to choose death over life under a variety of circumstances.  I think there are two different paths to follow in thinking about this objection.

Path #1:

People have always had things that they value more than life.  They are willing (literally—and it is crucial that it is literally) to die for those things.  Hence the problem of establishing “life” as the supreme value.  Rather, what seems to be the case is that life is an understood and fundamental value—and that we demonstrate the truly serious value of other things precisely by being willing to sacrifice life for them.  To put one’s life on the line is the ultimate way of showing where one’s basic commitments reside.  This is my basic take-away from Peter Woodford’s The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (U of Chicago P, 2018; the book discussed in my last post).  To use Agamben’s term, “bare life” is not enough; it will always be judged in relation to other values.  A standard will be applied to any life; its worth will be judged.  And in some cases, some value will be deemed of more worth than life—and life will be sacrificed in the name of that higher value.  In other words, “life” cannot be the sole value.

I am resolutely pluralist about what those higher values might be for which people are willing to sacrifice life.  My only point is that an assumed value of life provides the mechanism (if you will) for demonstrating the value placed on that “other” and “higher” thing.  In other words, the fact (gift?) of life—and the fact of its vulnerability and inevitable demise (a big point for Hägglund, to be discussed in the next post)—establishes a fundamental value against which other values can be measured and displayed.  Without life, no value. (A solecism in one sense.  Of course, if no one were alive, there would be no values.  But the point is also that there would be no values if life itself were not valued, at least to some extent.) Placing life in the balance enables the assertion of a hierarchy of values, a reckoning of what matters most.

Path #2:

It is possible not only to imagine, but also to put into effect, conditions that make death preferable to life.  As Hannah Arendt put it, chillingly, in The Origins of Totalitarianism, the Nazis, in the concentration camps and elsewhere, were experts in making life worse than death. Better to be dead than to suffer various forms of torture and deprivation.

I want to give this fact a positive spin.  If the first plank of a secular ethics is “it is better to be alive than dead,” then the second to twentieth planks attend to the actual conditions on the ground required to make the first plank true.  We can begin to flesh out what “makes a life worth living,” starting with material needs like sufficient food, water, and shelter, and moving on from there to things like security, love, education, health care, etc.  We have various versions of the full list, from the UN’s Universal Declaration of Human Rights to Martha Nussbaum’s list of “capabilities.”

“Bare life” is not sufficient; attending to life leads quickly to a consideration of “quality” of life.  A secular ethics is committed, it seems to me, to bringing about a world in which the conditions for a life worth living are available to all.  The work of ethics is the articulation of those conditions.  That articulation becomes fairly complex once some kind of baseline autonomy—i.e., the freedom of individuals to decide for themselves what a life worth living looks like—is made a basic condition of a life worth living.  [Autonomy is where the plurality of “higher values” for which people are willing to sacrifice life comes in.  My argument would be 1) no one should be able to compel you to sacrifice life for their “higher value,” and 2) you are not allowed to compel anyone to sacrifice life for your “higher value.”  But what about sacrificing your goods—through taxes, for example?  That’s much trickier and raises thorny issues of legitimate coercion.]

It seems to me that a secular ethics requires one further plank.  Call it the equality principle.  Simply stated: no one is more entitled to the basic conditions of a life worth living than anyone else.  This is the minimalist position I have discussed at other times on this blog.  Setting a floor to which all are entitled is required for this secular ethics to proceed.

What can be the justification for the equality principle?  Some kind of Kantian universalism seems required at this juncture.  To state it negatively: nothing in nature justifies the differentiation of access to the basic enabling conditions of a life worth living.  To state it positively: to be alive is to possess an equal claim to the means for a life worth living.

Two complications immediately arise: 1. Is there any way to justify inequalities above the floor?  After everyone has the minimal conditions met, must there be full equality from there on up?  2.  Can there be any justification for depriving some people, in certain cases, of the minimum? (The obvious example would be imprisonment or other deprivations meted out as punishments.)

Both of these complications raise the issue of responsibility and accountability.  To what extent is the life that people have, including the quality of that life, a product of their prior choices and actions?  Once we grant that people have the freedom to make consequential choices, how do we respond to those consequences?  And when is society justified in imposing consequences that agents themselves would strive to evade?

No one said ethics was going to be easy.  Laws and punishments are not going to disappear.  Democracy is meant to provide a deliberative process for the creation of laws and sanctions—and to provide the results of those deliberations with legitimacy.

All I have tried to do in this post is to show where a secular ethics might begin its deliberations—without appealing to a divine source for our ethical intuitions or for our ethical reasonings.

The Tree of Life

I have just finished reading Richard Powers’s latest novel, The Overstory (Norton, 2018).  Powers is his own distinctive cross between a sci-fi writer and a realist.  His novels (of which I have read three or four) almost always center on an issue or a problem—and that problem is usually connected to a fairly new technological or scientific presence in our lives: DNA, computers, advanced “financial instruments.”  As with many sci-fi writers, his characters and his dialogue are often stilted, lacking the kind of psychological depth or witty interchanges (“witty” in the sense of clever, off-beat, unexpected rather than funny) that tend to hold my interest as a reader.  I find most sci-fi unreadable because it is too “thin” in character and language, while too wrapped up in elaborate explanations (that barely interest me) of the scientific/technological “set-up.” David Mitchell’s novels have the same downside for me as Powers’s: too much scene-setting and explanation, although Mitchell is a better stylist than Powers by far.

So is The Overstory Powers’s best novel?  Who knows?  It actually borrows its structure (somewhat) from Mitchell’s Cloud Atlas, while the characters feel a tad less mechanical to me.  But I suspect that’s because the “big theme” (always the driving force of Powers’s novels) was much more compelling to me in this novel; of the earlier ones, only Gain held my interest so successfully.

The big theme: how forests think (the title of a book that clearly stands behind Powers’s work even though he does not acknowledge it, or any other sources).  We are treated to a quasi-mystical panegyric to trees, while being given the recent scientific discoveries that trees communicate with one another; they do not live in accordance with the individualistic struggle for existence imagined by a certain version of Darwinian evolution, but (rather) exist within much larger eco-systems on which their survival and flourishing depend.  The novel’s overall message—hammered home repeatedly—is that humans are also part of that same eco-system—and that competition for the resources that sustain life, as opposed to cooperation in producing and maintaining those resources, can only lead to disaster.  Those disasters are not just ecological (climate change and the depletion of things necessary to life), but also psychological.  The competitive, each-against-each mentality is no way to live.

I am only fitfully susceptible to mystical calls to experience some kind of unity with nature.  I am perfectly willing to embrace rationalistic arguments that cooperation, rather than competition, is the golden road to flourishing.  And, given Powers’s deficiencies as a writer, I would not have predicted that the mysticism of his book would move me.  But it did.  That we—the human race, the prosperous West and its imitators, the American rugged individualists—are living crazy and crazy-making lives comes through loud and clear in the novel.  That the alternative is some kind of tree-hugging is less obvious to me most days—but seems a much more attractive way to go when reading this novel.

I have said Powers is a realist.  So his tree-huggers in the novel ultimately fail in their efforts to protect forests from logging.  The forces of the crazy world are too strong for the small minority who uphold the holistic vision.  But he does have an ace up his sleeve; after all, it is “life” itself that depends on interlocking systems of interdependence. So he does seem to believe that, in the long run, the crazies will be defeated, that the forces of life will overwhelm the death-dealers.  Of course, how long that long run will be, and what the life of the planet will look like when the Anthropocene comes to an end (and human life with it?), is impossible to picture.

Life will prevail.  That is Powers’s faith—or assertion.  Is that enough?  I have also recently read an excellent book by Peter J. Woodford: The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (University of Chicago Press, 2018).  Woodford makes the convincing argument that Nietzsche takes from Darwin the idea that “life” is a force that motivates and compels.  Human behavior is driven by “life,” by what life needs.  Humans, like other living creatures, are puppets of life, blindly driven to meet its demands.  “When we speak of values, we speak under the inspiration, under the optic of life; life itself forces us to establish values; when we establish values, life itself values through us” (Nietzsche, Twilight of the Idols).


Here is Woodford’s fullest explanation of Nietzsche’s viewpoint:

“The concept that allows for the connection between the biological world, ethics, aesthetics, and religion is the concept of a teleological drive that defines living activity.  This drive is aimed at its own satisfaction and at obtaining the external conditions of its satisfaction. . . . Tragic drama reenacts the unrestricted, unsuppressed expression of [the] inexhaustible natural eros of life for itself. . . . Nietzsche conceived life as autotelic—that is, directed at itself as the source of its own satisfaction.  It was this autotelic nature of life that allowed Nietzsche to make the key move from description of a natural drive to discussion of the sources and criteria of ethical value and, further, to the project of a ‘revaluation of value’ that characterized his final writings.  Life desires itself, and only life itself is able to satisfy this desire.  So the affirmation of life captures what constitutes the genuine fulfillment, satisfaction, and flourishing of a biological entity.  Nietzsche’s appropriation of Darwinism transformed his recovery of tragedy into a project of recovering nature’s own basic affirmation of itself in a contemporary culture in which this affirmation appeared, to him at least, to be absent.  His project was thus inherently evaluative at the same time that it was a description of a principle that explained the nature and behavior of organic forms” (38).

Here’s my takeaway.  Both Powers and Nietzsche believe that they are describing the way that “life” operates.  Needless to say, they have very different visions of how life does its thing, with Powers seeing human competitiveness as a perverted deviation from the way life really works, while Nietzsche (at least at times) sees life as competition, as the struggle for power, all the way down.  (Cooperative schemes for Nietzsche are just subtle mechanisms to establish dominance—and submission to such schemes generates the sickness of ressentiment.)

What Woodford highlights is that this merger of the descriptive with the evaluative doesn’t really work.  How are we to prove that life is really this way when there are life forms that don’t act in the described way?  Competition and cooperation are both in play in the world.  What makes one “real life,” and the other some form of “perversion”?  Life, in other words, is a normative term, not a descriptive one.  Or, at the very least, there is no clean fact/value divide here; our biological descriptions are shot through and through with evaluation right from the start.  We could say that the most basic evaluative statement is that it is better to be alive than to be dead.  In Powers this quickly morphs into the statement that it is better to be connected to other living beings within a system that generates a flourishing life, while in Nietzsche it becomes the statement that it is better to assume a way of living that gives fullest expression to life’s vital energies.

[An aside: the Nazis, arguably, were a death cult—and managed to get lots and lots of people to value death over life.  What started with dealing out death to the other guy fairly quickly moved into embracing one’s own death, not—it seems to me—in the mode of sacrifice but in the mode of universal destruction for its own sake.  A general auto-da-fé.]

In short, to say that life will always win out says nothing about how long “perversions” can persist or about what life actually looks like.  And the answer to the second question—what life looks like—will always be infected by evaluative wishes, with what the describer wants life to look like.

That conclusion leaves me with two issues.  The first is pushed hard by Woodford in his book.  “Life” (it would seem) cannot be the determiner of values; we humans (and Powers’s book makes a strong case that other living beings besides humans are in on this game) evaluate different forms of life in terms of other goods: flourishing, pleasure, equality/justice.  This is an argument against “naturalism.”  Life (or nature) is not going to dictate our values; we are going to reserve the right/ability to evaluate what life/nature throws at us.  Cancer and death are, apparently, natural, but that doesn’t mean we have to value them positively.

The second issue is my pragmatist, Promethean one.  To what extent can human activity shape what life is?  Nietzsche has always struck me as a borderline masochist.  For all his hysterical rhetoric of activity, he positions himself to accept whatever life dishes out.  Amor fati and all that.  But humans and other living creatures alter the natural environment all the time to better suit their needs and desires.  So “life” is plastic—and, hence, a moving target.  It may speak with a certain voice, but it is only one voice in an ensemble.  I have no doubt that it is a voice to which humans currently pay too little heed. But it is not a dictator, not a voice to which we owe blind submission.  That’s because 1) we evaluate what life/nature dishes out and 2) we have powers on our side to shape the forms life takes.

Finally, all of this means that if humans are currently shaping life/nature in destructive, life-threatening ways, we cannot expect life itself to set us on a better course.  The trees may win in the long run—but we all remember what Keynes said about the long run.  In the meantime, the trees are dying and we may not be very far behind them.

Money and Babies

Since I got onto money in my last post, I am going to continue that line of thought (briefly).  I worried a lot about money between the ages of 29 and 44 (roughly); it’s strange how hard it is for me to remember my feelings.  Sure, I forget events as well.  But the main outlines of my life’s history are there to be remembered.  What I can’t pull up is how I felt, what I was thinking, at various points.  My sense now that I was somehow not present at much of my life stems from this inability to reconstruct, even in imagination, who I was at any given moment.  I did all these things—but don’t remember how I did them or what I felt as I was doing them or even exactly what I thought I was doing.  Getting through each day was the focus, and somehow I made it from one day to the next.  But there was no overall plan—and no way to settle into some set of coherent, sustaining emotions.  It was a blur then and it’s a blur now.

All of which is to say that I worried about money, about the relative lack of it, without having any idea about how to get more of it.  I didn’t even have money fantasies—like winning the lottery or (just as likely) writing a best-selling book.  What I did for a living, including writing the books that my academic career required, was utterly disconnected emotionally and intellectually from the need to have more money.  When I made my first academic move (from the University of Rochester’s Eastman School of Music to the University of North Carolina) the motive was purely professional, not monetary.  I wanted to teach in an English department and be at a university where my talents would not be underutilized.  That it would involve a substantial raise in pay never occurred to me until I got the offer of employment.  And when I went and got that outside offer in order to boost my UNC salary (as mentioned in the last post), it was the inequity of what I was being paid that drove me, not the money itself.  In fact, despite worrying about money for all those years, I never actually imagined having more than enough.  It was as if I just accepted that financial insecurity was my lot in life.  I could worry about it, but I didn’t have any prospects of changing it.

Certainly, my worries did make me into a cheapskate.  And undoubtedly those niggardly habits are the reason we now have more than enough each month.  Habits they certainly are, since at this point they don’t even pinch.  They just are the way I live in the world—and they allow me to feel like I am being extravagant when (fairly often now) I allow myself luxuries others wouldn’t even give a second thought.

My main theme, however: the worries about money were utterly separate from the decision to have children.  That this was so now amazes me.  It is simply true that when Jane and I decided the time had come to have children, the financial implications of that decision never occurred to me.  We made a very conscious decision to have children.  Our relationship was predicated, in fact, on the agreement that we would have children.  And when that pre-nuptial agreement was made, the issue of having money enough to have kids was raised.  But four years later, when we decided to have the anticipated child, money was not considered at all.  And when we decided to have a second child after another two years, once again money was not an issue.  I don’t know why not.  Why—when I worried about having enough money for all kinds of other necessities—did I not worry about having enough money to raise our two children?  That’s the mystery.

I have no answer.  And I can’t say if that was true generally for those of us having our children in the 1980s, although it seems to have been true for most of my friends.  On the other hand, as my wife often notes, I do have a fairly large number of married friends (couples who have been together forty years now) who do not have children.  Very likely, a mixture of professional and financial reasons led to their not having kids.

I do, however, feel that financial considerations play a large role now (in the 2010s) in the decision to have children.  That’s part of the cultural sea-change around winners and losers, the emptying out of the middle class, and the ridiculous price of “private” and quasi-private education.  Most conspicuous to me is the increasing number of single-child families among the upper middle class.  Yes, that is the result of a late start for women who take time to establish themselves in a profession.  But it is also an artifact of worrying about the cost of child-care and of education.

I come from a family of seven children.  And my parents, relatively speaking, were less well-off when they had us than Jane and I were when we had our children.  (That statement is a bit complicated since my parents had access to family money in case of emergency that was not there for me to tap.  But, in fact, my parents didn’t rely heavily on that reserve until much later in their lives.)  Was my not following in my parents’ footsteps toward a large family financially motivated?  A bit, I guess.  But it really seems more a matter of style—plus the fact that my wife was 34 when she got pregnant with our first.  But even if she had been 24 (as my mother was at her first pregnancy), it is highly unlikely we would have had more than two kids (perhaps three).  The idea was unthinkable by 1987; it just wasn’t done.

It is also hard to see how we could have done it (even though that fact didn’t enter into our thinking).  Certainly, it would have been done very differently.  We paid $430,000 for our two children’s educations: three years of private high school and four years of private university (with a $15,000 scholarship each year) for my son, and four years of private high school and four years of private university for my daughter. And that figure is just the fees paid to the schools; it doesn’t include all the other costs. We would certainly have relied much more heavily on public education if we had more than two children.

Once again, I have no moral to draw.  I am just trying to track what seem to me particularly significant shifts in cultural sensibility.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than I could ever, in my most avaricious dreams, have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff. (I am not extremely busy at the moment—which makes me feel even more guilty—because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count; quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more government support in the way of research grants from the federal and (in the case of state universities) state governments; 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long-extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply did not exist when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league—without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition—in order to bring UR’s tuition in line with Northwestern’s and the rest.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less—and that UR had a better shot at getting students in that niche than at competing with Northwestern.  I, of course, lost the argument—not just in terms of what the university did, but also in terms of the rise’s effect on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research himself admitted would worsen our already bad financial situation.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5,000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world would get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books would still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there would still be good books written (and some bad ones as well), because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right” brands, star chefs and restaurants, and celebrities.   And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world, known by some but not all, but still getting a fair number of perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When compression struck, I was able (as you are forced to do in the academic game) to go get an “outside offer.”  I had the kind of research profile that would lead another school that was in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that those schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent admissions-cheating scandal.  More importantly, it explains the on-going scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn ability of elites to retain privileges.

The wider story, however, is about distinction—and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ’nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.  Maybe I am hopelessly romanticizing the 1950s and 1960s—and maybe the middle middle class I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that there was little to no sense that Hamilton was different from Villanova, or that Northwestern was not the same as Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance, etc., my wife and I together have had approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t manage to spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement-home costs to the tune of $1500 a month and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches. I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.