Category: Economics and Inequality

Economic Power/Political Power

A quick addition to my last post.

The desire is to somehow hold economic power and political power apart, using each as a counterbalance against the other.  To give the state absolute power over the economy is to ensure vast economic inequality.  Such has, generally speaking, been the lesson of history.  Powerful states of the pre-modern era presided over massively unequal societies.

But there is a modern exception.  Communism in Russia and Eastern Europe did produce fairly egalitarian societies; in that case, state power was used against the accumulation of wealth by the few.  There still existed a privileged elite of state officials, but there was also a general distribution of economic goods.  The problem, of course, was a combination of state tyranny with low productivity.  The paranoia that afflicts all tyrannies led to abuses that made life unbearable.

But (actually existing) communism did show that it is possible to use state (political) power to mitigate economic inequality.  Social democracy from 1945 to 1970 was also successful in this direction.  Under social democracy, the economy enjoys a relative autonomy, but is highly regulated by a state that interferes to prevent large inequities.

Where there is some kind of norm that political power (defined as the ability to direct the actions of state institutions) should neither 1) be a route to economic gain nor 2) work hand-in-glove with the economically powerful to secure their positions, violations of that norm are called “corruption.”  The Marxist, of course, says that the state in all capitalist societies (the “bourgeois state”) is corrupt if that is our definition of corruption.  The state will always have been “captured” by the plutocrats.

What belies that Marxist analysis is that the plutocrats hate the state and do everything in their power (under the slogan of laissez-faire) to render the state a non-player in economic and social matters.  Capitalists do not want an effective state of any sort—whether of the left, center, or right.  A strong state of any stripe is not going to let the economy go its own way, but will (instead) fight to gain control over it.  I think it fair to say that the fight between political and economic power mirrors the fight between civil and religious power in the early days of the nation-state.  The English king versus the clergy and the Pope.

The ordinary citizen, I am arguing, is better off when neither side can win this fight, when the two antagonists have enough standing to prevent one from having it all its way.

Our current mess comes in two forms, the worst of all worlds.  We have a weak state combined with massive corruption.  What powers the state still has are placed at the service of capital while politicians use office to get rich.  We have a regulatory apparatus that is almost completely dormant.  From the SEC to the IRS, from the FDA to the EPA, the agencies are not doing their jobs, but standing idly by while the corporations, financiers, and tax-evading rich do their thing.

The leftist response is to say that the whole set-up is unworkable.  We need a new social organization.  I have just finished reading Fredric Jameson’s An American Utopia (Verso, 2016).  Interestingly enough, Jameson also thinks we need “dual power” in order to move out of our current mess.  The subtitle of his book is “Dual Power and the Universal Army.”  More about Jameson in subsequent posts.

Here I just want to reiterate what I take to be a fundamental liberal tenet: all concentrations of power are to be avoided; monopolies of power in any society are a disaster that mirrors the equal but opposite disaster of civil war.  Absolute sovereignty of the Hobbesian sort is not a solution; but the absence of all sovereignty is, as Hobbes saw, a formula for endless violence.  Jameson says the key political problem for any Utopia is “federalism.”  That seems right to me, if we take federalism to mean the distribution of power to various social locations.  Having a market that stands in some autonomy from the state is an example of federalism.  There are, of course, other forms that federalism can take.  All of those forms are ways of working against the concentration of power in one place.

Liberalism (Yet Again)

In his London Review of Books review (February 6th issue) of Alexander Zevin’s history of The Economist magazine, Stefan Collini makes a point I have often made, and which I presented at some length on this blog some eighteen months ago.  To wit, the term “liberalism” is used in such a loose, baggy way that it comes to mean nothing at all—or, more usually, everything that the one who deploys the term despises.  If John Dewey and Margaret Thatcher are both liberals, what could the term possibly designate?

My take has always been that there are a number of things—habeas corpus, religious tolerance, social welfare programs, freedom of the press—that in specific contexts can be identified as “liberal” in contrast to more authoritarian positions, but that these specific things are the product of different historical exigencies and do not cohere into some overall ideology.  There may be a family resemblance among the positions that get called “liberal,” but there is no necessary connection between habeas corpus and religious tolerance.  You can easily have one without the other, as was true in England for several centuries.

In a letter to the LRB, Zevin objects to Collini’s refusal to credit the more generalized use of the term “liberal.”  I find his objection cogent and thus offer it here:

“Resistant, in general, to overarching categories, he [Collini] seems particularly sensitive when it comes to liberalism. ‘When people ask me if the division between men of the Right and men of the Left still makes sense,’ the essayist Alain once remarked, ‘the first thing that comes to mind is that the person asking the question is certainly not a man of the Left.’  When someone says, mutatis mutandis, ‘all you mean by liberalism’ is ‘not socialism’ and ‘there is no such thing,’ it is safe to assume the speaker is a liberal, defensively protecting himself.”

So, yes, guilty as charged.  I am a liberal—and do have something at stake in claiming that the term ‘liberalism’ is used in too loose a fashion to do much good.  I want a finer-grained statement of what specific features of the political landscape are desirable, are worth fighting to preserve where they exist, and to introduce where they do not.  We should know what we are talking about—and what we are advocating for.  Zevin’s point (not surprisingly) is that the liberalism of The Economist encompassed its support of the Vietnam and Iraq wars; Collini, no doubt, would argue that many liberals opposed those wars, whereas they were the brainchild of many to the right of liberalism, those often called neo-conservatives.  The right, in other words, was more solidly unified in its opinion on those wars than a sorely divided liberal camp.  Yes, some liberals supported those wars, but hardly all.  And it is very hard to believe that a centrist like Al Gore would have led the US into that “war of choice” in Iraq.  To which the anti-liberal leftist says: I have two words for you, Tony Blair.

The left, it seems, needs to continually assert its distance from a detested center that it calls “liberalism.”  It also needs to constantly trumpet the sins of that liberalism and to minimize its differences from the right.  For the soi-disant radical left, neo-liberal and neo-conservative become equivalent terms, with no appreciable difference between them.  Hillary Clinton is no better and no worse than George W. Bush.  And somehow both are liberals.

My defensiveness comes from wanting to save the term “liberal” to designate a raft of values and positions I wish to advocate.  Maybe I should give that up, call myself a “social democrat,” and move on.  I resist that move because there are values captured by “liberalism” (especially those connected to rights and tolerance) that aren’t covered by “social democrat,” with its focus on economic sufficiency and regulation of market forces and market practices.

But how about the “not socialism” broad brush?  Michael Clune, in an essay entitled “Judgment and Equality” (Critical Inquiry, 2019, pp. 910-917), repeats the by-now familiar dismissal of liberalism’s individualism, its reduction of everything to “choice,” to “consumer preference.”  Even a cursory reading of 20th century liberals such as Dewey or Charles Taylor would indicate how sloppy a vision of liberalism such a charge demonstrates.  Not to mention that one standard conservative charge against liberals is precisely that they negate individual responsibility in their emphasis on the social determinants of behavior.  Which is it?  Are liberals full-scale believers in heroic individual autonomy, or are they apologists for the impoverished and the misfit, blaming social conditions for their perceived failures?

Still, Clune does make a concrete claim: “The liberal tradition supports the effort to correct egregious market inequities through policies that leave the market intact” (928).

Now we are talking.  I do think that the commitments I think of as liberal include an acceptance of the market.  That acceptance is, partly, pragmatic (in the vulgar, not the philosophical, sense of that term).  I think the chances of overthrowing the market and installing something different in its place are nonexistent.  In that sense, there is no realistic alternative at the current moment.  So, says the radical, you and Thatcher are the same.  Told you so.

Not so fast.  What I am saying is that the consequential political battles of our time are going to be fought over what kind of market we are going to have.  This is a real battle, with real stakes.  The right over the past seventy years has fought tooth and nail to discredit social democracy, to roll back any state (or other) regulation of the market, and to dismantle any mechanisms (from unions to minimum wage laws to other forms of state involvement in wage negotiations) that would overcome the imbalances of power existing between employers and workers in an unregulated market.  We know two things: one, that the right has been largely successful in this battle; two, that the vast majority of workers in the West are worse off now than they were in 1960.  Social democracy was a better deal for workers than the present regime (call it neo-liberal if you like, although that term ignores the liberalism of the twentieth century in favor of the “classical liberalism” of the 19th).

Another (contingent) feature of liberalism is its distrust of concentrations of power, its desire to share power around, to create “checks and balances.”  Currently, that entails a recognition that economic power is over-concentrated; that we need state power to counterbalance it because the collective power of workers (through unions or other mechanisms) is hard (if not impossible) to mobilize under present economic conditions.

It is fair to say that the founders were more concerned about concentrated state power than about concentrated economic power.  It is a stretch, I believe, to see Jefferson as a laissez-faire classical economist, but his words and ideas can be wrenched in that direction (by historians like Joyce Appleby) because he wanted to establish sources of power outside of the state’s reach.

I think economic sufficiency does provide a citizen with some independence from the state.  Therefore, I am also willing to argue that acceptance of markets is not just a pragmatic expediency, but also justified in its own right.  Economic bases of power apart from the state are not necessarily a bad thing.

The bad thing is overweening economic power, just as tyrannical state power is a bad thing.  Markets, like states, tend toward the abuse of power.  We need mechanisms, enforceable regulations and structuring rules, to curb market power.  We also need to identify various basics—like health care, education, transportation, clean water and energy—that are not well served by markets and create alternative institutions for their provision.  The best guideline for these alternative institutions is that old liberal standby: equality of access for all.

There are three very strong arguments against the market.  One, the market inevitably produces wildly unequal outcomes.  The liberal response: there are mechanisms, including unions, taxes, and redistributive policies that can combat those unequal outcomes.

Second, markets are inimical to democracy.  The liberal response: workplace democracy is possible, as is political democracy.  Their achievement depends on active mechanisms of participation, which must be mandated as part of corporate and state governance.  But there is no absolute bar to the existence of such mechanisms.

Third, economic power always overwhelms political power—if it does not simply convert itself directly into political power.  The reforms that liberalism envisions as answers to numbers one and two never happen because the opponents of such reforms always already have power—which means the power to perpetuate existing inequalities.

That last argument is the killer.  It simply seems true—and then the issue becomes how best to diminish the power of the wealthy, how to turn plutocracy into democracy, and use the democratic state to rein in the inequities of the market (not to mention its environmental degradations).

At this point in the argument, I don’t think the leftist and the liberal have very different goals.  They just differ strongly on tactics.  Is it better to aim at winning the fight to reform the market?  Or is it better to work toward the total overthrow of the market?  I don’t see any remotely realistic pathway to that second goal, which is why I remain someone committed to the re-emergence, in even stronger and better form, of social democracy.

Plus Ça Change . . .

Offered without comment.  From Flaubert’s 1869 Sentimental Education (the Penguin edition of 1964, translated by Robert Baldick).

“’All the same,’ protested Martinon, ‘poverty exists, and we have to admit it.  But neither Science nor Authority can be expected to apply the remedy.  It is purely a matter for individuals.  When the lower classes make up their minds to rid themselves of their vices, they will free themselves from their wants.  Let the common people be more moral and they will be less poor!’

According to Monsieur Dambreuse, nothing useful could be done without enormous capital.  So the only possible way was to entrust, ‘as was suggested, incidentally, by Saint-Simon’s disciples (oh, yes, there was some good in them!  Give the devil his due) to entrust, I say, the cause of Progress to those who can increase the national wealth.’ Imperceptibly, the conversation moved on to the great industrial undertakings, the railways and the mines” (238).

“Most of the men there had served at least four governments; and they would have sold France or the whole human race to safeguard their fortune, to spare themselves the slightest feeling of discomfort or embarrassment, or even out of mere servility and instinctive worship of strength.  They all declared that political crimes were unpardonable. . . . One high official even proclaimed, ‘For my part, Monsieur, if I found out my brother was involved in a plot, I should denounce him!’” (240).

Secular Ethics

I am about one-third of the way through Martin Hägglund’s This Life: Secular Faith and Spiritual Freedom (Pantheon Books, 2019), of which more anon.

But I have been carrying around in my head for over seven months now my own build-it-from-scratch notion of ethics without God.  The impetus was a student pushing me in class last fall to sketch out the position—and then the book on Nietzsche’s “religion of life” that I discussed in my last post (way too long ago; here’s the link).

So here goes.  The starting point is: it is better to be alive than dead.  Ask one hundred people if they would rather live than die and 99 will choose life.

A fundamental value: to be alive.

First Objection:

Various writers have expressed the opinion that it is best not to have been born since this life is just a constant tale of suffering and woe.  Life’s a bitch and then you die.

Here’s Ecclesiastes, beginning of Chapter 4:

“Next, I turned to look at all the acts of oppression that make people suffer under the sun. Look at the tears of those who suffer! No one can comfort them. Their oppressors have all the power. No one can comfort those who suffer. I congratulate the dead, who have already died, rather than the living, who still have to carry on. But the person who hasn’t been born yet is better off than both of them. He hasn’t seen the evil that is done under the sun.”

Here’s Sophocles’ version of that thought, from Oedipus at Colonus:

“Not to be born is, beyond all estimation, best; but when a man has seen the light of day, this is next best by far, that with utmost speed he should go back from where he came. For when he has seen youth go by, with its easy merry-making, what hard affliction is foreign to him, what suffering does he not know? Envy, factions, strife, battles, and murders. Last of all falls to his lot old age, blamed, weak, unsociable, friendless, wherein dwells every misery among miseries.”

And here is Nietzsche’s version, which he calls the “wisdom of Silenus” in The Birth of Tragedy:

“The best of all things is something entirely outside your grasp: not to be born, not to be, to be nothing. But the second best thing for you is to die soon.”

Second Objection:

As Hägglund argues, many religions are committed to the notion that being alive on earth is not the most fundamental good.  There is a better life elsewhere—a different thought than the claim that non-existence (not to have been born) would be preferable to life.

Response to Objections:

The rejoinder to the first two objections is that few people actually live in such a way that their conduct demonstrates an actual belief that non-existence, or an alternative existence, is preferable to life on this earth.  Never say never.  I would not argue that no one has ever preferred an alternative to this life.  But the widespread commitment to life and its continuance on the part of the vast majority seems to me enough to go on.  I certainly don’t see how that commitment can appear a weaker starting plank than belief in a divine prescriptor of moral rules.  I would venture to guess that the number of people who do not believe in such a god is greater than the number who would happily give up this life for some other state.

Third Objection:

There are obvious—and manifold—reasons to choose death over life under a variety of circumstances.  I think there are two different paths to follow in thinking about this objection.

Path #1:

People (all the time) have things that they value more than life.  They are willing (literally—it is crucial that it is literally) to die for those things.  Hence the problem of establishing “life” as the supreme value.  Rather, what seems to be the case is that life is an understood and fundamental value—and that we demonstrate the truly serious value of other things precisely by being willing to sacrifice life for those other things.  To put one’s life on the line is the ultimate way of showing where one’s basic commitments reside.  This is my basic take-away from Peter Woodford’s The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (U of Chicago P, 2018; the book discussed in my last post).  To use Agamben’s term, “bare life” is not enough; it will always be judged in relation to other values.  A standard will be applied to any life; its worth will be judged.  And in some cases, some value will be deemed of more worth than life—and life will be sacrificed in the name of that higher value.  In other words, “life” cannot be the sole value.

I am resolutely pluralist about what those higher values might be that people are willing to sacrifice life for.  My only point is that an assumed value of life provides the mechanism (if you will) for demonstrating the value placed on that “other” and “higher” thing.  In other words, the fact (gift?) of life—and the fact of its vulnerability and inevitable demise (a big point for Hägglund, to be discussed in next post)—establishes a fundamental value against which other values can be measured and displayed.  Without life, no value. (A solecism in one sense.  Of course, if no one was alive, there would be no values.  But the point is also that there would be no values if life itself was not valued, at least to some extent.) Placing life in the balance enables the assertion of a hierarchy of values, a reckoning of what matters most.

Path #2:

It is possible not only to imagine, but also to put into effect, conditions that make death preferable to life.  As Hannah Arendt put it, chillingly, in The Origins of Totalitarianism, the Nazis, in the concentration camps and elsewhere, were experts in making life worse than death. Better to be dead than to suffer various forms of torture and deprivation.

I want to give this fact a positive spin.  If the first plank of a secular ethics is “it is better to be alive than dead,” then the second to twentieth planks attend to the actual conditions on the ground required to make the first plank true.  We can begin to flesh out what “makes a life worth living,” starting with material needs like sufficient food, water, and shelter, and moving on from there to things like security, love, education, health care, etc.  We have various versions of the full list, from the UN’s Universal Declaration of Human Rights to Martha Nussbaum’s list of “capabilities.”

“Bare life” is not sufficient; attending to life leads quickly to a consideration of “quality” of life.  A secular ethics is committed, it seems to me, to bringing about a world in which the conditions for a life worth living are available to all.  The work of ethics is the articulation of those conditions.  That articulation becomes fairly complex once some kind of base-line autonomy—i.e. the freedom of individuals to decide for themselves what a life worth living looks like—is made a basic condition of a life worth living.  [Autonomy is where the plurality of “higher values” for which people are willing to sacrifice life comes in.  My argument would be 1) no one should be able to compel you to sacrifice life for their “higher value” and 2) you are not allowed to compel anyone to sacrifice life for your “higher value.”  But what about sacrificing your goods—through taxes, for example?  That’s much trickier and raises thorny issues of legitimate coercion.]

It seems to me that a secular ethics requires one further plank.  Call it the equality principle.  Simply stated: no one is more entitled to the basic conditions of a life worth living than anyone else.  This is the minimalist position I have discussed at other times on this blog.  Setting a floor to which all are entitled is required for this secular ethics to proceed.

What can be the justification for the equality principle?  Some kind of Kantian universalism seems required at this juncture.  To state it negatively: nothing in nature justifies the differentiation of access to the basic enabling conditions of a life worth living.  To state it positively: to be alive is to possess an equal claim to the means for a life worth living.

Two complications immediately arise: 1. Is there any way to justify inequalities above the floor?  After everyone has the minimal conditions met, must there be full equality beyond that point?  2.  Can there be any justification for depriving some people, in certain cases, of the minimum? (The obvious example would be imprisonment or other deprivations meted out as punishments.)

Both of these complications raise the issue of responsibility and accountability.  To what extent is the life that people have, including the quality of that life, a product of their prior choices and actions?  Once we grant that people have the freedom to make consequential choices, how do we respond to those consequences?  And when is society justified in imposing consequences that agents themselves would strive to evade?

No one said ethics was going to be easy.  Laws and punishments are not going to disappear.  Democracy is meant to provide a deliberative process for the creation of laws and sanctions—and to provide the results of those deliberations with legitimacy.

All I have tried to do in this post is to show where a secular ethics might begin its deliberations—without appealing to a divine source for our ethical intuitions or for our ethical reasonings.

Money and Babies

Since I got onto money in my last post, I am going to continue that line of thought (briefly).  I worried a lot about money between the ages of 29 and 44 (roughly); it’s strange how hard it is for me to remember my feelings.  Sure, I forget events as well.  But the main outlines of my life’s history are there to be remembered.  What I can’t pull up is how I felt, what I was thinking, at various points.  My sense now that I was somehow not present at much of my life stems from this inability to reconstruct, even in imagination, who I was at any given moment.  I did all these things—but don’t remember how I did them or what I felt as I was doing or even exactly what I thought I was doing.  Getting through each day was the focus, and somehow I made it from one day to the next.  But there was no overall plan—and no way to settle into some set of coherent, sustaining emotions.  It was a blur then and it’s a blur now.

All of which is to say that I worried about money, about the relative lack of it, without having any idea about how to get more of it.  I didn’t even have money fantasies—like winning the lottery or (just as likely) writing a best-selling book.  What I did for a living, including writing the books that my academic career required, was utterly disconnected emotionally and intellectually from the need to have more money.  When I made my first academic move (from the University of Rochester’s Eastman School of Music to the University of North Carolina) the motive was purely professional, not monetary.  I wanted to teach in an English department and be at a university where my talents would not be underutilized.  That it would involve a substantial raise in pay never occurred to me until I got the offer of employment.  And when I went and got that outside offer in order to boost my UNC salary (as mentioned in the last post), it was the inequity of what I was being paid that drove me, not the money itself.  In fact, despite worrying about money for all those years, I never actually imagined having more than enough.  It was as if I just accepted that financial insecurity was my lot in life.  I could worry about it, but I didn’t have any prospects of changing it.

Certainly, my worries did make me into a cheap-skate.  And undoubtedly those niggardly habits are the reason we now have more than enough each month.  Habits they certainly are since at this point they don’t even pinch.  They just are the way I live in the world—and allow me to feel like I am being extravagant when (fairly often now) I allow myself luxuries to which others wouldn’t give a second thought.

My main theme, however: the worries about money were utterly separate from the decision to have children.  That this was so now amazes me.  It is simply true that when Jane and I decided the time had come to have children, the financial implications of that decision never occurred to me.  We made a very conscious decision to have children.  Our relationship was predicated, in fact, on the agreement that we would have children.  And when that pre-nuptial agreement was made, the issue of having money enough to have kids was raised.  But four years later, when we decided to have the anticipated child, money was not considered at all.  And when we decided to have a second child after another two years, once again money was not an issue.  I don’t know why not.  Why—when I worried about having enough money for all kinds of other necessities—did I not worry about having enough money to raise our two children?  That’s the mystery.

I have no answer.  And I can’t say if that was true generally for those of us having our children in the 1980s, although it seems to have been true for most of my friends.  On the other hand, as my wife often notes, I do have a fairly large number of married friends (couples who have been together forty years now) who do not have children.  Very likely, a mixture of professional and financial reasons led to their not having kids.

I do, however, feel that financial considerations do play a large role now (in the 2010s) in the decision to have children.  That’s part of the cultural sea-change around winners and losers, the emptying out of the middle class, and the ridiculous price of “private” and quasi-private education.  Most conspicuous to me is the increasing number of single-child families among the upper middle class.  Yes, that is the result of a late start for women who take time to establish themselves in a profession.  But it is also an artifact of worrying about the cost of child-care and of education.

I come from a family of seven children.  And my parents, relatively speaking, were less well-off when they had us than Jane and I were when we had our children.  (That statement is a bit complicated since my parents had access to family money in case of emergency that was not there for me to tap.  But, in fact, my parents didn’t rely heavily on that reserve until much later in their lives.)  Was my not following in my parents’ footsteps toward a large family financially motivated?  A bit, I guess.  But it really seems more a matter of style—plus the fact that my wife was 34 when she got pregnant with our first.  But even if she had been 24 (as my mother was at her first pregnancy), it is highly unlikely we would have had more than two kids (perhaps three).  The idea was unthinkable by 1987; it just wasn’t done.

It is also hard to see how we could have done it (even though that fact didn’t enter into our thinking).  Certainly, it would have been done very differently.  We paid $430,000 for our two children’s educations: three years of private high school and four years of private university (with a $15,000 scholarship each year) for my son, and four years of private high school and four years of private university for my daughter. And that figure is just the fees paid to the schools; it doesn’t include all the other costs. We would certainly have relied much more heavily on public education if we had more than two children.

Once again, I have no moral to draw.  I am just trying to track what seem to me particularly significant shifts in cultural sensibility.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than I could ever, in my most avaricious dreams, have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff. (I am not extremely busy at the moment–which makes me feel even more guilty–because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count, quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more state support in the way of research grants from the federal government (and, in the case of state universities, from state governments); 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long-extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires, at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply did not exist when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league–without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition–in order to bring UR’s tuition in line with Northwestern etc.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less–and that UR had a better shot of getting students in that niche than in competing with Northwestern.  I, of course, lost the argument–but not just in terms of what the university did, but also in terms of its effect on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants can bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research himself admitted would worsen our already bad financial situation.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers, that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world will get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books would still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where those who were productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there still would be good books written (and some bad ones as well) because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right brands,” star chefs and restaurants, and celebrities.  And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world, known by some but not all, but still getting a fair number of perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When compression struck, I was able (as you are forced to do in the academic game) to go get an “outside offer.”  I had the kind of research profile that would lead another school that was in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent scandal of cheating in the admissions process.  More importantly, it explains the on-going scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn abilities of elites to retain privileges.

The wider story, however, is about distinction–and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ‘nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.  Maybe I am hopelessly romanticizing the 1950s and 1960s–and maybe the middle middle class that I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that, there was little to no sense that Hamilton was different from Villanova, or Northwestern not the same as Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance, etc., my wife and I together have approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement home costs to the tune of $1500 a month, and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches.  I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.

Harry Frankfurt on Inequality

I read Harry Frankfurt’s essay on inequality (published as a small book by Princeton University Press, 2015) over the weekend.  Frankfurt’s position is simple: “Economic equality is not, as such, of any particular moral importance; and by the same token, economic inequality is not in itself morally objectionable.  From the point of view of morality, it is not important that everyone should have the same.  What is morally important is that each should have enough.  If everyone had enough money, it would be of no special or deliberate concern whether some people had more money than others.  I shall call this alternative to egalitarianism the ‘doctrine of sufficiency’—that is, the doctrine that what is morally important with regard to money is that everyone should have enough” (7).

Economic inequality is morally objectionable only when the fact of its existence leads to the production of other moral harms.  But it is not intrinsically (a key word for Frankfurt) morally objectionable in itself.  “That economic equality is not a good in itself leaves open the possibility, obviously, that it may be instrumentally valuable as a necessary condition for the attainment of goods that do generally possess intrinsic value. . . . [T]he widespread error of believing that there are powerful moral reasons for caring about economic equality for its own sake is far from innocuous.  As a matter of fact, this belief tends to do significant harm” (8-9).

Frankfurt’s efforts to specify the harm done are not very convincing, involving tortured arguments about marginal utility and implausible suppositions about scarcity.  By failing to deal in any concrete cases, he offers broad arguments that fall apart (it seems to me) when applied to things like access to clean air and clean water (think of the Flint water crisis) or to health care and education (where provision of equal access and quality to all is a commitment to the equal worth of every life, a principle that seems to me intrinsic).  It gets even worse at the end of the book, in the second essay, “Equality and Respect.”  Frankfurt writes: “I categorically reject the presumption that egalitarianism, of whatever variety, is an ideal of any intrinsic moral importance” (65).  His argument rests on a bit of a shell game, since he substitutes “respect” for “equality,” and then acknowledges that there are certain rights we deem morally due to all because of our “respect” for their “common humanity.”  But he is against “equality” because he thinks we also accord respect (and even certain rights) differentially.  We need to take the differences between people into account when those differences are (in his words) “relevant.”  What he fails to see is that “equality” names the moral principle that, in (again) particular cases, no differences can or should be relevant (in spite of the fact that various agents will try to assert and act on the relevance of differences).  The most obvious case is “equality before the law.”  It is very hard to see how “equality before the law” is not an intrinsic moral principle.  It functions as a principle irrespective of outcomes—and its functioning as a principle is demonstrated precisely by the fact that it is meant to trump any other possible way of organizing how the law functions.  It is good in and of itself; we could even say that “equality before the law” constitutes the good, the legitimacy, of law—and it performs this constitutive function because it is the intrinsic principle law is meant to instantiate.

But let’s go back to economic inequality.  There Frankfurt is on much stronger ground.  I don’t think he makes a good case that concern over economic inequality causes harm.  But as what I have been calling a “welfare minimalist” (what he calls “the doctrine of sufficiency”), he echoes the comment of my colleague that questions of inequality are irrelevant if everyone has enough.  As Frankfurt puts it: “The doctrines of egalitarianism and of sufficiency are logically independent: considerations that support the one cannot be presumed to provide support for the other” (43).  “The fact that some people have much less than others is not at all morally disturbing when it is clear that the worse off have plenty” (43).

There are practical questions here of a Marxist variety: namely, is it possible for there to be substantial inequality without the concomitant impoverishment of some proportion of the population?  Oddly enough, Frankfurt briefly talks about the inflationary effects of making the poor better off, but never considers the inflationary effects of there being vast concentrations of wealth (in housing costs, for example).  Mostly, however, Frankfurt shies far away from practical issues.

On his chosen level of abstraction, he makes one very good and one very provocative point.  The good point is that concerns about inequality help us not at all with the tough question of establishing standards of sufficiency.  If the first task before us is triage, then what is needed is to provide everyone with enough.  It seems true to me that triage is the current priority—and that we have barely begun to address the question of what would suffice.  Talk of a UBI (Universal Basic Income) is hopelessly abstract without a consideration of what that income would enable its recipient to buy—and of what we, as a society, deem essential for every individual to be able to procure.  There is work to be done on the “minimalist” side, although I do think Martha Nussbaum’s list of minimal requirements (in her book on the capabilities approach from Harvard UP) is a good start.

The provocative point comes from Frankfurt’s stringent requirement that a moral principle, au fond, should be “intrinsic.”  The trouble with inequality as a standard is that it is “relative,” not “absolute” (41-42).  It is not tied to my needs per se, but to a comparison between what I have and what someone else has.  The result, Frankfurt believes, is that the self is alienated from its own life.  “[A] preoccupation with the alleged inherent value of economic equality tends to divert a person’s attention away from trying to discover—within his experience of himself and of his life conditions—what he himself really cares about, what he truly desires or needs, and what will actually satisfy him. . . . It leads a person away from understanding what he himself truly requires in order effectively to pursue his own most authentic needs, interest, and ambitions. . . . It separates a person from his own individual reality, and leads him to focus his attention upon desires and needs that are not most authentically his own” (11-12).

Comparisons are odious.  Making them leads us into the hell of heteronomy—and away from the Kantian heights of autonomy and the existential heaven of authenticity.  But snark is not really the appropriate response here.  There seem to me to be interesting abstract and practical questions involved.  The abstract question is about the very possibility (and desirability) of autonomy/authenticity.  Can I really form desires and projects that are independent of my society?  In first-century Rome, I could not have dreamed of becoming a baseball player or a computer scientist.  Does that mean that my desire to become one or the other in 2019 is inauthentic?  More directly, it is highly likely that my career aspirations are shaped by various positive reinforcements, various signals that I got from others that my talents lay in a particular direction.  Does that make my choice inauthentic?  More abstractly, what is the good of authenticity?  What is at stake in making decisions for myself, based on a notion of my own needs and ambitions?  Usually, the claim is that freedom rests on autonomy.  Certainly both Kant and the existentialists believed that.  But what if freedom is just another word for nothing left to lose, if it indicates a state of alienation from others so extreme that it is worthless—a thought that both Kierkegaard and Sartre explored?

I am, as anyone who has read anything by me likely knows, a proponent of autonomy, but not a fanatic about it.  That people should have the freedom to make various decisions for themselves is a bottom-line moral and political good in my book.  But I am not wedded to any kind of absolutist view of autonomy, which may explain why appeals to “authenticity” leave me cold.  On the authenticity front, I am inclined to think, we are all compromised from the get-go.  We are intersubjectively formed and constituted; our interactions with others (hell, think about how we acquire language) embed “the other” within us from the start.  It’s a hopeless task to try to sort out which desires are authentically ours and which come from our society, from the others we have interacted with, etc. etc.  To have the freedom to act on one’s desires is a desirable autonomy in my view; to try to parse the “authenticity” of those desires in terms of some standard of their being “intrinsic” to my self and not “externally” generated seems to me one path to madness.

Even more concretely, Frankfurt’s linking of the “intrinsic” to the “authentic” raises the question of whether any judgments (about anything at all) are possible without comparison.  His notion seems to be that an “absolute” and “intrinsic” standard allows me to judge something without having to engage in any comparison between that something and some other thing.  I guess Kant’s categorical imperative is meant to function that way.  You have the standard—and then you can judge whether this action meets that standard.  But does judgment really ever unfold that way?  By the time Kant gets to the Critique of Judgment, he thinks we need to proceed by way of examples—which he sees as various instantiations of “the beautiful” (since “the beautiful” in and of itself is too vague, too ethereal a standard to function as a “determinative” for judgment).  And, in more practical matters, it would seem that judgment very, very often involves weighing a range of possibilities—and comparing them to see which is the most desirable (according to a variety of considerations such as feasibility, costs, outcomes, etc.).  A “pure” judgment—innocent of all comparison—seems a rare beast indeed.

Because he operates at his insistently high level of abstraction, Frankfurt approaches his “authenticity” issue as a question of satisfaction with one’s life.  Basically, he is interested in this phenomenon: I am satisfied with my life even though I fully realize that others have much more money than me.  One measure of my satisfaction is that I would not go very far out of my way to acquire more money.  Hence the fact of economic inequality barely impinges on my sense that I have “enough” for my needs and desires.  This is a slightly different case from saying that my concept of my needs and desires has been formed apart from any comparison between my lot and the lot of others.  Here, instead, the point is that, even when comparing my lot to that of those better off than me, I do not conclude that my lot is bad.  I am satisfied.

For Frankfurt, my satisfaction shows that I have no fundamental moral objection to economic inequality.  Provided I have “enough,” I am not particularly morally outraged that others have even more.  I am not moved to act to change that inequality.

It seems to me that two possibilities arise here.  The first is that I do find the existence of large fortunes morally outrageous, but I don’t act because I don’t see a clear avenue of effective action to change that situation, although I do consistently vote for the political parties that are trying to combat economic inequality.  The second, which is Frankfurt’s point, is that my satisfaction shows I don’t find economic inequality “intrinsically” wrong.  I am most likely moved to object to it by seeing what harms have been done to others in order to accumulate such a large fortune—or I point to the wasted resources that are hoarded by the rich and could be used to help the poor.  Frankfurt, in other words, may be right that economic inequality is not “intrinsically” wrong, but only wrong in terms of the harms that it produces.  I think I would take the position that economic inequality is a “leading indicator” of various ills (poverty, exploitation, increasing precarity, the undermining of democratic governance, and the like)—and that the burden of proof lies with those who would claim that such inequality is harmless.  If this focus on produced harms means economic inequality is not an “intrinsic” wrong, so be it.

The other interesting consideration Frankfurt’s discussion brings to the fore is the absence of envy.  Conservatives, of course, are fond of reducing all concerns about economic inequality to envy.  And the mystery to be considered here (and to which Frankfurt points) is how, if I am aware that others have more than me, I am not consumed with envy, resentment, or a sense of abiding injustice (i.e., it’s not fair that he has more than me).  Certainly some people’s lives are blighted by exactly those feelings.  But others are content, are satisfied, in the way Frankfurt describes.  The comparison has no bite for them.  The difference is noted but not particularly resented—or, if resented, it still doesn’t reside at the center of their judgment of their own lives.  Maybe some kind of primitive narcissism is at work here, some sense that I really like being me and don’t really want to trade in “me” in order to be some other chap.  The deep repudiation of self required by envy may just be beyond the reach of 80% of us.  Just how prevalent is self-hatred?  How many would really want to trade their lot for another’s?

Pure speculation, of course.  But the point is not some fantasy about authenticity, about living in a world where I don’t shape my desires or self-judgments at least partially by comparing myself to others.  Rather, the fact of our constantly doing such comparing is here acknowledged—and the question is how we live contentedly even as we also recognize that we fall short of others in all kinds of ways.  He has better health, a more successful career, a sunnier disposition, more money, more friends, more acclaim.  How can I be content when I see all that he possesses that I do not?  That’s the mystery.  And I don’t think Frankfurt solves it—and I cannot explain it either.  His little book makes the mystery’s existence vivid for me.