Category: Institutions

Joseph North (One)

One of the oddities of Joseph North’s Literary Criticism: A Concise Political History (Harvard UP, 2017) is that it practices what it preaches against.  North believes that the historicist turn of the 1980s was a mistake, yet his own “history” is very precisely historicist: he aims to tie that “turn” in literary criticism to a larger narrative about neo-liberalism.

In fact, North subscribes to a fairly “vulgar,” fairly simplistic version of social determinism.  His periodization of literary criticism offers us “an early period between the wars in which the possibility of something like a break with liberalism, and a genuine move to radicalism, is mooted and then disarmed,” followed by “a period of relative continuity through the mid-century, with the two paradigms of ‘criticism’ and ‘scholarship’ both serving real superstructural functions within Keynesianism.”  And, finally, when the “Keynesian period enters into a crisis in the 1970s . . . we see the establishment of a new era: the unprecedentedly complete dominance of the ‘scholar’ model in the form of the historicist/contextualist paradigm.”  North concludes this quick survey of the “base” determinants of literary critical practice with a rhetorical question:  “If this congruence comes as something of a surprise, it is also quite unsurprising: what would one expect to find except that the history of the discipline marches more or less in step with the underlying transformations of the social order?” (17).

Perhaps I missed something, but I really didn’t catch where North made his assertions about the two periods past the 1930s stick.  How do both the “critical” and “scholarly” paradigms serve Keynesianism?  I can see where the growth of state-funded higher education after World War II is a feature of Keynesianism.  But surely the emerging model (in the 50s and 60s) of the “research university” has as much, if not more, to do with the Cold War than with Keynesian economic policy.

But when it gets down to specifics about different paradigms of practice within literary criticism, I fail to see the connection.  Yes, literary criticism got dragged into a “production” model (publish or perish) that fits it rather poorly, but why or how did different types of production, so long as they found their way into print, “count” until the more intense professionalization of the 1970s, when “peer-reviewed” became the only coin of the realm?  The new emphasis on “scholarship” (about which North is absolutely right) was central to that professionalization—and does seem directly connected to the end of the post-war economic expansion.  But that doesn’t explain why “professionalization” should take an historicist form, just as I am still puzzled as to how both forms—critical and scholarly—“serve” Keynesian needs prior to 1970.

However, my main goal in this post is not to try to parse out the base/superstructure relationship that North appears committed to.  I have another object in view: why does he avoid the fairly obvious question of how his own position (one he sees as foreshadowed, seen in a glass darkly, by Isobel Armstrong among others) reflects (is determined by?) our own historical moment?  What has changed in the base to make this questioning of the historicist paradigm possible now?  North goes idealistic at this point, discussing “intimations” that appear driven by dissatisfactions felt by particular practitioners.  The social order drops out of the picture.

Let’s go back to fundamentals.  I am tempted to paraphrase Ruskin: for every hundred people who talk of capitalism, one actually understands it.  I am guided by the sociologist Alvin Gouldner, in this case his short 1979 book The Rise of the New Class and the Future of the Intellectuals (Oxford UP), a book that has been a touchstone for me ever since I read it in the early 1980s.  Gouldner offers this definition of capital: anything that can command an income in the mixed market/state economy in which we in the West (at least) live.  Deceptively simple, but incredibly useful as a heuristic.  Money that you spend to buy food you then eat is not capital; that money does not bring a financial return.  It does bring a material return, but not a financial one.  Money that you (as a food distributor) spend to buy food that you will then sell to supermarkets is capital.  And the food you sell becomes a commodity—while the food you eat is not a commodity.  Capital often passes through the commodity form in order to garner its financial return.

But keep your eye on “what commands an income.”  For Marx, of course, the wage earner only has her “labor power” to secure an income.  And labor power is cheap because there is so much of it available.  So there is a big incentive for those who only have their labor power to discover a way to make it more scarce.  Enter the professions.  The professional relies on selling the fact that she possesses an expertise that others lack.  That expertise is her “value added.”  It justifies the larger income that she secures for herself.

Literary critics became English professors in the post-war expansion of the research university.  We can take William Empson and Kenneth Burke as examples of the pre-1950s literary critic, living by their wits, and writing in a dizzying array of modes (poetry, commissioned “reports,” reviews, books, polemics).  But the research university gave critics “a local habitation [the university] and a name” [English professors] and, “like the dyer’s hand, their nature was subdued.”  The steady progress toward professionalization had begun, with a huge leap forward when the “job market” tightened in the 1970s.

So what’s new in the 2010s?  The “discipline” itself is under fire.  “English,” as Gerald Graff and Peter Elbow both marveled years ago, was long the most required school subject, from kindergarten through the second year of college.  Its place in our educational institutions appeared secure, unassailable.  There would always be a need for English teachers.  That assumed truth no longer holds.  Internally, interdisciplinarity, writing across the curriculum, and other innovations threatened the hegemony of the discipline.  Externally, the right wing’s concerted attack on an ideologically suspect set of “tenured radicals” along with a more general discounting (even elimination) of the value assigned to being “cultured” meant the “requirement” of English was questioned.

North describes this shift in these terms:  “if the last three decades have taught literary studies anything about its relationship to the capitalist state, it is that the capitalist state does not want us around.  Under a Keynesian funding regime, it was possible to think that literary study was being supported because it served an important legitimating role in the maintenance of liberal capitalist institutions. . . . the dominant forms of legitimation are now elsewhere” (85).  True enough, although I would still like to see how that “legitimating role” worked prior to 1970; I would think institutional inertia rather than some effective or needed legitimating role was the most important factor.

In that context, the upsurge in the past five years (as the effects of 2008 on the landscape of higher education registered) of defenses of “the” discipline makes sense.  North—with his constant refrain of “rigor” and “method”—is working overtime to claim a distinctive identity for the discipline (accept no pale or inferior imitations!).  This man has a used discipline to sell you.  (It is unclear, to say the least, how a return to “criticism,” only this time with rigor, improves our standing in the eyes of the contemporary “capitalist state.”  Why should it want North’s re-formed discipline around any more than the current version?)

North appears blind to the fact that a discipline is a commodity within the institution that is higher education.  The commodity he has to sell has lost significant amounts of value over the past ten years within the institution, for reasons both external and internal.  A market correction?  Perhaps—but only perhaps because (as with all stock markets) we have no place to stand if we are trying to discover the “true” value of the commodity in question.

So what is North’s case that we should value the discipline of literary criticism more highly?  He doesn’t address the external factors at all, but resets the internal case by basing the distinctiveness of literary criticism on fairly traditional grounds: it has a distinct method (“close reading”) and a distinct object (“rich” literary and aesthetic texts).  To wit:  “what [do] we really mean by ‘close reading’ beyond paying attention to small units of any kind of text.  Our questions must then be of the order: what range of capabilities and sensitivities is the reading practice being used to cultivate?  What kinds of texts are most suited to cultivating those ranges?  Putting the issue naively, it seems to me that the method of close reading cannot serve as a justification for disciplinary literary study until the discipline is able to show that there is something about literary texts that make them especially rewarding training grounds for the kinds of aptitudes the discipline is claiming to train.  Here again the rejected category of the aesthetic proves indispensable, for of course literary and other aesthetic texts are particularly rich training grounds for all sorts of capabilities and sensitivities: aesthetic capabilities” (108-9; italics in original).

I will have more to say about “the method of close reading” in my next post.  For now, I just want to point out that it is absurd to think “close reading” is confined to literary studies–and North shows himself aware of that fact as he retreats fairly quickly from the “method” to the “objects” (texts).  Just about any practitioner in any field to whom the details matter is a close reader.  When my son became an archaeology major, my first thought was: “that will come to an end when he encounters pottery shards.”  Sure enough, he had a brilliant professor who lived and breathed pottery shards—and who, better yet, could make them talk.  My son realized he wasn’t enthralled enough with pottery shards to give them that kind of attention—and decided not to go to grad school.  Instead, he realized that where he cared about details to that extent, where no fine point was too trivial to matter, was the theater—and thus he became an actor and a director.  To someone who finds a particular field meaningful, all the details speak.  Ask any lawyer, lab scientist, or gardener.  They are all close readers.

This argument suggests, as a corollary, that all phenomena are “rich” to those inspired by them.  Great teachers are, among other things, those who can transmit that enthusiasm, that deep attentive interest, to others.  If training in attention to detail is what literary studies does, it has no corner on that market.  Immersion in just about any discipline will have similar effects.  And there is no reason to believe the literary critics’ objects are “richer” than the archaeologists’ pottery shards.

In short, if we go the “competencies” route, then it will be difficult to make the case that literary studies is a privileged route to close attention to detail—or even to that other chestnut, “critical thinking.” (To North’s credit, he doesn’t play the critical thinking card.)  Most disciplines are self-reflective; they engage in their own version of what John Rawls called “reflective equilibrium,” moving back and forth between received paradigms of analysis and their encounter with the objects of their study.

North is not, in fact, very invested in “saving” literary studies by arguing they belong in the university because they impart a certain set of skills or competencies that can’t be transmitted otherwise.  Instead, he places almost all his chips on the “aesthetic.”  What literary studies does, unlike all the rest, is initiate the student into “all sorts of capabilities and sensitivities” that can be categorized as “aesthetic capabilities.”

Now we are down to brass tacks.  What distinguishes “aesthetic capabilities” from other kinds of capabilities?  And why should we value those aesthetic capabilities?  On the first score, North has shockingly little to say—and he apologizes for this failure.  “I ought perhaps to read into the record, at points like this, how very merely gestural these gestures [toward the nature of the aesthetic] have been; the real task of developing claims of this sort is of course philosophical and methodological rather than historical, and thus has seemed to me to belong to a different book” (109; italics in original).

Which leaves us with his claims about what the aesthetic is good for.  Why should we value an aesthetic sensibility?  The short answer is that this sensibility gives us a place to stand in opposition to commercial culture.  He wants to place literary criticism at the service of radical politics—and heaps scorn throughout on liberals, neo-liberals, and misguided soi-disant radicals (i.e. the historicist critics who thought they were striking a blow against the empire).  I want to dive into this whole vein in his book in subsequent posts.  Readers of this blog will know I am deeply sympathetic to the focus on “sensibility” and North helps me think again about what appeals to (and the training of) sensibilities could entail.

But for now I will end by registering a certain amazement, or maybe just a perplexity.  How will it serve the discipline’s tenuous place in the contemporary university to announce that its value lies in the fact that it comes to bury you?  Usually rebels prefer to work in a more clandestine manner.  Which is to ask (more pointedly): how does assuming rebellious stances, in an endless game in which each player tries to position himself to the left of all the other players, bring palpable rewards within the discipline even as it endangers the position of the discipline in the larger struggle for resources, students, and respect within the contemporary university?  That’s a contradiction whose relation to the dominant neo-liberal order is beyond my abilities to parse.

Oliver Wendell Holmes: Violence and the Law

Holmes’s war experiences left him with the view that it all boils down to force, to the imposition of death.  “Holmes had little enthusiasm for the idea that human beings possessed any rights by virtue of being human.  Holmes always liked to provoke friends who he thought were being sentimentally idealistic by saying, ‘all society rests on the deaths of men,’ and frequently asserted that a ‘right’ was nothing more than ‘those things a given crowd will fight for—which vary from religion to the price of a glass of beer’” (369-70 in Budiansky’s biography of Holmes).

Holmes’ rejection of any “natural” theory of rights always returned to this assertion about death:

The jurists who believe in natural law seem to me to be in that naïve state of mind that accepts what has been familiar and accepted by them and their neighbors as something that must be accepted by all men everywhere.  The most fundamental of the supposed preexisting rights—the right to life—is sacrificed without a scruple not only in war, but whenever the interest of society, that is, of the predominant power in the community, is thought to demand it (376).

And he understood the law entirely through its direct relation to force.  “The law, as Holmes never tired of pointing out, is at its foundation ‘a statement of the circumstances in which the public force will be brought to bear upon men through the courts’” (435).  “Holmes’s point was that the law is what the law does; it is not a theoretical collection of axioms and moral principles, but a practical statement of where public force will be brought to bear, and that could only be derived from an examination of it in action” (244).  “[H]e would come to insist as a cornerstone of his legal philosophy that law is fundamentally a statement of society’s willingness to use force—‘every law means I will kill sooner than not have my way,’ as he put it[;] . . . he did not want the men who threw ideas around ever again to escape responsibility for where those ideas led.  It was the same reason he lost the enthusiastic belief he once had in the cause of women’s suffrage: political decision had better come from those who do the killing” (131).

Temperamentally, this is easy enough to characterize: the manly facing up to harsh facts, the unsentimental view of humans and their social institutions, the disgust with all sentimental claptrap.

Philosophically, it is less easy to describe.  That where there is power there must be force is clear enough.  But what Holmes seems to miss is that the law often serves as an attempt to restrict force.  Rights (in some instances) are legal statements about instances where the use of force is illegitimate.  Certainly (as Madison was already well aware and as countless commentators have noted since) there is something paradoxical about the state articulating limitations on its own powers.

Who is going to enforce those limitations?  The answer is the courts.  And the courts do not have an army.  That’s what the rule of law is about: the attempt to establish modi vivendi that are respected absent the direct application of force.  Holmes, of course, is arguing that the court’s decision will not be obeyed unless there is the implied (maybe not even implied, but fully explicit) use of state power to enforce that decision.  But his position, like all reductionisms, does not do justice to the complexities of human behavior and psychology.  The Loving decision of 1967, like earlier decisions on child labor laws, led to significant changes in everyday social practice that came into existence with little fanfare.  There are cases where the desire to live within the law is enough; there is an investment in living in a lawful society.  Its benefits are clear enough that its unpleasant consequences (in relation to my own beliefs and preferences) are a price I am willing to pay in order to enjoy those benefits.  Of course, there are also instances where force needs to be applied—as with the widespread flouting of the Brown decision.  My point is simply that the law’s relationship to force is more complex than Holmes allows.  The law is an alternative to violence in many instances, not its direct expression.

My position fits with my notion of the Constitution as an idealistic document, as a statement of the just society we wish to be.  The law is not, as Holmes would argue, completely divorced from questions of morality and justice (more claptrap!).  That relation is complex and often frustrating, but it does no good (either theoretically or practically) to just cut the tie in the name of clear-sighted realism.  Social institutions exist, in part, to protect citizens from force.  And, yes, that can mean in some instances that state force must be deployed in order to fend off other forces.  But it also means in some instances that the institutions serve to prevent any deployment of force at all.  The law affords, when it works, an escape from force, from the unpredictable, uncontrollable and deeply non-useful side effects of most uses of force.

In short, the manly man creates (at least as much as he discovers) the harsh world of struggle he insists is our basic lot.  True, Holmes did not create the war he marched off to at the age of twenty.  He experienced that war as forced upon him.  But he never got quite clear about who was responsible.  He was inclined to blame the abolitionists and their moral fervor, their uncompromising and intolerant absolutism.  He certainly had no patience for their self-righteous moralizing.  Still, blaming them had some obvious flaws, so he ended up converting the idea of struggle into a metaphysical assertion.  He, like Dewey and James, but in a different, more Herbert Spencer-like register, became a Darwinian, focused on the struggle for existence.  But he yoked Darwin to Hobbes; it is not the best adaptation to environmental conditions that assures survival, but the best application of force.  Of course, if the environmental condition is the war of all against all, then the adepts at violence will be the ones who survive.

All of this goes along with contempt for the losers in the battle.  Holmes had no patience with socialists or with proponents of racial justice.  The unwashed were driven by envy; “no rearrangement of property could address the real sources of social discontent” (396), those sources being the envy of the successful by the unsuccessful.  It’s a struggle; just get on with it and quit the whining—or expecting anyone to offer you a helping hand.  Holmes did accept that the law should level the field of struggle; he was (somewhat contradictorily) committed to the notion of a “fair” fight.  Where this ideal of “fairness” was to come from is never clear in his thought—or his legal opinions.  (He was, in fact, very wary of the broad use of the 14th Amendment’s language about “due process” and “equal protection of the laws.”  The broad use of the 14th Amendment was being pioneered by Louis Brandeis in Holmes’s later years on the Supreme Court.)  Budiansky is clear that Holmes is by no stretch of the term a “liberal.”

Holmes’s famous dissents from the more conservative decisions of the pre-New Deal Court are motivated by his ideal of fairness—and (connecting to earlier posts about what liberalism even means) that ideal is used against decisions that in American usage are understood as “conservative” even though those conservative decisions were based on the “liberal” laissez-faire idea that the state cannot interfere in business practices.  Holmes’s scathing dissents from the court’s overturning of child labor laws enacted by the states are usually argued on the grounds of consistency.  He says that state governments already regulate commerce (the sale of alcohol, for example), so it is absurd to say they can’t regulate other aspects of commercial activity.

Regulation, it would seem, is always about competing interests.  Since it is inevitable that there will be competing interests, society (through its regulatory laws) is best served by establishing a framework for the balancing of those interests.  Regulation is neither full permission nor full prohibition.  It strives to set conditions for a practice, conditions that take the various interests involved into account.  But Holmes never really worked out a theoretical account of regulation—another place where his reductionism fails him.  Yes, regulations must be enforced, but they are also always a compromise meant to mitigate the need to resort to force–and to prevent anyone from having a full, free hand in the social field characterized by a plurality of different interests and aims.

Economic Power/Political Power

A quick addition to my last post.

The desire is to somehow hold economic power and political power apart, using each as a counterbalance against the other.  To give the state absolute power over the economy is to ensure vast economic inequality.  Such has, generally speaking, been the lesson of history.  Powerful states of the pre-modern era presided over massively unequal societies.

But there is a modern exception.  Communism in Russia and Eastern Europe did produce fairly egalitarian societies; in that case, state power was used against the accumulation of wealth by the few.  There still existed a privileged elite of state officials, but there was also a general distribution of economic goods.  The problem, of course, was a combination of state tyranny with low productivity.  The paranoia that afflicts all tyrannies led to abuses that made life unbearable.

But (actually existing) communism did show that it is possible to use state (political) power to mitigate economic inequality.  Social democracy from 1945 to 1970 was also successful in this direction.  Under social democracy, the economy enjoys a relative autonomy, but is highly regulated by a state that interferes to prevent large inequities.

Where there is some kind of norm that political power (defined as the ability to direct the actions of state institutions) should not either 1) be a route to economic gain or 2) be working hand-in-glove with the economically powerful to secure their positions, the violations of that norm are called “corruption.”  The Marxist, of course, says that the state in all capitalist societies (the “bourgeois state”) is corrupt if that is our definition of corruption.  The state will always have been “captured” by the plutocrats.

What belies that Marxist analysis is that the plutocrats hate the state and do everything in their power (under the slogan of laissez-faire) to render the state a non-player in economic and social matters.  Capitalists do not want an effective state of any sort—either of the left, center, or right.  A strong state of any stripe is not going to let the economy go its own way, but will (instead) fight to gain control over it.  I think it fair to say that the fight between political and economic power mirrors the fight between civil and religious power in the early days of the nation-state.  The English king versus the clergy and the Pope.

The ordinary citizen, I am arguing, is better off when neither side can win this fight, when the two antagonists have enough standing to prevent one from having it all its way.

Our current mess comes in two forms, the worst of all worlds.  We have a weak state combined with massive corruption.  What powers the state still has are placed at the service of capital while politicians use office to get rich.  We have a regulatory apparatus that is almost completely dormant.  From the SEC to the IRS, from the FDA to the EPA, the agencies are not doing their jobs, but standing idly by while the corporations, financiers, and tax-evading rich do their thing.

The leftist response is to say that the whole set-up is unworkable.  We need a new social organization.  I have just finished reading Fredric Jameson’s An American Utopia (Verso, 2016).  Interestingly enough, Jameson also thinks we need “dual power” in order to move out of our current mess.  The subtitle of his book is “Dual Power and the Universal Army.”  More about Jameson in subsequent posts.

Here I just want to reiterate what I take to be a fundamental liberal tenet: all concentrations of power are to be avoided; a monopoly of power in any society is a disaster that mirrors the equal but opposite disaster of civil war.  Absolute sovereignty of the Hobbesian sort is not a solution; but the absence of all sovereignty is, as Hobbes saw, a formula for endless violence.  Jameson says the key political problem for any Utopia is “federalism.”  That seems right to me, if we take federalism to mean the distribution of power to various social locations.  Having a market that stands in some autonomy from the state is an example of federalism.  There are, of course, other forms that federalism can take.  All of those forms are ways of working against the concentration of power in one place.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than ever, in my most avaricious dreams, I could have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff.  (I am not extremely busy at the moment–which makes me feel even more guilty–because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count, quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more state support in the way of research grants from the federal government (and, in the case of state universities, from state governments as well); 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires, at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply were not the case when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league–without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition–in order to bring UR’s tuition in line with Northwestern etc.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less–and that UR had a better shot of getting students in that niche than in competing with Northwestern.  I, of course, lost the argument–not just in terms of what the university did, but also in terms of the effect the increase had on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants can bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research also admitted would worsen our bad financial plight.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers, that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world will get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books would still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where those who were productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there still would be good books written (and some bad ones as well) because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right brands,” star chefs and restaurants, and celebrities.  And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world, known by some but not all, but still getting a fair number of perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When compression struck, I was able (as you are forced to do in the academic game) to go get an “outside offer.”  I had the kind of research profile that would lead another school that was in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that those schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent scandal of cheating in the admissions process.  More importantly, it explains the on-going scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn abilities of elites to retain privileges.

The wider story, however, is about distinction–and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ‘nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.  Maybe I am hopelessly romanticizing the 1950s and 1960s–and maybe the middle middle class that I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that, there was little to no sense that Hamilton was different from Villanova, or Northwestern not the same as Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance etc., my wife and I together have approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement home costs to the tune of $1500 a month, and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.
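
For anyone who wants the arithmetic spelled out, here is a minimal sketch in Python.  It uses only the round figures from the paragraph above; the “implied other spending” line is back-solved from the year-end surplus, and is my own assumption rather than a number from our actual accounts.

```python
# Rough monthly budget, using only the round figures cited above.
take_home = 10_000       # combined monthly take-home pay
mortgage = 2_000         # mortgage plus escrow
medical = 500            # uncovered medical bills, averaged over the year
mother_support = 1_500   # supplement to my wife's mother's retirement home

committed = mortgage + medical + mother_support
discretionary = take_home - committed  # what remains each month: $6,000

# If roughly $10,000 accumulates by year's end, then "everything else"
# (gifts to the children, travel, charities, daily life) must absorb the
# rest -- a back-solved assumption, not a reported figure.
annual_surplus = 10_000
implied_other_spending = discretionary - annual_surplus / 12

print(f"Committed costs:        ${committed:,.0f}/month")
print(f"Left after commitments: ${discretionary:,.0f}/month")
print(f"Implied other spending: ${implied_other_spending:,.0f}/month")  # ~ $5,167
```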

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches.  I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.