Joseph North Three:  Sensibility, Community, Institution

Now we reach the point in my discussion of Joseph North’s Literary Criticism: A Concise Political History (Harvard UP, 2017) where I mostly agree with him.  I am simply going to take up some of his key terms and goals and inflect them somewhat differently.  I think what I have to say runs parallel to North, not ever much meeting him on his chosen ground, but not running athwart his formulations either.

Here are three of North’s descriptions of his project.

The first comes from a footnote on Raymond Williams and features North’s “scholarship/criticism” divide.  “Of course, none of this is to say that Williams was not deeply committed to ‘practice’ in other fields of endeavor; I merely mean to observe that he understood his disciplinary work in scholarly terms, as cultural analysis, cultural history, and cultural theory, rather than understanding it in critical terms as the systematic cultivation of sensibility.  Naturally the two are not finally distinguishable, and any powerful work of scholarship moves readers to try on different ranges of sensibility, etc. etc. But the ‘practice’ of scholarship, conceived of as cultural analysis, is necessarily neither direct nor systematic in this respect” (pg. 233, fn. 18).

The second is notable for its raising the issue of institutions.  “I only want to add that the problem facing the discipline is not an entirely new one, for in a broad sense it is much the same problem that the critical revolution of the 1920s managed to solve: the problem of creating a true paradigm for criticism—the problem of how to build an institution that would cultivate new, deeper forms of subjectivity and collectivity in a rigorous and repeatable way” (126-27).

In the third passage, he faults the work of D. A. Miller and Eve Sedgwick for its “lack of any prospect of a true paradigm for criticism—the lack of any hope of putting together a paradigmatic way to use the literary directly to intervene in the social order” (173).  Two pages earlier, he describes what I think he means by “direct” intervention.  “My point is simply that it really does make a difference to the character of the work produced by an intellectual formation when those involved feel strongly their responsibility to the needs of a fairly well-defined larger formation beyond the academy—a larger formation defined not simply by its ‘identity’ but by its character as a living movement—which is to say, really, a formation defined by its always limited but nevertheless real ability to define itself by determining, collectively, the trajectory of its own development” (171).

I can’t resist, of course, registering where I disagree with these statements.  I have already made clear my skepticism that there is a rigorous or systematic way to cultivate a sensibility.  I am also astounded that North does not recognize feminist literary criticism of the period from 1975 to 1995 as a paradigmatic case of academic work tied “to a fairly well-defined larger formation beyond the academy.”  And if Sedgwick’s relation to the gay liberation movement isn’t a similar instance, may the Lord help the rest of us.  And North’s repeated use of the words “true” and “really” (as much a tic as his use of the term “rigorous”) makes him appear more Stalinist than I think he really is.  Does he really intend to shut down the pluralism of intellectual work in favor of the one true path?  Re-education camps for the critics so that they get with the program—and are taught the methods of the new systematic pedagogy.  Surely, one of the delights of the aesthetic sensibility is its anarchism, its playfulness, its imaginative ingenuity, excesses, and unruliness.  I suspect that “systematic,” “repeatable,” and “direct” aesthetic education would prove counter-productive in many cases.  At least, I hope it would—with teachers and students both summoning enough gumption to rebel against the indoctrination.

Finally, I want to quibble with his description of “direct” intervention.  Work that stands in support of, or proves useful to, “larger” social movements is not direct—at least not directly political.  Here’s Judith Butler in a 1988 essay offering a straightforward description of political acts.  “Clearly, there are political acts which are deliberate and instrumental actions of political organization, resistant collective interventions with the broad aim of instating a more just set of social and political relations” (“Performative Acts and Gender Constitution,” Theatre Journal 523).  That cultivating an aesthetic sensibility might play a role in encouraging someone to join a social movement is not the direct political intervention that the movement attempts through quite different means and actions than what the critic does in the classroom or in her written work.  To confuse the two does no one any good—especially if it lets the teacher/critic deem herself sufficiently political as she advances her academic career.  The teacher/critic’s contribution is valuable, but it is also indirect.

Enough with the dissents.  I completely agree with North that “sensibility” is the crucial concept for the “hearts and minds” side of politics.  Cultivating a leftist sensibility is necessary, although not sufficient, to creating the kind of society we leftists want to live in.  The caveats here are familiar.  There is no guaranteed path from an aesthetic sensibility to a leftist politics.  [Let me also note that the practice of close reading is not the only road, or even the royal road, to acquiring an aesthetic sensibility.  Lots of people got there other ways, which casts doubt on the “systematic” and “rigorous” pedagogy, and on the fetishizing of close reading.]  For many aesthetes (Nietzsche, Yeats, and Pound among them), the vulgarity and bad taste of the masses drives them to anti-democratic, autocratic visions of strong, masterful leaders of the herd.  For others (Wordsworth and Coleridge, for example), reverence for genius promotes a kind of over-all piety that leads to a quietist respect for everything that is, investing tradition and the customary forms of life with a sacred aura it is impious to question or change.  (This is the aesthetic version—articulated by T. S. Eliot as well—of Edmund Burke’s conservatism.)

But the larger point—and now we are with David Hume and William James in contrast to Kant—is that our political ideas and principles (and our ethical ones as well) are the products of our sensibility.  It is the moral passions and our moral intuitions that generate our political commitments.  James (in the first lecture of Pragmatism) talks of “temperament”—and throughout his work (from The Principles of Psychology onwards) insists that our stated reasons for doing something are always secondary; there was the will to do something first, then the search for justifying reasons.  Indignation at the injustice of others (or of social arrangements) and shame at one’s own acts of selfishness are more secure grounds for conduct than a rationally derived categorical imperative.

James seems to think of temperament as innate, fated from birth.  North’s point is that education—a sentimental education—can shape sensibility.  I agree.  My daughter was in college at George Washington University when Osama bin Laden was killed.  Her classmates rushed over to the White House (three blocks away) to celebrate when the news broke.  She told my wife and me that she didn’t join the celebration.  It just felt wrong to her to dance in the streets about killing someone.  Her parents’ reaction was that her Friends School education had just proved itself.

Sensibility is akin to taste.  The leftist today finds it distasteful, an offense to her sense of how things should be, to live in Trump’s America.  I will use my next post to describe the sensibility of the right in that America.  But for the left, there is outrage at the caging of immigrant children, and at the bigotry that extends to non-whites, women, non-Christians and beyond.  Fundamentally, it is the shame of living in such a needlessly cruel society, with its thousands of homeless and millions of uninsured.

I don’t know exactly how a specifically “aesthetic” sensibility lines up with this leftist sensibility.  And as I have said, there is certainly no sure path from one to the other.  But I am willing to believe (maybe because it is true at least for myself) that the aesthetic stands at odds with commercial culture, attending to values and experiences that are “discounted” (in every sense of that word) in the dominant culture.  Being placed at odds, in a spot where the taken-for-granteds of one’s society are made somewhat less self-evident, has its effect.  If what one has come to appreciate, even to love, is scorned by others, new modes of reckoning (again in every sense of the word), and new allegiances (structure of feeling) may beckon.

Here is where Hume is preferable to James.  Hume (Dewey and Mead follow Hume here in a way the more individualistic James does not) portrays sensibility as shaped through our communal relations and as reinforced by those same relations.  In other words, even non-conformity is social.  It is extremely difficult, perhaps impossible (akin to the impossibility of a “private language” in the Wittgenstein argument), to be a solitary “enemy of the people.”  There must be resources—from the tradition, from received works of art, criticism, and cultural analysis, from a cohort—on which one can draw to sustain the feeling that something is wrong in the dominant order.

Education, in other words, can play a major role in shaping sensibility—and the community the school offers is as crucial as the educational content.  Young people discover the courage of their convictions when they find others who feel the same way, who have the same inchoate intuitions that school (in both its formal and informal interactions) is helping them to articulate.  The encouragement of teachers (yes, you are on the right path; keep going; keep probing; keep questioning; trust your instincts) and of peers (those famous all-night bull sessions after our student finds her sympaticos) matters just as much.

Communities are, famously, ephemeral.  We can idealize them (as arguably Hannah Arendt does in her definition of “the political”—a definition that seems to exclude everything except the excited talk among equals from the political sphere).  Societies are corrupt, impersonal, hierarchical, mechanical, not face-to-face.  Communities are “known” (as Raymond Williams phrased it), informal and intimate.  A familiar narrative of “modernity” sees communities as overwhelmed by society, by the depredations of capitalism, war, and the ever-expanding state.  (Tönnies)

This romanticism does not serve the left well.  Communities are not sustainable in the absence of institutions.  And they certainly cannot withstand the pressures of power, of the large forces of capitalism and the state, without institutional homes.  There must (quite literally) be places for the community to gather and resources for its maintenance.  Make no mistake about it: neo-liberalism has deliberately and methodically set out to destroy the institutions that have sustained the left (while building its own infrastructure—chambers of commerce, business lobbying groups, the infamous think tanks—that provides careers for the cadre of right-wing hacks).  Unions, of course, first and foremost.  When did we last have a union leader who was recognized as a spokesperson for America’s workers?  But there has also been the absorption of the “associations” that Tocqueville famously saw as the hallmark of American democracy into the services of the state.  Outsourced welfare functions are now the responsibility of clinics first created by the feminist and gay liberation movements to serve the needs of their communities.  Those organizations secured financial stability at the price of no longer being experienced as embedded members of the community; they are now purveyors of services begrudgingly offered by a bureaucratic state that always puts obstacles in the way of accessing those benefits.

North is right to see that the neoliberal attack on institutions extends to the university.  The aesthetic sensibility (since at least 1960) has been bunkered in the university, having failed to sustain the few other institutional structures (little magazines, the literary reviews it inherited from the 19th century) that existed in the early 20th century.  Reading groups are well and good (they are thriving and I hardly want to belittle them), but have no institutional weight or home.  Humanities departments are about it, except for the arts scene (again, mostly woefully under-institutionalized) in some major cities.

So there is every reason to fight hard to keep the humanities as an integral part of the university.  I personally don’t think taking the disciplinary route is the way to fight this fight—but maybe I am wrong.  Maybe only claims to disciplinary specificity and expertise can gain us a spot.

More crucially, I think North is absolutely right to believe that our efforts as critics are doomed to political ineffectiveness if not tied to vibrant social movements.

[For the record, here is where I think North’s criticism/scholarship divide really doesn’t work.  Efforts along both lines can prove supportive or not to social movements.  It is the content, not the form, of the work that matters.  And I also think work that is apolitical is perfectly OK.  It is tyrannical—a mirror image of the absurd regimes of “productivity” that afflict both capitalism and the research university—to insist that everything one does contribute to the political cause.  Life is made worth living, in many instances, by things that are unproductive, are useless.]

The problem of the contemporary left is, precisely, the absence of such social movements.  The civil rights movement had the black churches, and then the proliferation of organizations: SNCC, CORE, SCLC, along with the venerable NAACP, and A. Philip Randolph’s labor organization.  It sustained itself over a very long time.  The feminist movement had its clinics, and NOW.  The anti-war movement had A. J. Muste and David Dellinger, long-time veterans of peace groups.  The Democratic Party is obviously no good except when it is pushed by groups formed outside the party, groups that act on their own without taking instructions from the party.  The Bernie Sanders insurrection will only reshape the Democratic Party when it establishes itself as an independent power outside the party—with which the party then needs to come to terms.

The trouble with Black Lives Matter, #MeToo, and Occupy is that they all have resisted or failed (I don’t know which) to establish any kind of institutional base.  Each of these movements has identified a mass of people who share certain experiences and a certain sensibility.  They have, in other words, called into presence (albeit mostly virtually—except for Occupy) a community.  That discovery of other like souls is comforting, reassuring, even empowering.  I am not alone.  But to be politically effective, these movements need legs.  They need to be sustained, in it for the long haul.  And that requires institutions: money, functionaries, offices, continuing pressure at the sites deemed appropriate (for strategic reasons) for intervention.

In short (and now I am the one who is going to sound like a thirties Marxist), the left needs to make the long march through the institutions—a march begun by creating some institutions of its own on the outside to prepare it for the infiltration of the institutions on the inside.  That’s what the right has been doing for the past forty years.  While the left was marching in the street on the weekends with their friends, the right was getting elected to school boards.  Protest marches feel great, but are ephemeral, easily ignored.  Our society’s shift rightwards has come through a million incremental changes wrought on the ground by somebody in an office somewhere, by right-wing hacks and business lobbyists writing legislation, by regulators letting oversight lapse, by prosecutors and courts looking the other way at white-collar and corporate crime.  During the Obama years, the left paid almost no attention to state-level races, ceding those legislatures to the right almost by default—with grievous consequences (not the least of which is a weak bench, unable to provide any potential national candidates between the ages of 45 and 65).

We need leftist social movements that pay attention to the minutiae, that are not addicted to the large dramatic gesture, that don’t engage in the magical thinking that a piece of legislation or a court decision solves a problem once and for all.  It’s in the implementation, in the daily practices of state, corporate, educational, and regulatory institutions (as Foucault should have taught us), that change takes place—often in silent and difficult-to-perceive ways.  That’s the room where it happens—and the left has all too often failed to even try to get into the room.

Joseph North (One)

One of the oddities of Joseph North’s Literary Criticism: A Concise Political History (Harvard UP, 2017) is that it practices what it preaches against.  North believes that the historicist turn of the 1980s was a mistake, yet his own “history” is very precisely historicist: he aims to tie that “turn” in literary criticism to a larger narrative about neo-liberalism.

In fact, North subscribes to a fairly “vulgar,” fairly simplistic version of social determinism.  His periodization of literary criticism offers us “an early period between the wars in which the possibility of something like a break with liberalism, and a genuine move to radicalism, is mooted and then disarmed,” followed by “a period of relative continuity through the mid-century, with the two paradigms of ‘criticism’ and ‘scholarship’ both serving real superstructural functions within Keynesianism.”  And, finally, when the “Keynesian period enters into a crisis in the 1970s . . . we see the establishment of a new era: the unprecedentedly complete dominance of the ‘scholar’ model in the form of the historicist/contextualist paradigm.”  North concludes this quick survey of the “base” determinants of literary critical practice with a rhetorical question:  “If this congruence comes as something of a surprise, it is also quite unsurprising: what would one expect to find except that the history of the discipline marches more or less in step with the underlying transformations of the social order?” (17).

Perhaps I missed something, but I really didn’t catch where North made his assertions about the two periods past the 1930s stick.  How do both the “critical” and “scholarly” paradigms serve Keynesianism?  I can see where the growth of state-funded higher education after World War II is a feature of Keynesianism.  But surely the emerging model (in the 50s and 60s) of the “research university” has as much, if not more, to do with the Cold War than with Keynesian economic policy.

But when it gets down to specifics about different paradigms of practice within literary criticism, I fail to see the connection.  Yes, literary criticism got dragged into a “production” model (publish or perish) that fits it rather poorly, but why or how did different types of production, so long as they found their way into print, “count” until the more intense professionalization of the 1970s, when “peer-reviewed” became the only coin of the realm?  The new emphasis on “scholarship” (about which North is absolutely right) was central to that professionalization—and does seem directly connected to the end of the post-war economic expansion.  But that doesn’t explain why “professionalization” should take an historicist form, just as I am still puzzled as to how both forms—critical and scholarly—“serve” Keynesian needs prior to 1970.

However, my main goal in this post is not to try to parse out the base/superstructure relationship that North appears committed to.  I have another object in view: why does he avoid the fairly obvious question of how his own position (one he sees as foreshadowed, seen in a glass darkly, by Isobel Armstrong among others) reflects (is determined by?) our own historical moment?  What has changed in the base to make this questioning of the historicist paradigm possible now?  North goes idealistic at this point, discussing “intimations” that appear driven by dissatisfactions felt by particular practitioners.  The social order drops out of the picture.

Let’s go back to fundamentals.  I am tempted to paraphrase Ruskin: for every hundred people who talk of capitalism, one actually understands it.  I am guided by the sociologist Alvin Gouldner, in this case his short 1979 book The Rise of the New Class and the Future of the Intellectuals (Oxford UP), a book that has been a touchstone for me ever since I read it in the early 1980s.  Gouldner offers this definition of capital: anything that can command an income in the mixed market/state economy in which we in the West (at least) live.  Deceptively simple, but incredibly useful as a heuristic.  Money that you spend to buy food you then eat is not capital; that money does not bring a financial return.  It does bring a material return, but not a financial one.  Money that you (as a food distributor) spend to buy food that you will then sell to supermarkets is capital.  And the food you sell becomes a commodity—while the food you eat is not a commodity.  Capital often passes through the commodity form in order to garner its financial return.

But keep your eye on “what commands an income.”  For Marx, of course, the wage earner only has her “labor power” to secure an income.  And labor power is cheap because there is so much of it available.  So there is a big incentive for those who only have their labor power to discover a way to make it more scarce.  Enter the professions.  The professional relies on selling the fact that she possesses an expertise that others lack.  That expertise is her “value added.”  It justifies the larger income that she secures for herself.

Literary critics became English professors in the post-war expansion of the research university.  We can take William Empson and Kenneth Burke as examples of the pre-1950s literary critic, living by their wits, and writing in a dizzying array of modes (poetry, commissioned “reports,” reviews, books, polemics).  But the research university gave critics “a local habitation [the university] and a name [English professors]” and, “like the dyer’s hand, their nature was subdued.”  The steady progress toward professionalization had begun, with a huge leap forward when the “job market” tightened in the 1970s.

So what’s new in the 2010s?  The “discipline” itself is under fire.  “English,” as Gerald Graff and Peter Elbow both marveled years ago, was long the most required school subject, from kindergarten through the second year of college.  Its place in our educational institutions appeared secure, unassailable.  There would always be a need for English teachers.  That assumed truth no longer holds.  Internally, interdisciplinarity, writing across the curriculum, and other innovations threatened the hegemony of the discipline.  Externally, the right wing’s concerted attack on an ideologically suspect set of “tenured radicals,” along with a more general discounting (even elimination) of the value assigned to being “cultured,” meant the “requirement” of English was questioned.

North describes this shift in these terms:  “if the last three decades have taught literary studies anything about its relationship to the capitalist state, it is that the capitalist state does not want us around.  Under a Keynesian funding regime, it was possible to think that literary study was being supported because it served an important legitimating role in the maintenance of liberal capitalist institutions. . . . the dominant forms of legitimation are now elsewhere” (85).  True enough, although I would still like to see how that “legitimating role” worked prior to 1970; I would think institutional inertia rather than some effective or needed legitimating role was the most important factor.

In that context, the upsurge in the past five years (as the effects of 2008 on the landscape of higher education registered) of defenses of “the” discipline makes sense.  North—with his constant refrain of “rigor” and “method”—is working overtime to claim a distinctive identity for the discipline (accept no pale or inferior imitations!).  This man has a used discipline to sell you.  (It is unclear, to say the least, how a return to “criticism,” only this time with rigor, improves our standing in the eyes of the contemporary “capitalist state.”  Why should it want North’s re-formed discipline around any more than the current version?)

North appears blind to the fact that a discipline is a commodity within the institution that is higher education.  The commodity he has to sell has lost significant amounts of value over the past ten years within the institution, for reasons both external and internal.  A market correction?  Perhaps—but only perhaps, because (as with all stock markets) we have no place to stand if we are trying to discover the “true” value of the commodity in question.

So what is North’s case that we should value the discipline of literary criticism more highly?  He doesn’t address the external factors at all, but resets the internal case by basing the distinctiveness of literary criticism on fairly traditional grounds: it has a distinct method (“close reading”) and a distinct object (“rich” literary and aesthetic texts).  To wit:  “what [do] we really mean by ‘close reading’ beyond paying attention to small units of any kind of text.  Our questions must then be of the order: what range of capabilities and sensitivities is the reading practice being used to cultivate?  What kinds of texts are most suited to cultivating those ranges?  Putting the issue naively, it seems to me that the method of close reading cannot serve as a justification for disciplinary literary study until the discipline is able to show that there is something about literary texts that make them especially rewarding training grounds for the kinds of aptitudes the discipline is claiming to train.  Here again the rejected category of the aesthetic proves indispensable, for of course literary and other aesthetic texts are particularly rich training grounds for all sorts of capabilities and sensitivities: aesthetic capabilities” (108-9; italics in original).

I will have more to say about “the method of close reading” in my next post.  For now, I just want to point out that it is absurd to think “close reading” is confined to literary studies—and North shows himself aware of that fact as he retreats fairly quickly from the “method” to the “objects” (texts).  Just about any practitioner in any field to whom the details matter is a close reader.  When my son became an archaeology major, my first thought was: “that will come to an end when he encounters pottery shards.”  Sure enough, he had a brilliant professor who lived and breathed pottery shards—and who, even better yet, could make them talk.  My son realized he wasn’t enthralled enough with pottery shards to give them that kind of attention—and decided not to go to grad school.  Instead, he realized that where he cared about details to that extent, where no fine point was trivial enough to ignore, was the theater—and thus he became an actor and a director.  To someone who finds a particular field meaningful, all the details speak.  Ask any lawyer, lab scientist, or gardener.  They are all close readers.

This argument I have just made suggests, as a corollary, that all phenomena are “rich” to those inspired by them.  Great teachers are, among other things, those who can transmit that enthusiasm, that deep attentive interest, to others.  If training in attention to detail is what literary studies does, it has no corner on that market.  Immersion in just about any discipline will have similar effects.  And there is no reason to believe the literary critics’ objects are “richer” than the archaeologists’ pottery shards.

In short, if we go the “competencies” route, then it will be difficult to make the case that literary studies is a privileged route to close attention to detail—or even to that other chestnut, “critical thinking.” (To North’s credit, he doesn’t play the critical thinking card.)  Most disciplines are self-reflective; they engage in their own version of what John Rawls called “reflective equilibrium,” moving back and forth between received paradigms of analysis and their encounter with the objects of their study.

North is not, in fact, very invested in “saving” literary studies by arguing they belong in the university because they impart a certain set of skills or competencies that can’t be transmitted otherwise.  Instead, he places almost all his chips on the “aesthetic.”  What literary studies does, unlike all the rest, is initiate the student into “all sorts of capabilities and sensitivities” that can be categorized as “aesthetic capabilities.”

Now we are down to brass tacks.  What we need to know is what distinguishes “aesthetic capabilities” from other kinds of capabilities.  And we need to know why we should value those aesthetic capabilities.  On the first score, North has shockingly little to say—and he apologizes for this failure.  “I ought perhaps to read into the record, at points like this, how very merely gestural these gestures [toward the nature of the aesthetic] have been; the real task of developing claims of this sort is of course philosophical and methodological rather than historical, and thus has seemed to me to belong to a different book” (109; italics in original).

Which leaves us with his claims about what the aesthetic is good for.  Why should we value an aesthetic sensibility?  The short answer is that this sensibility gives us a place to stand in opposition to commercial culture.  He wants to place literary criticism at the service of radical politics—and heaps scorn throughout on liberals, neo-liberals, and misguided soi-disant radicals (i.e. the historicist critics who thought they were striking a blow against the empire).  I want to dive into this whole vein in his book in subsequent posts.  Readers of this blog will know I am deeply sympathetic to the focus on “sensibility” and North helps me think again about what appeals to (and the training of) sensibilities could entail.

But for now I will end with registering a certain amazement, or maybe it is just a perplexity.  How will it serve the discipline’s tenuous place in the contemporary university to announce that its value lies in the fact that it comes to bury you?  Usually rebels prefer to work in a more clandestine manner.  Which is to ask (more pointedly): how does assuming rebellious stances, in an endless game in which each player tries to position himself to the left of all the other players, bring palpable rewards within the discipline even as it endangers the position of the discipline in the larger struggle for resources, students, and respect within the contemporary university? That’s a contradiction whose relation to the dominant neo-liberal order is beyond my abilities to parse.

Oliver Wendell Holmes: Violence and the Law

Holmes’s war experiences left him with the view that it all boils down to force, to the imposition of death.  “Holmes had little enthusiasm for the idea that human beings possessed any rights by virtue of being human.  Holmes always liked to provoke friends who he thought were being sentimentally idealistic by saying, ‘all society rests on the deaths of men,’ and frequently asserted that a ‘right’ was nothing more than ‘those things a given crowd will fight for—which vary from religion to the price of a glass of beer’” (369-70 in Budiansky’s biography of Holmes).

Holmes’s rejection of any “natural” theory of rights always returned to this assertion about death:

The jurists who believe in natural law seem to me to be in that naïve state of mind that accepts what has been familiar and accepted by them and their neighbors as something that must be accepted by all men everywhere.  The most fundamental of the supposed preexisting rights—the right to life—is sacrificed without a scruple not only in war, but whenever the interest of society, that is, of the predominant power in the community, is thought to demand it (376).

And he understood the law entirely through its direct relation to force.  “The law, as Holmes never tired of pointing out, is at its foundation ‘a statement of the circumstances in which the public force will be brought to bear upon men through the courts’” (435).  “Holmes’s point was that the law is what the law does; it is not a theoretical collection of axioms and moral principles, but a practical statement of where public force will be brought to bear, and that could only be derived from an examination of it in action” (244).  “[H]e would come to insist as a cornerstone of his legal philosophy that law is fundamentally a statement of society’s willingness to use force—‘every law means I will kill sooner than not have my way,’ as he put it[;] . . . he did not want the men who threw ideas around ever again to escape responsibility for where those ideas led.  It was the same reason he lost the enthusiastic belief he once had in the cause of women’s suffrage: political decision had better come from those who do the killing” (131).

Temperamentally, this is easy enough to characterize: the manly facing up to harsh facts, the unsentimental view of humans and their social institutions, the disgust with all sentimental claptrap.

Philosophically, it is less easy to describe.  That where there is power there must be force is clear enough.  But what Holmes seems to miss is that the law often serves as an attempt to restrict force.  Rights (in some instances) are legal statements about instances where the use of force is illegitimate.  Certainly (as Madison was already well aware and as countless commentators have noted since) there is something paradoxical about the state articulating limitations on its own powers.

Who is going to enforce those limitations?  The answer is the courts.  And the courts do not have an army.  That’s what the rule of law is about: the attempt to establish modi vivendi that are respected absent the direct application of force.  Holmes, of course, is arguing that the court’s decision will not be obeyed unless there is the implied (maybe not even implied, but fully explicit) use of state power to enforce that decision.  But his position, like all reductionisms, does not do justice to the complexities of human behavior and psychology.  The Loving decision of 1967, like earlier decisions on child labor laws, led to significant changes in everyday social practice that came into existence with little fanfare.  There are cases where the desire to live within the law is enough; there is an investment in living in a lawful society.  Its benefits are clear enough that its unpleasant consequences (in relation to my own beliefs and preferences) are a price I am willing to pay in order to enjoy those benefits.  Of course, there are also instances where force needs to be applied—as with the widespread flouting of the Brown decision.  My point is simply that the law’s relationship to force is more complex than Holmes allows.  The law is an alternative to violence in many instances, not its direct expression.

My position fits with my notion of the Constitution as an idealistic document, as a statement of the just society we wish to be.  The law is not, as Holmes would argue, completely divorced from questions of morality and justice (more claptrap!).  That relation is complex and often frustrating, but it does no good (either theoretically or practically) to just cut the tie in the name of clear-sighted realism.  Social institutions exist, in part, to protect citizens from force.  And, yes, that can mean in some instances that state force must be deployed in order to fend off other forces.  But it also means in some instances that the institutions serve to prevent any deployment of force at all.  The law affords, when it works, an escape from force, from the unpredictable, uncontrollable, and deeply non-useful side effects of most uses of force.

In short, the manly man creates (at least as much as he discovers) the harsh world of struggle he insists is our basic lot.  True, Holmes did not create the war he marched off to at the age of twenty.  He experienced that war as forced upon him.  But he never got quite clear about who was responsible.  He was inclined to blame the abolitionists and their moral fervor, their uncompromising and intolerant absolutism.  He certainly had no patience for their self-righteous moralizing.  Still, blaming them had some obvious flaws, so he ended up converting the idea of struggle into a metaphysical assertion.  He, like Dewey and James, but in a different, more Herbert Spencer-like register, became a Darwinian, focused on the struggle for existence.  But he yoked Darwin to Hobbes; it is not the best adaptation to environmental conditions that assures survival, but the best application of force.  Of course, if the environmental condition is the war of all against all, then the adepts at violence will be the ones who survive.

All of this goes along with contempt for the losers in the battle.  Holmes had no patience with socialists or with proponents of racial justice.  The unwashed were driven by envy; “no rearrangement of property could address the real sources of social discontent” (396), those sources being the envy of the successful by the unsuccessful.  It’s a struggle; just get on with it and quit the whining—or expecting anyone to offer you a helping hand.  Holmes did accept that the law should level the field of struggle; he was (somewhat contradictorily) committed to the notion of a “fair” fight.  Where this ideal of “fairness” was to come from is never clear in his thought—or his legal opinions.  (He was, in fact, very wary of the broad use of the 14th Amendment’s language about “due process” and “equal protection of the laws.”  The broad use of the 14th Amendment was being pioneered by Louis Brandeis in Holmes’s later years on the Supreme Court.)  Budiansky is clear that Holmes is by no stretch of the term a “liberal.”

Holmes’s famous dissents from the more conservative decisions of the pre-New Deal Court are motivated by his ideal of fairness—and (connecting to earlier posts about what liberalism even means) that ideal is used against decisions that in American usage are understood as “conservative” even though those conservative decisions were based on the “liberal” laissez-faire idea that the state cannot interfere in business practices.  Holmes’s scathing dissents from the court’s overturning of child labor laws enacted by the states are usually argued on the grounds of consistency.  He says that state governments already regulate commerce (for example, of alcohol), so it is absurd to say they can’t regulate other aspects of commercial activities.

Regulation, it would seem, is always about competing interests.  Since it is inevitable that there will be competing interests, society (through its regulatory laws) is best served by establishing a framework for the balancing of those interests.  Regulation is neither full permission nor full prohibition.  It strives to set conditions for a practice, conditions that take the various interests involved into account.  But Holmes never really worked out a theoretical account of regulation—another place where his reductionism fails him.  Yes, regulations must be enforced, but they are also always a compromise meant to mitigate the need to resort to force—and to prevent anyone from having a full, free hand in the social field characterized by a plurality of different interests and aims.

Economic Power/Political Power

A quick addition to my last post.

The desire is to somehow hold economic power and political power apart, using each as a counterbalance against the other.  To give the state absolute power over the economy is to ensure vast economic inequality.  Such has, generally speaking, been the lesson of history.  Powerful states of the pre-modern era presided over massively unequal societies.

But there is a modern exception.  Communism in Russia and Eastern Europe did produce fairly egalitarian societies; in that case, state power was used against the accumulation of wealth by the few.  There still existed a privileged elite of state officials, but there was also a general distribution of economic goods.  The problem, of course, was a combination of state tyranny with low productivity.  The paranoia that afflicts all tyrannies led to abuses that made life unbearable.

But (actually existing) communism did show that it is possible to use state (political) power to mitigate economic inequality.  Social democracy from 1945 to 1970 was also successful in this direction.  Under social democracy, the economy enjoys a relative autonomy, but is highly regulated by a state that interferes to prevent large inequities.

Where there is some kind of norm that political power (defined as the ability to direct the actions of state institutions) should not either 1) be a route to economic gain or 2) be working hand-in-glove with the economically powerful to secure their positions, the violations of that norm are called “corruption.”  The Marxist, of course, says that the state in all capitalist societies (the “bourgeois state”) is corrupt if that is our definition of corruption.  The state will always have been “captured” by the plutocrats.

What belies that Marxist analysis is that the plutocrats hate the state and do everything in their power (under the slogan of laissez-faire) to render the state a non-player in economic and social matters.  Capitalists do not want an effective state of any sort—left, center, or right.  A strong state of any stripe is not going to let the economy go its own way, but will (instead) fight to gain control over it.  I think it fair to say that the fight between political and economic power mirrors the fight between civil and religious power in the early days of the nation-state.  The English king versus the clergy and the Pope.

The ordinary citizen, I am arguing, is better off when neither side can win this fight, when the two antagonists have enough standing to prevent one from having it all its way.

Our current mess comes in two forms, the worst of all worlds.  We have a weak state combined with massive corruption.  What powers the state still has are placed at the service of capital while politicians use office to get rich.  We have a regulatory apparatus that is almost completely dormant.  From the SEC to the IRS, from the FDA to the EPA, the agencies are not doing their jobs, but standing idly by while the corporations, financiers, and tax-evading rich do their thing.

The leftist response is to say that the whole set-up is unworkable.  We need a new social organization.  I have just finished reading Fredric Jameson’s An American Utopia (Verso, 2016).  Interestingly enough, Jameson also thinks we need “dual power” in order to move out of our current mess.  The subtitle of his book is “Dual Power and the Universal Army.”  More about Jameson in subsequent posts.

Here I just want to reiterate what I take to be a fundamental liberal tenet: all concentrations of power are to be avoided; monopolies of power in any society are a disaster that mirror the equal but opposite disaster of civil war.  Absolute sovereignty of the Hobbesian sort is not a solution; but the absence of all sovereignty is, as Hobbes saw, a formula for endless violence.  Jameson says the key political problem for any Utopia is “federalism.”  That seems right to me, if we take federalism to mean the distribution of power to various social locations.  Having a market that stands in some autonomy from the state is an example of federalism.  There are, of course, other forms that federalism can take.  All of those forms are ways of working against the concentration of power in one place.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than ever, in my most avaricious dreams, I could have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff.  (I am not extremely busy at the moment—which makes me feel even more guilty—because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count, quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more state support in the way of research grants from the federal government (and, in the case of state universities, from state governments); 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long-extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires, at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply were not the case when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league—without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition—in order to bring UR’s tuition in line with Northwestern’s and the rest.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less—and that UR had a better shot at getting students in that niche than in competing with Northwestern.  I, of course, lost the argument—not just in terms of what the university did, but also in terms of its effect on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants can bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research also admitted would worsen our bad financial plight.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world will get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books would still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where those who were productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there still would be good books written (and some bad ones as well), because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right” brands, star chefs and restaurants, and celebrities.  And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world: known by some but not all, still getting a fair number of the perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When salary compression struck, I was able (as you are forced to do in the academic game) to go out and get an “outside offer.”  I had the kind of research profile that would lead another school in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up, since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that those schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent scandal of cheating in the admissions process.  More importantly, it explains the ongoing scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn abilities of elites to retain their privileges.

The wider story, however, is about distinction—and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ‘nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.  Maybe I am hopelessly romanticizing the 1950s and 1960s—and maybe the middle middle class that I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that there was little to no sense that Hamilton was different from Villanova, or that Northwestern was not the same as Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance, etc., my wife and I together have had approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement home costs to the tune of $1500 a month, and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.
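
Just to make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (in Python, purely illustrative): it takes the figures above at face value and solves for the one number I did not itemize—what the gifts to our children, the charitable donations, and everyday spending must average each month for roughly $10,000 to be left over at year’s end.

```python
# Back-of-the-envelope check of the household figures above.
# Gifts and charitable donations are not itemized in the post,
# so this solves for them (plus all other spending) as a remainder.

monthly_take_home = 10_000  # stated take-home pay per month
mortgage_escrow = 2_000     # stated monthly mortgage plus escrow
medical = 500               # stated uncovered medical bills, averaged monthly
retirement_home = 1_500     # stated supplement to my wife's mother's costs
annual_surplus = 10_000     # stated amount left over at the end of each year

monthly_surplus = annual_surplus / 12
itemized = mortgage_escrow + medical + retirement_home
everything_else = monthly_take_home - itemized - monthly_surplus

print(f"Itemized fixed outlays: ${itemized:,.0f} a month")
print(f"Implied other spending (gifts, charity, daily life): ${everything_else:,.0f} a month")
# -> roughly $5,167 a month of unitemized spending, and still the surplus accumulates
```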

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches.  I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.

The Future of the Humanities

For some time now, I have had a question that I use as a litmus test when speaking with professors of English.  Do you think there will be professors of Victorian literature on American campuses fifty years from now?  There is no discernible pattern, as far as I can tell, among the responses I get, which run the full gamut from confident pronouncements that “of course there will be” to sharp laughter accompanying the assertion “I give them twenty years to go extinct.”  (For the record: UNC’s English department currently has five medievalists, seven Renaissance scholars, and six professors teaching Romantic and Victorian literature—that is, if I am allowed to count myself a Victorianist, as I sometimes was.)

I have gone through four crises of the humanities in my lifetime, each coinciding with a serious economic downturn (1974, 1981, 1992, and 2008).  The 1981 slump cost me my job when the Humanities Department in which I taught was abolished.  The collapse of the dot-com boom did not generate its corresponding “death of the humanities” moment because, apparently, 9/11 showed us we needed poets.  They were trotted out nationwide as America tried to come to terms with its grief.

Still, the crisis feels different this time.  Of course, I may just be old and tired and discouraged.  Not “may be.”  Certainly am.  But I think there are also real differences this time around—differences that point to a different future for the humanities.

In part, I am following up my posts about curriculum revision at UNC.  The coverage model is on the wane.  The notion that general education students should gain a familiarity with the whole of English literature is certainly moving toward extinction.  Classes are going to be more focused, more oriented to solving defined problems and imparting designated competencies.  Methods over content.

But, paradoxically, the decline of the professors of Victorian literature is linked to more coverage, not less.  The History Department can be our guide here.  At one time, History departments had two or three specialists in French history (roughly divided by centuries), three or four in English history, along with others who might specialize in Germany or Spain or Italy.  That all began to change (slowly, since it takes some time to turn over a tenured faculty) twenty or so years ago, when the Eurocentric world of the American history department was broken open.  Now there needed to be specialists on China, on India, on Latin America, on Africa.  True, in some cases, these non-European specialists were planted in new “area studies” units (Asian Studies, Latin American Studies, Near Eastern Studies, etc.).  But usually even those located in area studies would hold a joint appointment in History—and those joint appointments ate up “faculty lines” formerly devoted to the 18th-century French specialist.

Art History departments (because they are relatively small) have always worked on this model: a limited number of faculty who were supposed, somehow, to cover all art in all places from the beginning of time.  The result was that, while courses covered that whole span, the department only featured scholars of certain periods.  There was no way to have an active scholar in every possible area of study.  Scholarly “coverage,” in other words, was impossible.

English and Philosophy departments are, in my view, certain to go down this path.  English now has to cover world literatures written in English, as well as the literatures of groups formerly not studied (not part of the “canon”).  Philosophy, likewise, now has to include non-Western thought, as well as practical, professional, and environmental ethics, along with new interests in cognitive science.

Fifty years from now, there will still be professors of Victorian literature in America.  But there will no longer be the presumption that every self-respecting department of English must have one.  Scholarly coverage will be much spottier—which means, among other things, that someone who wants to become a scholar of Victorian literature will know there are six places where that ambition can reasonably be pursued in graduate school, instead of (as is the case now) assuming you can study Victorian literature in any graduate program.  Similarly, if 18th-century English and Scottish empiricism is your heart’s desire, you will have to identify the six philosophy departments where you can pursue that course of study.

There is, of course, the larger question.  Certainly (or, at least, it seems obvious to me, although hardly to all those I subject to my litmus test), it is a remarkable thing that our society sees fit to subsidize scholars of Victorian literature.  The prestige of English literature (not our national literature, after all) is breathtaking if you reflect upon it for even three seconds.  What made Shakespeare into an American author, an absolute fixture in the American curriculum from seventh grade onwards?  What plausible stake could our society be said to have in subsidizing continued research into the fiction and life of Charles Dickens?  What compelling interest (as a court of law would phrase it) can be identified here?

Another paradox here, it seems to me.  I hate (positively hate, I tell you) the bromides offered (since Matthew Arnold at least) in generalized defenses of the humanities.  When I was (during my years as a director of a humanities center) called upon to speak about the value of the humanities, I always focused on individual examples of the kind of work my center was enabling.  The individual projects were fascinating—and of obvious interest to most halfway-educated and halfway-sympathetic audiences.  The fact that, within the humanities, intellectual inquiry leads to new knowledge and to new perspectives on old knowledge is the lifeblood of the whole enterprise.

But it is much harder to label that good work as necessary.  The world is a better, richer (I choose this word deliberately) place when it is possible for scholars to chase down fascinating ideas and stories because they are fascinating.  And I firmly believe that fascination will mean that people who have the inclination and the leisure will continue to do humanities work come hell or high water.  Yes, they will need the five hundred pounds a year and the room of one’s own that Virginia Woolf identified as the prerequisites, but people of such means are hardly an endangered species at the moment.  And, yes, it is true that society generally (especially after the fact, in the rear-view mirror as it were) likes to be able to point to such achievements, to see them as signs of vitality, culture, high-mindedness and the like.  But that doesn’t say who is to pay.  The state?  The bargain up to now is that the scholars (as well as the poets and the novelists) teach for their crust of bread and for, what is more precious, the time to do their non-teaching work of scholarship and writing.  Philanthropists?  The arts in America are subsidized by private charity—and so is much of higher education (increasingly so as state support dwindles).  The intricacies of this bargain warrant another post.  The market?  Never going to happen.  Poetry and scholarship are never going to pay for themselves, and novels only very rarely do.

The humanities, then, are dependent on charity—or on the weird institution that is American higher education.  The humanities’ place in higher education is precarious—and the more the logic of the market is imposed on education, the more precarious that position becomes.  No surprise there.  But it is no help when my colleagues act as if the value of scholarship on Victorian literature is self-evident.  Just the opposite.  Its value is extremely hard to articulate.  We humanists do not have any knock-down arguments.  And there aren’t any out there just waiting to be discovered.  The ground has been too well covered for there to have been such an oversight.  The humanities are in the tough position of being a luxury, not a necessity—even as they are also the luxury that makes life worth living, as contrasted to “bare life” (to appropriate Agamben’s phrase).  The cruelty of our times is that the overlords are perfectly content (hell, it is one of their primary aims) to have the vast majority possess only “bare life.”  Perhaps it was always thus, but that is no consolation.  Not needing the humanities themselves, our overlords are hardly moved to consider how to provide them for others.

Violence and Inequality (Part Two)

The thesis of Walter Scheidel’s The Great Leveler:  Violence and Inequality from the Stone Age to the Twenty-First Century (Princeton UP, 2017) is easily stated: “Thousands of years of history boil down to a simple truth: ever since the dawn of civilization, ongoing advances in economic capacity and state building favored growing inequality, but did little if anything to bring it under control.  Up to and including the Great Compression of 1914 to 1950, we are hard pressed to identify reasonably well attested and nontrivial reductions in material inequality that were not associated, one way or another, with violent shocks” (391).

In particular, Scheidel says there are four kinds of “violent shocks” (he calls them the four horsemen): war, plague, system or state collapse, and violent revolution.  But it turns out that not even all instances of those four can do the job of reducing inequality.  The violent shocks must be massive.  Only “mass mobilization” wars reduce inequality, so (perhaps) only World War I and, especially, World War II actually count as doing the job.  The Napoleonic Wars clearly do not—and it is harder to tell with the possible mass mobilizations of the ancient world.

Similarly, except for the Russian and Chinese revolutions of the 20th century (both of which caused, at a minimum, fifteen million deaths), revolutions rarely seem to have significantly altered the distribution of resources.  The Black Death (lasting as it did, in waves, over at least eighty and perhaps 120 years) and perhaps similar earlier catastrophic plagues (about which less is known for certain) stand as the only examples of leveling epidemics.  For system or state collapse, we get the fall of Rome—and not much else that is relevant since then, along with speculations about collapses prior to Rome and in the Americas (Aztecs and Incas), where (once again) the available evidence leads to conjectures but no firm proofs.

Where does that leave us?  In two places, apparently.  One is that inequality-leveling events are rare, are massive, and are, arguably, worse than the disease for which they are the cure.  Also, except for the revolutions, the leveling effects are unintentional by-products.  Which leads to the second place: the very conservative conclusion (much like Hayek’s thoughts about the market as being beyond human control/calculation, or T. S. Eliot’s similar comments about “culture” being an unplanned and unplannable product of human actions) that, although the creation of inequality is very much the result of human actions that are enabled and sustained by the state (i.e., by political organization), there is little that can be done politically (and deliberately) to reduce inequality.  Scheidel is at great pains to show a) that even the great shocks only reduce inequality for a limited time (about 60 to 80 years) before inequality starts to rise again; b) that the various political expedients currently on the table (like a wealth tax of the kind Elizabeth Warren is proposing, or high marginal tax rates) would lower inequality very slightly at most; and c) that the scale of violence required to significantly lower inequality (as contrasted to the marginal reductions that less violent measures could effect) is simply too horrible to embrace deliberately as a course of action.

So the conclusion appears to be: bemoan inequality as much as you like, but also find a way to come to terms with the fact that it is basically irremediable.  Scheidel is good at the bemoaning part, portraying himself as someone who sees inequality as deplorable, even evil.  But he is just as resolute in condemning violence aimed at decreasing inequality.  So his unstated, but strongly implied, recommendation is quietist.

In line with my ongoing obsessions, the book appears to reinforce what I have deemed one of the paradoxes of violence: namely, the fact that the state is undoubtedly a constraint upon violence even as states are also undoubtedly the source of more violence than non-state actors.  In the new version of this paradox that Scheidel’s book suggests, the formulation would go like this: the state enables greater economic activity/productivity while also enabling far greater economic inequality.

Yet the state’s enabling of inequality doesn’t work the other way.  It seems just about impossible to harness the state to decrease inequality—except in the extreme case of war.  World War II certainly bears that out in recent (the past 300 years) history.  The US (in particular) adopted (in astoundingly short order) a very communistic framework to conduct the war, with a command economy in terms of what was to be produced and how people were to be assigned their different roles in production, along with strict wage and price controls, and rationing.  The war seemed to prove that a command economy can be efficient—and, more than that, that in times of dire need a command economy was obviously preferable to the chaos of the free market.  The war effort was too important to be left to capitalism.  But outside of a situation of war, it has seemed impossible to have the state play that kind of leveling role, strongly governing both production and distribution.  Why?  Because only war produces the kind of social solidarity required for such centralized (enforced) cooperation?  To answer that way gets us back to violence as required—because violence is a force of social cohesion like none other.

To phrase it this way gets us back to an ongoing obsession of this blog: the problem of mobilization.  How to create a sustainable mass movement that can exert the kind of pressure on elites that is required to shift resources downward?  If violence as the source of cohesion for that movement is taken off the table, what will serve in its place?  Which also raises the question of why nationalism is so entangled in violence and in rhetorics/practices of sacrifice—the means by which social cohesion is created.  Maybe that’s the “numinous” quality of violence to which Charles Taylor keeps gesturing.  A kind of Durkheimian creation of the collective, a way of escaping/transcending the self.

A different thought: Scheidel makes a fairly compelling case (although it is not his main focus) that the creation of inequality is itself dependent on violence.  Sometimes the violence of appropriation is massive—especially in the cases of empires, which are basically enterprises of either outright extraction (carting off the loot) or somewhat more indirect extortion (requiring the payment of “tribute” in return for peace/protection).  Sometimes the violence of appropriation is less massive and less direct.  But appropriation still requires a state that, in the last instance, will protect appropriated property against the claims of those who see that appropriation as either unjust or as inimical to their own interests.  In short, the power of the state (a power that resides, to at least some extent, in its capacity for violence and its willingness to put that capacity to use) is necessary to the creation and maintenance of inequality.  So, in one way, it seems that a “little” violence can get you inequality, but it requires “massive” violence to dislodge that inequality in the direction of more equality.  And it is this difference in scale that places the exploited in such an unfavorable position when it comes to remedial action.

Of course, the growth in inequality since 1980 in the US was grounded in legal instruments and institutional practices.  The increasing power of employers over employees, the prevention of the state from intervening in massive lay-offs or equally massive outsourcing, the onslaught of privatization and deregulation (or lax enforcement of existing regulations), the legalization of all kinds of financial speculation and “creative instruments,” etc., were all accomplished “non-violently” through a classic “capture of the state.”  This is what inspires the most radical leftist visions: the left seems utterly paralyzed as it witnesses all these court cases, new laws, and revisions of executive practice—a paralysis generated by the fact that the shifts of power and wealth to the top 10% are all “legal.”  The radical claims there is no “legal” room left for the radical egalitarian to occupy.  The system is so corrupt that it offers no remedies within its scope.  But the distaste for massive violence (here is where Scheidel is relevant) appears to take extra-legal methods for change off the table.