
Joseph North Two—Rigor and Memory (Oh My!)

It must be something in the water in New Haven.  North deploys the term “rigor” as frequently as Paul de Man, with whom he has just about nothing else in common.  I will just offer two instances.  The first comes from his closing exhortation to his readers “to secure a viable site within the social order from which to work at criticism in the genuinely oppositional sense” (211).  Success in this effort would require “a clear and coherent research program together with a rigorous new pedagogy, both of which, I think, would need to be founded on an intellectual synthesis that addressed the various concerns of the major countercurrents in a systematic and unitary way” (211).

In the Appendix, the issue is described in this way:  “How does one pursue the tenuous task of cultivating an appreciation for the aesthetic without lapsing into mere impressionism?  How does one pursue this task with a rigor sufficient to qualify one’s work as disciplinary in the scientific terms recognized by the modern university?” (217).

[A digression: nothing in the book suggests that North takes an oppositional stance toward the “modern university”—or to its notions of what constitutes a discipline, what “counts as” knowledge, or its measures of productivity.  Rather, he is striving to secure the place of literary studies within that university in order to pursue an oppositional, “radical” (another favorite word, one always poised against “liberal”) program toward modern, capitalist society.]

Rigor, as far as I am concerned, is a half step away from rigor mortis.  When I think of brilliant instances of close reading, rigor is just about the last word that comes to mind.  Supple, lively, surprising, imaginative, even fanciful.  In short, a great close reading quickens.  It brings its subject to life; it opens up, it illuminates.  The associative leaps, the tentative speculations, the pushing of an intuition a little too far.  Those are the hallmarks of the kind of close reading that energizes and inspires its readers.  What that reader catches is how the subject at hand energized and inspired the critic.

Similarly, a rigorous pedagogy would, it seems to me, be the quickest way to kill an aesthetic sensibility.  The joyless and the aesthetic ne’er should meet.

Not surprisingly, I have a similar antipathy to “method.”  Close reading is not a method.  To explain why not is going to take a little time, but our guide here is Kant, who has wise and very important things to say about this very topic in his Critique of Judgment.  Spending some time with Kant will help clarify what it is the aesthetic can and can’t do.

But let’s begin with some mundane contrasts.  The cook at home following a recipe.  The lab student performing an experiment.  The young pianist learning to play a Beethoven sonata.  The grad student in English learning to do close readings.  Begin by thinking of the constraints under which each acts—and the results for which each aims.  The cook and the lab student want to replicate the results that the instructions that have been given should lead to.  True, as cooks and lab students become more proficient practitioners, they will develop a “feel” for the activity that allows them to nudge it in various ways that will improve the outcomes.  The map (the instructions) is not a completely unambiguous and fully articulated guide to the territory.  But it does provide a very definite path—and the goal is to get to the destination that has been indicated at the outset.  Surprises are almost all nasty in this endeavor.  You want the cake to rise; you want the experiment to land in the realm of replicable results.

The pianist’s case is somewhat different, although not dramatically so.  In all three cases so far, you can’t learn by simply reading the recipe, the instructions, the musical score.  You must actually do the activity, walk the walk, practice the practice.  There is more scope (I think, but maybe I am wrong) for interpretation, for personal deviation, in playing the Beethoven.  But there is limited room for “play” (using “play” in the sense of “a space in which something, as a part of a mechanism, can move” and “freedom of movement within a space”—definitions 14 and 15 in my Random House dictionary).  Wander too far off course and you are no longer playing that Beethoven sonata.

Now let’s consider our grad student in English.  What instructions do we give her?  The Henry James dictum: “be someone on whom nothing is lost”?  Or the more direct admonition: “Pay attention!”  Where do you even tell the student to begin?  It is not simply a case of (shades of Julie Andrews) beginning at the beginning, a very good place to start, since a reading of a Shakespeare sonnet might very well begin with an image in the seventh line.  In short, what’s the recipe, what’s the method?  Especially since the last thing we want is an outcome that was dictated from the outset, that was the predictable result of our instructions.

Kant is wonderful on this very set of conundrums.  So now let’s remind ourselves of what he has to say on this score.  We are dealing, he tells us, with two very different types of judgment, determinative and reflexive.  Determinative judgments guide our practice according to pre-set rules.  With the recipe in hand and a desire to bake a cake, my actions are guided by the rules set down for me.  “Beat the batter until silky smooth” (etc.); judgment comes in only because I have to make the call as to when the batter is silky smooth.  In reflexive judgment, however, the rule is not given in advance.  I discover the rule through the practice; the practice is not guided by the rule.

Kant’s example, of course, is the beautiful in art.  Speaking to the artist, he says: “You cannot create a beautiful work by following a rule.”  To do so would be to produce an imitative, dispirited, inert, dead thing.  It would be, in a word, “academic.”  Think of all those deadly readings of literary texts produced by “applying” a theory to the text.  That’s academic—and precisely against the very spirit of the enterprise.

Here’s a long selection of passages from Kant’s Third Critique that put the relevant claims on the table.  We can take Kant’s use of the term “genius” with a grain of salt, translating it into the more modest terms we are more comfortable with these days.  For genius, think “someone with a displayed talent for imaginative close readings.”

Kant (from sections 46 and 49 of the third Critique):  “(1) Genius is a talent for producing something for which no determinative rule can be given, not a predisposition consisting of a skill for something that can be learned by following some rule or other; hence the foremost property of genius must be originality. (2) Since nonsense too can be original, the products of genius must also be models, i.e. they must be exemplary; hence, though they do not themselves arise through imitation, still they must serve others for this, i.e. as a standard or rule by which to judge. (3) Genius itself cannot describe or indicate scientifically how it brings about its products . . . . That is why, if an author owes his product to his genius, he himself does not know how he came by the ideas for it; nor is it in his power to devise such products at his pleasure, or by following a plan, and to communicate his procedure to others in precepts that would enable them to bring about like products” (Section 46).

“These presuppositions being given, genius is the exemplary originality of a subject’s natural endowment in the free use of his cognitive powers.  Accordingly, the product of a genius (as regards what is attributable to genius in it rather than to possible learning or academic instruction) is an example that is meant not to be imitated, but to be followed by another genius.  (For in mere imitation the element of genius in the work—what constitutes its spirit—would be lost.)  The other genius, who follows the example, is aroused to a feeling of his own originality, which allows him to exercise in art his freedom from the constraint of rules, and to do so in such a way that art acquires a new rule by this, thus showing that talent is exemplary” (Section 49).

Arendt on Kant’s Third Critique.  Cavell on It Happened One Night.  Sedgwick on Billy Budd.  Sianne Ngai on “I Love Lucy.”  I defy anyone to extract a “method” from examining (performing an autopsy on?) these four examples of close reading.  Another oddity of North’s book is that for all his harping on the method of close reading, he offers not a single shout-out to a critic whose close readings he admires.  It is almost as if the attachment to “method” necessitates the suppression of examples, precisely because a pedagogy via examples is an alternative to the systematic, rigorous, and methodical pedagogy he wants to recommend.

But surely Kant is right.  First of all, right on the practical grounds that our student learns how to “do” close reading by immersion in various examples of the practice, not by learning a set of rules or “a” method.  Practice makes all the difference in this case; doing it again and again in an effort to reach that giddy moment of freedom, when the imagination, stirred by the examples and by the object of scrutiny, takes flight.  Surely “close reading” is an art, not a science.

And there, in the second and more important place, is where Kant is surely right.  If the very goal is to cultivate an aesthetic sensibility, how could we think that the modes of scientific practice, with their vaunted method and their bias toward replicable and predictable results, would serve our needs?  The game is worth the candle precisely because the aesthetic offers that space of freedom, of imaginative play, of unpredictable originality.  If the aesthetic stands in some kind of salutary opposition to the dominant ethos of neoliberalism, doesn’t that opposition rest on its offer of freedom, of the non-standard, of the unruly, of non-productive imaginings?  Why, in other words, is the aesthetic both a threat to and a respite from the relentless search for returns on investment, from the incessant demand that each and every one of us get with the program?  That they hate us is a badge of honor; to strive to be systematic looks like a bid to join the “rationalized” world of the economic.  [Side note: here is where critique cannot be abandoned.  We must keep pounding away at the quite literal insanity, the irrationality, of the market and all its promoters.  But the aesthetic should, alongside critique, continue to provide examples of living otherwise, of embodying that freedom of imagination.]

Kant, of course, famously resists the idea that lack of method, praise of an originality that gives the rule to itself, means that anything goes.  Genius is to be disciplined by taste, he writes.  We judge the products produced by the would-be genius—and deem some good examples and others not so good.  I am, in fact, very interested in the form that discipline takes in Kant, although this post is already way too long so I won’t pursue that tangent here.  Suffice it to say two things:

1. The standard of taste connects directly to Kant’s fervent desire for “universal communicability.”  He fears an originality so eccentric that it places the genius outside of the human community altogether.  If genius is originality, taste is communal (the sensus communis)—and Kant is deeply committed to the role art plays in creating and sustaining community.  The artist should, even as she pursues her original vision, also have the audience in mind, and consider how she must shape her vision in order to make it accessible to that audience.  So we can judge our students’ attempts to produce close readings in terms of how they “speak” to the community, to the audience.  Do they generate, for the reader, that sense that the text (or film or TV show) in question has been illuminated in exciting and enlivening ways?  There is an “a-ha” moment here that is just about impossible to characterize in any more precise—or rigorous—way.

2. Taste, like genius, is a term that mostly embarrasses us nowadays. It smacks too much of 18th century ancien regime aristocrats.  But is “aesthetic sensibility” really very different from “taste”?  Both require cultivating; both serve as an intuitional ground for judgments.  In my next post—where I take up the question of sensibility—I want to consider this connection further.

But, for now, a few words more about “close readings.”  Just because there is no method to offer does not mean we cannot describe some of the characteristics of close reading.  I think, in fact, we can call close readings examples of “associative thinking.”  A close reading (often, though hardly always) associates disparate things—or dissociates things that we habitually pair together or consider aligned.  So Arendt shows us how Kant’s third Critique illuminates the nature of the political; Cavell enriches a meditation on finitude through an engagement with It Happened One Night; Sedgwick’s reading of Billy Budd illustrates how homosexuality is both acknowledged and denied; Ngai associates a situation comedy with the nature of precarious employment.  In each case, there is an unexpected—and illuminating, even revelatory—crossing of boundaries.  Surprising juxtapositions (metonymy) and unexpected similarities where before we only saw differences (metaphor).  Which takes us all the way back to Aristotle’s comment “that the metaphorical kind [of naming] is the most important by far.  This alone (a) cannot be acquired from someone else, and (b) is an indication of genius” [that word again!] (Section 22 of the Poetics).  There is no direct way to teach someone how to make those border crossings.

How is this all related to judgment?  Both to Aristotle’s phronesis (sometimes translated as “practical wisdom”) and to Kantian judgment.  (Recall that morality for Kant is too important to leave to judgment of the reflexive sort; he wants a foolproof method for making moral judgments.  Aristotle is much more willing to see phronesis at work in both ethics and aesthetics.)  We get wrong-footed, I think, when we tie judgment to declaring this work of art beautiful or not, this human action good or evil.  Yes, we do make such judgments.

But there is another site of judgment, the one where we judge (or name) what situation confronts us.  Here I am in this time and place; what is it that I am exactly facing?  Here is where associative thinking plays its role.  How is this situation analogous to other situations I know about—either from my own past experiences or from the stories and lessons I have imbibed from my culture?  How I judge the situation, how I name it, determines what I deem possible to make of it.  Creative action stems from imaginative judgments, from seeing in this situation possibilities not usually perceived.

That’s the link of judgment to the aesthetic: the imaginative leaps that, without the conformist safety net of a rule or method, lead to new paths of action.  If we (as teachers in the broad field of aesthetics) aim to cultivate an aesthetic sensibility, it is (I believe) to foster this propensity in our students for originality, for genius—in a world where conformity (the terror of being unemployable, of paying the stiff economic price of not following the indicated paths) rules.  Judgment, like metaphorical thinking, is an art, not a science—and cannot be taught directly, but only through examples.  It’s messy and uncertain (expect lots of mistakes, lots of failed leaps).  And it will exist in tension with “the ordinary”—and, thus, will have to struggle to find bridges back to the community, to the others who are baffled by the alternative paths, the novel associations, you are trying to indicate.

Joseph North (One)

One of the oddities of Joseph North’s Literary Criticism: A Concise Political History (Harvard UP, 2017) is that it practices what it preaches against.  North believes that the historicist turn of the 1980s was a mistake, yet his own “history” is very precisely historicist: he aims to tie that “turn” in literary criticism to a larger narrative about neo-liberalism.

In fact, North subscribes to a fairly “vulgar,” fairly simplistic version of social determinism.  His periodization of literary criticism offers us “an early period between the wars in which the possibility of something like a break with liberalism, and a genuine move to radicalism, is mooted and then disarmed,” followed by “a period of relative continuity through the mid-century, with the two paradigms of ‘criticism’ and ‘scholarship’ both serving real superstructural functions within Keynesianism.”  And, finally, when the “Keynesian period enters into a crisis in the 1970s . . . we see the establishment of a new era: the unprecedentedly complete dominance of the ‘scholar’ model in the form of the historicist/contextualist paradigm.”  North concludes this quick survey of the “base” determinants of literary critical practice with a rhetorical question:  “If this congruence comes as something of a surprise, it is also quite unsurprising: what would one expect to find except that the history of the discipline marches more or less in step with the underlying transformations of the social order?” (17).

Perhaps I missed something, but I really didn’t catch where North made his assertions about the two periods past the 1930s stick.  How do both the “critical” and “scholarly” paradigms serve Keynesianism?  I can see where the growth of state-funded higher education after World War II is a feature of Keynesianism.  But surely the emerging model (in the 50s and 60s) of the “research university” has as much, if not more, to do with the Cold War than with Keynesian economic policy.

But when it gets down to specifics about different paradigms of practice within literary criticism, I fail to see the connection.  Yes, literary criticism got dragged into a “production” model (publish or perish) that fits it rather poorly, but why or how did different types of production, so long as they found their way into print, “count” until the more intense professionalization of the 1970s, when “peer-reviewed” became the only coin of the realm?  The new emphasis on “scholarship” (about which North is absolutely right) was central to that professionalization—and does seem directly connected to the end of the post-war economic expansion.  But that doesn’t explain why “professionalization” should take an historicist form, just as I am still puzzled as to how both forms—critical and scholarly—“serve” Keynesian needs prior to 1970.

However, my main goal in this post is not to try to parse out the base/superstructure relationship that North appears committed to.  I have another object in view: why does he avoid the fairly obvious question of how his own position (one he sees as foreshadowed, seen in a glass darkly, by Isobel Armstrong among others) reflects (is determined by?) our own historical moment?  What has changed in the base to make this questioning of the historicist paradigm possible now?  North goes idealistic at this point, discussing “intimations” that appear driven by dissatisfactions felt by particular practitioners.  The social order drops out of the picture.

Let’s go back to fundamentals.  I am tempted to paraphrase Ruskin: for every hundred people who talk of capitalism, one actually understands it.  I am guided by the sociologist Alvin Gouldner, in this case his short 1979 book The Rise of the New Class and the Future of the Intellectuals (Oxford UP), a book that has been a touchstone for me ever since I read it in the early 1980s.  Gouldner offers this definition of capital: anything that can command an income in the mixed market/state economy in which we in the West (at least) live.  Deceptively simple, but incredibly useful as a heuristic.  Money that you spend to buy food you then eat is not capital; that money does not bring a financial return.  It does bring a material return, but not a financial one.  Money that you (as a food distributor) spend to buy food that you will then sell to supermarkets is capital.  And the food you sell becomes a commodity—while the food you eat is not a commodity.  Capital often passes through the commodity form in order to garner its financial return.

But keep your eye on “what commands an income.”  For Marx, of course, the wage earner only has her “labor power” to secure an income.  And labor power is cheap because there is so much of it available.  So there is a big incentive for those who only have their labor power to discover a way to make it more scarce.  Enter the professions.  The professional relies on selling the fact that she possesses an expertise that others lack.  That expertise is her “value added.”  It justifies the larger income that she secures for herself.

Literary critics became English professors in the post-war expansion of the research university.  We can take William Empson and Kenneth Burke as examples of the pre-1950s literary critic, living by their wits, and writing in a dizzying array of modes (poetry, commissioned “reports,” reviews, books, polemics).  But the research university gave critics “a local habitation [the university] and a name” [English professors] and, “like the dyer’s hand,” their nature was subdued.  The steady progress toward professionalization was begun, with a huge leap forward when the “job market” tightened in the 1970s.

So what’s new in the 2010s?  The “discipline” itself is under fire.  “English,” as Gerald Graff and Peter Elbow both marveled years ago, was long the most required school subject, from kindergarten through the second year of college.  Its place in our educational institutions appeared secure, unassailable.  There would always be a need for English teachers.  That assumed truth no longer holds.  Internally, interdisciplinarity, writing across the curriculum, and other innovations threatened the hegemony of the discipline.  Externally, the right wing’s concerted attack on an ideologically suspect set of “tenured radicals,” along with a more general discounting (even elimination) of the value assigned to being “cultured,” meant the “requirement” of English was questioned.

North describes this shift in these terms:  “if the last three decades have taught literary studies anything about its relationship to the capitalist state, it is that the capitalist state does not want us around.  Under a Keynesian funding regime, it was possible to think that literary study was being supported because it served an important legitimating role in the maintenance of liberal capitalist institutions. . . . the dominant forms of legitimation are now elsewhere” (85).  True enough, although I would still like to see how that “legitimating role” worked prior to 1970; I would think institutional inertia rather than some effective or needed legitimating role was the most important factor.

In that context, the upsurge in the past five years (as the effects of 2008 on the landscape of higher education registered) of defenses of “the” discipline makes sense.  North—with his constant refrain of “rigor” and “method”—is working overtime to claim a distinctive identity for the discipline (accept no pale or inferior imitations!).  This man has a used discipline to sell you.  (It is unclear, to say the least, how a return to “criticism,” only this time with rigor, improves our standing in the eyes of the contemporary “capitalist state.”  Why should they want North’s re-formed discipline around any more than the current version?)

North appears blind to the fact that a discipline is a commodity within the institution that is higher education.  The commodity he has to sell has lost significant amounts of value over the past ten years within the institution, for reasons both external and internal.  A market correction?  Perhaps—but only perhaps, because (as with all stock markets) we have no place to stand if we are trying to discover the “true” value of the commodity in question.

So what is North’s case that we should value the discipline of literary criticism more highly?  He doesn’t address the external factors at all, but resets the internal case by basing the distinctiveness of literary criticism on fairly traditional grounds: it has a distinct method (“close reading”) and a distinct object (“rich” literary and aesthetic texts).  To wit:  “what [do] we really mean by ‘close reading’ beyond paying attention to small units of any kind of text.  Our questions must then be of the order: what range of capabilities and sensitivities is the reading practice being used to cultivate?  What kinds of texts are most suited to cultivating those ranges?  Putting the issue naively, it seems to me that the method of close reading cannot serve as a justification for disciplinary literary study until the discipline is able to show that there is something about literary texts that make them especially rewarding training grounds for the kinds of aptitudes the discipline is claiming to train.  Here again the rejected category of the aesthetic proves indispensable, for of course literary and other aesthetic texts are particularly rich training grounds for all sorts of capabilities and sensitivities: aesthetic capabilities” (108-9; italics in original).

I will have more to say about “the method of close reading” in my next post.  For now, I just want to point out that it is absurd to think “close reading” is confined to literary studies—and North shows himself aware of that fact as he retreats fairly quickly from the “method” to the “objects” (texts).  Just about any practitioner in any field to whom the details matter is a close reader.  When my son became an archaeology major, my first thought was: “that will come to an end when he encounters pottery shards.”  Sure enough, he had a brilliant professor who lived and breathed pottery shards—and who, better yet, could make them talk.  My son realized he wasn’t enthralled enough with pottery shards to give them that kind of attention—and decided not to go to grad school.  Instead, my son realized that where he cared about details to that extent, where no fine point was too trivial to matter, was the theater—and thus he became an actor and a director.  To someone who finds a particular field meaningful, all the details speak.  Ask any lawyer, lab scientist, or gardener.  They are all close readers.

This argument I have just made suggests, as a corollary, that all phenomena are “rich” to those inspired by them.  Great teachers are, among other things, those who can transmit that enthusiasm, that deep attentive interest, to others.  If training in attention to detail is what literary studies does, it has no corner on that market.  Immersion in just about any discipline will have similar effects.  And there is no reason to believe the literary critics’ objects are “richer” than the archaeologists’ pottery shards.

In short, if we go the “competencies” route, then it will be difficult to make the case that literary studies is a privileged route to close attention to detail—or even to that other chestnut, “critical thinking.” (To North’s credit, he doesn’t play the critical thinking card.)  Most disciplines are self-reflective; they engage in their own version of what John Rawls called “reflective equilibrium,” moving back and forth between received paradigms of analysis and their encounter with the objects of their study.

North is not, in fact, very invested in “saving” literary studies by arguing they belong in the university because they impart a certain set of skills or competencies that can’t be transmitted otherwise.  Instead, he places almost all his chips on the “aesthetic.”  What literary studies does, unlike all the rest, is initiate the student into “all sorts of capabilities and sensitivities” that can be categorized as “aesthetic capabilities.”

Now we are down to brass tacks.  What we need to know is what distinguishes “aesthetic capabilities” from other kinds of capabilities.  And we need to know why we should value those aesthetic capabilities.  On the first score, North has shockingly little to say—and he apologizes for this failure.  “I ought perhaps to read into the record, at points like this, how very merely gestural these gestures [toward the nature of the aesthetic] have been; the real task of developing claims of this sort is of course philosophical and methodological rather than historical, and thus has seemed to me to belong to a different book” (109; italics in original).

Which leaves us with his claims about what the aesthetic is good for.  Why should we value an aesthetic sensibility?  The short answer is that this sensibility gives us a place to stand in opposition to commercial culture.  He wants to place literary criticism at the service of radical politics—and heaps scorn throughout on liberals, neo-liberals, and misguided soi-disant radicals (i.e. the historicist critics who thought they were striking a blow against the empire).  I want to dive into this whole vein in his book in subsequent posts.  Readers of this blog will know I am deeply sympathetic to the focus on “sensibility,” and North helps me think again about what appealing to (and training) sensibilities could entail.

But for now I will end with registering a certain amazement, or maybe it is just a perplexity.  How will it serve the discipline’s tenuous place in the contemporary university to announce that its value lies in the fact that it comes to bury you?  Usually rebels prefer to work in a more clandestine manner.  Which is to ask (more pointedly): how does assuming rebellious stances, in an endless game in which each player tries to position himself to the left of all the other players, bring palpable rewards within the discipline even as it endangers the position of the discipline in the larger struggle for resources, students, and respect within the contemporary university? That’s a contradiction whose relation to the dominant neo-liberal order is beyond my abilities to parse.

Money and Babies

Since I got onto money in my last post, I am going to continue that line of thought (briefly).  I worried a lot about money between the ages of 29 and 44 (roughly); it’s strange how hard it is for me to remember my feelings.  Sure, I forget events as well.  But the main outlines of my life’s history are there to be remembered.  What I can’t pull up is how I felt, what I was thinking, at various points.  My sense now that I was somehow not present at much of my life stems from this inability to reconstruct, even in imagination, who I was at any given moment.  I did all these things—but don’t remember how I did them or what I felt as I was doing or even exactly what I thought I was doing.  Getting through each day was the focus, and somehow I made it from one day to the next.  But there was no overall plan—and no way to settle into some set of coherent, sustaining emotions.  It was a blur then and it’s a blur now.

All of which is to say that I worried about money, about the relative lack of it, without having any idea about how to get more of it.  I didn’t even have money fantasies—like winning the lottery or (just as likely) writing a best-selling book.  What I did for a living, including writing the books that my academic career required, was utterly disconnected emotionally and intellectually from the need to have more money.  When I made my first academic move (from the University of Rochester’s Eastman School of Music to the University of North Carolina) the motive was purely professional, not monetary.  I wanted to teach in an English department and be at a university where my talents would not be underutilized.  That it would involve a substantial raise in pay never occurred to me until I got the offer of employment.  And when I went and got that outside offer in order to boost my UNC salary (as mentioned in the last post), it was the inequity of what I was being paid that drove me, not the money itself.  In fact, despite worrying about money for all those years, I never actually imagined having more than enough.  It was as if I just accepted that financial insecurity was my lot in life.  I could worry about it, but I didn’t have any prospects of changing it.

Certainly, my worries did make me into a cheapskate.  And undoubtedly those niggardly habits are the reason we now have more than enough each month.  Habits they certainly are since at this point they don’t even pinch.  They just are the way I live in the world—and allow me to feel like I am being extravagant when (fairly often now) I allow myself luxuries others wouldn’t even give a second thought.

My main theme, however: the worries about money were utterly separate from the decision to have children.  That this was so now amazes me.  It is simply true that when Jane and I decided the time had come to have children, the financial implications of that decision never occurred to me.  We made a very conscious decision to have children.  Our relationship was predicated, in fact, on the agreement that we would have children.  And when that pre-nuptial agreement was made, the issue of having money enough to have kids was raised.  But four years later, when we decided to have the anticipated child, money was not considered at all.  And when we decided to have a second child after another two years, once again money was not an issue.  I don’t know why not.  Why—when I worried about having enough money for all kinds of other necessities—did I not worry about having enough money to raise our two children?  That’s the mystery.

I have no answer.  And I can’t say if that was true generally for those of us having our children in the 1980s, although it seems to have been true for most of my friends.  On the other hand, as my wife often notes, I do have a fairly large number of married friends (couples who have been together forty years now) who do not have children.  Very likely a mixture of professional and financial reasons led to their not having kids.

I do, however, feel that financial considerations play a large role now (in the 2010s) in the decision to have children.  That’s part of the cultural sea-change around winners and losers, the emptying out of the middle class, and the ridiculous price of “private” and quasi-private education.  Most conspicuous to me is the increasing number of single-child families among the upper middle class.  Yes, that is the result of a late start for women who take time to establish themselves in a profession.  But it is also an artifact of worrying about the cost of child-care and of education.

I come from a family of seven children.  And my parents, relatively, were less well-off when they had us than Jane and I were when we had our children.  (That statement is a bit complicated since my parents had access to family money in case of emergency that was not there for me to tap.  But, in fact, my parents didn’t rely heavily on that reserve until much later in their lives.)  Was my not following my parents’ footsteps toward a large family financially motivated?  A bit, I guess.  But it really seems more a matter of style—plus the fact that my wife was 34 when she got pregnant with our first.  But even if she had been 24 (as my mother was at her first pregnancy), it is highly unlikely we would have had more than two kids (perhaps three).  The idea was unthinkable by 1987; it just wasn’t done.

It is also hard to see how we could have done it (even though that fact didn’t enter into our thinking).  Certainly, it would have been done very differently.  We paid $430,000 for our two children’s educations: three years of private high school and four years of private university (with a $15,000 scholarship each year) for my son, and four years of private high school and four years of private university for my daughter. And that figure is just the fees paid to the schools; it doesn’t include all the other costs. We would certainly have relied much more heavily on public education if we had more than two children.

Once again, I have no moral to draw.  I am just trying to track what seem to me particularly significant shifts in cultural sensibility.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than I ever, in my most avaricious dreams, could have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff.  (I am not extremely busy at the moment—which makes me feel even more guilty—because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count; quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more state support in the way of research grants from the federal and (in the case of state universities) state governments; 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires, at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply did not exist when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league—without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition—in order to bring UR’s tuition in line with Northwestern etc.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less—and that UR had a better shot of getting students in that niche than in competing with Northwestern.  I, of course, lost the argument—but not just in terms of what the university did, but also in terms of its effect on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants can bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research also admitted would worsen our bad financial plight.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers, that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world will get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books will still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where those who were productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there still would be good books written (and some bad ones as well) because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right brands,” star chefs and restaurants, and celebrities.  And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world, known by some but not all, but still getting a fair number of perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When compression struck, I was able (as you are forced to do in the academic game) to go get an “outside offer.”  I had the kind of research profile that would lead another school that was in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that those schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent scandal of cheating in the admissions process.  More importantly, it explains the on-going scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn abilities of elites to retain privileges.

The wider story, however, is about distinction—and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ’nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.

Maybe I am hopelessly romanticizing the 1950s and 1960s—and maybe the middle middle class I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that there was little to no sense that Hamilton was different from Villanova, or Northwestern from Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance etc., my wife and I together have approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement home costs to the tune of $1500 a month, and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.
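To make the arithmetic explicit (a back-of-the-envelope sketch, assuming the round figures above hold as steady monthly averages): $10,000 take-home minus $2,000 (mortgage plus escrow) minus $500 (uncovered medical) minus $1,500 (support for my wife’s mother) leaves $6,000 a month unallocated.  An extra $10,000 at year’s end works out to about $833 a month saved, which implies that everything else (trips, charities, gifts to the children) has been averaging roughly $5,200 a month.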

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches.  I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.