Author: John McGowan

Money and Babies

Since I got onto money in my last post, I am going to continue that line of thought (briefly).  I worried a lot about money between the ages of 29 and 44 (roughly); it’s strange how hard it is for me to remember my feelings.  Sure, I forget events as well.  But the main outlines of my life’s history are there to be remembered.  What I can’t pull up is how I felt, what I was thinking, at various points.  My sense now that I was somehow not present at much of my life stems from this inability to reconstruct, even in imagination, who I was at any given moment.  I did all these things—but don’t remember how I did them or what I felt as I was doing or even exactly what I thought I was doing.  Getting through each day was the focus, and somehow I made it from one day to the next.  But there was no overall plan—and no way to settle into some set of coherent, sustaining emotions.  It was a blur then and it’s a blur now.

All of which is to say that I worried about money, about the relative lack of it, without having any idea about how to get more of it.  I didn’t even have money fantasies—like winning the lottery or (just as likely) writing a best-selling book.  What I did for a living, including writing the books that my academic career required, was utterly disconnected emotionally and intellectually from the need to have more money.  When I made my first academic move (from the University of Rochester’s Eastman School of Music to the University of North Carolina) the motive was purely professional, not monetary.  I wanted to teach in an English department and be at a university where my talents would not be underutilized.  That it would involve a substantial raise in pay never occurred to me until I got the offer of employment.  And when I went and got that outside offer in order to boost my UNC salary (as mentioned in the last post), it was the inequity of what I was being paid that drove me, not the money itself.  In fact, despite worrying about money for all those years, I never actually imagined having more than enough.  It was as if I just accepted that financial insecurity was my lot in life.  I could worry about it, but I didn’t have any prospects of changing it.

Certainly, my worries did make me into a cheapskate.  And undoubtedly those niggardly habits are the reason we now have more than enough each month.  Habits they certainly are, since at this point they don’t even pinch.  They just are the way I live in the world—and allow me to feel like I am being extravagant when (fairly often now) I allow myself luxuries others wouldn’t even give a second thought.

My main theme, however: the worries about money were utterly separate from the decision to have children.  That this was so now amazes me.  It is simply true that when Jane and I decided the time had come to have children, the financial implications of that decision never occurred to me.  We made a very conscious decision to have children.  Our relationship was predicated, in fact, on the agreement that we would have children.  And when that pre-nuptial agreement was made the issue of having money enough to have kids was raised.  But four years later, when we decided to have the anticipated child, money was not considered at all.  And when we decided to have a second child after another two years, once again money was not an issue.  I don’t know why not.  Why—when I worried about having enough money for all kinds of other necessities—did I not worry about having enough money to raise our two children?  That’s the mystery.

I have no answer.  And I can’t say if that was true generally for those of us having our children in the 1980s, although it seems to have been true for most of my friends.  On the other hand, as my wife often notes, I do have a fairly large number of married friends (couples who have been together forty years now) who do not have children.  Very likely a mixture of professional and financial reasons led to their not having kids.

I do, however, feel that financial considerations do play a large role now (in the 2010s) in the decision to have children.  That’s part of the cultural sea-change around winners and losers, the emptying out of the middle class, and the ridiculous price of “private” and quasi-private education.  Most conspicuous to me is the increasing number of single-child families among the upper middle class.  Yes, that is the result of a late start for women who take time to establish themselves in a profession.  But it is also an artifact of worrying about the cost of child-care and of education.

I come from a family of seven children.  And my parents were, relatively speaking, less well-off when they had us than Jane and I were when we had our children.  (That statement is a bit complicated since my parents had access to family money in case of emergency that was not there for me to tap.  But, in fact, my parents didn’t rely heavily on that reserve until much later in their lives.)  Was my not following in my parents’ footsteps toward a large family financially motivated?  A bit, I guess.  But it really seems more a matter of style—plus the fact that my wife was 34 when she got pregnant with our first.  But even if she had been 24 (as my mother was at her first pregnancy), it is highly unlikely we would have had more than two kids (perhaps three).  The idea was unthinkable by 1987; it just wasn’t done.

It is also hard to see how we could have done it (even though that fact didn’t enter into our thinking).  Certainly, it would have been done very differently.  We paid $430,000 for our two children’s educations: three years of private high school and four years of private university (with a $15,000 scholarship each year) for my son, and four years of private high school and four years of private university for my daughter. And that figure is just the fees paid to the schools; it doesn’t include all the other costs. We would certainly have relied much more heavily on public education if we had more than two children.

Once again, I have no moral to draw.  I am just trying to track what seem to me particularly significant shifts in cultural sensibility.

On Salaries and Money and American Universities

My last post on the future of the humanities led me to think about American higher education, which I am tempted to call, semi-blasphemously, “our peculiar institution.”  But it also led me to think about money. I was led to that thought by recalling that I, a humanist scholar, am a state employee of North Carolina.  But my munificent salary is, actually, largely paid by “private dollars,” funded out of the “endowed chair” donated to the university by members of the Hanes family (of Winston-Salem and underwear fame).  This post will be an unholy mixture of what that fact means for American higher education and what it means for my own relationship to money and to my work.

I am not being ironic when I use “munificent” to characterize my salary.  I make more money than ever, in my most avaricious dreams, I could have believed an English professor could make.  That salary is public knowledge because North Carolina has rather strict “sunshine” laws.  You can go to a website and look it up.  Yet in keeping with American prudery, which ensures that we know less about our friends’ financial circumstances than about their sex lives, I can’t bring myself to name the sum here—or to name the sum that my wife and I have accumulated in our retirement accounts.  When, every once in a while, I do disclose those two numbers to friends and family, I am very conscious of a weird (unsettling) mixture of shame and boast in the disclosure.  I think I am overpaid—but I am proud to be valued so highly.  David Graeber is good on this feeling in his book Bullshit Jobs.  For those of us who love our work and didn’t go into it for the money, there is something shameful about the pay.  Even more shameful when the pay makes one rich.

I feel guilty getting paid so much for doing a job that I like and that, frankly, comes very easy to me.  I have many colleagues who are overwhelmed, who feel constantly way behind, who are anxious, who are bedeviled by a sense that they have never done enough.  I have been, until the past year, always extremely busy; I have always worked on weekends.  But I have seldom been anxious.  When I got to North Carolina, it became clear to me very early on that this place operated at a speed that was very comfortable for me.  My pace of work, my productivity, was going to place me in the top tier at UNC.  I was never going to be made to feel inadequate, not up to snuff. (I am not extremely busy at the moment–which makes me feel even more guilty–because I have become persona non grata on campus following my public criticisms of the Chancellor.  I don’t get asked to do anything anymore.)

A time came, inevitably, when I was a victim of salary compression.  Professors get raises that average below inflation.  I tell my grad students the hard truth that their starting salary at a job could easily become their salary for life.  Raises will never go far beyond the increases in the cost of living.  But here is where we get back to the “peculiar institution” issue.  American universities exist within a prestige hierarchy. At the top of that hierarchy—meaning not only the top schools but also the wannabes—there is competition for the “best faculty.”  This is just one place where things get weird.

Why weird?  Because the measure of quality among faculty is their research productivity.  As my cynical friend Hans puts it: “in academics, quality doesn’t count, quantity is everything.”  It’s not quite that bad, but almost.  Faculty must publish in order to distinguish themselves from other faculty—and then universities must have a faculty that publishes a lot to distinguish themselves from other universities.  In Britain, this has led to the absurdity of the government actually allocating funds to departments based on their research productivity; in America, it is more indirect, since the “best” universities can increase their funding through three means: 1) more support in the way of research grants from the federal government and (in the case of state universities) from state government; 2) an ability to charge higher tuition because more prestigious; and 3) a greater ability to raise philanthropic dollars because more expensive and more prestigious, which means having richer alumni.

One oddity (among others) is, of course, that research has, at best, a tangential relation to the educational mission of the university.  More to the point, the students attracted to the university by its prestige have very close to no interest in the research that underwrites that prestige.  Furthermore, the connection between prestige and the research is also completely fuzzy.  For one thing, the prestige hierarchy is just about set in stone.  The same schools that headed the list in 1900 still head the list in 2020.  Reputations are, it seems, just about impossible to tarnish.  They glow like the light from long-extinguished stars.

It is true that some schools—notably Duke—have managed to elbow their way into the top tier.  There are now lots of Duke imitators, all trying to crack into the stratosphere of Harvard, Yale, Stanford.  But it seems quaint to think Duke’s success can be tied in any direct way to its faculty’s research.  That success seems much more tied to a well-timed (they got into this game first) branding exercise.  They made splashy faculty hires, at the same time that they made themselves into a perennial contender for the national basketball championship.  What those faculty actually did after they were hired was secondary.  It was a question of having names on the letterhead that would lead U.S. News (and other ranking outlets) to give Duke a boost.

Duke’s timing was impeccable because they hopped aboard the first privatization wave.  The 1980s began the move toward a renewed obsession with prestige that dovetailed with the superstition that “public” education was, by its nature, inferior to “private” education.  As the rich and the elites (see Christopher Lasch’s The Revolt of the Elites) abandoned the public commons (most dramatically in where they sent their kids to school), universities like Duke and my alma mater Georgetown were there to pick up the slack.  Georgetown shows that there was room to move up for the Duke imitators; the smallish privates, like Georgetown, Northwestern, Emory, and Vanderbilt, came up in the world, occupying a particular niche below the Ivies, but with a prestige value, a tuition price tag, and tough admission standards that simply were not the case when I was a Hoya in the 1970s.  As I learned when I got to grad school at SUNY Buffalo in 1974, they thought of themselves as having taken a chance on me because they didn’t know what a Georgetown degree meant.  Yale and Cornell turned me down.

My old employer, the University of Rochester, has always wanted to play in the Northwestern, Emory, Vanderbilt league–without ever quite managing to pull it off.  When I taught there in the late 1980s, Rochester’s president insisted on a 30% rise in tuition–in order to bring UR’s tuition in line with Northwestern etc.  He said we would never be thought any good if we didn’t charge like “our peers.”  I argued that there surely was a market niche for a good school that charged 30% less–and that UR had a better shot of getting students in that niche than in competing with Northwestern.  I, of course, lost the argument–not just in terms of what the university did, but also in terms of its effect on applications and admissions.  I didn’t understand in those days that, when it comes to higher education, for many aspirants prestige trumps all other factors every time.  And just as in the wider market, it pays much better to cater to the wishes of the well-to-do than to a mass market.

Back to research for a moment.  As Christopher Newfield’s work has amply documented, universities lose money on the big science grants they get.  The infrastructure required to compete for such grants costs more than the grants can bring in.  Thus, either tuition, direct state support, or philanthropic dollars must underwrite the research enterprise.  Yet schools compete wildly for the research dollars because they are essential to their prestige.  Thus, UNC set a goal some years back of $1 billion a year in research funding, a goal that the Vice Chancellor for Research also admitted would worsen our bad financial plight.  We have since surpassed that goal—and are going broke.  But we had 44,000 applicants for 5000 undergraduate slots this past admissions cycle, and our departments and schools remain highly ranked.

The research imperative also makes faculty lives hell.  I have been lucky, as I already said.  For whatever reason, research has always come easily to me; it is not a burden, just something I do.  In part—and truthfully—I enjoy it.  But I will also admit it is so tangled up with issues of self-respect and of respect from my peers that I would be hard pressed to sort out the various strands of my emotional attachments to my work.  I do know, however, that for many of my colleagues, the research is just a site of constant frustration, of a constant sense of not being good enough or productive enough.  For what?  First of all, the university needs good teachers, as well as good administrators who serve as directors of undergraduate studies, who sponsor various student clubs, who keep the educational enterprise running smoothly.  The administrative bloat on American campuses (which has, demonstrably, been a major factor in the rising costs of higher education) stems in part from freeing faculty from doing that work in the name of giving them more time to do research.

No one wants to admit that much of the research is not much worth doing.  The world will get on just fine without the many bad books and journal articles—many of which are never read by anyone—that the emphasis on research creates.  We have wasted countless hours of imaginative people’s time by pushing faculty toward only one metric of work, toward only one way to contribute to the university.

My position is that good books would still get written even if faculty weren’t forced to write them.  This is tricky.  I am, after all, trying to think about prestige hierarchies.  And it would take a massive cultural sea-change within academia to reach the point where those who were productive researchers were not at the top of the ladder.  Cultural sea-changes require alterations in what Raymond Williams called “structures of feeling.”  I have already indicated the extent to which I recognize my own research was motivated by issues of self-worth and of looking worthy in the eyes of my peers.

Reputation drives many academics much more than money—and it cripples them far more effectively as well.  But still, part of me wants to insist that if the work is worth doing, it will get done.  In other words, we could lose all the research produced just because there is a gun to people’s heads—and there still would be good books written (and some bad ones as well) because there will still be people for whom the enterprise of writing a book is central to their sense of themselves (as writers, as persons) and because they see the writing of books as valuable in and of itself.  That Holy Grail of “intrinsic value.”  I doubt we ever get full purity.  But, after all, we do do certain things because we find them worth doing.  And the writing of books is either something some people find worth doing—or it shouldn’t be done at all.

I always read Proust and other social novelists with an inability to suspend disbelief.  I could not understand a life where social climbing, where social ambition, was the driving passion.  I thought that such a world had long since disappeared.  People didn’t orient their lives in that fashion anymore.  But today I read The New Yorker and it is full of tales of people who are tortured and paralyzed by social media, who are obsessed with the “right brands,” star chefs and restaurants, and celebrities.  And I should probably admit that academics are embroiled in their own kind of social climbing; they, too, want to be part of certain inner circles.  I always held myself rather aloof from all that—and, yet, by the Proustian law of getting what you seem (to others) not to want, I have had, by any objective standard, a highly successful academic career.  I never reached superstar status; I am more like the 50th-ranked tennis player in the world, known by some but not all, but still getting a fair number of perks that fall to those in the inner circles, even if I don’t have their name recognition and my books are read by much, much smaller audiences.

Among the perks, in my own context, there is that absurd salary.  When compression struck, I was able (as you are forced to do in the academic game) to go get an “outside offer.”  I had the kind of research profile that would lead another school that was in the prestige game to bid for my services.  I was able to force UNC to raise my salary so it was in line with that of my colleagues who had been hired after me or who had gotten outside offers of their own.  (Maybe another time I will talk about the complex layers of guilt unleashed by playing the game of getting such an offer.)

Which brings me full circle.  UNC can only compete for the “best faculty” as it struggles to maintain its high reputation, its high ranking, because private donors (alumni who are committed to UNC maintaining its standing) supplement the salaries the state is willing to pay.  UNC, like almost all the top public universities (Virginia, Michigan, UCLA, Berkeley), is a quasi-public school at this point.  Since UNC is more dependent on state dollars than the other schools I have just named, its standing is, in fact, sinking while theirs is holding steady.  Public schools further down the ladder—the UNC Charlottes of the world—are playing a desperate game of catch-up since they don’t have the fund-raising potential of the “flagships” and thus are hurt even more by the steady withdrawal of state support.

In short, the privatization of American higher education is a product of the lessening prestige of the public schools—a decline that is semi-rational given that schools are much less fully funded now than they once were.  But it is only semi-rational because it is also tied to the resurgence in the US of prestige-hunger, a resurgence related to the many sins that get covered by the name “neoliberalism.”  There is a heightened—if only rarely explicitly stated—sense of the great divide between winners and losers in our contemporary world.  And going to the “right” college now seems essential (to many people) to making sure you are one of the winners.  The Dukes and Georgetowns of the world have risen because of that anxiety about being left behind and because anything public has been underfunded and denigrated since the 1980s.  This, of course, explains the recent scandal of cheating in the admissions process.  More importantly, it explains the on-going scandal of “legacy” admissions, which are motivated by fund-raising imperatives and by the time-worn abilities of elites to retain privileges.

The wider story, however, is about distinction–and cultural mores.  Here’s another argument I lost regarding college admissions.  UNC never had any “merit-based” scholarships (apart from the Moreheads, a whole ‘nother story).  In the early 1990s UNC realized it was beginning to lose the “best” in-state students to schools like Brown and Georgetown and Harvard.  Losing such students, of course, hurt our US News rankings, since average SAT scores for the incoming class were a major metric.  So it was decided to begin offering $500 and $1000 named scholarships to top applicants, irrespective of financial need.  My argument: “you mean to tell me that giving someone $1000 off our $12,000 in-state tuition will make them come to UNC, when their family is fully ready to pay $45,000 for them to go to Brown?”  Once again, I was wrong.  Students wanted to be singled out as “different,” as “special.”  The merit scholarships did increase our yield among top in-state students.  Maybe I am hopelessly romanticizing the 1950s and 1960s–and maybe the middle middle class that I came from still exists.  I went to the most elite Catholic high school on Long Island.  All of my classmates went to college.  And there was some sense of a distinction between “going away” to college and going to a college within fifty miles of our high school.  But, really, beyond that there was little to no sense that Hamilton was different from Villanova, or Northwestern not the same as Marist.  And there was certainly no sense that a school had to distinguish me from other admitted students in order to get me to attend.  I can’t help but believe we are a far less democratic, far less egalitarian society culturally and emotionally (as well as, obviously, economically) now than we were in 1965.

My fat salary is linked to the same sea changes.  In academia, too, the divide between winners and losers has widened.  The spread between the highest and lowest salary in my department is much greater now than it was in 1992, when I arrived.  And, of course, academia has also created its own version of “contract workers,” the “adjuncts” who get low wages and no benefits to do the teaching that the “research faculty” does not do.  It stinks—even as I am a beneficiary of it.  No wonder I feel guilty.  Yeah, you say, you and your guilt feelings plus $1.50 will get you a ride on the subway.  I hate coming across as defensive, but I will record here that I have turned down all available raises over the past five years (admittedly, they were hardly large) so that the money could be distributed among my less well-paid colleagues.

A last point about money.  This thought comes from the Paul Manafort story.  I must be a person of very limited imagination.  Over the past three years, after all the deductions for taxes, retirement funds, health insurance etc., my wife and I together have approximately $10,000 a month in take-home pay.  That’s the amount that lands in our bank accounts each month.  We bought our house quite some time ago, so our monthly mortgage plus escrow is $2000.  I understand that is low for most people.  But we have had a number of medical bills that our shitty medical insurance fails to cover—certainly coming to at least $500 a month when averaged over a whole year.  In any case, the point is that we can’t spend $10,000 a month—even as we were supplementing my wife’s mother’s retirement home costs to the tune of $1500 a month, and giving a fair amount of money to our two children.  Yet we do not deny ourselves anything, and basically don’t pay much attention to what we spend.  This last, not paying attention, is an astounding luxury after at least twenty years of sweating every penny.  Yet, even with being wildly careless in relation to our earlier habits, there is always enough money.  In fact, it slowly accumulates, so that at the end of every year, no matter what medical emergencies or extravagant trips or increases in the number of charities we send an automatic monthly donation to, there is an extra $10,000 or so.

Clearly—as Paul Manafort showed us—there are a significant number of people in the US to whom $10,000 a month would be woefully inadequate.  Of course, there are millions more for whom, as for my wife and me, it would be untold riches.  I don’t really know what moral to derive from that fact.  So I will simply state it here—and cease.

The Future of the Humanities

For some time now, I have had a question that I use as a litmus test when speaking with professors of English.  Do you think there will be professors of Victorian literature on American campuses fifty years from now?  There is no discernible pattern, that I can tell, among the responses I get, which cover the full gamut from confident pronouncements that “of course there will be” to sharp laughter accompanying the assertion “I give them twenty years to go extinct.”  (For the record: UNC’s English department currently has five medievalists, seven Renaissance scholars, and six professors teaching Romantic and Victorian literature—that is, if I am allowed to count myself a Victorianist, as I sometimes was.)

I have gone through four crises of the humanities in my lifetime, each coinciding with a serious economic downturn (1974, 1981, 1992, and 2008).  The 1981 slump cost me my job when the Humanities Department in which I taught was abolished.  The collapse of the dot.com boom did not generate its corresponding “death of the humanities” moment because, apparently, 9/11 showed us we needed poets.  They were trotted out nation-wide as America tried to come to terms with its grief.

Still, the crisis feels different this time.  Of course, I may just be old and tired and discouraged.  Not “may be.”  Certainly am.  But I think there are also real differences this time around—differences that point to a different future for the humanities.

In part, I am following up my posts about curriculum revision at UNC.  The coverage model is on the wane.  The notion that general education students should gain a familiarity with the whole of English literature is certainly moving toward extinction.  Classes are going to be more focused, more oriented to solving defined problems and imparting designated competencies.  Methods over content.

But, paradoxically, the decline of the professors of Victorian literature is linked to more coverage, not less.  The History Department can be our guide here.  At one time, History departments had two or three specialists in French history (roughly divided by centuries), three or four in English history, along with others who might specialize in Germany or Spain or Italy.  That all began to change (slowly, since it takes some time to turn over a tenured faculty) twenty or so years ago when the Eurocentric world of the American history department was broken open.  Now there needed to be specialists on China, on India, on Latin America, on Africa.  True, in some cases, these non-European specialists were planted in new “area studies” units (Asian Studies, Latin American Studies, Near Eastern Studies etc.).  But usually even those located in area studies would hold a joint appointment in History—and those joint appointments ate up “faculty lines” formerly devoted to the 18th century French specialist.

Art History departments (because relatively small) have always worked on this model: limited numbers of faculty who were supposed, somehow, to cover all art in all places from the beginning of time.  The result was that, while courses covered that whole span, the department only featured scholars of certain periods.  There was no way to have an active scholar in all the possible areas to be studied.  Scholarly “coverage,” in other words, was impossible.

English and Philosophy departments are, in my view, certain to go down this path.  English now has to cover world literatures written in English, as well as the literatures of groups formerly not studied (not part of the “canon”).  Philosophy now includes non-Western thought, as well as practical, professional, and environmental ethics, along with new interests in cognitive science.

There will not, fifty years from now, be no professors of Victorian literature in America.  But there will no longer be the presumption that every self-respecting department of English must have a professor of Victorian literature.  The scholarly coverage will be much more spotty—which means, among other things, that someone who wants to become a scholar of Victorian literature will know there are six places to reasonably pursue that ambition in graduate school instead of (as is the case now) assuming you can study Victorian literature in any graduate program.  Similarly, if 18th century English and Scottish empiricism is your heart’s desire, you will have to identify the six philosophy departments you can pursue that course of study.

There is, of course, the larger question.  Certainly (or, at least, it seems obvious to me, although hardly to all those I submit to my litmus test), it is a remarkable thing that our society sees fit to subsidize scholars of Victorian literature.  The prestige of English literature (not our national literature after all) is breath-taking if you reflect upon it for even three seconds.  What made Shakespeare into an American author, an absolute fixture in the American curriculum from seventh grade onwards?  What plausible stake could our society be said to have in subsidizing continued research into the fiction and life of Charles Dickens?  What compelling interest (as a court of law would phrase it) can be identified here?

Another paradox here, it seems to me.  I hate (positively hate, I tell you) the bromides offered (since Matthew Arnold at least) in generalized defenses of the humanities.  When I was (during my years as a director of a humanities center) called upon to speak about the value of the humanities, I always focused on individual examples of the kind of work my center was enabling.  The individual projects were fascinating—and of obvious interest to most halfway-educated and halfway-sympathetic audiences.  The fact that, within the humanities, intellectual inquiry leads to new knowledge and to new perspectives on old knowledge is the lifeblood of the whole enterprise.

But it is much harder to label that good work as necessary.  The world is a better, richer (I choose this word deliberately) place when it is possible for scholars to chase down fascinating ideas and stories because they are fascinating.  And I firmly believe that fascination will mean that people who have the inclination and the leisure will continue to do humanities work come hell or high water.  Yes, they will need the five hundred pounds a year and the room of one’s own that Virginia Woolf identified as the prerequisites, but people of such means are hardly an endangered species at the moment.  And, yes, it is true that society generally (especially after the fact, in the rear view mirror as it were) likes to be able to point to such achievements, to see them as signs of vitality, culture, high-mindedness and the like.  But that doesn’t say who is to pay.  The state?  The bargain up to now is that the scholars (as well as the poets and the novelists) teach for their crust of bread and for, what is more precious, the time to do their non-teaching work of scholarship and writing.  Philanthropists?  The arts in America are subsidized by private charity—and so is much of higher education (increasingly so as state support dwindles).  The intricacies of this bargain warrant another post.  The market?  Never going to happen.  Poetry and scholarship are never going to pay for themselves, and novels only very rarely do.

The humanities, then, are dependent on charity—or on the weird institution that is American higher education.  The humanities’ place in higher education is precarious—and the more the logic of the market is imposed on education, the more precarious that position becomes.  No surprise there.  But it is no help when my colleagues act as if the value of scholarship on Victorian literature is self-evident.  Just the opposite.  Its value is extremely hard to articulate.  We humanists do not have any knock-down arguments.  And there aren’t any out there just waiting to be discovered.  The ground has been too well covered for there to have been such an oversight.  The humanities are in the tough position of being a luxury, not a necessity, even as they are also a luxury which makes life worth living as contrasted to “bare life” (to appropriate Agamben’s phrase).  The cruelty of our times is that the overlords are perfectly content (hell, it is one of their primary aims) to have the vast majority only possess “bare life.”  Perhaps it was always thus, but that is no consolation.  Not needing the humanities themselves, our overlords are hardly moved to consider how to provide them for others.

More Comments on What We Should Teach at University

My colleague Todd Taylor weighs in—and thinks he also might be the source for my “formula.”  Here, from Todd’s textbook, is his version of the three-pronged statement about what we should, as teachers, be aiming to enable our students to do.

  1. To gather the most relevant and persuasive evidence.
  2. To identify a pattern among that evidence.
  3. To articulate a perspective supported by your analysis of the evidence.

And here are Todd’s further thoughts:

“I might have been a source for the ‘neat formula’ you mention, since I’ve been preaching that three-step process as “The Essential Skill for the Information Age” for over a decade now.  I might have added the formula to the Tar Heel Writing Guide.  I am attaching a scan of my textbook Becoming a College Writer where I distill the formula to its simplest form.  I have longer talks on the formula, with notable points being that step #1 sometimes includes generating information beyond just locating someone else’s data.  And step #3, articulating a perspective for others to follow (or call to action or application), is the fulcrum where “content-consumption, passive pedagogy” breaks down and “knowledge-production, active learning” takes off.

The high point of my experience preaching this formula was when a senior ENGL 142 student shared with me the news of a job interview that ended successfully at the moment when she recited the three steps in response to the question ‘What is your problem-solving process?’

In my textbook, I also have a potentially provocative definition of a “discipline” as “a method (for gathering evidence) applied to a subject,” which is my soft attempt to introduce epistemology to GenEd students.  What gets interesting for us rhet/discourse types is to consider how a “discipline” goes beyond steps #1 and #2 and includes step #3 so that a complete definition of “discipline” also includes the ways of articulating/communicating that which emerges from the application of a method to a subject.  I will forever hold on to my beloved linguistic determinism.  Of course, this idea is nothing new to critical theorists, especially from Foucault.  What might be new(ish) is to try to explain/integrate such ideas within the institution(s) of GenEd requirements and higher ed.  I expect if I studied Dewey again, I could trace the ideas there, just as I expect other folks have other versions of the ‘neat formula.'”

Todd also raised another issue with me that is (at least to me) of great interest.  The humanities are wedded, we agreed, to “interpretation.”  And it makes sense to think of interpretation as a “method” or “approach” that is distinct from the qualitative/quantitative divide in the social sciences.  Back to Dilthey.  Explanation versus meaning.  Analysis versus the hermeneutic.  But perhaps even more than that, since quantitative/qualitative can be descriptors applied to the data itself, whereas interpretation is about how you understand the data.  So no science, even with all its numbers, without some sort of interpretation.  In other words, quantitative/qualitative doesn’t cover the whole field.  There is much more to be said about how we process information than simply saying sometimes we do it via numbers and sometimes via other means.

Comments on the Last Post

Two colleagues had responses to my post on the curriculum reform currently in process at UNC.

First, from Chris Lundberg, in the Communications Department, who thinks he may be the source for my (stolen) list of the primary goals of university education in our information saturated age:

“I think I might be the unattributed source for the formula!

The only thing that I’d add to what you already wrote here is to disassociate capacities from skills. Here’s an abbreviated version of my schtick (though you’ve heard it before).

The university is subject to disruption for a number of reasons: folks don’t understand the mission; the content we teach is not responsive to the needs that students have beyond Carolina; and lots of folks have a legitimate argument to teach information and skills.

One of the ways we talked about this in the conversation we had a while back was to ask “what are the things that can’t be outsourced?”—either to another mode of learning information or skills, or, in the case of the job market, to someone behind a computer screen somewhere else. So the formulation that we’d talked about was something like: if you can learn content remotely, the vocation organized around that content is highly likely to be outsourced.

So the case for the university also has to be a case about what is unique about the mode of instruction. That’s the thing about capacities. They aren’t just about something that you learn as content; they are also the kinds of things that you have to do and receive feedback on in the presence of other folks. Writing, speaking, reasoning together, framing arguments, etc.

The information/content part of education doesn’t make a case for the uniqueness of the university—the Internet is real good at delivering information. You don’t need a library card anymore to access the repository of the world’s information. What you need is to learn how to effectively search, pick, and put together a case for why that information is useful. The capacity for sorting and seeing connections in information is the thing. (see the “neat formula”)

Skills (or as the folks in the corporate sector call them now, competencies) are defined by the ability to know how to perform a given task in a given context. Their usefulness is bounded by measuring (typically a behavior) against the demands of that context. A capacity, OTOH, is a trans-contextual ability (set of habits of inquiry and thought, ways of deliberating, etc.) that works across multiple contexts. For example, the biology text my dad used was horribly misinformed about genetic expression (they didn’t know about epigenetics, microRNA, etc.). What was valuable about his biology class (dad was a biotech entrepreneur) was that he learned how to engage the content: what was a legitimate scientific argument; what made a study good; how to do observational research; a facility for the vocabulary, etc. That set of capacities for thinking about science benefitted him even if the content did not. A capacity is something like the condition of possibility for learning content—think Aristotle on dunamis here—not unlike a faculty in its function, but unlike a faculty because it is the result of learning a specific style or mode of thought and engagement. Where faculties are arguably innate, capacities are teachable (constrained by the innate presence of faculties). That, by the way, is what makes it hard to outsource capacity-based learning, either in terms of the mode of learning (harder to do lab research online) or in terms of the vocation (you can’t acquire it as effectively as you might in the context of a face-to-face learning community).

So, a big part of the sell, at least in my opinion, should be about framing capacities as the socially, politically, and economically impactful “middle” ground between information and skills—and therefore justifying both the university and Gen Ed as an element of a liberal arts curriculum.”

Second, from my colleague in the English Department, Jane Danielewicz, who puts some flesh on the bones of “active learning” and weighs in on issues of assessment:

“If we relinquish our grip on teaching primarily content, then we must also develop new methods of assessment.  Our standard tests are content focused.  To assess competencies, students must be asked to demonstrate those competencies.  Our methods of assessment will need to evaluate students’ performances rather than their ability to regurgitate content knowledge.

We should be asking students to write in genres that are recognizable to audiences in various real-world settings.  We should also strive to provide real occasions where a student can demonstrate their competencies to an audience, starting with an audience of their peers and moving out from there.  For example, students can present posters or conference papers at a mini-conference (held during the exam period).

Assessment can be tied to the genre in question.  E.g., for the conference presentation (and we all know what makes a good or bad conference presentation–and should work to convey that knowledge to students), students can be assessed on how well they performed the research, made an argument, supplied evidence, and communicated (orally and visually).

Yes, classes will need to be redesigned to encourage active learning, immersive classroom environments, process-based instruction, problem-oriented class content, and appropriate assessment methods.  Many faculty are already moving in these directions, teaching in ways that develop students’ competencies.  Faculty organizations such as the Center for Faculty Excellence are (and have been) providing instruction and support for active learning, experiential learning, and collaborative learning practices.  Some of our students have built websites, started non-profit organizations (grounded in research about an issue), written family histories, presented at national conferences, and published in peer-reviewed journals.  We will be sorely disappointing our very action-oriented student body if we retrench and insist on a coverage model of GenEd.”


What Should—and Can—the University Teach?

The University of North Carolina, Chapel Hill is currently attempting to develop a substantially new “general education” curriculum.  GenEd, as it is known at Carolina, is the broad “liberal arts” part of a student’s college career; it is a set of requirements separate from the more specialized course of study that is the “major.”

Anyone even remotely connected to universities knows that changing the curriculum always ensures lots of Sturm und Drang, gnashing of teeth, and ferocious denunciations.  Much of this is driven by self-interest; any change, necessarily, will benefit some people more than others.  At a time when students are abandoning the humanities (particularly) and the social sciences (to some extent) as “majors,” the health of those departments depends, more than in the past, on enrollment in their “GenEd” classes.  Thus, any curricular change that seems to funnel fewer students toward those classes is viewed as a threat.  Of course, an oppositional stance taken on that ground pushes the (presumably) primary responsibility of the university to serve the educational needs of its students to the back seat, displaced by internal turf battles.

But there is a legitimate, larger issue here—and that’s what I would like to address.  What does a student in 2019 need to know?  And how does our current understanding of how to answer that question relate to the “liberal arts” as traditionally understood?  At a time when respect for the liberal arts in the wider culture seems at an all-time low, how can their continued centrality to university education not only be protected, but (more importantly) justified or even expanded?

My sense is that practitioners of the liberal arts are having a hard time making the shift from a “coverage” model to one that focuses on “skills” or “capacities.”  Yes, all the proponents of the liberal arts can talk the talk about how they teach students to “think critically” and “to communicate effectively.”  So, all of us in the humanities (at least) have, to that extent, adopted skills talk—even where we fear that it turns our departments into training grounds for would-be administrators of the neoliberal world out there.  But, in our heart of hearts, many of us are really committed to the “content” of our classes, not to the skills that, as by-products, study of that content might transmit.

But, please, think of our poor students! The vast universe of knowledge that the modern research university has created means, as any conscientious scholar knows, that one can spend a lifetime studying Milton and his 17th century context without ever getting to the bottom of it.  Great work if you can get it.  And isn’t it wonderful that universities (and, by extension, our society) see fit to fund someone to be a life-long Milton devotee?  But it is futile to think our undergrads, in two short years before they assume a major, are going to master Etruscan pottery, Yoruba mythology, EU politics, the demographics of drug addiction, the works of James Joyce, and the principles of relativity.  The standard way of approaching (ducking?) this conundrum has been “survey courses.”  The “if this is Tuesday, it must be John Donne” approach.

Any teacher who has ever read the set of exams written by students at the end of those survey classes knows what the research also tells us.  Such classes are close to useless.  They are simply disorienting—and fly through the material at a speed that does not generate anything remotely like real comprehension.  The way people learn—and, again, the research is completely clear on this point—is by taking time with something, by getting down and dirty with the details, followed by synthesizing what is learned by doing something with it.  Active learning it is called—and, not to put too fine a point on it, faculty who despise it as some fashionable buzzword are equivalent to climate change deniers.  They are resolutely, despite their claim to be scholars and researchers, refusing to credit the best research out there on how people learn.

Back to our poor student.  Not only has she been subjected to survey classes, but she has been pushed (by curricular requirements) to take a smorgasbord of them, with no effort to make the various dishes relate or talk to one another.  Each course (not to mention each department) is its own fiefdom, existing in splendid isolation from all the rest.  The end result: students have a smattering of ill-digested knowledge about a bunch of different things, with no sense of why they should know these things as opposed to other possible ones, and with no overarching sense of how this all fits together, or a clear sense of what their education has actually given them.  If we wanted to create confusion, if that was our intended outcome, we could hardly have done better.

The “content” approach in my view, then, leads to confusion for the students and tokenism in the curriculum.  We simply cannot deliver a meaningful encounter with the content of our multiple disciplines during GenEd. So the question becomes: what can we do that is meaningful in the GenEd curriculum?  After all, we could scrap GenEd altogether and do as the Brits do: just have students take courses in their chosen majors during their college years.  Like most American educators, I think the British model a bad mistake.  But that does mean I have to offer a coherent and compelling account of what GenEd can do—and the best way to insure it does what it aims for.

The answer, I believe, is to define what we want our students to be able to do as thinkers and writers.  Here’s a neat formula I stole from someone (unfortunately, I cannot remember my source).  We want a student in 2019 to 1) learn how to access information; 2) learn how to assess the information she has accessed; and 3) know how to use that information to solve specific problems and to make a presentation about it to various audiences in order to communicate various things to those audiences.  I take number 3 very expansively to include (crucially) understanding (through having some experience of) the fact that members of your audience come from very different backgrounds, with very different assumptions about what matters. Thus, effective communication relies heavily on being able (to adapt Kant’s formula) to “empathize with the viewpoint of the other,” while effective problem solving relies on being able to work with others. Assessing information (#2 on my list) involves understanding that there are various methods of assessment/evaluation.  Judging the features of a text or a lab experiment in terms of its technical components and the success with which they have been deployed is different from judging its ethical implications.

I think we can, if careful and self-conscious, make significant progress toward achieving the three goals stated above during the first two years of college.  I think success requires that we de-fetishize content; that we design our classes to develop the identified skills; and that we re-design our classes to make sure we are achieving them.  Assessment will come in many different varieties, each geared to evaluating students’ performances of the competencies rather than to regurgitation of content knowledge. We should be asking students to “perform” their skills, which involves (partly) the presentation of knowledge acquired through reading, research and hands-on experience, in a variety of genres for different kinds of audiences.  The quality of their performances will be the first indication of whether or not we are being pedagogically successful.

I will confess real impatience with teacher/scholars who resist all “assessment” as a dirty word.  Somehow we are supposed to magically know that we are actually teaching our students something, when (in the old curriculum) all we really knew was that the students had checked off the requisite boxes, gotten a grade, and been passed on.  It is no secret that universities have neglected the arts and sciences of pedagogy over the years—and there is no excuse for it.  If we claim to be teaching our students, we need 1) to state clearly and precisely what it is we claim to be teaching them; 2) to do the work necessary to ascertain that we are actually succeeding; and 3) to revise our methods when they are not getting the job done.

Necessarily, courses will still have “content”—and that content matters a lot!  The capacities will be taught through a semester-long engagement with some specific subject matter.  In my ideal university, the person we hire to teach medieval literature, or the history and beliefs of Buddhism, or astronomy, is someone who, in their heart of hearts, believes life is less worth living if you don’t know about their subject of expertise.  They convey that enthusiasm and conviction when they teach their classes—and gain a reputation on campus that attracts students because it is known that Professor X makes Subject Y come alive.  But Professor X also has to know, on another level, that the vast majority of her students are not going to make Subject Y their life work and that even a vaster majority of the human race will lead worthy lives knowing nothing whatsoever of Subject Y.  So we are asking our professor to also—and consciously—design her class to develop some specified capacity.  In other words, her class should model a way of thinking, and require students to put that model into practice.

The proposed new curriculum at Chapel Hill moves from the “coverage” model to one focused on skills or capacities.  I think that means we are moving from something we cannot possibly achieve to something we can, perhaps, do.  I also think the new curriculum has the distinct advantage of trying to specify those skills and capacities.  And it challenges our faculty to craft their classes with care in order to inculcate those capacities in our students.  It is a feature, not a bug, in my eyes that many of our classes will need to be modified.  The point of change is change.  Doing the same old same old is not an improvement—and I, for one, think the need for improvement is evident.

Is the new curriculum perfect?  Of course not.  We cannot know with any certainty exactly how it will play out.  The definition of the capacities and the most effective ways of transmitting them to students will have to be honed and reformed through the crucible of practice.  Any successful institution needs to fight calcification tooth and nail, continually revising itself as it moves along, with an eye firmly on the goals that motivate its practices. The tendency of institutions to stagnate, to do something because that’s the way it has always been done and how it currently distributes its resources and rewards, is all too familiar—and depressing. Change is upsetting and, as I said at the outset, some will benefit more than others from change.

In fact, I think the proposed curriculum protects the arts, humanities, and social sciences at a time when they are particularly vulnerable. I also think the liberal arts will suffer if they stick resolutely to old models that do not respond to larger cultural shifts.  We cannot resist or even speak to those shifts if we don’t find a way of meeting our students—who come to college now with a set of needs and objectives that represent their own response to new societal pressures—at least halfway.  We also must recognize that students will, inevitably (within the “elective” system that dates back to the 1880s) make their own decisions about what courses to take.  Thus we must articulate clear rationales for them to take the various courses that will be available within the GenEd curriculum.  What I like about the new curriculum is the way that it calls us to our task as educators, asking us to identify what we believe passionately our students need to learn, and placing the responsibility that our students get there in our hands.

Harry Frankfurt on Inequality

I read Harry Frankfurt’s essay on inequality (published as a small book by Princeton University Press, 2015) over the weekend.  Frankfurt’s position is simple: “Economic equality is not, as such, of any particular moral importance; and by the same token, economic inequality is not in itself morally objectionable.  From the point of view of morality, it is not important that everyone should have the same.  What is morally important is that each should have enough.  If everyone had enough money, it would be of no special or deliberate concern whether some people had more money than others.  I shall call this alternative to egalitarianism the ‘doctrine of sufficiency’—that is, the doctrine that what is morally important with regard to money is that everyone should have enough” (7).

Economic inequality is morally objectionable only when the fact of its existence leads to the production of other moral harms.  But it is not intrinsically (a key word for Frankfurt) morally objectionable in itself.  “That economic equality is not a good in itself leaves open the possibility, obviously, that it may be instrumentally valuable as a necessary condition for the attainment of goods that do generally possess intrinsic value. . . . [T]he widespread error of believing that there are powerful moral reasons for caring about economic equality for its own sake is far from innocuous.  As a matter of fact, this belief tends to do significant harm” (8-9).

Frankfurt’s efforts to specify the harm done are not very convincing, involving tortured arguments about marginal utility and implausible suppositions about scarcity.  By failing to deal in any concrete cases, he offers broad arguments that fall apart (it seems to me) when applied to things like access to clean air and clean water (think of the Flint water crisis) or to health care and education (where provision of equal access and quality to all is a commitment to the equal worth of every life, a principle that seems to me intrinsic).  It gets even worse at the end of the book, in the second essay, “Equality and Respect.”  Frankfurt writes: “I categorically reject the presumption that egalitarianism, of whatever variety, is an ideal of any intrinsic moral importance” (65).  His argument rests on a bit of a shell game, since he substitutes “respect” for “equality,” and then acknowledges that there are certain rights we deem morally due to all because of our “respect” for their “common humanity.”  But he is against “equality” because he thinks we also accord respect (and even certain rights) differentially.  We need to take the differences between people into account when those differences are (in his words) “relevant.”  What he fails to see is that “equality” names the moral principle that, in (again) particular cases, no differences can or should be relevant (in spite of the fact that various agents will try to assert and act on the relevance of differences).  The most obvious case is “equality before the law.”  It is very hard to see how “equality before the law” is not an intrinsic moral principle.  It functions as a principle irrespective of outcomes—and its functioning as a principle is demonstrated precisely by the fact that it is meant to trump any other possible way of organizing how the law functions.  It is good in and of itself; we could even say that “equality before the law” constitutes the good, the legitimacy, of law—and it performs this constitutive function because it is the intrinsic principle law is meant to instantiate.

But let’s go back to economic inequality.  There Frankfurt is on much stronger ground.  I don’t think he makes a good case that concern over economic inequality causes harm.  But as what I have been calling a “welfare minimalist” (what he calls “the doctrine of sufficiency”), he echoes the comment of my colleague that questions of inequality are irrelevant if everyone has enough.  As Frankfurt puts it: “The doctrines of egalitarianism and of sufficiency are logically independent: considerations that support the one cannot be presumed to provide support for the other” (43).  “The fact that some people have much less than others is not at all morally disturbing when it is clear that the worse off have plenty” (43).

There are practical questions here of a Marxist variety: namely, is it possible for there to be substantial inequality without the concomitant impoverishment of some proportion of the population?  Oddly enough, Frankfurt briefly talks about the inflationary effects of making the poor better off, but never considers the inflationary effects of there being vast concentrations of wealth (in housing costs, for example).  Mostly, however, Frankfurt shies far away from practical issues.

On his chosen level of abstraction, he makes one very good and one very provocative point.  The good point is that concerns about inequality help us not at all with the tough question of establishing standards of sufficiency.  If the first task before us is triage, then what is needed is to provide everyone with enough.  It seems true to me that triage is the current priority—and that we have barely begun to address the question of what would suffice.  Talk of a UBI (Universal Basic Income) is hopelessly abstract without a consideration of what that income would enable its recipient to buy—and of what we, as a society, deem essential for every individual to be able to procure.  There is work to be done on the “minimalist” side, although I do think Martha Nussbaum’s list of minimal requirements (in her book on the capabilities approach from Harvard UP) is a good start.

The provocative point comes from Frankfurt’s stringent requirement that a moral principle, au fond, should be “intrinsic.”  The trouble with inequality as a standard is that it is “relative,” not “absolute” (41-42).  It is not tied to my needs per se, but to a comparison between what I have and what someone else has.  The result, Frankfurt believes, is that the self is alienated from its own life.  “[A] preoccupation with the alleged inherent value of economic equality tends to divert a person’s attention away from trying to discover—within his experience of himself and of his life conditions—what he himself really cares about, what he truly desires or needs, and what will actually satisfy him. . . . It leads a person away from understanding what he himself truly requires in order effectively to pursue his own most authentic needs, interest, and ambitions. . . . It separates a person from his own individual reality, and leads him to focus his attention upon desires and needs that are not most authentically his own” (11-12).

Comparisons are odious.  Making them leads us into the hell of heteronomy—and away from the Kantian heights of autonomy and the existential heaven of authenticity.  But snark is not really the appropriate response here.  There seem to me interesting abstract and practical questions involved.  The abstract question is about the very possibility (and desirability) of autonomy/authenticity.  Can I really form desires and projects that are independent of my society?  In first-century Rome, I could not have dreamed of becoming a baseball player or a computer scientist.  Does that mean that my desire to become one or the other in 2019 is inauthentic?  More directly, it is highly likely that my career aspirations are shaped by various positive reinforcements, various signals that I got from others that my talents lay in a particular direction.  Does that make my choice inauthentic?  More abstractly, what is the good of authenticity?  What is at stake in making decisions for myself, based on a notion of my own needs and ambitions?  Usually, the claim is that freedom rests on autonomy.  Certainly both Kant and the existentialists believed that.  But what if freedom is just another word for nothing left to lose, if it indicates a state of alienation from others so extreme that it is worthless—a thought that both Kierkegaard and Sartre explored?

I am, as anyone who has read anything by me likely knows, a proponent of autonomy, but not a fanatic about it.  That people should have the freedom to make various decisions for themselves is a bottom-line moral and political good in my book.  But I am not wedded to any kind of absolutist view of autonomy, which may explain why appeals to “authenticity” leave me cold.  On the authenticity front, I am inclined to think, we are all compromised from the get-go.  We are intersubjectively formed and constituted; our interactions with others (hell, think about how we acquire language) embed “the other” within us from the start.  It’s a hopeless task to try to sort out which desires are authentically ours and which come from our society, from the others we have interacted with, etc. etc.  To have the freedom to act on one’s desires is a desirable autonomy in my view; to try to parse the “authenticity” of those desires in terms of some standard of their being “intrinsic” to my self and not “externally” generated seems to me one path to madness.

Even more concretely, Frankfurt’s link of the “intrinsic” to the “authentic” raises the question of whether any judgments (about anything at all) are possible without comparison.  His notion seems to be that an “absolute” and “intrinsic” standard allows me to judge something without having to engage in any comparison between that something and some other thing.  I guess Kant’s categorical imperative is meant to function that way.  You have the standard—and then you can judge if this action meets that standard.  But does judgment really ever unfold that way?  By the time Kant gets to the Critique of Judgment, he thinks we need to proceed by way of examples—which he sees as various instantiations of “the beautiful” (since “the beautiful” in and of itself is too vague, too ethereal, a standard to function as a “determinative” for judgment).  And, in more practical matters, it would seem judgment very, very often involves weighing a range of possibilities—and comparing them to see which is the most desirable (according to a variety of considerations such as feasibility, costs, outcomes, etc.).  A “pure” judgment, innocent of all comparison, seems a rare beast indeed.

Because he operates at his insistently high level of abstraction, Frankfurt approaches his “authenticity” issue as a question of satisfaction with one’s life.  Basically, he is interested in this phenomenon: I am satisfied with my life even though I fully realize that others have much more money than me.  One measure of my satisfaction is that I would not go very far out of my way to acquire more money.  Hence the fact of economic inequality barely impinges on my sense that I have “enough” for my needs and desires.  This is a slightly different case from saying that my concept of my needs and desires has been formed apart from any comparison between my lot and the lot of others.  Here, instead, the point is that, even when comparing my lot to that of those better off than me, I do not conclude that my lot is bad.  I am satisfied.

For Frankfurt, my satisfaction shows that I have no fundamental moral objection to economic inequality.  Provided I have “enough,” I am not particularly morally outraged that others have even more.  I am not moved to act to change that inequality.

It seems to me that two possibilities arise here.  The first is that I do find the existence of large fortunes morally outrageous.  I don’t act because I don’t see a clear avenue of effective action to change that situation, although I do consistently vote for the political parties that are trying to combat economic inequality.  But Frankfurt’s point is that my satisfaction shows I don’t find economic inequality “intrinsically” wrong.  I am most likely moved to object to it by seeing what harms have been done to others in order to accumulate such a large fortune—or I point to the wasted resources that are hoarded by the rich and could be used to help the poor.  Frankfurt, in other words, may be right that economic inequality is not “intrinsically” wrong, but only wrong in terms of the harms that it produces.  I think I would take the position that economic inequality is a “leading indicator” of various ills (like poverty, exploitation, increasing precarity, the undermining of democratic governance, etc.)—and that the burden of proof lies in showing that such inequality is harmless.  If this focus on produced harms means that the wrongness of economic inequality is not “intrinsic,” so be it.

The other interesting consideration Frankfurt’s discussion brings to the fore is the absence of envy.  Conservatives, of course, are fond of reducing all concerns about economic inequality to envy.  And the mystery to be considered here (and to which Frankfurt points) is how, if I am aware that others have more than me, I am not consumed with envy, resentment, or a sense of abiding injustice (i.e. it’s not fair that he has more than me).  Certainly some people’s lives are blighted by exactly those feelings.  But others are content, are satisfied, in the way Frankfurt describes.  The comparison has no bite for them.  The difference is noted but not particularly resented—or, if resented, still doesn’t reside at the center of the judgment of their own life.  Maybe some kind of primitive narcissism is at work here, some sense that I really like being me and don’t really want to trade in “me” in order to be some other chap.  The deep repudiation of self required by envy may just be beyond the reach of 80% of us.  Just how prevalent is self-hatred?  How many would really desire to change their lot with another?

Pure speculation of course.  But the point is not some fantasy about authenticity, about living in a world where I don’t shape my desires or self-judgments at least partially by comparing myself to others.  Rather, the fact of our constantly doing such comparing is here acknowledged—and the question is how we live contentedly even as we also recognize that we fall short of others in all kinds of ways.  He has better health, a more successful career, a sunnier disposition, more money, more friends, more acclaim.  How can I be content when I see all that he possesses that I do not?  That’s the mystery.  And I don’t think Frankfurt solves it, and I cannot explain it either.  His little book makes the mystery’s existence vivid for me.