Category: The professions

Institutions

A former student got in touch to talk about “institutions”—which are important in Latour’s work, but rather “undertheorized” (as we used to say in the 1980s).  At least not much discussed in An Inquiry into the Modes of Existence, even as he chides “baby boomers” (278) for their knee-jerk hostility to them.  The boomers “accuse” institutions “of being routinized, artificial, bureaucratic, repetitive, and soulless,” fatal “to the initiative, autonomy, enthusiasm, vivacity, inventivity, and naturalness of existence. . . . [T]here is life only on condition of getting out of institutions, even destroying them, or, short of that, getting as far away from them as possible in order to subsist on the periphery” (278).  He locates institutions in the mode of existence called Habit—and sees them as a source of continuity and, hence, subsistence.  To be hostile to institutions is to end up throwing away a focus on subsistence in order to pursue that phantom: substance. The hostility to habit partakes of the characteristic “iconoclasm” of the moderns, who keep thinking they can get behind appearances to reality, can pierce through the “Shows” of the world to the “thing itself.”  We need (Latour argues), rather,  to develop the healthy regard for habit we find in William James, recognizing its benefits, its ways of making us at home in the world.

So the moral for Latour is “that we should ‘learn to respect institutions.’  [Otherwise], it will be impossible to know, given that habit has so many enemies, whether you want to protect a value by instituting it or, on the contrary, whether you want to betray it, stifle it, break it down, ossify it.  Now we baby boomers have drained that bitter cup to the dregs.  Confronting the ruins of the institutions that we are beginning to bequeath to our descendants, am I the only one to feel the same embarrassment as asbestos manufacturers targeted by the criminal charges brought by workers suffering from lung cancer?  In the beginning, the struggle against institutions seemed to be risk-free; it was modernizing and liberating—and even fun; like asbestos, it had only good qualities.  But, like asbestos, alas, it also had disastrous consequences that no one had anticipated and that we have been far too slow to recognize” (278-79).

For all this, Latour has little to say about how we are to think about institutions, how we are to describe them and what they do (or don’t do).  Maybe he does elsewhere.  I will have to take a look.

In the meantime, here is what I wrote to my student as a first stab of thinking about what institutions are:

My latest blog post (thanks for reading, by the way) does a little Latour stuff that points toward institutions.  I think, in fact, that what you can glean from his Science in Action or Reassembling the Social is most likely the best bet.  In short, Latour is great in getting us to think about all “the players” that contribute to the production of something.  Of course, he is interested in both human and non-human “actants” (to use his term).  Institutions, then, are formal structures within which actants operate (establishing hierarchies, differential access to resources, lines of authority and of connection), but which also represent an effort to stabilize and enable the continued existence of networks that spring into existence and act in relation to some specific end.  Institutions, in other words, put a public face on, and identity to, what might otherwise be ephemeral relations formed in the heat of action.  The institution tries to enable repetition–the gathering of these actants in the next instance, the next attempt to produce something.  This formalization of the actant network has its dangers/downsides (sclerosis is always a threat), but also its upsides (establishing relationships and procedures, so that re-invention of the wheel is not always necessary, and garnering resources).  A continuing presence, an institution can also bridge the gap between one instance of action and the next.  Finally, institutions can accumulate and store authority and/or prestige.  They can become a name-brand, thus attracting resources and attention.

 

As I thought more about this, I found myself troubled by the thought that most of what I say about institutions could also be said of “organizations.”  Yet in ordinary language, we do distinguish between the two.  Congress is a political institution; the Democratic Party is a political organization.  Amazon, Amnesty International, the New England Patriots, and the Modern Language Association (MLA) are all organizations.  To my ear, at least, it would be odd to call any of them “institutions.”  The Catholic Church, the University of North Carolina, and the Supreme Court are institutions.  In common parlance, we can also say that “Harriet Jones is an institution in these parts,” but we would never call her an “organization.”

“Hollywood” is a collective noun that designates the film industry; the “studio system” refers to a particular way that industry was (is?) organized.  But I don’t think we would normally call Hollywood an institution or an organization.  It is a loose affiliation of various actors—sometimes interconnected enough for us to speak of “networks”—with (perhaps) habitual ways of doing its self-appointed tasks.  But somehow it doesn’t rise to the status of “institution.”

Yet I feel as if Major League Baseball is on the cusp of being an institution—and is certainly an organization.  Even as I feel that the National Football League is definitely an organization, but nowhere near being an institution.  So can I make any sense of these contradictory intuitions?

Here’s a try before I go to the dictionary.  An institution is the framework within which a variety of actants can practice (in any variety of ways, including cooperatively or competitively).  The institutions lays down protocols—canons for a specific action being counted as an instance of the “practice” that the institution shelters/enables/presides over.  The authority of the institution faces two ways: 1. Inwardly toward instances of the practice itself, judging the status and quality of those instances. And 2. Outwardly toward the world as it makes the case for the general benefit that practice can provide to non-practitioners.  [In short, I am stealing here Bruce Robbins’ understanding of professionals; their guild establishes and maintains “professional standards,” even as their guild must legitimate to a wider public the usefulness of “professional practices.”]

Within that institutional setting, there can be a wide variety in the ways its practices are put to use—and there can be widespread disagreement and contestation about substantive matters.  The institution provides “the rules of the game” and the certification of who gets to be “a player.”

And something, like Major League Baseball, becomes “an institution” when the it garners a widely acknowledged “authority” and respect in relation to its wider legitimating function.

An organization may establish a “brand” that is well-trusted, seen as reliable.  But it will not have the “authority” that an institution has.  Why?  Because an organization is put together to facilitate the more efficient accomplishment of a single purpose.  Everyone in the organization must get with the program; all of the members of the organization must contribute to its achieving its goal.  The organization is not a framework for multiple uncoordinated actions; just the opposite.  Its whole point is coordination, in making sure that actants work in sync, in tandem.  An organization is never, like an institution, “above the fray.”  It is never the enabler of the varieties of practice; instead, it harnesses energies toward a goal.

Hence, if the Supreme Court becomes the tool of one political faction, it loses its “authority” as the institution that enables political contestation, becoming instead just another piece of an organization.  So maybe I can say that organizations exist to produce something; but institutions exist to enable the production of things, but do not produce things directly themselves.

Major League Baseball allows for the playing of numerous games of baseball; it does not do the playing itself.  It is the integrity with which it plays that role, as guardian of the practice, that gains it the “authority” that leads us to think of it as an institution.  But if the single-minded organizational goal of making money comes to dominate, then Major League Baseball will only be an organization, not an institution.  Football seems much more directly commercial than baseball—and hence the National Football League is not an institution.  This may be pure sentimentality, but it also has to do with how differently the two professional sports are related to the history of their games, and to the ways in which football players are interchangeable parts and constricted to a communal project.  Baseball is much more individual, much less faceless (it takes a truly devoted fan to know the linemen on a football team.)

Anyway, I could be totally wrong about this baseball/football divide.  More important is to recognize that the issue is not commercial versus non-commercial.  Amnesty International is an organization because devoted to a specific goal.  It is working for something substantive, not providing a framework within which a practice can unfold in myriad, even unexpected, ways.  But Amnesty is not commercial.  So the distinction I am trying to probe is not about the presence or absence of a profit motive.

It turns out the dictionary is not much help.  Here’s my Random House dictionary on “institution”: 1. An organization or establishment devoted to the promotion of a particular object.

But # 4 might help us some: Sociology, a well-established and structured pattern of behavior or of relationships that is accepted as a fundamental part of a culture, as marriage.

Followed by # 5: any established law, custom etc.  and #6: any familiar practice or object.

Whereas the definitions offered for “organization” are not very useful either.  #1 is “the action or process of organizing.”  #5 is “a body of persons organized for some end or work.”

I would say that the dictionary’s deficiencies indicate a general difficulty in describing collective action.  Organizations, quite obviously, act.  Things get produced and decisions get made that could never be done by a single person acting alone—and the thing produced and the decision made is not fully controlled by one of the actors (actants) in the process that yields that result.

When it comes to institutions it can seem even trickier.  If we are talking “habit” or “custom,” we can seem to be identifying a force that has no obvious origin.  It is “just our way of doing things,” even as that “way” does not remain completely impervious to change. But the mechanisms of change are hard to identify and even harder to manipulate.  We like to think we can tell an origin story about our political institutions—and we even have mechanisms for their being revised/amended/reformed etc.

But when it comes to relations between the sexes or between the races, the dead hand of the past, of cultural mores, proves incredibly resistant to direct intervention even as those relations do not remain immobile.  If we deem racism “an institution,” then it is like the Supreme Court in that it provides a framework for a whole set of practices, but it is unlike the Supreme Court in that there are no procedures for adjudication among those practices.  Racism as “an institution” is a product of various actions/practices in the past; but none of those actions/practices in itself had the power to establish racism.  We have what is truly a collective product here, one that is only “deliberate” in a very attenuated way.  No wonder conspiracy theories as so appealing; at least they identify agents powerful enough to serve as the originators or perpetuators of a particular state of affairs.

All of this is inconclusive enough.  The term “institution” clearly encompasses apples and oranges.  The more fruitful approach might be a version of Latour: consider particular instances of something you are tempted to call an “institution” and try to trace the actions that lead to its production.  Then, “institution” is the end product, not the starting place, of an inquiry.  And we don’t assume from the outset that one institution has much in common with another one.  An escape from essentialism into particularities.

Response to Michael Clune’s “Judgment and Equality”

Headnote: I was scheduled to present at the American Comparative Literature Association meeting in Chicago on March 20th.  Obviously, the meeting got cancelled.  The session was on “Aesthetic Education” and the panel members were all asked to read Joseph North’s recent book Literary Criticism: A Concise Political History (Harvard UP, 2017) and an essay by Michael Clune entitled “Judgment and Equality” (Critical Inquiry, 2018).  After reading the Clune essay, I was moved to write the response posted below.  I think it is fairly self-explanatory, even if you haven’t read the Clune essay.  After writing this response, I discovered that Clune had offered a shorter version of his plea for the authority of experts (and polemic against equality in matters of judgment) in a Chronicle of Higher Education piece that generated a fair amount of hostile response.  (You can easily find these pieces on line by googling Clune’s name.)  In particular, the hostility came from the fact that conservative New York Times pundit, Ross Douhat, wrote favorably about Clune’s position on the op-ed page of the Times.  Doubtless, Clune was chagrined to see his argument, which he thought was radically leftist, embraced by a right-wing writer.  But I don’t know that he should have been particularly surprised; to question–or to think about limiting–the claims of democratic equality is always going to play to the right’s fundamental commitment to reining in equality and democracy wherever it rears its dangerous head.  In any case, it is to the anti-democratic implications of Clune’s argument that my piece responds to.  I will post some thoughts on North’s book in the next few days.

 

In November 2008, a week after the election of Barack Obama to the presidency, I was in a New York city room full of bankers and hedge fund managers leading a discussion on the implications of that election.  The financiers were horrified; they earnestly told the gathering that Obama and a Democratic Congress, led by Nancy Pelosi were know-nothings who, through their ignorant meddling, were about to ruin American economic prosperity.  These men—and of course they were all men—were completely unshaken in their conviction of their competence even following the financial collapse of the previous month.  A portrait of expertise in action, offering a strong case for why the rule of experts must be tempered by the oversight of the demos.  Every profession is a conspiracy against the laity, George Bernard Shaw famously warned us.

Democracy means many things, but one of its many entailments is that elites must subject themselves to the judgment of the masses.  As experts we can deplore the ignorance of the non-initiated, but in a democracy authority is not to be had as a gift but must be earned.  Democracy is a supremely rhetorical political form.  Any one, including the expert, who has a position they want the polity to act upon must convince a majority of her fellow citizens to endorse that policy.  Persuasion is the name of the game; and saying it again, just louder this time and standing on my credentials as an expert, is not a very effective rhetorical move.  There is a deep anti-authoritarian bias in the demos—and we should celebrate that fact.  Democracy, as Winston Churchill said, has some very obvious flaws, but it sure beats all the alternatives.

The right has eaten the left’s lunch for some forty years now.  We people of the left can scream that it hasn’t been a fair fight, but that still doesn’t provide any justification for retreating from the democratic arena into a petulant insistence on our being correct and the misled masses being wrong.  The technocracy of the EU may be somewhat preferable to the plutocracy of the US, but the “democratic deficit” is real in both cases.  Maybe democracy is always a battle between elites for endorsement from the general populace.  If that is the case, and if violence is not considered a viable or desirable alternative, then the rhetorical battle for the hearts and minds of the people is where all the action is.  It makes no sense in such a battle to begin by maligning the judgment of those people.  Depending on the capacity of the people to judge for themselves is the foundational moment of faith in a democratic society.  Yes, as Clune reminds, us, Karl Marx refuses to make that leap of faith.  Do we really want to follow Marx down that anti-democratic path?

Marx, after all, also warns us that every ruling elite indulges itself with the sweet conviction that it acts in the interests of all.  We, those business men I spent the evening with told themselves, are the “universal class” because we bring the blessings of economic plenty to all.  In their utter belief in their own goodness, I saw a mirror image of myself and my leftist friends.  If we don’t for a moment want bankers to avoid accountability to the people they claim to serve, why would we think we deserve an exemption.  Listen to your academic colleagues rant about the vocabulary of assessment and outcomes when applied to what happens in the classroom—and you will hear an echo of what I listened to that night in New York. Who dares to question the effectiveness of what transpires on our college campuses?

Kenneth Burke picked up the term “professional deformation” from John Dewey.  He used it to highlight the blindness that accompanies immersion in a discipline.  I think Clune is right to present judgment as emerging from the practices and institutions of a discipline. (“[T]o show someone the grounds of a given judgment is to educate them in the field’s characteristic practices,” he writes [918].)  The oddity of his position, it seems to me, is that he takes this Kuhnian point as a reason to enhance our faith in the judgments of those encased in a paradigm.  That strikes me as a very odd reading of Kuhn, taking his book as a celebration of “normal science” instead of a meditation on the difficulty of intellectual revolution because of the blinders normal science imposes.  It is only a bit exaggerated, in my view, to see Kuhn as telling us that textbooks devour their readers and turn them into mindless conformists. Yes, Clune nods to the fact that communities of practitioners “can and do manifest bias and thus serve as sites of oppression” (918), but he seems to think acknowledgment of that fact is enough to render it harmless, appealing to an unspecified “broad range of measures” (919) that can compensate for the potential oppressions.  But I read Kuhn as suggesting that it is precisely the young, the uninitiated, the outsiders (in other words, those who are least embedded in the community of practice, or even non-members of it), who are most likely to disturb its complacency, its confidence in its judgments and its blindness to its biases and oppressions.  Let’s remember Foucault’s lessons about the power of disciplines.  All concentrations of power are to be distrusted, which is another reason (besides a discipline’s in-built blind spots) to advocate for the subjection of expert judgments to external review—and not simply external review by other members of the community in question.  
I am a firm believer in the 80/20 rule; spend 80% of your effort in mastering your discipline; spend 20% of your time in wide-ranging reading and activities that are completely unrelated to that discipline.  And then use that 20% to break open your discipline’s inbreeding.

I am fully sympathetic with Clune’s desire to find in aesthetics an alternative to the norms and values of commercial society.  And that position does seem to entail a commitment to aesthetic education as the site when that alternative can be experienced and embraced.  I also believe that the democratic commitment to the people’s right to judge the prescriptions and advice of the experts does make the need for an educated citizenry a priority for our schools and universities.  The liberal arts curriculum should be aimed at making citizens more competent judges.  It is a strong indication of the right wing’s rhetorical triumph with a section of the populace that a majority of Republicans in a recent poll agreed that universities did more harm than good.  I don’t need to tell this audience that the liberal arts and the arts are under a sustained rhetorical attack.

What drives people like me and you crazy is that the attitudes adopted by the right are impervious to facts.  Climate change denial has become the poster child for this despair over the ability of the demos to judge correctly or wisely.  It is worth mentioning that the denigration of the liberal arts is equally fallacious, at least if the reasons to avoid humanities or arts classes are economic.  All the evidence shows that humanities and arts majors, over a lifetime, do just as well economically as science and engineering and business majors.  The sustained attack on the arts and humanities has more to do with a distaste for the values and capacities (for critical thinking, for sophisticated communication) they promote.

So what are we, the defenders of the aesthetic and the humanities (along with the world-view those disciplines entail), to do?  Saying our piece, only louder this time, and with a statement of our credentials as experts, won’t do.  Declaring our inequality, my superiority to you, should be a non-starter at a moment in history where increasing inequality is among our major problems.  I, frankly, am surprised that Clune is even tempted to take that route.  It comes across as pretty obvious petulance to me.  Why isn’t anyone paying any attention to me?  I know what’s what and they don’t. Listen up people.

In short, I stand with those who realize that judgment needs to be reconceived in ways that render it compatible with equality.  Clune is undoubtedly right that some writers have failed to face squarely the fact that judgment and equality are not easily reconcilable.  The problem, to put it into a nutshell, is that judgment seems to entail right and wrong, correct and incorrect, true and false.  To make all judgments equivalent is akin (although it is not actually that same as) total relativity, the idea that every judgment is “right” within a specified context.  Contrasted to that kind of relativism, the acceptance of the equivalence of all judgments can look even more fatuous, marked with a shrug and a “whatever.”  No point arguing since there is no accounting for tastes, and no one gets to dictate your tastes to you even if they are weird, incomprehensible, obnoxious, disgusting.  One’s man’s meat is another man’s poison.

Faced with such epistemological throwing in of the towel, it is not a surprise that folks keep coming back to Kant.  Clune details how both Sianne Ngai and Richard Moran have recently tried to come to terms with Kant’s attempt to demonstrate that aesthetic judgments make a “demand” on others, thus raising our aesthetic preferences above a mere statement of personal taste and towards an intersubjective objectivity.  Ngai, Moran, and Clune all use the term “demand” and the three translations of Kant’s Critique of Judgment I have consulted also use that term.  But I will confess to preferring Hannah Arendt’s translation of Kant, even though I have never been able to find in Kant where she finds the phrase that she puts in quotation marks.  For Arendt, those making an aesthetic judgment, then “woo the consent” of the other.  Arendt, in other words, places us firmly back into the rhetorical space that I am arguing is central to democracy.  Surprisingly, Clune never recognizes the affinity between his “community of practitioners” and Kant’s sensus communis.  What Arendt calls our attention to—especially when she tells us that Kant’s Critique of Judgment is the “politics” critics claim he never got around to writing—is the fact that the sensus communis always needs to be created and its ongoing reconfiguration is the very stuff of politics.  Yes, judgments are deeply indebted to and influenced by the community from which they are articulated, but that community and its practices is a moving target.  Think of Wittgenstein’s image of language as a sea-going vessel that undergoes a slow, but complete, rebuild even as it never leaves the water for dry-dock.  The democratic community—and its judgments on the practices of its various sub-cultures and its elites and its experts—is continually being refashioned through the public discourses that aim to sway the public in one direction or another.

How does this understanding of the scene of politics help.  Clune, I think, provides a clue when he writes “For me to be convinced by the critic’s aesthetic judgment that James is interesting means not that I have evaluated the reasons for that judgment but that I’ve decided to undertake an education that promises to endow me with his or her cultural capacities” (926).  What gets under-thought here is what would actually motivate such a decision.  We need to invoke Aristotle in conjunction with Raymond Williams at this point.  The expert—be she a climate scientist, a heterodox economist, or a Proust scholar—wants, at a minimum, to inspire trust, and, at a maximum, the auditor’s desire to join her community of practitioners, to make its common sense his own.  It is not “reasons,” as Clune says, that are decisive here, but ethos.  I would be willing to be that almost everyone in this room could point toward a teacher who inspired them—and inspired them exactly as the kind of person I myself wanted to become.  What an aesthetic education offers is initiation into a particular “structure of feeling.”  It is the attractiveness of that sensibility that our political and public rhetorics need to convey.  Once again, Kant and Arendt help us here when they point to the crucial importance of the “example” to these attempts to “woo the other.”  Modelling what a life lived within that structure of feeling looks like is far more potent that pronouncing from on high that Moby Dick is superior to Star Wars.

Look at this concretely.  The rhetorical genius of the Republican party since Ronald Reagan has been to portray the professional, educated, upper-middle class left (who occupy then “helping professions” of doctor, lawyer, teacher, social work) as joyless scolds, continually nagging you about how all the things you do are harmful to the environment, to social harmony, to your own well-being.  They have made it a political statement to drive a gas-guzzling truck while smoking a cigarette in defiance of those pious kill-joys.  That’s the rhetorical battle that the left has been losing since 1980.  Yes, the populace scorns our expert judgments, but that’s because they have no desire at all to be part of the communities in which those judgments are common sense.  Our problem, you might say, is not how to educate—aesthetically or otherwise—those who make the decision to undertake an education, but is how to make the prospect of an education appealing to those who see it as only a constant repudiation of their own sensibilities and capacities.  In short, “structures of feeling” triumph over “interests” much of the time and the left has proved spectacularly inept at modelling positive examples of the sensibility we wish to see prevail in our society.

I shouldn’t be so overwhelmingly negative about the left.  The sea-change in attitudes (and public policy) toward LBGTQ citizens over the past thirty years cannot be overstated.  Of course, given that attitudes are, as I have argued, a moving target, changes in any one direction are never set in stone.  Constant maintenance, rearticulation, and adjustments on the fly are necessary.  The task of education, of initiation into a sensibility that has come to seem “common sense,” as both attractive and right, is always there in front of us.  I am simply arguing that the right wing has been more attuned to that educative task than the left.  Or as I am prone to say, the left goes out and marches in the street on the weekend before returning to work on Monday while the right gets itself elected to school boards.

As a teacher, I find Ngai’s focus on “the interesting” crucial and poignant.  When we call something “interesting,” we are saying it is something worry of attention, something worthy of pausing over and considering at more length.  And that plea for attention is certainly at the very center of my practice as a teacher.  When I declare in front of class that this or that is “interesting,” I am inviting students into a sensibility that wants to ponder the significance of the thing in question.  But I am also pleading with them to take that first step—knowing that for many of them I am just another professor who incomprehensively gets excited about things to which they are supremely and irredeemably indifferent.  You can’t win them all, of course.  But the effort to win some of them over is endless, never fully successful, and in competition with lots of other demands on their attention.

There is, I am arguing, no other course of action open in a democratic society.  We are, if you will, condemned to that rhetorical battle, attempting to woo our students, to woo the demos, to a particular sensibility, a particular vision of the good.  That, I will state it nakedly, is politics.  To dream of a world where expert opinion is accepted by the non-experts is to dream of salvation from politics, from its endless wrangling, its messy compromises, its inevitable mix of failures with successes.  It is to desire a technocratic utopia, in which the “administration of things” replaces the conflicts of political contestation.  No thank you.

Another way to say this is that politics is the inevitable result of living in a pluralistic universe.  There will never be full consensus, there will never be a single vision of the good to which all subscribe, there will never be an all-encompassing and all-inclusive sensus communis.  On the whole, I’d say that’s a good thing.  I would hate to live in a world where everyone disagreed with me about everything.  But I am convinced that a world in which everyone agreed with me about everything would be almost as bad.

But, but, but . . . climate change.  Please recognize that climate change is just one in a long string of existential threats that democracy—slow, contentious, ruled by greed and passion—is deemed ill equipped to handle.  Authoritarians of whatever political stripe are always going to identify a crisis that means democracy must be put on hold.  The terrible attraction of war is that it negates the messy quotidian reality of pluralism.  The dream is of a community united, yoked to a single overwhelming purpose, with politics suspended for the duration.  Thus, that great champion of pluralism, William James, could also dream of a “moral equivalent of war.”  Perhaps democracy truly is unequal to the challenge of climate change, but then the desire/need to jettison democracy should be stated openly.  Otherwise, it is back to the frustrations of political wrangling, to the hard process of winning over the demos.

So, yes, I am in favor of an aesthetic education that aims to introduce students to a sensibility that finds commercial culture distasteful and (perhaps more importantly but perhaps not) unjust. And I want them to see that indifference to climate change is of a piece with the general casualness of our prevailing economic order to the sufferings of others. But I cannot endorse Clune’s picture of that educational process.  “[T]he significant investment of time and energy that this education requires—both at its outset and for a long time afterwards—is channeled in submission to the expert’s judgment that these works make particularly rewarding objects of attention.  The syllabi of an English department’s curriculum, for example, codify this submission” (926).  I have been fighting against my English department’s curriculum for twenty-five years.  The texts I want to teach in my classes are the ones I find good to think with—and I invite my students to join me in that thinking process.  (More Arendt here: her notion that judgment involves “going visiting” and you can know a thinker’s ethos by considering the company she wants to visit—and to keep.)  What I model is one person’s encounter with other minds—the minds represented by the books we read and by the people who are in the classroom with me.  My colleagues should have similar freedom to construct their courses around the texts that speak to them—and in which they then try to interest their students.

Fuck submission.  Maybe it’s because I teach in the South.  But my students have been fed submission with mother’s milk.  What they need to learn is to trust their own responses to things, to find what interests them, to find what moves them emotionally and intellectually.  They need to learn the arrogance of democratic citizenship, which arrogates to itself the right to judge the pronouncements of the experts.  Certainly, I push them to articulate their judgments, to undertake to woo others to their view.  They must accept that they too are joined in the rhetorical battle, and that if they want allies they will have to learn how to be persuasive.  But that’s very, very different from suggesting that anyone should ever take the passive position of submission.

Clune is scornful of Richard Moran’s “liberal” endorsement of freedom of choice.  So I want to end with a question for all of you as teachers.  Can I safely assume that you would deem it inappropriate, in fact unethical, to tell your students whether or not to believe in god, or what career path to follow, or for whom they should vote?  If you do think, in your position as a teacher, that you have the right to tell your students what to do in such cases, I would like to hear your justification for such interference.  Obviously, what I am suggesting here is that our sensus communis does endorse a kind of baseline autonomy in matters of singular importance to individuals.  I certainly wouldn’t want to live in a society where my freedom to choose for myself about such matters were not respected.  If some of you in the room feel differently, I am very interested in hearing an articulation and defense of such feelings.

Now we could say that our expertise as teachers does not extend to questions of career, religious faith, or politics.  But where we are experts, there we are entitled to tell a student he is wrong.  James really is interesting; Moby Dick really is better than Star Wars.  But surely such bald assertions are worthless.  How could they possibly gain the end we have in view?  Via the path of submission?  I can’t believe it.  Yes, we stand up there in our classrooms and use every trick we can muster to woo our students, to get them interested, and even to endorse our judgments after careful consideration; one of our tasks is to teach (and model) what careful consideration looks like.  And I certainly hope you are especially delighted when some student kicks against the pricks and makes an ardent case that Star Wars is every bit as good as Melville.  Because that’s the sensibility I want aesthetic education to impart.

Moral Envy and Opportunity Hoarding

One quick addendum to the last post—and to Bertrand Russell’s comment about how the traditionalist is allowed all kinds of indignation that the reformer is not.  What’s with the ubiquity of death threats against anyone who offends the right wing in the United States?  That those who would change an established social practice/pattern, no matter how unjust or absurd, deserve a death sentence is, to all appearances, simply accepted by the radical right.  So, just to give one example, the NC State professor who went public with his memories of drinking heavily with Brett Kavanaugh at Yale immediately got death threats—as did some of his colleagues in the History Department.  Maybe you could say that snobbish contempt for the “deplorables” is the standard left-wing response to right-wingers—just as predictable as right-wingers making death threats.  But contempt and scorn are not solely the prerogative of the left, whereas death threats seem to be mobilized only by the right.

Which does segue, somewhat, into today’s topic, which was to take up David Graeber’s alternative way of explaining the grand canyon between the left and right in today’s America.  His first point concerns what he calls “moral envy.”  “By ‘moral envy,’ I am referring here to feelings of envy and resentment directed at another person, not because that person is wealthy, or gifted, or lucky, but because his or her behavior is seen as upholding a higher moral standard than the envier’s own.  The basic sentiment seems to be ‘How dare that person claim to be better than me (by acting in a way that I do indeed acknowledge is better than me)?’” (Bullshit Jobs: A Theory [Simon and Schuster, 2018], 248).  The most usual form this envy takes, in my experience, is the outraged assertion that someone is a “hypocrite.”  The right wing is particularly addicted to this claim about liberal do-gooders.  The liberals, in their view, claim to be holier than thou, but know what side their bed is feathered on, and do quite well for themselves.  They wouldn’t be sipping lattes and driving Priuses if they weren’t laughing their way to the bank.  Moral envy, then, is about bringing everyone down to the same low level of behavior—and thus (here I think Graeber is right) entails a covert acknowledgement that the general run of behavior is not up to our publicly stated moral aspirations.  So we don’t like the people who make the everyday, all-too-human fact of the gap between our ideals and our behavior conspicuous.  Especially when their behavior indicates that the gap is not necessary.  It is actually possible to act in a morally admirable manner.

But then Graeber goes on to do something unexpected—and to me convincing—with this speculation about moral envy.  He ties it to jobs.  Basically, the argument goes like this: some people get to have meaningful jobs, ones for which it is fairly easy to make the case that “here is work worth doing.”  Generally, such work involves actually making something or actually providing a needed service to some people.  The farmer and the doctor have built-in job satisfaction insofar as what they devote themselves to doing requires almost no justification—to themselves or to others.  (This, of course, doesn’t preclude all kinds of dissatisfactions with factors that make their jobs needlessly onerous or economically precarious.)

Graeber’s argument in Bullshit Jobs is that there are not enough of the meaningful jobs to go around.  As robots make more of the things that factory workers used to make and as agricultural labor also requires far fewer workers than it once did, we have not (as utopians once predicted and as Graeber still believes is completely possible) rolled back working hours.  Instead, we have generated more and more bullshit jobs—jobs that are make-work in some cases (simply unproductive in ways that those who hold the job can easily see) or, even worse, jobs that are positively anti-productive or harmful (sitting in an office denying people’s welfare or insurance claims; telemarketing; you can expand the list).  In short, lots of people simply don’t have access to jobs that would allow them to do work that they, themselves, morally approve of.

Graeber’s point is that the people who hold these jobs know how worthless the jobs are.  But they rarely have other options—although the people he talks to in his book do often quit these soul-destroying jobs.  The political point is that the number of “good” jobs, i.e., worthwhile, meaningful jobs, is limited.  And the people who have those jobs curtail access to them (through professional licensing practices in some cases, through networking in other cases).  There is an inside track to the good jobs that depends, to a very large extent, on being to the manor/manner born.  Especially for the jobs that accord upper-middle-class status (and almost guarantee that one will be a liberal), transmission is generational.  This is the “opportunity hoarding” that Richard Reeves speaks about in his 2017 book, Dream Hoarders.  The liberal professional classes talk a good game about diversity and meritocracy, but they basically keep the spots open for their kids.  Entry into that world from the outside is very difficult and very rare.

To the manner born should also be taken fairly literally.  Access to the upper-middle-class jobs still requires the detour of education–and how to survive (and even thrive) at an American university is an inherited trait.  Kids from the upper middle class are completely at home in college, just as non-middle-class kids are so often completely at sea.  Yes, school can be a make-it and a break-it, a place where an upper-class kid falls off the rails and a place where the lower-class kid finds a ladder she manages to climb.  But all the statistics, as well as my own experience as a college teacher for thirty years, tell me that the exceptions are relatively rare.  College is a fairly difficult environment to navigate–and close to impossibly difficult for students to whom college’s idiolects are not a native language.

So two conclusions. 1.  It is a mixture of class resentment and moral envy that explains the deep animus against liberal elites on the part of non-elites—an animus that, as much as does racism in my opinion, explains why the abandoned working class of our post-industrial cities has turned to the right.  As bad as (or, at least, as much as) their loss of economic and social status has been their loss of access to meaningful work.  Put them into as many training sessions as you want to transition them to the jobs of the post-industrial economy; you are not going to dispel their acute knowledge that these new jobs suck when compared to their old jobs in terms of basic worth.  So they resent the hell out of those who still hold meaningful jobs—and get well paid for those jobs and also have the gall to preach to them about tolerance and diversity.  2.  It is soul-destroying to do work you cannot justify as worth doing.  And what is soul-destroying will lead to aggression, despair, rising suicide rates, drug abuse, and susceptibility to right-wing demagogues.  Pride in one’s work is a sine qua non of a dignified adult life.

Church vs. State

The current battles between the politicians in North Carolina (both those in our state legislature and those on the Board of Governors for the state-wide university system) remind me of nothing so much as the battles between church and state as portrayed in the film Becket.  Like the medieval church, the university, under the double banner of academic freedom and the right of professional expertise to self-governance, claims—and actually possesses—an autonomy that infuriates the statesmen.  The politicians (despite their hypocritical claims to abhor state power and over-reach) are determined to bring the university to heel.  It only exacerbates matters that universities generate a loyalty and affection among students and alums that politicians can only dream of attaining.

Put this way, the university is the Church.  And, certainly, the university has plenty of analogies with the Church, especially in the pretension to, and sometimes achievement of, the otherworldly.  Plenty of room for hypocrisy there—and undoubtedly no shortage of actual indulgence in that vice.

But I can’t help but view our power-grasping politicians through the lens of religion as well.  I have tried, mostly successfully, during my life and academic career to resist those narratives that posit a sickness deep in the American soul, that see our nation as doomed by a darkness, an original sin, that means it is impossible we will ever live up to our high-falutin’ ideals.  I don’t want to believe that racism explains all of the American past and the American present.  I do want to believe that the US has done a decent—albeit far from perfect—job of providing a good enough life for a higher percentage of its citizens than have most societies in human history.  But I cannot deny that the desire to believe these things may be making me blind to the uglier truth.

In any case, I read this in a Kipling story (“Watches of the Night”): “You may have noticed that many religious people are deeply suspicious.  They seem—for purely religious purposes, of course—to know more about iniquity than the unregenerate.  Perhaps they were specially bad before they became converted!  At any rate, in the imputation of things evil, and in putting the worst construction on things innocent, a certain type of good person may be trusted to surpass all others.”

Now, you could say that the evangelicals meet their match in this regard with the “America is rotten to the core” crew.  Fair enough.  But what I want to ponder is the desire to punish.  When I consider why these right-wingers hate the university—and consider the ways they express that hatred—what I see (among other factors, no doubt) is the desire to subject professors to “market discipline.”  It is not enough to see evil.  One must punish it.  And the chosen instrument for punishment is the market.  The right-wingers may be able to mouth all the virtues of the free market.  But what they really like is that it punishes people, that it causes pain to the reprobate.  How else to explain the need to hunt down the poorest and most vulnerable at every turn and make sure that they are suffering enough?  It’s almost as if the prosperous cannot enjoy their riches without also knowing that some are excluded from that enjoyment.

Of course, the price for that enjoyment is “hard work”—and the right (reminiscent of Kipling’s comments on “suspicion”) is obsessed with the notion that there are people out there who are avoiding “hard work,” who are living off the fat of government largesse.

The university looks like a consequence-free zone.  Bad enough that students get to play on their parents’ and the state’s dime for four years.  But that professors get to do so for a lifetime is truly insufferable!  Teaching only two days a week!  Summer vacations!  Sabbaticals!  And with fancy titles and exaggerated respect.  There ought to be a law against it.

Self-Regulation

I am reading Siddhartha Mukherjee’s The Gene: An Intimate History (Scribner, 2016).  Lots of interest here—and lots of scientific information that is simply new to me and sometimes beyond my ability to comprehend.  More on that, perhaps, later.

For the moment, I want to focus on a more political point.  Mukherjee devotes a few pages to a 1975 conference at Asilomar (near Monterey in California) in which genetic scientists hammered out an agreement to not pursue certain possible laboratory experiments and procedures because of the potential danger of loosing pathogens into the world.

Quoting from Mukherjee’s account:

“Extraordinary technologies demand extraordinary caution, and political forces could hardly be trusted to assess the perils or the promise of gene cloning (nor, for that matter, had political forces been particularly wise about handling genetic technologies in the past [a reference to forced sterilizations in the US and Nazi eugenics]).  In 1973, less than two years before Asilomar, President Nixon, fed up with his scientific advisors, had vengefully scrapped the Office of Science and Technology, sending spasms of anxiety through the scientific community.  Impulsive, authoritarian, and suspicious of science even at the best of times, the president might impose arbitrary control on scientists’ autonomy at any time.

A crucial choice was at stake: scientists could relinquish the control of gene cloning to unpredictable regulators and find their work arbitrarily constrained—or they could become science regulators themselves.  How were biologists to confront the risks and uncertainties of recombinant DNA?  By using the methods they knew best: gathering data, sifting evidence, evaluating risks, making decisions under uncertainty—and quarreling relentlessly.  ‘The most important lesson of Asilomar,’ [Paul] Berg [Stanford professor and key figure at the conference] said, ‘was to demonstrate that scientists were capable of self-governance.’ Those accustomed to the ‘unfettered pursuit of research’ would have to learn to fetter themselves” (232-233).

Except, of course, that they don’t—fetter themselves, that is.  Oddly enough, Mukherjee doesn’t seem to see this.  He hails Asilomar as “a graduation ceremony for the new genetics” (235).  Less than ten pages later, as Mukherjee retails the story of the creation of synthetic insulin, we learn that the success comes from a private company, Genentech, which beats the Harvard team working on the same problem because it is unconstrained by university regulations and caution.  Later, Mukherjee treats Craig Venter, who creates a private company to compete with the government-funded Human Genome Project, much more kindly than many commentators do, while gingerly avoiding the issue of what corners Venter allowed himself to cut by stepping outside of a regulatory regime.

At issue, however, is not Mukherjee’s failure to develop a coherent stance on regulation.  Rather, I am interested in the whole notion of self-regulation—and in the paradoxes of regulation itself.

For starters, regulation is a tough one for people because it is not full-bore permission and it is not full-bore prohibition.  If I give my teen-age son a curfew, I am regulating his behavior, but not forbidding him to go out at night, and not granting permission for him to stay out all night.  Seems simple enough in principle—but it proves very difficult in practice.  The regulation sets a clearly visible limit which (as we know from the Garden of Eden) creates an immediate and powerful temptation.

With self-regulation, then, the limit setter and the tempted transgressor have to be one and the same.  Again, it is trivially true that learning how to regulate oneself, to set and abide by limits not externally imposed, is a crucial step toward maturation.  I am hardly saying that humans are incapable of avoiding “over-doing” something.

But the case is very different when strong social incentives are in place to reward going past a limit.  That situation appears particularly relevant in any competitive environment.  So, in sports, using performance-enhancing drugs or even just over-training (to the point of self-harm) are such strong temptations because the rewards for success are so massive.  Similarly, in science, where getting there first is just about everything (to echo Vince Lombardi).  And the same is true, of course, in economic competitions, where various forms of unregulated, or expressly forbidden, behavior can reap a market advantage.

George Bernard Shaw said “all professions are a conspiracy against the layman.”  By that, he meant that professions claim to have expertise and knowledge that the ordinary person does not possess.  One of the first consequences of that claim is that professions want to be self-governed, to get out from under any external oversight.  The outsiders cannot possibly understand the full complexity of our professional tasks—and hence can only muck things up by interfering.

I was in a room full of hedge fund managers and Wall Street financial guys (none of the finance people was a woman) shortly after the 2008 election of Obama in the aftermath of that fall’s financial meltdown.  To a man, this group lamented how the Democrats were going to cripple financial markets and the absolutely essential flow of capital by coming in and ignorantly regulating things.  There was not a single iota of self-doubt expressed by this group.  They were too focused on the image of themselves as victims of ignorant politicians.

In short, it is hard to believe that any profession can ever successfully regulate itself. The reward structures internal to the profession are tied too closely to surpassing limits.  After all, regulation is about trimming back, about not letting everything that is possible be undertaken.  And the logic of the profession is to push relentlessly forward.

But, as Mukherjee’s anecdote about Nixon (making him sound remarkably like Trump) reminds us, are the politicians really in a better position to do the regulating?  When we watch the spectacle of our politicians denying climate change, endorsing nut-case theories about vaccinations and autism, and calling for a balanced federal budget and a return to the gold standard, aren’t we forced to agree that their ignorance should not be allowed to cripple the experts’ knowledge?

How, in other words, are we to establish true accountability?  Some, of course, say we should rely on markets for that.  But the market’s decision is always (even when it does come—and it does not always come) after the fact.  The harm has been done.  Regulations are often also created after the fact, to prevent a disaster happening a second time.  But regulations are also anticipatory.  My curfew for my teen-age son was not motivated by any particular incident.  It was just a rule that seemed to fit the circumstances—and some possible issues.  So regulation is not just in order to hold people accountable; it is also about prevention.  Don’t do this because it will have bad consequences.

That still leaves the question of who is the best judge of possible bad consequences.  I don’t think the profession itself is.  Professionals have their minds fixed on other things—on success as their profession defines it, on pushing the limits, on following a line of thought or action out to all its logical and possible conclusions.  But no one else seems to be in a very good position to set the boundaries.  We reach here a fundamental dilemma in democratic governance.  The professions need to be governed by a demos that actually lacks good credentials for doing the governing.  We are stuck, I would say, with trial and error, with repeated attempts to regulate that will be resisted by the professions and yet still must be enforced, with (hopefully) continual revision as some regulations prove salutary and others harmful or useless.

Regulation will also have to be dynamic—no once and for all fix will ever be achieved—because the attempt to evade regulations will be endless, as will be the emergence of new possibilities and innovations. (I scorn the oft-heard conservative argument that regulations are counter-productive because they generate evasion.  No one uses that argument against the prohibition of murder or the regulation of prescription drugs.) Some of those innovations will have arisen precisely as mechanisms to evade regulation.  But others arise just because human ingenuity knows no bounds and things undreamed of in the current regulatory scheme become possible.  Trying to tailor old regulations (for radio and TV) to handle new media (the internet), to take just one example, is a fool’s errand.  But in an atmosphere of knee-jerk hostility to regulation, devising a whole new regulatory framework is almost impossible.  The result is the current patent mess, which cries out for a reform that seems beyond our political capability to enact.

So let me conclude by considering that widespread hostility to regulation.  Every one of us has experienced it: some bureaucratic barrier placed between us and just getting the job done.  “Enough to make me a Republican,” was my exasperated way of responding to HR hurdles in the days when I was trying to hire staff for the Institute that I directed.  It was fairly easy (in almost all cases, if one could look at the thing impartially) to see why a certain regulation was in place, what possible abuse it was trying to guard against, but that didn’t lessen the hassle of having to abide by the regulation.

But it is also worth thinking about just what regulations disallow—or enable.  Our heroic individualists always claim regulations stifle ingenuity, creative thinking, going beyond the current sense of what is thinkable or doable.  Nonsense.  Just as those who talk most loudly about risk are actually risk averse (businesses make bets when the odds are stacked in their favor), what really irks most people about regulations is that they assault their habitual ways of doing things.  Many of those HR regulations were about ensuring a diverse applicant pool and avoiding the nepotism and unconscious biases that lead to all-white offices.  Similarly, requiring that professors deposit their syllabi with a central office prior to the semester’s start means they must actually plan their classes and inform their students about the course’s content and expectations.  Regulations are ways of intervening in shoddy professional practices, of trying to not let habit rule the roost.

And regulations are also reminders that you, perhaps, are not the best judge of your performance.  In my corner of the professional world, college professors, there is deep resentment against the introduction of notions like “learning outcomes” and attempts to measure whether those outcomes have been attained by students.  Finding the right metrics is, no doubt, very difficult, but there is absolutely no denying at this late date the well-documented findings that lectures and reading are a poor way to transmit information to today’s students.  But deny those findings my colleagues will.  It was good enough for them—and they are also damned sure their students are learning lots.  How do they know this latter fact?  They can just tell.

External demands that any profession actually demonstrate, actually prove, its worth can only be to the good, in my opinion.  I sure as hell don’t want an unregulated Wall Street.  So how can I, in good faith, then argue for unregulated professors? The give-and-take, the endless jostling and disputes, between the professions and those external to them that try to regulate them is never going to be resolved.  But that process is far preferable to the delusion that the profession will self-regulate.  Just recall that every time a new environmental or economic policy is bruited in our fair land, some industry group will step forward and say: “We will voluntarily adopt this standard.  Just leave it up to us.”  How many times should we fall for that ploy?

As Michael Bérubé puts it in Life as Jamie Knows It, “bioethics is too important to be entrusted to the bioethicists.”  The same goes for every profession.  It has to be kept on its toes by knowing not just that outsiders are watching, but also by knowing that outsiders wield regulatory power to intervene in its practices.  And when such interventions come, let the fight begin.