Author: John McGowan

Silent Sam: The Current State of Play

What follows is my understanding of where things currently stand in the ongoing controversy over the disposition of the Confederate monument (known as Silent Sam) on the campus of the University of North Carolina, Chapel Hill.  This is a reconstruction based on the conversations I have had with various people and on public news reports.  I could be very wrong about all of this.  But I do not think that I am.  The one crucial institutional fact you need to know to thread your way through this labyrinth: the Board of Trustees (BOT) is the local governing board for the Chapel Hill campus.  The Board of Governors (BOG) is the governing body for the whole University of North Carolina system.  Both boards are dominated by Republicans appointed by the aggressively partisan North Carolina state legislature, which has enjoyed (since 2012) a veto-proof majority in both houses.  (That veto-proof majority will end in January 2019, when the State House will still be majority Republican but will no longer hold a two-thirds majority.  Hence the Democratic governor, Roy Cooper, will be able to veto bills without seeing his vetoes overridden.)

In the case of Silent Sam, the BOG was the body designated to make a final recommendation as to the statue’s disposal.  But even their recommendation was only that, since the law passed by our Republican legislators (in the wake of the Dylann Roof shootings in Charleston, SC, which led to the removal of several Confederate monuments around the country) said that monuments on public property could not be removed, except at the behest of the state historical commission, and even in such cases could not be placed in a museum or relocated to another jurisdiction.  The law was pretty obviously aimed squarely at Silent Sam, which has been a sore point on campus for well over forty years, with the intensity of the protests against his presence waxing and waning over that period.

After the statue was toppled by protesters in late August 2018, the BOG gave the Chapel Hill campus until November 15th to suggest a plan for its disposal.  Even that was a small victory, since it headed off those on the right wing who insisted the statue must immediately be restored to its now empty pedestal.  Failing to put it right back up, the right insisted, was caving in to “mob rule.”  Campus fears that the statue would be restored prompted faculty and student clamor vociferous enough to lead Chapel Hill Chancellor Carol Folt to make a public statement (on the Friday before Labor Day weekend) that she did not believe the statue belonged in its former place, prominently displayed at the entrance to campus.  She was immediately reprimanded by the chair of the BOG for disrespecting the process that had been put in place, since she was unilaterally taking one option for the November 15th recommendation off the table.  Folt’s Labor Day statement was made, I believe, with her understanding that her public comment could get her fired.  She weathered that storm.  Subsequently, on University Day, the annual celebration of the university’s birthday (this year was its 225th anniversary), Chancellor Folt made a public apology for UNC’s racist past.

The November 15th date appeared to have been chosen to push the final decision past election day, in a year when the Democrats were making a concerted push to break the Republican “super majority” in the state legislature.  Except for Folt’s University Day apology, which in fact generated surprisingly little response from either left or right, the Silent Sam issue went underground.  Campus seemed preoccupied by the usual business of a semester, while the issue played no part at all in the legislative races around the state.  Since there were polls suggesting that 70% of the state’s residents believed the statue should be restored to its empty pedestal, the failure of Republican candidates to demagogue the issue baffled me.  The reason, I was told, was that Apple was about six inches away from announcing that it was opening a major new facility in North Carolina (in fact, about ten miles from the UNC campus) and that the only thing holding Apple up was the Silent Sam mess.  They wanted nothing to do with aggressive Southern white boy culture.  So, apparently, the fix was in from the state Republican Party: stay silent about Silent Sam.

The silence was broken post-election when, after a small delay (the November 15th deadline was not met), Chancellor Folt and the UNC BOT announced in early December their recommendation: to build a brand new five-million-dollar “history and education center” (that was, somehow, not a museum) on the Chapel Hill campus to house the statue.  The proposal, it seemed pretty clear, was meant to stay within the parameters of the state law regarding Confederate monuments while also respecting the fact that every possible spot on the existing campus had been ruled out because its current occupants had made it very clear they didn’t want the thing.

The BOT recommendation was met on campus with incredulity and outrage.  Campus again went into overdrive, with the Faculty Senate condemning the proposal and reiterating its conviction that the statue had no place on the Chapel Hill campus, while graduate students and a small group of faculty sympathizers announced—and worked to muster support for—a grade strike.  They would not submit grades for the fall semester work, just about to be completed.  (They could not stop teaching, since classes for the semester had ended by this point.)

There is some plausibility to the claim that the BOT proposal was really just a way of kicking the can down the road since its implementation would take years—and in that time the state’s politics might have changed enough to make repeal of the monument law a possibility.  But the Chancellor and the BOT could hardly state that hope in public as a way of justifying their plan.  Rather, in taking the plan to the campus and the world, the Chancellor said she preferred an off-campus disposition of the statue, but that she was constrained by the law and, thus, was offering the only feasible and palatable option that the law made available.  The campus was not impressed, since the campus community did not care a fig about the law and saw no compelling reason to abide by it.

I think it is pretty obvious that the proposal from the BOT represented the best plan the Chancellor could get that body to agree to.  Remember that it is stacked with Republicans.  As for the Chancellor herself, I think it fair to say that she has behaved exactly as Barack Obama did on the issue of gay marriage.  Her position has been “evolving” over the past two years—and that evolution has been driven by the persistent pressure from campus activists to her left.  She has always been a tight-rope walker, trying to placate all sides in a state where campus sentiment, public sentiment, and the beliefs/actions of the state legislature do not align but are deeply at odds with one another.  She has always been in a terrible position.  I don’t think she has played her hand particularly well, but she has definitely had a very bad and fairly weak hand to play.  There is no doubt in my mind, however, that the line she has tried to walk has been pulled steadily leftward over the past two years (hence her statement that the statue should not be restored to its pedestal and her public apology) because of the campus activists.

So—and here we really get to what is speculation on my part, but speculation that I am 80% certain is correct—we come to the events of the past five days.  Speculation number one: leading up to the BOG’s scheduled meeting for December 14th, during which it would respond to the BOT proposal, Chancellor Folt and the UNC administration lobbied the BOG to table the BOT proposal.  In other words, the campus response to the BOT proposal had led to yet another “evolution.”  Now the Chancellor wanted the BOG to reject her own proposal.

In the meantime, the campus administration was desperate, in particular, to head off a grade strike, convinced that such a strike would only strengthen the hand of the right wing in the state by generating public outrage over campus teachers not doing their jobs.  That desperation led to campus officials threatening those who withheld grades with expulsion and with financial penalties.  I think the administration over-reacted, both because actual participation in such a strike was always going to be much, much less prevalent than they imagined, and because the threats only cemented the determination of the most dedicated to not back down.  In any negotiation, you need to give the other side a face-saving way to back down.  But the administration didn’t negotiate; it simply made its threats.  (Let me add here that the administration’s failure, over the past two years, to engage in any serious negotiations with black faculty is, to my mind, its greatest—and most egregious—failure during this whole saga.)

The BOG not only tabled the BOT proposal at its December 14th meeting—it rejected it altogether.  The can got kicked down the road again.  The time-honored formula was followed: appoint a committee to look into the issue and come up with a recommendation.  This new recommendation is to be ready by March 15, 2019.  This non-resolution was announced after a three-hour closed session of the BOG.

So here comes speculation number two, since obviously I cannot know what went on behind closed doors.  My claim: the most conservative members of the BOG lost.  The three hours gave those conservatives time to vent.  But if the far righters had the votes to force the return of the statue to the pedestal, they would have held that vote and won.  The formation of a committee means that the return of Silent Sam to the now empty pedestal is never going to happen.  The far right’s moment to force reinstallation has now come and gone.  They were outvoted.  Folt and the UNC administration had successfully lobbied the BOG to not recommend the restoration of the statue to its former place.

That also means that the campus protesters have won a partial victory—only partial but nonetheless extremely significant.  One problem, of course, is that their victory cannot be publicly acknowledged by the administration or by the BOG because they do not wish to rile up the state legislature.  But the failure to acknowledge the victory also means that many on the left do not believe—or understand—that restoration of the statue will never occur.  Some on the left are fighting the wrong battle at this point, fighting against restoration rather than against the statue’s relocation on campus.  And the left is also missing its chance to declare victory—when victories, especially partial ones, are a means of attracting more people to a cause.  “See what we have accomplished so far.  But there is still more to be done.  Join us.”

That the BOG failed to recommend restoration signals a split among its members.  Without a doubt, some hardliners on the Board favored restoration.  That certain Board members have taken to the press to express their hardliner positions is a sign of weakness, not strength.  They knew they did not have a majority on the board, so were going public in an effort to stir up enough public outrage to move their fellow board members in their direction.  For that reason, the left wing should ignore the public comments of these BOG outliers.  For better and for worse, in the non-democracy that is North Carolina (hat tip to my colleague Andy Reynolds) what happens in public is mere froth.  The real action is in the back rooms.

So what is happening in the back rooms?  That depends on how severe the schism is between “moderate” business Republicans and the social conservatives.  How pissed off are the business folks at the loss of Apple and at the general loss of reputation for the whole state, which now exists in the same nether world as South Carolina, Alabama, and Mississippi after decades of priding itself on being more sensible than that?  Because the action now moves to the state legislature.  (In that busy last week leading up to the December 14th meeting of the BOG, Apple announced it is expanding in Austin, Cupertino, and San Diego.  North Carolina’s failure to resolve the Silent Sam mess meant it lost Apple.  You will object: but Texas is hardly a beacon of progressivism.  Yes, but they removed Confederate monuments on the University of Texas campus and there was barely a stir.)

The March 15th deadline is to provide time to lobby the legislators to accept an off-campus disposition of the statue, to put the Silent Sam mess behind us once and for all.  I have no idea as to what the outcome will be because I have no idea about the balance of power between the business Republicans and the social conservatives.  Part of me wants to say that money always wins—and, thus, if the business Republicans really want to solve this problem once and for all, they will get their way.  But I don’t know just how pressing they think solving the problem is.  And the pessimist in me says that we have tons of evidence that, in fact, it is culture that always wins.  Racism and lots of other deep-seated cultural values/beliefs are demonstrably economically harmful—but seem ineradicable just the same. (Of course, I really, really wish that the “right” thing–morally–would be what wins, but somehow it seems to lose out to money or culture just about every time.)

This is non-democracy 2018 style.  The decision will be made in the backrooms—and the politicians involved will be swayed by their ambitions within the Republican party pecking order and by their need to have money to run their campaigns.  Public opinion on the issue might play a 10% role in which way they finally choose to jump.  Their own personal convictions about what is the right thing to do will play a 15% role for some of them, and no role at all for others of them.  What they will do is what they deem it is safe to do.  They are about avoiding pain, avoiding losing office, and not about doing anything positive.  It is all about avoiding the negative.

Despite our well-grounded fears about the decline of faculty governance, the university is much more democratic than the general polity.  All the campus protests have accomplished a lot.  We have pushed the evolution of the Chancellor and have ensured that the statue is not restored.  I don’t know how campus activism can influence this next stage.  The administration clearly fears that aggressive tactics like a strike will backfire, handing the right wing a hammer to use against us.  That is certainly a plausible fear.  Escalating a fight in a way that leaves no face-saving exit, in a way that backs your opponent into a corner, often leads to non-optimal results.  But backing down in a fight can also be taken as a sign of weakness—a weakness that your opponent will then move to exploit.  There simply is no infallible rule here about which tactics will work best.  The elites—the legislators and the Republican power brokers—who now have to decide the statue’s fate are, for the most part, beyond the reach of us on campus.  We can only reach them indirectly, by keeping up the pressure on the Chancellor.

But even there, I think it fair to say that the Chancellor deserves a grade of B+ for fall semester 2018 (her grade for prior semesters would be much lower, in my opinion).  She has swung the BOG over to her side, a substantial feat.  They have now come to accept that the statue cannot be restored to its former place.  At this point, it is pretty much out of Folt’s hands.  She has to leave it to the BOG to do the lobbying of the legislature—and hope that they can pull off the impressive feat of getting the law relaxed in such a way as to allow for an off-campus installation of the damn thing.  Stay tuned.

Meretricious

Here’s a passage from Jonathan Coe’s excellent 2004 novel, The Closed Circle.

“. . . the young couple, who had arrived just behind Paul in a white stretch limo, were enjoying the attention of a crowd of journalists and photographers.  This couple, whom Paul had not recognized, had last year been two of the contestants on Britain’s most popular primetime reality TV show.  For weeks they had kept the public guessing as to whether or not they were going to have sex with each other on camera.  The tabloid papers had devoted hundreds of column inches to the subject.  Neither of them had talent, or wisdom, or education, or even much personality to speak of.  But they were young and good-looking, and they dressed well, and they had been on television, and that was enough.  And so the photographers kept taking pictures, and the journalists kept trying to make them say something quotable or amusing (which was difficult, because they had no wit, either).  Meanwhile, Doug could not help noticing, right next to them, waiting for his wife to emerge from the ladies’, the figure of Professor John Copland, Britain’s leading geneticist, one of its best-selling science authors, and regularly mentioned as potential Nobel prizewinner.  But no one was taking his photograph, or asking him to say anything.  He could have been a cab driver, waiting to drive one of the guests home, as far as anybody was concerned.  And for Doug this situation encapsulated so perfectly everything he wanted to say about Britain in 2002—the obscene weightlessness of its cultural life, the grotesque triumph of sheen over substance, all the clichés which were only clichés, as it happened, because they were true—that he was, perversely, pleased to be witnessing it” (275-76).

Not a good passage; usually Coe avoids editorializing like this in his novel.  But I wanted to comment on it because 1) I usually avoid “weightless” culture by absenting myself completely from it, while 2) fighting shy of the clichéd lament about its “obscenity” (laments that echo through the two hundred plus years of despair over the mediocrity of bourgeois, democratic, non-noble mores).  It is interesting to see Coe feeling compelled both to make the clichéd complaint and to chide himself for making it in almost the same breath.  At some level, we elites are not allowed to sound like Flaubert anymore, not allowed to express our distaste—and, yes, our contempt—for what gets dished out on reality TV shows.  Perhaps Milan Kundera was the last fully self-righteous and completely un-self-aware critic of kitsch.  Even as his notion of weightlessness (“the unbearable lightness of being,” such a portentous but still fantastic title/phrase) winds up being little more than the fact that men find it unbearable to be faithful to just one woman.  Kundera’s petulance and (ultimately) silliness put the last stake through the heart of “high” culture’s contempt for low.

But, still.  I have seen Fox News only three or four times in my life, have read People magazine the same number of times, and have never seen a reality TV show.  When I do encounter such things, I am (I admit) flabbergasted as well as bored.  That such trash fills the channels of communication is a mystery as unfathomable to me as the idea that people buy $10,000 watches.  Who would do such a thing—and for what earthly reason?  I don’t even have a condescending explanation to offer.  Fascination/obsession with the British royal family fits into the same category for me.

Meanwhile—and I don’t think Coe sees this—his ignored professor is a “best-selling” author and likely to win a Nobel Prize–so hardly universally treated like a “cab driver.”  Yeats and W. H. Auden are just two among the great early 20th century poets who lived in fairly dire poverty.  Even the post-World War II poets—Berryman, Jarrell, Schwartz and the like—were spared that kind of poverty by having moved into sinecures in the beefed-up post-war universities.  Twenty-first century poets will complain bitterly about how few books they sell, but they are lionized within the tight confines of the “poetry world,” giving readings to robust audiences, and never threatened with the kind of poverty that Yeats took for granted.  We live in a world of niches now, so that no poet today can command a nation’s attention the way Yeats did (of course, he had the advantage of writing for a very small nation, about four million people strong, half the size of today’s New York City or London), even though no poet today can be as poor as Yeats.  The niches, in other words, reward well—have cultural capital in both its forms (financial and reputational) available for distribution.

All of this has to do, in very large part, with the ways that the post-war universities have become the patrons of the arts in our time.  Outside of the university it is very hard to make a living by the sweat of your pen.  The Grub Street man of letters, writing his reviews for the papers and the weeklies, no longer exists—while no poet and very few novelists can make a living apart from teaching creative writing.  But the universities do provide a structure that ensures rewards.

What everyone keeps lamenting these days (instead of lambasting the meretricious glob of TV and the tabloids) is the utter lack of contact between the niches.  The “culture” we teach in school is utterly divorced from the “culture” our students access outside of school.  They know nothing of, and care less for, the material to which we introduce them—except for the very small minority we convert over to what by now should be called “school” culture, not “high” culture.

School culture does get a boost from all those middle to upper middle class parents who, for various reasons, see fit to give their children violin, ballet, singing, and (less frequently) art and acting lessons in lieu of (or in addition to) having them play little league or soccer or join a swim team.  The arts/athletics divide in American child-rearing practices deserves sociological study—both for characterizing the parents who give their children different kinds of lessons and for a longitudinal study of what effect those lessons have on later choices in life (chances of going to art museums or to the symphony; kinds of career paths taken).  And how does deep involvement in youth sports culture track with an obsession with celebrities or the TV world?  There is no obvious connection there.

These schisms no doubt always existed in American culture.  But they didn’t use to track so directly to different political allegiances/views.  My colleague Jonathan Weiler thinks he can tell your political affiliation after asking only four questions, one of which is about your emotional response to Priuses.  I fear he is right.

And, as usual, most perplexing–and disheartening–to me is the deep hostility that such divides now generate.  Just as I really cannot understand why the uber-rich are so discontented, so determined to increase the financial insecurity of their employees, I cannot understand why our culture warriors are out to destroy the universities.  Yes, it’s partly their war against all things public.  UNC is in the cross-hairs in a way that Duke will never be.  But it is more than that.  They have some leverage over UNC; they’d go after Duke as well if they could.  The need to punish one’s enemies, and not merely to look to one’s own well-being, is what I don’t get.  Peaceful co-existence of the various niches, the indifference of tolerance, is off the table, it seems.  I keep referring back (in my mind) to a comment Garry Wills made years ago about the Republican nominating convention (of 1992 or 1996; I don’t remember which year).  He reported that over 30% of the delegates were millionaires, yet they seethed with discontent and rage.  What objective reason did they have to be so agitated?  Life in the US had treated them damn well.  The same, of course, can be said of Donald Trump in spades.  What is the source of all his anger?  Pretty obviously the fact that he does not feel respected by the cultural elites.  So he wishes to destroy them, to cause them maximum pain.

A final question: does meretricious popular culture, all that weightless trash, always have this kind of aggression against dissenters to that culture packed within it?  In other words, I am back to thinking, yet again, about resentment–about its sources and about the cultural/societal locations in which it lurks.

Moral Envy and Opportunity Hoarding

One quick addendum to the last post—and to Bertrand Russell’s comment about how the traditionalist is allowed all kinds of indignation that the reformer is not.  What’s with the ubiquity of death threats against anyone who offends the right wing in the United States?  That those who would change an established social practice/pattern, no matter how unjust or absurd, deserve a death sentence is, to all appearances, simply accepted by the radical right.  So, just to give one example, the NC State professor who went public with his memories of drinking heavily with Brett Kavanaugh at Yale immediately got death threats—as did some of his colleagues in the History Department.  Maybe you could say that snobbish contempt for the “deplorables” is the standard left wing response to right wingers—just as predictable as right wingers making death threats.  But contempt and scorn are not solely the prerogative of the left, whereas death threats do seem only mobilized by the right.

Which does segue, somewhat, into today’s topic, which was to take up David Graeber’s alternative way of explaining the grand canyon between the left and right in today’s America.  His first point concerns what he calls “moral envy.”  “By ‘moral envy,’ I am referring here to feelings of envy and resentment directed at another person, not because that person is wealthy, or gifted, or lucky, but because his or her behavior is seen as upholding a higher moral standard than the envier’s own.  The basic sentiment seems to be ‘How dare that person claim to be better than me (by acting in a way that I do indeed acknowledge is better than me)?’” (Bullshit Jobs: A Theory [Simon and Schuster, 2018], 248).  The most usual form this envy takes, in my experience, is the outraged assertion that someone is a “hypocrite.”  The right wing is particularly addicted to this claim about liberal do-gooders.  The liberals, in their view, claim to be holier than thou, but know what side their bed is feathered on, and do quite well for themselves.  They wouldn’t be sipping lattes and driving Priuses if they weren’t laughing their way to the bank.  Moral envy, then, is about bringing everyone down to the same low level of behavior—and thus (here I think Graeber is right) entails a covert acknowledgement that the general run of behavior is not up to our publicly stated moral aspirations.  So we don’t like the people who make the everyday, all-too-human fact of the gap between our ideals and our behavior conspicuous.  Especially when their behavior indicates that the gap is not necessary.  It is actually possible to act in a morally admirable manner.

But then Graeber goes on to do something unexpected—and to me convincing—with this speculation about moral envy.  He ties it to jobs.  Basically, the argument goes like this: some people get to have meaningful jobs, ones for which it is fairly easy to make the case that “here is work worth doing.”  Generally, such work involves actually making something or actually providing a needed service to some people.  The farmer and the doctor have built-in job satisfaction insofar as what they devote themselves to doing requires almost no justification—to themselves or to others.  (This, of course, doesn’t preclude all kinds of dissatisfactions with factors that make their jobs needlessly onerous or economically precarious.)

Graeber’s argument in Bullshit Jobs is that there are not enough of the meaningful jobs to go around.  As robots make more of the things that factory workers used to make and as agricultural labor also requires far fewer workers than it once did, we have not (as utopians once predicted and as Graeber still believes is completely possible) rolled back working hours.  Instead, we have generated more and more bullshit jobs—jobs that are make-work in some cases (simply unproductive in ways that those who hold the job can easily see) or, even worse, jobs that are positively anti-productive or harmful (sitting in an office denying people’s welfare or insurance claims; telemarketing; you can expand the list).  In short, lots of people simply don’t have access to jobs that would allow them to do work that they, themselves, morally approve of.

Graeber’s point is that the people who hold these jobs know how worthless the jobs are.  But they rarely have other options—although the people he talks to in his book do often quit these soul-destroying jobs.  The political point is that the number of “good” jobs, i.e. worthwhile, meaningful jobs is limited.  And the people who have those jobs curtail access to them (through professional licensing practices in some cases, through networking in other cases).  There is an inside track to the good jobs that depends, to a very large extent, on being to the manor/manner born.  Especially for the jobs that accord upper-middle-class status (and almost guarantee that one will be a liberal), transmission is generational.  This is the “opportunity hoarding” that Richard Reeves speaks about in his 2017 book, Dream Hoarders.  The liberal professional classes talk a good game about diversity and meritocracy, but they basically keep the spots open for their kids.  Entry into that world from the outside is very difficult and very rare.

To the manner born should also be taken fairly literally.  Access to the upper middle class jobs still requires the detour of education–and how to survive (and even thrive) at an American university is an inherited trait.  Kids from the upper middle class are completely at home in college, just as non-middle-class kids are so often completely at sea.  Yes, school can be both a make-it and a break-it: a place where an upper-class kid falls off the rails and a place where the lower-class kid finds a ladder she manages to climb.  But all the statistics, as well as my own experience as a college teacher for thirty years, tell me that the exceptions are relatively rare.  College is a fairly difficult environment to navigate–and close to impossibly difficult for students to whom college’s idiolects are not a native language.

So two conclusions.  1.  It is a mixture of class resentment and moral envy that explains the deep animus against liberal elites on the part of non-elites—an animus that, as much as racism does in my opinion, explains why the abandoned working class of our post-industrial cities has turned to the right.  As bad as (or, at least, as much as) their loss of economic and social status has been their loss of access to meaningful work.  Put them into as many training sessions as you want to transition them to the jobs of the post-industrial economy; you are not going to dissolve their acute knowledge that these new jobs suck when compared to their old jobs in terms of basic worth.  So they resent the hell out of those who still hold meaningful jobs—and get well paid for those jobs and also have the gall to preach to them about tolerance and diversity.  2.  It is soul-destroying to do work you cannot justify as worth doing.  And what is soul-destroying will lead to aggression, despair, rising suicide rates, drug abuse, and susceptibility to right-wing demagogues.  Pride in one’s work is a sine qua non of a dignified adult life.

The Class/Race/Generation/Political Divide

Back with a little tidbit from Bertrand Russell’s Human Society in Ethics and Politics: “Traditionalists hold their opinions more fanatically than their liberal-minded opponents and therefore have power out of proportion to their numbers.  A man who publicly advocates any relaxation of the traditional code can be made to suffer obloquy, but nothing of the sort can be inflicted upon benighted bigots” (125).

Lots can be said about this—and count on me to say lots.  For starters, we have here the usual contrast between mild-mannered liberals, lacking fire-in-the-blood passion, and visceral conservatives.  The politics of reason versus the politics of passion.  “The best lack all conviction, the worst are full of passionate intensity” (Yeats).  I am not very convinced.  More plausible, I think, are explanations that look to “loss aversion” and to the superiority in “reality” of what is over what could be.  In my experience, those proposing reforms always meet with fierce resistance; stepping into the unknown is always based on uncertain gains balanced against very obvious losses.  What will be destroyed by the change is concretely There.  Those who are just fine with current arrangements will have a direct, straightforward case for outrage.  This is “jeopardy” in Albert Hirschman’s anatomy of the “rhetoric of reaction”: your changes will jeopardize the good things we enjoy now, with no guarantee that what you put in the present’s place will be better.  You, the reformer, are inflicting an easy-to-identify harm.

Russell believes that “most of the disagreements that occur in practice are, not as to what things have intrinsic value, but as to who shall enjoy them.  The holders of power naturally demand for themselves the lion’s share” (110).  Is this true?  That is, are there actually very few deep moral disagreements, with the real source of disagreement being the distribution of the goods that everyone agrees are actually good?  That shifts the moral terrain significantly; the focus becomes who is legitimately entitled to a share and who can legitimately be denied a full share.  I am inclined to think conservatism is always, au fond, about legitimating unequal distribution.  The grounds for cutting some people out—race, meritocracy, education, expertise, various social and moral stigmas, citizenship—vary widely, but the basic goal is the same: to justify inequality.  We fight over the goods–not over what should be designated good.  At least in most instances.  Sounds plausible.

One maddening thing is that unequal distribution could (possibly) be justified by scarcity.  If there was not enough to go around, then some might have to do without.  But there is ample evidence to show that removing the condition of scarcity does little to quell the urge toward unequal distribution.  The drive for status, for hierarchy, for distinction, leads to inequalities as steep and as cruel (i.e. tending to total deprivation) as scarcity.  Russell does not pay much attention to the deep desire for status.  He is no sociologist.  But he believes that the “desire for power” is basically universal, as is the abuse of that power by any who possess it (118).  His only solution to this snake in the garden is sublimation: “to educate in such a manner that acquired skills will lead the love of power into useful rather than harmful channels” (118).  Like Freud and William James, he seeks for a “moral equivalent” of war, competition, status seeking, and the desire to dominate over others.

Not much cause for optimism there.  I do think “loss aversion” can help a bit here, as can a ground-level sense of fairness, of justice.  Russell is not keen on appeals to justice.  “I think that, while the arguments for approximately equal distribution are very strong wherever an ancient tradition is not dominant, they are nevertheless arguments as to means, and I do not think that justice can be admitted as something having intrinsic value on its own account” (117).  The idea is that justice is a means to peace—where peace produces a stable society in which everyone can enjoy the goods they have without fearing the violence of either the strong seekers of power/privilege/wealth/status or the aggrieved violence of the deprived.  Self-interest in such peace and the stability/security it provides is the foundational rock, not some commitment to justice per se.

I think Russell is wrong about that.  I think a disinterested (for lack of a better term) outrage about perceived violations of justice is a much stronger—and independent—motive than he allows.  It is, of course, true that many disputes that claim to be about justice are masking self-interest.  But I do not think that is always the case.  The same psychologists who uncovered “loss aversion” with their ingenious experiments have also noticed that people will be satisfied with less for themselves when a distribution procedure is seen as “fair.”  A real life example is elections.  People accept being on the losing side of a vote if they think the vote was fairly conducted.  One sign of deep trouble in our democracy is the growing refusal to accept the outcome of elections.  When results trump procedures, democracy is in trouble.  Even then, radicals on both sides—left and right—will shout that the vote was not “fair,” that it was fraudulent in one way or another.  A pretty infallible sign of the far-out radical left is the deep conviction that the “real majority” in the US favors the radical’s own program, refusing to countenance all the evidence that the American public is just not that leftist.

I am inclined to believe that those who are driven by an inordinate desire/need for power are a small minority, akin to the small set of adepts that Randall Collins claims can actually commit sustained violence.  (In his book Violence.)  That small number prey on the rest of us.  Our part in life is to try to ward them off, to resist them, and to get on with the business of living.  The powers of resistance are pretty strong; not always sufficient of course but able in many instances to frustrate the seekers of power.  It is not the insecurity of the tyrant that makes him miserable (in my view and pace Plato).  The control of the means of violence is pretty thorough, plus the tyrant’s delusions of grandeur include a sense of immunity to the normal vulnerabilities of the flesh (think of all those 80 year old Senators).  No, what makes the tyrant’s life miserable is the limitations on his power.  Finally, it’s just damned hard to get other people to do what you want them to do.  They resist—passively more often than actively, by not paying attention or doing things half-assedly, or just melting away.  The art of not being governed, as James Scott calls it.  It’s the path that Fred Moten and David Graeber recommend.  Just ignore the tyrant, as far as that is possible.

Or scream bloody murder—like the traditionalists do.  Take the moral high ground whenever any kind of change is proposed.  There were all those artists—Yeats, Proust, Galsworthy, Nietzsche—documenting (often lamenting) the death of the aristocracy as the 19th century became the 20th century.  A privileged class was losing some of its privileges, but more crucially was losing its relevance.  Its material well-being wasn’t threatened, but its right to lead, to set the tone culturally and to direct the nation politically, was slipping away.  Today, it’s white America that is slipping away.  In the popular arts, black America has set the tone for quite some time.  Look at our music and our sports (the NFL and the NBA).  The change has been less swift in film and TV, and even less swift in the non-popular arts like classical music and museum culture.  The difference this time (as contrasted to the period of 1880 to 1920) is that neither the declining class (whites) nor the ascendant one (non-whites) is gaining economically.  Instead, both groups are getting played by the 1% that is hoovering up all the wealth to itself. But the decliners, the traditionalists, are certainly screaming bloody murder.  To a lesser extent, so are the exploited.  (Or maybe they are screaming just as loud, but lack access to the channels–literally Fox and Limbaugh–that would allow their screams to be heard.  The corporate consolidation of American media condemns them to an outer darkness.)

Hence the generalized rage.  The whites have “loss aversion” to the max; they are increasingly irrelevant, feel disrespected, and are increasingly insecure financially.  The non-whites, while accorded a certain kind of cultural power and respect (but only within elite circles in New York and Hollywood and, even there, inconsistently), are resolutely kept from getting a decent slice of the pie.  And everyone looks for someone to blame, with the sad, boring, classic American story of getting the poor whites to obsess about their non-white rivals to the advantage of the rich whites.  I wish I had a different story to tell.  Sometimes the truth is astoundingly uninteresting, completely predictable, and apparently immune to any kind of creative rewriting.  It just sits there, an indigestible lump.

No surprise, then, that we turn to the young for an imagined way out of this impasse.  Their much-vaunted sympathy for socialism, coupled with their skepticism toward a capitalism that has not served them at all (much less “well”), is seen as the road toward radical transformation.  The radical always relies on a sense that “things can’t continue this way,” that the current arrangements are unsustainable.  But they are unsustainable only if people refuse to countenance, to suffer, them.  And things from my perspective have been intolerable for fifty years now.  And, somehow, little in terms of the basic structures of distribution has changed in the US—except for the worse.

I can’t help but think that American politics are still transfixed by the political, economic, and cultural upheavals of 1965 to 1975.  Just as mainstream economists are still fighting the battle against the inflation of the 1970s (unable, apparently, to process that inflation has been a non-issue for Western economies since 2000), so our political fault lines divide along the axis of those who want to return to a mythical 1950s (its prosperity, its blue collar jobs, its women contentedly at home, its blacks out of sight and out of mind, its gays utterly invisible) and those who affirm the various upheavals that brought women, blacks, and gays into public view, with their noisy demands for attention, respect, and their due.  Astounding, really, how traumatic the 1960s were—and how long-lasting (as is the case with traumas) the after-shocks have been.  The problem is that it is the cultural upheavals (experienced as traumatic by some and liberating by others) that get all the attention, that generate 90% of the heat.  The economic coup d’etat, every bit as traumatic as the cultural changes, mostly flies under the radar.  The consolidation of economic power never becomes the explicit topic of political inquiry or rhetoric.

Those fiery youth of the 60s did not effect some radical transformation. The few radicals, like some SDSers and Martin Luther King at the end of his life, who tried to “pivot” away from anti-war and pro-civil rights activism toward economic issues (the poor people’s campaign) didn’t get much traction.  (Although we should not forget that something akin to a basic guaranteed income for all was actually debated in Congress in 1971.  How far we have fallen from that high moment.)  Rather, as my daughter likes to remind me, the baby boomers have left the US—and the world—much worse off than they found it.  So I am not likely to place too much faith in the transformative power of today’s youth, even if the generational divide is once again as intense as it was in the “generation gap” years.  Sixties youth, after all, had the insouciance of those who felt immune to economic worry.  No such luck for today’s millennials as they step into the world of contract labor.  Welcome to the precariat.

The lines of this analysis are familiar enough, which (as I say) doesn’t mean they are not (roughly) true.  But David Graeber offers a different way to think of all this—and I will go in that direction in my next post.

Two Kinds of Reason?

The semester has obviously gotten the better of me.  Loads of things to catch up on in these notes.  So let me try to make at least a beginning.

I am reading Bertrand Russell’s 1953 book, Human Society in Ethics and Politics (Simon and Schuster, 1955), which is a summary of his ethical and political views.  Russell’s prose is extraordinary.  He is so clear, so direct, and so ready, in every instance, with an illustrative example.  He really seems to have mastered that Wordsworthian goal of being a man speaking to men (sic).  The tone is conversational, ever even-toned and reasonable, with a trick of taking you (the reader) into his confidence when he reaches those knotty moments where he has no surefire solution to offer.

Russell is just about 100% a Humean utilitarian.  His position is that there is only one kind of reason: instrumental reason.  Reason is only at play when we are determining what means are most appropriate to the achievement of a particular end.  What Kant called the “hypothetical imperative”—willing the means that will lead to our announced goal.  For Russell, ends are determined by desire or passion (in the classic Humean formula).  Furthermore, Russell is pretty wedded to the notion that a pleasure/pain calculus can explain our desires—even if he rejects the idea (so loved by economists) that self-interest is “rational.”  The pursuit of pleasure and avoidance of pain is passional for Russell, not rational, based in feeling, not thought or logic.  Pleasure as an end is not a product of rational calculation, although figuring out how to achieve that end is a matter of rational calculation.

Russell even ends up asserting (as do Adam Smith and Hume) that there is a “natural” (and, hence, presumably universal) tendency in humans to sympathize with the pain/suffering in others in ways that make the observation of others’ sorrows painful to the observer.  But he has to admit that this “natural” emotion is not everywhere present.  “Sympathy with suffering, especially with physical suffering, is to some extent a natural impulse: children are apt to cry when they hear their brothers or sisters crying. [Not true in my experience.] This natural impulse has to be curbed by slaveowners, and when curbed it easily passes into its opposite, producing an impulse to cruelty for its own sake” (87).

A thin reed indeed, if it so “easily” turns into its opposite: a delight in the suffering of others.  Yet it is very hard to see how you can even get ethics founded on emotion rather than reason started if you don’t posit some kind of sympathy.  That is, if your ethics must be derived from a primitive pleasure/pain impulse, then you have to figure out a way to ground caring about others’ pain in the fact of feelings of pleasure and pain confined to the self.  Here’s Russell again: “I do not think it can be questioned that sympathy is a genuine motive, and that some people at some times are made somewhat uncomfortable by the sufferings of other people.  It is sympathy that has produced the many humanitarian advances of the last hundred years. . . . Perhaps the best hope for the future of mankind is that ways will be found of increasing the scope and intensity of sympathy” (155-56).  The extremely cautious language here (some, somewhat) perhaps reflects Russell’s recalling how Hume, despite his thoughts on sympathy, speculated/worried that it is not irrational for me to care more about a cut to my little finger than about 10,000 deaths in China.  If you begin from egotistic premises about pain and pleasure, that Humean thought is hard to refute.  I experience my pain quite differently from the ways I experience the pain of someone else, no matter how deeply I might feel for them.

The Continental tradition, ever hostile to utilitarianism, has sought to solve this problem by appeal to another kind of reason—one that is quite distinct from instrumental reason.  In Kant, it’s the reason of logic.  Ethics is to be grounded in the pain (I use this word advisedly) we feel at self-contradiction.  The categorical imperative basically says that I cannot, except on pain of contradiction, assume goods to myself that I would deny to others.  A radical egalitarianism is the only path to an ethics that avoids contradiction—and, this goes mostly unsaid in Kant, our sense of self-worth, of dignity, and of integrity would be lost if we contradicted ourselves.  Just what our stake is in self-worth, dignity, etc. is never specified.  It is simply assumed that we desire to esteem ourselves.  Russell, along with other utilitarians, would say that Kant, at bottom, also relies on pain—just the pain of being inconsistent instead of the pain of witnessing the suffering of others.  Then the question becomes which of these two pains we would take more pains to avoid, which is the more powerful motive.

Habermas’ version of a second kind of reason is “discursive reason.”  It shares some features with Kantian reason, especially in its egalitarian stricture that all be provided with equal access to the discourse that Habermas identifies as central to human interactions.  But Habermas also adds the rationality of being convinced by arguments (or viewpoints or even conclusions) that are best supported by the evidence and by the “reasons” provided to believe them.  Our beliefs, in other words, are potentially rational for Habermas—and those beliefs are not just confined to the designation of efficacious means.  Our ends can also be determined (at least in part) through rational argument, through discursive processes of intersubjective consultation/contestation that yield conclusions about what ends to pursue.  Desire is important, but does not entirely rule the roost.  We don’t necessarily have to express it as desire being tempered or corrected or revised by reason.  We can imagine desire and reason as born in the same moment, thereby avoiding giving desire any temporal or psychological priority—a priority that may get translated into thinking desire a stronger force or one that must be tamed (as in Plato’s image of desire as the horse that must be controlled by the weaker, but smarter, rider).  I think Habermas (like Martha Nussbaum in a somewhat different way) would want to say that desire and reason are intertwined (perhaps completely inextricably) from the start—a position that makes human beliefs and behavior susceptible to argument/persuasion, thus giving “discursive reason” a space in which to operate.

Reason in Habermas and Nussbaum, then, is secular and immanent; it is produced in and through human sociality.  And I think they would say that it works to create “sensibilities,” that our “moral intuitions” are the products of cultural interactions.  Certainly, I read Dewey as taking that position, which is a way of reconciling what can seem his over-optimistic faith in “intelligence” (that key Deweyean term) with his equally firm insistence that “morality is social.”  There is no transcendent rational dictate (as there is in Kant) that grounds morals, that even pronounces its fundamental “law” (i.e., never do anything that you cannot will that everyone do).  Dewey’s social historicism tries to account for both the variety in moral beliefs/intuitions across time and space and to capture the “force” of those intuitions, the fact that they are motivating and that we feel shame/guilt when we do not act in accordance with them.  The “intelligence” on which Dewey relies does seem to be consequence-based.  He seems to be saying that things go better for human lives—whether focused on individual lives or on the collective life of societies—when we adopt modes of “democratic association” that stress cooperation over conflict/competition and provide the means for all to actively pursue their chosen ends.

Still, the rub is there: what cultivates the sensibility of, the commitment to, enhancing the well-being of others?  What, in Kantian terms, keeps me from using the other as a means to my self-fulfillment, just as I use various non-human things that the world affords as means?  The Kantian path basically says we must have some way to designate some things (primarily human lives) as sacred, as never to be used as means.  Otherwise, utilitarianism will run roughshod over the world—and the people in it—during its pursuit of pleasure.  What is unclear is whether “reason” can get us to that designation of “the sacred” (defined as the “untouchable,” or as that which is always an end, not a means).

The alternative seems to be some kind of arbitrary fiat, the kind of decisionism that Derrida seems to adopt in the later stages of his career, or perhaps the kind of pre-rational “call” (or intuition) upon which Levinas bases his ethics.  The sacredness of the other is just asserted; it is not justifiable in any rational or argumentative way.  Just what the nature of its appeal is remains unclear.  What motivates one to heed the call?  What within the self does the call touch?  One answer leads to a kind of pantheism (I would read Hegel this way): the call resonates with that fragment of the spirit (or of the divine) that lurks within us, but which lies buried until activated by this voice from without.  That path, not surprisingly, is too mystical for me.  Yet it is clear that I am almost equally suspicious of “reason” as some kind of power that can pull us up by our bootstraps, that can give us the terms of an ethics that we embrace as our own.

I am left, I think, with the idea that there are certain images of human possibility—both of individual exemplars (call them “saints” if you like) and of livable communities (call them “utopias” if you like)—that appeal to us as desirable visions of the forms life could take.  These visions are given to us by history (by religion, by literature, by philosophy, by the stories we tell)—and can become the focus of desire/aspirations, as well as the standards by which we criticize what does exist now.  In other words, articulations of the ideal (of ideas of justice) by philosophy and imaginations of the ideal in stories and literature, as well as certain concrete examples pulled from history form the basis of commitments that also are seen as ethical obligations, since it is shameful to act in ways that make realization of those ideals unlikely or impossible.  Is this “rational”?  Not fully or categorically.  But it can involve the deployment of reasons (in the plural), of arguments.  And in that sense Dewey’s appeal to “intelligence” might not seem quite so silly.  Intelligence is not a bad term to use for the assessment of our ideals and of the reasons they give us to act in certain ways as well as for assessing the possibility of the realization of those ideals.  At the same time, it seems to me that ideals do make an emotional appeal, so that the passional nature of our commitments can be acknowledged as well.

“Intelligence,” then, is a smudge term.  It’s meant to bridge the classical divide between passion and reason—in much the same way that Martha Nussbaum, in her work on the emotions, has worked hard to demonstrate the contribution they make to “cognition.”  Of course, the term “emotional intelligence” has entered the language in the past fifteen to twenty years.  It’s hard not to think that “intelligence” is doing similar work to “judgment” in traditional faculty psychology.  In other words, as opposed to the Plato/Hegel line, which appeals to a transcendent Reason (with a capital R), or the Catholic theological line, which appeals to Revelation (with a capital R), we get the Aristotelean line, which aims to remain firmly grounded in the human and the here and now.  No divine interventions or even implanted divine sparks, just what our inborn mental capacities and emotional make-up render possible.  Russell is as addicted to appeals to intelligence as is Dewey.  “I would say, in conclusion, that if what I have said is right, the main thing needed to make the world happy is intelligence.  And this, after all, is an optimistic conclusion, because intelligence is a thing that can be fostered by known methods of education” (158).  I think it is almost inevitable that liberals will always end up appealing to education as the motor of improvement because they believe our ills are not permanently grounded in some kind of “nature” that cannot be re-formed.  Education is the means toward that re-formation.

But in that line (to which Hume and Kant, despite all their differences, both belong), the other sky hooks (besides education) that can get us out of being the mere pigs of J. S. Mill’s fears turn out to be either the needs generated out of human sociality or the mysterious processes of judgment (the topic of Kant’s third critique).  A utilitarianism shorn of both of these mechanisms can either throw up its hands at the issue of ends, just taking them for granted, in all their variety and perversity, as modern economic thought does.  Or it seems doomed to finding “altruism” and various other moral behaviors a deep puzzle, one only slightly assuaged by notions of “enlightened self-interest.”  In short, the problem for a utilitarianism—for anyone who, like Russell, says there is only instrumental reason—is that it leaves us no way to talk about the formation of, the fixation on, ends.  (This is the most customary complaint about pragmatism.)  Those ends are just the product of passion, of the fundamental desire to gain pleasure and avoid pain.  Yet the actual variety of human ends, the number of things to which people are committed, defies any simple calculation of pleasure or pain, and indicates that utilitarianism’s psychology, its understanding of human motivations, is woefully inadequate to the actual complexities of human desires and calculations.

That said, accounting for the production of ends still remains a puzzler.  “Judgment” merely names the puzzle, gives it a place to reside. It hardly solves it.  Judgment stands as a way to explain that our moral views and our desired ends are not completely dictated to us by our culture—that individuals in all the worlds we know of have the capacity to stand out against the prevailing practices and beliefs of their society.  They can, in short, submit those practices and beliefs to judgment.  But where do the standards by which the judgment is made come from?  That’s where some kind of notion of “intelligence” or “reason” or “cognition” (aided or not by the emotions) comes in.  Even in cases where judgment can be refined by education, where it can be developed in particular ways by particular exercises, there is still the sense that judgment also imparts an ability to stand apart from that education and those practices, to sit in judgment upon them.  I will be looking to see how Russell smuggles something like this capacity into his account of morals.  Judgment, I am saying, takes the place of that second kind of reason, that other “faculty,” that can do more than just indicate suitable means, instead offering us a way to make choices about ends.

Violence, the Irish and Religion

Here, from Maud Gonne’s autobiography, is her rationale for being a firm “physical force” advocate, scorning the “constitutional” road toward Home Rule pursued by the Irish Parliamentary Party from 1885 to 1914.

“A robber will not give up his spoil for the asking unless the demand is backed by force.  Once a constitutional party turns its back on physical force, because not being able to control it, . . . its days of usefulness are over.  It may linger on, but, being unable to deliver the goods, it falls shamelessly into the corruption of its environment.  . . . The funeral of the Parliamentary party should have taken place when its leader Parnell was lowered into his grave at Glasnevin in October 1891.  He had failed when he had repudiated acts of violence.  He was never a physical-force man himself, but he had walked hand in hand with physical force in the early days when luck and the spiritual forces of Ireland were with him, so that even ordinary words from his lips became charged with great significance and power.  Luck deserted him when he deserted the force which had made his movement great” (174-75). [The Autobiography of Maud Gonne, University of Chicago Press, 1995.]

Charles Taylor, in his A Secular Age, spends hundreds of pages worrying the issue of violence.  Basically, he keeps insisting that humans experience some kind of mysterious or mystical connection to the “numinous” when engaged in, or standing as witnesses to, acts of violence.  He never gets more specific than that, but he insists that efforts to simply repress violence will never work.  Violence is as ineradicable as sex; religion both gropes toward a way of grasping the meaning of violent and sexual acts and provides forms (rituals and stories) that enclose those acts.  Here’s a typical Taylor passage along these lines (he repeats this point several times without ever getting more concrete):  “if religion has from the beginning been bound up with violence, the nature of the involvement has changed.  In archaic, pre-Axial forms, ritual in war or sacrifice consecrates violence; it related violence to the sacred, and gives a kind of numinous depth to killing, and the excitements and inebriation of killing, just as it does through other rituals for sexual desire and union.  With the coming of the ‘higher,’ post-Axial religions, this kind of numinous endorsement is more and more withdrawn.  We move toward a point where, in some religions, violence has no more place at all in the sanctified life. . . . But nevertheless . . . various forms of sanctified and purifying violence recur.” {at which point Taylor instances the Crusades and the violence of ideologies like fascism and communism} (688-89).

Without ever saying so, Taylor seems to imply that religions that incorporate violence, that practice sacrificial rites, can thus contain it.  Whereas attempts to eradicate violence only lead to uncontrolled, massive outbreaks of the sort that characterized the 20th century.  At other points, he references William James’s idea of finding a “moral equivalent of war,” but doesn’t pursue that idea; rather, he seems faintly skeptical that some substitute would do the trick.  We want/need real violence because of that urge to connect to the “numinous.”  All of this goes mostly unsaid in Taylor because he cannot bring himself to simply endorse sacrificial practices.  Yet he is also committed to the idea that violence and the numinous have some kind of “deep” (his favorite word in the whole book) connection to one another—and thus that religion has to attend to, even provide the means for achieving, that connection.

What has this to do with Maud Gonne?  Yes, she offers a utilitarian defense of “physical force.”  The English robbers are never going to relinquish their hold on Ireland unless forced to do so.  But there’s more.  Non-violent movements become corrupt (she argues); without the laying of one’s all, one’s life, on the line, there is no way to overcome the temptations of life.  The reformer will succumb to the fleshpots available to him; he will betray the cause in favor of his own comfort and advancement.  As in Yeats’s and Lady Gregory’s play Cathleen ni Houlihan (Gonne, famously, played the lead in its first public performance), only those who renounce everything to serve the Queen (Gonne’s autobiography was titled A Servant of the Queen, that Queen being Ireland) can be trusted to serve the cause faithfully to the bitter end.

The logic here is precisely the logic of sacrifice, where in some weird way the proof of one’s absolute devotion to the cause, the willingness to die for it, becomes more important than the success of the cause itself.  Pragmatism and utilitarianism are spurned; caring about the ends violence might achieve is subordinated to the glorious commitment itself.  Such would seem to be the burden of Padraic Pearse’s sacrificial fantasies—embodied in the plays and pageants he staged—in the years just prior to the 1916 Easter Rebellion.  And, of course, the dating of that uprising at Easter was no coincidence.  The rising was itself a pageant of sacrifice leading to resurrection.

And as we see in Rene Girard’s work—and this idea lurks there in Taylor, although it is never made explicit—an embrace of violence is palatable when connected to self-sacrifice.  Harder to countenance is murder, the killing of the other guy.  It’s the embrace of one’s own death that is fairly easy to sanctify; killing the other, even when ritualized, is harder to stomach.  For all her hatred of the English, Gonne devotes her life to the cause of aiding imprisoned Irish rebels and their destitute families, not to killing Englishmen.  The one time in her autobiography where actual violence seems in the offing, Gonne (to her credit) backs down and avoids pushing the confrontation to killing.  Gonne is speaking to a riled-up crowd when the police arrive.  Here’s her rendition of the incident.

“’If you go on I shall give the order to fire,’ said the officer.

‘Go on, go on,’ cheered the crowd.

I heard an order given. I saw the constabulary get their rifles at the ready and heard the click of triggers.  Most of the men now had their backs to the platform and were facing the police; they had nothing but ash plants in their hands but were ready to fight; some still shouted for me to go on.

‘No,’ I said.  ‘Men, you know your duty; the proclaimed meeting is now over,’ and I got off the car.

There was disappointment; one man said: ‘You should have gone on.’  I heard another man say: ‘You couldn’t expect a woman to fight.’  I said: ‘If you had guns I would have gone on; the rifles were pointed at you, not me. I couldn’t see unarmed men shot down.’

Again a wave of depression overwhelmed me. . . . Perhaps I had been wrong in not letting the Woodford evicted tenants fight and be shot down.  Dead men might have aroused the country as living men could not and at least made the evicted tenants a live issue.  I had not dared take responsibility; I had refused leadership and the situation was not of my own making” (301).

The practical triumphs over the ideal here, as I (for one) would wish it to.  But then she is led to wonder whether holding back was itself the impractical choice.  A massacre might, in fact, have advanced the cause, making it (ironically) a “live” issue.  She wonders if she, at the moment of crisis, proved weak, allowed inappropriate scruples to stay her hand.

Which brings us back to the earlier passage—to Gonne’s analysis of Parnell, an analysis that actually seems to put some flesh on the bones of Taylor’s idea that violence connects us to the “numinous.”  Gonne argues that Parnell’s charisma remains intact only so long as he stays tied to the “physical force” revolutionaries. And that is because the “physical force” advocates are in touch with, bring forward into some kind of mysterious presence, “the spiritual forces of Ireland.”  Violence is the way those spiritual forces speak to us, through particular men who are its priests, its mouthpieces.  Here, eloquently stated, is Taylor’s conviction that violence provides a pathway to the numinous.

Of course, to a pragmatist skeptic like myself, the numinous here is better described as “nationalism”—and the cult of the nation seems to result in much more evil than good.  Taylor knows that, which is why he keeps stumbling on the vexed question of just what the content of the numinous is, just as he cannot specify an actual violent rite that we, with our modern sensibilities, could endorse.

Historical distance offers one out here.  Do I wish that the 1916 rebellion had never taken place?  One hundred years later, don’t the rebels seem admirable heroes—even though I have no doubt that in 1916 I would have thought them vainglorious fools?  And didn’t their sacrifice actually achieve, in the long run, their ends?  Yes and no.  Plausible to say that there would have been no Irish Republic without the Easter rising.  Equally plausible to say that the ongoing violence of Irish politics throughout the 20th century was also a product of that rising.  No violence, it seems, without answering acts of violence, producing those cycles of violence that are all too familiar, and rarely conclusive, rarely actually creating a desired state of affairs.  There is always some rub, some imperfection, that justifies more violence—even if it is just the violence of revenge.

Would Taylor accept that the numinous is always out of reach—and thus that no act of violence, even if it yields intimations of the numinous, ever satisfies?  Religion is born of frustration, of a longing for “something more” than what the ordinary provides—and violence is born of frustration as well.  Infinite desire in a finite world.  Or a desire for the infinite in a finite world.  We can dream of more than what we can actually have.  Taylor wants to honor how those dreams push us beyond the here and now, how they lead to the astounding, almost unbelievable, things that humans manage to do.  But why claim that destruction and violence are part and parcel of that reaching for what exceeds our grasp? Why not, instead, think of destruction and violence as the rage engendered by our reach falling short, as the spite (resentment) we feel against the world and against others when they disappoint our visions—or worse, when someone else achieves what we have failed to accomplish?

One riposte from the Taylor side—and here we return to the power of nationalism—is that violence (like religion more generally) is a collective act.  Soldiers always talk of the astounding camaraderie, the enjoyed intimacy, of the platoon.  One of the things we long for is that kind of melting of the self into communion with others—and that melting can feel numinous, a connection to some larger and higher power.  Violence, like sex, is a way of escaping the self, of ecstatically merging it with others.  It carries us outside of ourselves.  That’s one of its attractions, its lures, its way of thumbing its nose at bourgeois calculations and prudence.  Violence is aristocratic (as in Yeats and in Gonne) or sub-bourgeois (as in Synge).  Taylor wants to tap that “noble” side of religion as well—a task made rather difficult by Christianity’s affinity with book-keeping.  The ledgers of sin must be kept so as to see if the reward of heaven will be won.  Hardly an ecstatic way of thinking.

Another, very different, note on which to end.  In Roy Foster’s wonderful book about the Irish revolutionaries, Vivid Faces: The Revolutionary Generation in Ireland, 1890-1923 (Norton, 2014), he mentions how naïve the “physical force” rebels were.  In some ways, they simply shared the naiveté of a Europe that went blithely to war in 1914.  A massive failure of imagination.  Violence is rarely attractive when seen close up, which is why historical distance is so often needed to sanitize it.  (We are back here to Grossman’s work on killing—which is only exhilarating at a distance except for a very few, exceptional, persons.)  I have always thought it greatly to Yeats’s credit that he mostly abandoned his romantic celebrations of violence once he witnessed actual violence during the 1920 to 1923 wars in Ireland.  Foster quotes Min Ryan, who “admitted afterwards that when Tom Clarke told her in 1916 that most of them would be ‘wiped out,’ it brought her down to earth with a bump. ‘I got an awful shock because I was living a most unreal kind of life as if nothing could happen to anyone.  I could hardly believe that we would take up arms at all and then I began to believe that we would come out of it alright.’”  Foster goes on to comment: “The five years from 1916 to 1921 would provide a steep learning curve” (72).  Why he excludes the two years of the Civil War, with its brutal executions, is a mystery.

In any case, the rhetoric that calls for violence is easy, all too easy, and very often disconnected from any real sense of what violence means or entails.  Again, violence is more palatable the more distance one maintains from it.  It is hard for me to imagine Taylor participating in the rites he seems to endorse.  Certainly, I want no part of them—even if the numinous were to arrive as promised.

Further Thoughts on Civil Disobedience

My colleague Eric Muller, who teaches at UNC’s Law School and has done important and wonderful work on Japanese internment during World War II, responded to my previous post about the toppling of Silent Sam as an act of civil disobedience in this way:

“A thoughtful and excellent piece about the nature of the act of toppling Silent Sam (our Confederate statue on campus) by my UNC colleague John McGowan. I am with him right up to the very last couple of lines. But I part with him there.

What is the moral justification for lying to the police – effectively committing the crime of filing a false police report – in order to impede the prosecution and possible conviction of those who engaged in civil disobedience? When a person thinks things through and decides to engage in an unlawful act in order to make a larger moral or ethical point, or to bring about some change, it seems to me that she has made the choice to risk prosecution and conviction. In fact, it’s precisely the acceptance of that risk that makes the act courageous and gives it broader meaning. So I am hard-pressed to see a case for others telling lies in order to prevent the outcome that the civilly disobedient person knowingly risked.

(And this is not even to mention the fact that if hundreds of people file false police reports, that will impede the prosecution not just of the people who toppled Silent Sam, but will slow the administration of justice in that jurisdiction more generally. What’s the moral case for that?)”

Eric’s response has pushed me to think through my notions of and intuitions about civil disobedience.  I will end up, to a fairly large extent although not entirely, disagreeing with his disapproval of having many people step forward as perpetrators of the toppling—but it is going to take me some time to get there.  So I am begging your indulgence and your patience as I try to work this through.

Civil disobedience is the act of disobeying a law, where the justification for that disobedience is an appeal to some other standard of judgment apart from sheer (or mere) legality.  In the name of justice, of the right, of the good, or even of a “higher” moral law, a civil disobeyer says: “I cannot act legally in this case because it violates my sense of what is the right thing to do.”

Such an act can be individual.  Some pacifists and some conscientious objectors will defy conscription laws because, as a matter of individual conscience, they cannot participate in a war.  How they define participation can also vary, with some COs willing to serve as medics or in other non-combat roles, while others think that any assistance offered to the war effort is wrong. Those who take this latter position have two choices: one, to go to prison; or, two, to attempt to evade the law’s punishments (by, for example, going into exile, as many did during the Vietnam War).  Evasion could also, of course, just mean lying low, trying to avoid the law’s notice.

It seems to me that everything changes drastically when acts of civil disobedience turn rhetorical—that is, when such acts are not a question of an individual attending to her own conscience, but are publicly enacted violations of the law that seek to demonstrate to fellow citizens that law’s deficiencies.  An act of civil disobedience, in such cases, is the staging of a dramatic argument.  It asks the non-participating spectators, those who are simply witnessing this forced (by the civil disobeyers) confrontation between the law and those who deem it unjust, to decide what side they are on.  Do these spectators favor the continuation of the law in question and favor the fullest prosecution of the civil disobeyers—or do those spectators recognize that the law is deficient in this case, and actually want to thank the disobeyers for making that fact dramatically clear?

First consequence of this rhetorical view: the act of civil disobedience must be public, must be visible.  The CO doesn’t necessarily turn his evasion of conscription into a public spectacle.  But those who practice civil disobedience in an attempt to sway public opinion, as a tactic within a larger plan to change the law, must act in public—and, in fact, desire the widest possible publicity in order to grab the attention of the widest possible public.  Thus, as distinct from the ordinary criminal, who tries to break the law invisibly, the civil disobeyer performs her law breaking in the light of day.  Otherwise, she cannot achieve her goal, which is extensive public deliberation about the justice of the law.

Breaking the law in full view means that evading punishment becomes difficult, if not impossible.  In fact, as Eric alludes to in his comment, many theorists of civil disobedience take the full assumption of responsibility for the act of disobedience as a crucial component of civil disobedience. The dignity and the impact of the act are heightened by the stalwart presentation of oneself in the public sphere: I committed this act of disobedience in the name of these principles, and I am fully willing to be called to account by the law for my action.

Let’s call that the heroic model of civil disobedience—and I use that term “heroic” in all sincerity.  The gambit here is that the spectacle of the law prosecuting these individuals of conscience will aid the cause of revealing the law’s injustice (according to the “other” standard being appealed to against the law’s own standards). The nobility of the disobeyers (their integrity and willingness to undergo punishment from an unjust law in the name of their alternative notion of what is right) furthers the attempt to sway public opinion to their side.

And, certainly, we needn’t see all this in purely rhetorical terms.  Stoically accepting responsibility and punishment is not just a rhetorical ploy; it also accords with the disobeyer’s own sense of dignity, which includes differentiating her acts from those of a criminal.  That is why, for so many dissidents, the distinction between a political prisoner (a prisoner of conscience) and a criminal prisoner is such an important one.

The heroic stance can be summed up in this way:  I did this act, I did it in full public view because I am proud of this act since I fully believe it was the essentially right way to act even though it was illegal, and I will take full responsibility for the consequences of the act, including being punished by the law.

But there are alternatives to the heroic view.  And those alternatives are what I need to explore here.  I am deeply attracted to the heroic view—and fully respect Eric’s position that the heroic route is the way to go.  But I do think there are circumstances where it is not the optimal strategy—and I find (as I reflect upon these matters, as Eric has pushed me to do) that I am willing to jettison some of the heroic in the name of effectiveness.  I am committed to civil disobedience successfully leading to the reformation or repeal of bad laws—and unheroic approaches may be more effective in some cases.

Let me throw out a big question first, even though I will postpone full consideration of how to answer it.  Why should I, who think a law unjust, enable (through cooperation with the process of prosecuting me and others for violating it) the smooth functioning of that law?  Having stated the point so abstractly, let’s think about how it applies in four different cases.

Case 1: One way to render a law a dead letter is massive non-compliance.  Prohibition in American history is the obvious example, but there are others.  Any law’s effectiveness depends on large-scale voluntary compliance.  If the strategy of dissenters is to inspire widespread non-compliance, there is no particularly compelling reason to adopt the heroic strategy of being prosecuted.  Instead, the strategy is to make the law look ridiculous, incredible.  They want to (think they can) stop us from doing that?  Let them try.

Case 2: Jim Crow laws.  The strategy here was not direct violation of those laws—with the consequent punishment of such violators.  Instead, the strategy was to stage massive public demonstrations to publicize the widespread dissent from those laws.  The aim was repeal (or the court nullification of the laws as unconstitutional) and the enactment of new legislation (the Civil Rights Act, the Voting Rights Act) that would make segregation illegal.  If laws were going to be violated in this movement, it would be the violation of laws that hampered public expressions of opinion.  More to the point: the civil rights demonstrators provoked their opponents into over-reaction, which played badly to a wider national audience.  Before he turned to economic issues and to racism in the North, MLK won the rhetorical battle.  His movement did so, in part, by having its members go to prison, but much more important was the public spectacle of the battering of non-violent demonstrators by infuriated police and other public authorities.  Arguably, the anti-Vietnam protestors were not as rhetorically successful because they were not as disciplined in their non-violence and because they never had—or created—the solidarity among whites that the civil rights movement (at least until 1965-66) achieved among blacks.

Case 3: Immigration laws.  When Trump was elected, I figured that meant I would end up getting arrested some time in the coming four years.  It was just a matter of time—and of choosing the occasion where I felt it might make some positive difference, or be such an urgent matter of conscience that I would have to make a stand.  I assumed the real push-comes-to-shove moment would involve immigration.  If the Trump administration were to attempt to expel (for any reason) undocumented immigrants from my community or to harass/deport foreign students on our campus, I would feel compelled to do something to hinder such efforts.  Here is the case where I find myself most at odds with Eric.  I would consider any and every way of hindering the law’s enforcement justified (and imperative upon me personally) in that case—and think the heroic stance would be utterly counter-productive.  The goal would be to throw as much sand into the gears as possible—using every single tactic that could frustrate the law’s ability to operate.  I wonder how Eric would think about this case in relation to the internment of the Japanese during World War II.  I think also of the Danes all wearing yellow stars as a way of frustrating the Nazis’ murderous anti-Semitism.  This would precisely be the case of presenting the law with more perpetrators, more people deemed guilty under its understanding of guilt, than it could handle.

Case 4:  Silent Sam.  So what kind of case is Silent Sam?  A very odd case once I am forced to think hard about it.  Odd, first of all, because of the ambiguities I have noted (in my previous post) about whose property the statue is anyway. And then there is all the stuff about “destruction of property” as referenced in the statement from Margaret Spellings et al.  Which is really a red herring, because the real nub here is a specific state law—not some general set of property rights. That specific state law says that a certain class of property—namely, memorials on public property—is removed from all public deliberation about its desirability.  It is widely acknowledged that general property rights do not trump all other considerations.  There are grounds on which property rights can be overruled or suspended.  But the state law on public memorials says that kind of debate cannot be held, that kind of case cannot be brought forward. In short, it takes out of a community’s hands the ability to decide, after a due process of deliberation, whether or not it wants a memorial present in its community.

It is, as I also mentioned in my previous post, precisely in cases where legal methods of appeal and redress are blocked that civil disobedience is most likely to occur.  Again, the Jim Crow South offers the classic example.  When the law and public officials and the courts are completely stacked against you, civil disobedience is one of the few alternatives left (violent rebellion is another).  Legal avenues for the removal of Silent Sam appeared completely blocked.  (Of course, as Eric eloquently argued in public—and within university circles—there was a legal pathway for removal available, but the university refused to pursue that path, not by rejecting it outright but by refusing ever to acknowledge that such a path existed.  A frustrating approach to the whole dilemma of Silent Sam, to say the least.  But from start to finish, the university’s leadership has failed miserably in its response to the presence of Silent Sam on our campus.)

A further oddity: even though we have this state law that was blocking any legal way to remove or move Silent Sam, the protestors were not interested in the repeal of said law.  They just wanted to remove Sam, the law be damned.  So once they toppled Sam, their work was done.  (Unless it gets undone by an effort to re-install the statue.)  Unlike many cases of civil disobedience, there is no on-going need to demonstrate the law’s injustice, to win over a public to the law’s repeal.

Thus, their work being done, why not “try to get away with it”?  We did the right thing, the demonstrators might think, so why should we be punished for it?  Eric’s position, which I respect and 20% agree with, is that the toppling of the statue only becomes a criminal action, not an act of civil disobedience, if you try to evade punishment.

But here’s where I take my 80% stand: it was a collective act of civil disobedience.  Allowing the law to single out a handful of “ring-leaders” will only support the authorities’ desired narrative of a “few” trouble-makers and outside agitators.  I think the rhetorical battle is ongoing in this case—and that one key rhetorical point to make is that there is wide-scale endorsement of Monday’s action, which includes wide-scale endorsement of the means used (an unauthorized toppling of the statue) and therefore a wide-scale acceptance of responsibility for that action.  If that gums up the works, so be it.

There is, after all, fairly wide discretion about which laws to enforce—and to what extent.  Making it both absurd and costly to enforce the protection of Silent Sam, making the state divert what are always limited resources to this particular vendetta, helps to make the argument about the state’s priorities and values that we—those against the prominent presence of Silent Sam on our campus—have been trying to make all along.

In short, it seems to me an acceptable tactic of a campaign of civil disobedience, in certain cases, to make the functioning of the law in question difficult.  And in cases where there are wide divergences of opinion, I also think that standing in solidarity with those in your camp is incredibly important.  There will be various attempts to divide and conquer going forward, some dependent on making the costs of solidarity high, some dependent on painting the dissenters in certain kinds of ways—and then tarring fellow travelers with the same brush.  Anticipating this ongoing rhetorical battle, I still think (despite Eric’s cogent arguments) that counteracting the law’s attempt to identify a few perpetrators with a mass declaration of guilt is the right move.  My “moral case” (which is what Eric asks for) is based, then, on these claims of solidarity, in the name of the collective that both enacted and endorsed the toppling—and which wishes to resist the attempt to label it the action of just a few outliers, dissidents easy to isolate and dismiss.