Morality—and the State

In The View from Nowhere (Oxford University Press, 1986), Thomas Nagel writes that “moral requirements have their source in the claims of other persons” (197) and that “the basic moral insight is that objectively no one matters more than anyone else, and that this acknowledgment should be of fundamental importance to each of us” (205).

This seems to me both pretty accurate—and utterly wrong—about what morality is and what it does.  Morality, I want to say, is a two-edged sword.  And that makes it very hard to decide whether, in the final analysis, morality is a good thing or a bad one.  Does it do more harm than good in the world?  I don’t think that is an easy question to answer.

Why?  Because morality attempts to order the relationships among people—and between people and other inhabitants of the globe. What is the right way to interact with others?  What attention, consideration, and resources do I owe to others—and they owe to me?  What actions and attitudes do I find admirable, even worthy of imitation as well as esteem?  What actions and attitudes, all things considered, make things go better in this sublunary world—reduce suffering, promote happiness and flourishing?  It is unthinkable that humans would not ponder such questions—and attempt to provide answers.  Nagel seems to think that if we ponder these questions—taking into consideration the claims of others and our own claims on others—that we will, in Kant-like fashion, reach a radical version of egalitarianism.  No human matters any more than any other.  I cannot make an exception of myself (or of my family, or of my compatriots); I am to be treated the same as everyone else.  What I feel due to myself is due to them.

Now Nagel does take seriously Bernard Williams’ objection that Kantian universalism asks something that is not only impossible to achieve, but in actual practice would be fairly objectionable, perhaps even monstrous.  Any morality that asks us to treat our mother or our spouse exactly as we would treat everyone else in the world flies in the face of human psychology—and of human flourishing (since rich, particularized relationships with a small set of others are essential to flourishing).  Nagel’s response is that the “gap” between universalist (“objective” in his terms) morality and personal partiality cannot be closed.  It is a tension we must needs live with—and negotiate on a case-by-case basis.

My concern is rather different.  I do think a moral politics looks something like Nagel says it does.  “An important task of politics,” he writes, “is to arrange the world so everyone can live a good life, without doing wrong, injuring others, benefiting unfairly from their misfortunes, and so forth.” . . . [The best would be a world in which] “the great bulk of impersonal claims were met by institutions that left individuals . . . free to devote considerable attention and energy to their own lives and to values that could not be impersonally acknowledged” (206-207).  In other words, a social democracy that, as the public business of politics, serves the resource needs of all while regulating against exploitation and other forms of special privilege, and that leaves individuals both free and resource-enabled to pursue their individualized, private visions of the good life.  Again: a vision premised on the notion that all are equally entitled to the means for flourishing, and that many different versions of how to flourish are to be tolerated.

The problem is that there is a very different view of what morality entails.  This other morality is more prescriptive (more restrictive) in its vision of the ways one might choose to flourish—and still be found morally acceptable.  Even more crucially, this second “other” morality is not based on a vision of the equality of all.  Just the opposite.  This morality divides between the worthy and the reprobate—and feels fully justified (in fact finds the grounds for that justification in morality itself) to deny to some what is granted to others.  Sinners are not entitled to anything; they deserve nothing, except to be punished.  Far from being an equalizer, morality is deployed to be a great divider.  It gives us the means to identify those who are not equal, who are not worthy of consideration and respect and a sufficient share of the world’s goods.

In other words, it seems like the height of wishful thinking for Nagel to say that the (“objective, impersonal”) view of morality leads to the conclusion that all are equal.  It is pretty implausible, it seems to me, that even 25% of humanity holds to that conclusion as a moral demand upon themselves—and, as Williams points out, even that 25% makes exceptions to equal treatment all the time.  More obvious is that the vast majority of humans distinguish between worthy and less worthy people—and use morality both to make that distinction and to justify treating the unworthy in various differential ways, ranging from indifference and lack of sympathy toward their troubles to active deprivation and punishment as what they deserve.

This divisive use of morality—accompanied as it often is with a distasteful, sanctimonious self-righteousness—is more than enough to give morality a bad name.  Many have argued that morality is the source of more harm to humans than any absence of morality.  In morality’s name, we mete out punishment, deprivation, contempt, and hate-filled condemnations.

So what’s the answer?  Does morality do any good at all—or should we dispense with it altogether? (Note here that I have loaded the question to the liberal, social democrat side.  The practitioner of divisive morality would say it does good; it is fit and proper that we identify sinners and deal with them as they deserve.  After all, isn’t justice getting what you deserve, not this namby-pamby liberal idea that everyone is deserving just by the fact of showing up?) In any case, I can only say that the Kantian, equality-affirming morality has done good in the world; there has been progress toward increasing equality inspired by that viewpoint.  But there is no denying that divisive morality has justified great cruelty and massive exclusions.  So we can’t say morality is to be endorsed tout court.  It is an ambiguous—and very dangerous—tool that can be used in contradictory ways.

Would we be better off without morality at all, without these attempts to delineate worthy ways of living and of arranging our social relations?  I am not prepared to go that far, but I do think we should be wary of any self-congratulation about our tendency to partake of such attempts to use morality to advance the egalitarian thesis.  Those attempts (as Nietzsche among others alerts us) might very well be more despicable than otherwise.

Reflecting on these matters led me to realize that much the same can be said about the State.  Let me explain.  Despite the resistance to this idea in certain leftist circles, I think there is little doubt that States work against omnipresent violence.  The historical record of pre-state societies is one unbroken litany of violence.  Hobbes was mostly right: the war of all against all (or, at least, of tribe/clan against neighboring tribe/clan) was endemic.  Men strove to grab what other men possessed.  (That all this is a pathology of masculinity seems indubitable.) Plunder and rapine were hardly abhorred; they were the source of honor even in Homeric epics that could also register their horror and insist that an unattainable peace would be preferable.

What some deny is that states bring this omnipresence of violence to an end.  “War is the health of the state,” Randolph Bourne proclaimed.  And that statement is hardly nonsense.  We can say of the Western states formed in the period from 1500 to 1900 that they 1) exported violence/war to the regions that became their “empires”; 2) exerted violence (in the form of various types of punishment) on their own domestic populations of criminals, religious and political dissidents, and those deemed mentally or morally deficient; and 3) fought one another with astonishing regularity.  Periods of peace and security for people trying to live out a normal life-span untainted by violence against them were short and uncertain.

Furthermore (and this is usually the clincher in such arguments), starting with the Napoleonic wars at the very least, but likely true of the earlier religious wars as well, the organizational powers of centralized states meant that violence was carried out at a scale impossible for the clashes between clans/tribes characteristic of pre-Columbian America, pre-monarchical Scotland, or various locales of the Middle East before the rise of the Ottoman Empire (to take just a few examples).

The State, in other words, is also double-faced: it suppresses one kind of anarchic and ever-present violence, the outright kleptocracy of pre-state conditions.  But in gathering the means of violence to itself (partly as a way to cow other actors into non-violence), it periodically (and with depressing regularity) deploys that violence with results (in terms of deaths, suffering, and destruction) that dwarf pre-state violence.  So the state, like morality, seems both a pathway to peace and to forms of society that allow for peaceful co-existence—and the source of the most horrific violence.

Can you get one without the other?  The anarchist dreams that getting rid of the state will eliminate violence altogether as we live in ways that realize our mutual dependence on one another.  The Kantian dreams of a perpetual peace if only we can have one super state (thus eliminating in one fell stroke wars of one state against another and the violence of pre-state societies).  Like morality, the state delivers something that is good (control over omnipresent violence) and something terrible: the infliction of violence on vast numbers.

And there is more than just an analogy between the state and morality on the level of their doubleness.  There is also a clear connection in that both work to designate those who are legitimate targets of differential treatment—reaching all the way to killing them.  The reprobate for morality are the non-citizens for the state (even as the state will also treat citizens deemed reprobate differentially).

I sometimes think it all comes down to punishment.  Both morality and the state identify those who should be punished.  These people deserve punishment, are worthy of being punished.  When it comes to the state, the justification is even more arbitrary than it is for morality.  The non-citizen can be legitimately deprived of various things simply on the basis of bad luck.  The non-citizen was born elsewhere, so has no claim to the state’s protection or its largesse. 

There are multiple ways to configure the assertion that some human being is not entitled to what I have.  Morality and the state (through the law and through the category of citizenship) enact that sorting function all the time.  It is to a certain extent their raison d’être.  That an alternative morality aims to establish the equal entitlement of all, just as an alternative politics looks to use state power to distribute to all the resources needed to flourish, stands as one justification for holding on to morality and the state as necessary contributors to what this alternative vision wants to accomplish.  But it’s such a steep climb because morality and the state are tainted with the ways in which they actively thwart what the social democratic, Kantian vision aims for.

Crisis—and Civil War

I have just finished reading Critique and Crisis: Enlightenment and the Pathogenesis of Modern Society by Reinhart Koselleck (MIT Press, 2000; although the book was published in Germany in 1959).  I was pointed to this book by my friend Philip Wilson; I would never have come across it otherwise.

Koselleck is a follower of Carl Schmitt and I may, in a future post, have something to say about the lineaments of conservative thought as found in Schmitt and as articulated in this book.

For the moment, however, I just want to pick up on two nuggets from the book that gave me new ways to think about the current mess in the United States.

First, Koselleck’s definition of crisis.  “It is in the nature of crisis that problems crying out for solution go unresolved.  And it is also in the nature of crisis that the solution, that which the future holds in store, is not predictable.  The uncertainty of a critical situation contains one certainty only—that it will end.  The only unknown quantity is when and how. The eventual solution is uncertain, but the end of the crisis, a change in the existing situation—threatening, feared, and eagerly anticipated—is not” (127).

That the US is a society and a polity currently incapable of “resolving problems” seems obvious to me.  Environmental disaster is already upon us and only going to get worse.  Homelessness, childhood poverty, maternal mortality, and other symptoms of social and economic inequality go unaddressed.  A concerted assault on democratic procedures for the assignment and transfer of state power is underway in full view. And the scourge of racism continues to afflict just about every aspect of American life. 

On the one hand, what I see is a society that is paralyzed, frozen in stasis. The ability of government to act effectively—or even to act at all—has been undermined, partly deliberately by the party hostile to government, partly by a kind of bureaucratic sclerosis.  Institutional inertia makes change just about impossible. Transmitting directives down through the multiple capillaries of huge corporate or governmental structures means constant watering down or simple evasion of new initiatives.  The operative metaphor has always been turning around an ocean liner.  Settled habits, routines, prejudices, combined with resistance to doing things in new ways, all work against solving the problems that stare us in the face.

In short, I don’t share Koselleck’s confidence (very German; think of Hegel and Marx) that a society that can’t solve problems is unstable and doomed to a short lifespan.  Muddling along through a combination of willful blindness, aggressive denial that the problems exist, calculated distractions of public attention to other issues (rising crime!; immigrants!; inflation!; transgender people!; welfare cheats!), and simple economic interest (changed policies will threaten your livelihood) is more than enough to stabilize dysfunction.  Jared Diamond’s tragic view that societies will obstinately stick to their prevailing practices all the way to extinction seems apposite.

And yet.  It would be very foolish to think nothing has changed since 1980.  Even as government has been paralyzed (unable to address even obvious problems like gun control, massive tax evasion, and securities fraud along with other forms of corporate malfeasance), change initiated by non-governmental agents has been everywhere.  The shifts in economic production (global supply chains, outsourcing, the destruction of unionized labor, the creation of precarity and the gig economy, the full emptying out of economic activity from rural America, the continued growth of industrial agriculture, the movement of vast amounts of commerce on-line) and in social organization (the “big sort” which clusters people in like-minded communities; the increasing segregation of public schools along with the growing private education sector; the outsized influence of Fox News; the privatization of various parts of “the commons”) have hardly been insignificant.  The world my children (now in their early 30s) have had to navigate differs greatly from the one I encountered in the late 70s leading up to 1983 (the year I turned 30).

So the crisis of 2022 America is not exactly about standing still.  It seems more to be a crisis generated by a) a failure to come to terms with several looming problems (by ignoring their existence, denying their existence, or adopting a cynical/fatalistic conviction that nothing can be done about them) and b) the loss of any sense of a collective agency that identifies the government as the place where that agency can be mobilized.  Instead, everything is left to non-governmental actors, who (predictably enough) pursue their own interests, grabbing from the commons whatever they can.  The American version of kleptocracy.

What keeps this situation stable, it seems to me, is that the kleptocrats let enough crumbs fall off the table to keep lots of people in fairly decent economic shape.  The mystery of the years since 1980 is that the kleptocrats are so unhappy; they keep yelping that their haul isn’t big enough.  Being a millionaire no longer counts for anything.  Only a billion will do.  To increase their haul, they have beaten down wages, ended anything like job security, and taken ever larger chunks of any profits resulting from increased productivity.  One effect of increased economic insecurity for wage earners (a result of “loss aversion”) is the desperate attempt to cling to what I have—making me fearful of change, liable to vote for the shitty status quo rather than take the risk of endorsing change.  So sclerosis can be endorsed at the voting booth.  But you would think there would have to come a tipping point, a moment when the steady immiseration of the wage earners would generate a backlash against their economic overlords.

We seem further than ever from that tipping point—which is why I am saying we live in a crisis that seems to be unending.  Yes, as Koselleck says, we have multiple and highly visible “unresolved problems,” but that doesn’t seem to be unsustainable.  We can—and probably will—fail to address them (imagining “solving” them seems laughably naive) for quite some time to come.

As a conservative, Koselleck identifies that tipping point with “revolution,” an event he deeply deplores.  He does so by deploying an interesting distinction between “civil war” and “revolution”—even as he eventually undercuts the difference between them (pp. 160-62, especially the long footnote on p. 160).  Basically, a civil war is when two factions fight over assumption of power within the current political structures.  Revolutions, by way of contrast, aim to abolish the current structures and replace them with something entirely different.  In Koselleck’s mind, that means revolutions are always Utopian, trying to create a new social and political reality out of whole cloth and according to a blueprint that has been imagined in the isolation of the study.  Revolutionary dreams are delusory—and here’s where the distinction from civil war breaks down: revolutions always spur civil wars (think of the French, Russian, Chinese, even Irish, revolutions).  From the Utopian heights of revolutionary dreams are born the more mundane, but usually horrific, war of factions that is civil war.

What Koselleck comes close to saying, but never quite does, is that civil wars are reactionary (initiated by a faction that feels threatened by change and wants to ensure that established privileges and possessed power/property are not undermined) while revolutions stem from the dispossessed, those at the bottom in current arrangements.  So, if revolutions cause civil wars, it is because those currently on top won’t go down without a fight.

The American case seems more like Spain of the 1930s than either France in 1792 or Russia in 1921.  Our nineteenth-century civil war was instigated by reactionaries who felt slavery was threatened even as the North (and Lincoln) told them slavery was constitutionally protected.  They thought they saw the writing on the wall in the efforts to keep slavery out of new territories and in the growing demographic and economic strength of the North.  So they forced the issue long before they needed to if their aim was to preserve slavery—and foolishly started a war they could not win (except if they could convince the other side not to fight it).

The similarity to 1930s Spain stems from the fight of 1861 being waged against a legitimately elected government—and from its being made in the name of anticipated horrors that that government would enable, not anything that government had actually done.  And our current situation feels the same.  What did Obama’s government do that would justify the belief that a Democratic administration would be such a disaster that it must be fought at every level—to the point of overturning election results to ensure that only Republicans take office?  [“Fought at every level” is meant to include: a congressional veto by Republicans on any measure, no matter how anodyne, that Democrats initiate; endless trumped-up congressional investigations of supposed malfeasance on the part of the administration; attempted judicial nullification (sometimes successful) of any measures Democrats do succeed in establishing; rabid right-wing promulgation in various media (talk radio, cable news, on-line channels) of baseless accusations; aggressive gerrymandering and other ways of distorting actual voter preferences; the list could go on.]

As with the behavior of the kleptocrats, the question becomes where the tipping point will be reached.  We have the obvious political crisis of Republicans putting the machinery in place to steal elections.  Will they, by these “legal” means (since passed by state legislatures and unlikely—it would seem—to be overturned by the Supreme Court), install themselves in power without triggering a vehement response?  In other words, will they be able to achieve a bloodless coup—avoiding civil war?  Or will the power grab generate a more dramatic response?

Back to crisis.  We have, then, three sets of unresolved problems: 1. The threat to the democratic peaceful transfer of power from one faction to another. 2. The undermining of government as the agent of collective decision-making and action, thus leaving the powers that are transforming our society in the hands of private actors, including corporations, philanthropic foundations, and the like. And 3. The inability to address the looming problems of climate change, destruction of the commons, economic inequality and precarity, and the gap between whites and people of color. 

The reactionaries who are bringing us to the brink of civil war by undermining democracy (number 1 in the paragraph above) do so in service of resolutely ignoring numbers 2 and 3—the crippling of government’s power to act effectively and the continued refusal to face up to looming problems.  In fact, they want to hasten the crippling of government, and they, at times, aggressively want to exacerbate racial tensions/inequities, along with enabling the kinds of economic practices that increase inequality and precarity.  And they have no desire to acknowledge or do anything about climate change.

All of this in a world where the dream of revolution seems entirely dead.  The left is adrift because it cannot imagine—or find a way to work toward—effective collective action.  But that’s a topic for another day—and another post.

Change, Violence, and Innocence

Two passages from two different novels by Salman Rushdie.

The first from Quichotte:

“After you were badly beaten, the essential part of you that made you a human being could come loose from the world, as if the self were a small boat and the rope mooring it to the dock slid off its cleats so that the dinghy drifted out helplessly into the middle of the pond; or as if a large vessel, a merchant ship, perhaps, began in the grip of a powerful current to drag its anchor and ran the risk of colliding with other ships or disastrously running aground.  He now understood that this loosening was perhaps not only physical but also ethical, that when violence was done to a person, then violence entered the range of what the person—previously peaceable and law-abiding—afterwards included in the spectrum of what was possible.  It became an option” (339).

The psychology of violence, how it can be committed and why so many turn to it, has been a puzzle I have returned to again and again over the past forty years, without ever getting anything close to a solution that satisfied me.  That violence is contagious seems indisputable; that people become inured to violence is demonstrated by the behavior of soldiers in wartime; that much violence stems from an enraged self-righteousness also seems true.  But what has eluded me is how one commits the act itself—the plunging of the knife into another’s body, the pulling of the trigger of the gun whose barrel sits in one’s own mouth.  That seems non-human, which is perhaps why violence is so often figured as bestial but also divine (Charles Taylor’s “numinous violence”).  I don’t say Rushdie’s thought here is the answer, but it seems very shrewd to me, focusing in on the dehumanization that underlies the ability to act violently, while also highlighting the ways in which violence is done by those to whom violence has been done.  A curse handed down in various ways through time.

The second from The Golden House:

“When I looked at the world beyond myself I saw my own moral weakness reflected in it. My parents had grown up in a fantasyland, the last generation in full employment, the last age of sex without fear, the last moment of politics without religion, but somehow their years in the fairy tale had grounded them, strengthened them, given them the conviction that by their own direct action they could change and improve the world, and allowed them to eat the apple of Eden, which gave them the knowledge of good and evil, without falling under the spell of the spiraling Jungle Book Kaa-eyes of the fatal trust-in-me Snake.  Whereas horror was spreading everywhere at high speed and we closed our eyes or appeased it” (188).

I always want to resist narratives of lost innocence—or of ancestors whose strength and virtues we cannot hope to reproduce. Lost innocence is in many ways the favorite American narrative, and it will play us as false as narratives about a lost greatness.  Yet Rushdie’s list of what we have lost resonates with me.  I graduated from college in 1974, into the gas crisis recession that started the ball rolling away from full employment and toward endless, inescapable precarity.  I turned 30 in 1983, just as AIDS appeared over the horizon and put an end to the promiscuity of the 1970s, my 20s.  And the emergence of the religious right in the Reagan triumphs of the 1980s was a shock to those of us who had assumed we lived in the secular world of the modern.  In short, the three things Rushdie lists were actual and momentous changes, registered (at least by me) in the moment.  The kind of thing that history throws at you—and you discover you are powerless to thwart.

To discover one’s powerlessness is to lose a kind of innocent optimism, a faith that things can be made better.  But let’s not get carried away in either direction.  Life in 1955 America was terrible for blacks and gays, as J. Edgar Hoover and Joseph McCarthy reigned.  That things in 2022 are better for blacks and gays is the result of hard, persistent political work.  We are never fully powerless, even if we are also never in the clear.  The forces of reaction are never annihilated.  The gains made yesterday can always be lost tomorrow.  The struggle is never decided once and for all in either direction.  2022 is better for gays and blacks than 1955, but in various ways worse than 2012.  

And it didn’t take the 1980s to teach us lessons in powerlessness.  The US waged a pointless and cruel war in Vietnam that millions of protesters were powerless to stop.  Nothing seemed capable of knocking the military-industrial complex off its keel, and the logic of doubling down on bad decisions, of not losing face, led the government to lie, spy on domestic dissenters, and pile violence upon violence.  History’s imperviousness to efforts to divert its floods coming at us from upriver is always ready to humble naïve political projects and hopes.

Still, it is important to note changes.  It is not just the same old same old.  Plus ça change and all that shrugging-of-the-shoulders cynicism never has an accurate grasp of facts on the ground.  The terms of the struggle shift.  To take just one example: capitalism today is not the same as capitalism in 1955 or even 1990.  It is organized very differently, while the alignment of forces for and against various of its manifestations has also shifted dramatically.  Similarly, the obstacles blacks face in America today are very different from those they faced in 1955, and somewhat different from those faced in 1990.

So I think Rushdie does name three crucial things that did change in my lifetime, as someone who was just a bit too young to really live through the 60s (I was a freshman in high school in 1967), and for whom the 1970s and early 1980s were the truly formative years, the time of my coming into my own, picking my head up and actually getting a view of how this world I was entering was configured.  The loss of economic security was evident immediately in the way I and my classmates navigated the years after college.  No security assumed; it was going to be dog eat dog.  And the glee with which Reagan and his ilk embraced that inhuman and dehumanizing competition was appalling.  Especially when that cruelty was wrapped in the pieties of a Christianity that saw the sufferings of the poor as their just deserts.

I was mostly a bystander to the promiscuity—both hetero- and homo-—of the 70s.  But a bystander in fairly close proximity to both of those worlds.  Some of it was tawdry, some of it exploitative (the abuse of unequal hierarchical relationships was rampant).  But there was also a joyousness that has been lost.  Not having sex always be a serious business has things to recommend it.  All the studies indicate that young people today (caught in the ever more insecure world of precarity) are having much less sex than my generation did at their age.  And I really can’t see that as a good thing.  Sex under the right conditions is one of the great goods of life.  It is a mark of our human perversity that we can also manage to turn it to evil so often and (apparently) so effortlessly.

When Rushdie’s narrator contemplates his parents’ faith that humans are moral and by striving can make a better world, he ends up demurring:

“And they were wrong.  The human race was savage, not moral.  I had lived in an enchanted garden but the savagery, the meaninglessness, the fury had come in over the walls and killed what I loved most” (152).

This is Rushdie’s valediction to a certain form of hopeful liberalism, a form he thinks was only made possible by the Trente Glorieuses, those thirty halcyon years (ignoring Vietnam, Korea, and the violences of decolonization) in the West following the second World War. I, of course, still want to hold on to that hopeful liberalism, to its vision of a social democracy that does its utmost to deliver to all a life now reserved to the privileged.

Rushdie’s narrator’s viewpoint is echoed in one of the book’s epigraphs, which itself echoes the currently fashionable academic preoccupation with ways of living in the ruins.  The passage is taken from D. H. Lawrence’s Lady Chatterley’s Lover: “Ours is essentially a tragic age, so we refuse to take it tragically.  The cataclysm has happened, we are among the ruins, we start to build up little habitats, to have new little hopes.  It is rather hard work: there is now no smooth road to the future: but we go round, or scramble over the obstacles.  We’ve got to live, no matter how many skies have fallen.”

Giving in to a notion of a necessarily ruined world, about which we can do nothing except try to carve out a “little habitat,” a way to keep on keeping on, seems defeatist to me.  But forging such a separate peace is also deeply alluring since the general madness and cruelty are so relentless and so resistant to alteration.


It is hard, but not impossible, to disentangle the aesthetic from the meaningful.  Clearly, aestheticism tries to drive a hard boundary between what is aesthetic and what conveys meaning.  But since the aesthetic always entails a relationship between a perceiver and the thing perceived, it seems “natural” (i.e. to occur almost automatically and seemingly of its own accord, unwilled) to ascribe some significance to that relationship.  When the thing perceived is itself “natural” (i.e. not human made, but—for example—a mountain landscape), we get the kinds of “oceanic” sensations of harmony or of the self melting into the non-self that are associated with romanticism.  When the thing perceived is human made, an artifact, it is difficult not to view it as an act of communication.  This thing is offered or presented by one human to another—and we presume that the offering has some meaning, is thought of as being significant.

Meaning and significance are not exactly equivalent.  I can discern the meaning in a banal sentence, but deem it insignificant.  When the artist presents something to an audience, she (it would seem) is making an implicit (and sometimes explicit) claim of significance.  This is worth paying attention to.

On what grounds can that claim to significance, importance, be made?  Either the artist claims to have something important to say, some message we need to hear.  Or the artist is offering a valuable experience.  Now that word “valuable” has snuck in.  “Meaning” and “significance” are synonyms when they refer to sense (i.e. what does that sentence mean; what does that sentence signify); but then both words move from reference to the sense something (a sentence, an event) makes to intimations of “worth,” of importance, of value.  Something is meaningful as opposed to trivial or meaningless; something is significant as contrasted to not worth paying any mind. 

The aesthetic, then, seems pretty inevitably engaged in pointing to something or some event as worthy of our attention—and then has to justify that pointing in terms of importance or significance or meaningfulness.  It seems a very short leap from that kind of justification to making claims about what does or should hold value for us as human beings living a life.  It seems difficult to avoid some kind of hierarchizing here.  These activities or these insights are valuable; they contribute to leading a good or worthwhile life.  Those activities and beliefs are, at best, a waste of time, or, at worst, pernicious.

Yes, certain modern artists (although far fewer of them than one might suppose) wanted to get out of the value game.  But it was awfully hard to present your art work in a “take it or leave it” way, utterly and truly indifferent as to whether anyone found it worthy of attention.  A sense of grievance, a denunciation of the philistines, is much more common when the artist fails to find an audience.  People have bad taste, have a misguided sense of what is valuable and should be valued.

I suspect that even as the arts were trying in the early 20th century to escape meaningfulness, to simply offer experiences that were their own end and carried no message, the humanities were going in the opposite direction.  The humanities are devoted to uncovering the meanings of cultural artifacts and events.  This is partly because the humanities are an academic pursuit—and thus tied to models of knowledge that were developed in reference to the “hard” sciences.  Just as science should explain to us natural events, the humanities should explain cultural ones.

But, as people (like Dilthey) quickly noted, scientific explanations were causal.  It was not very clear how the explanations offered by the humanities could (or even should) be causal.  You could say that, as a communicative act, the art work causes the audience to receive a certain meaning.  And certainly that kind of approach to the problem of meaning has figured fairly prominently in the philosophy of language and in certain forms of literary theory.  So, for example, the philosophers struggle mightily with metaphor and irony because they undermine any kind of direct mapping of semantic sense to conveyed meaning.  And then someone like Wayne Booth, a literary theorist, comes along and tries to provide a list of the textual markers that allow us to see when irony is being deployed.  Vague terms like “tone,” “implication,” and “connotation” are trotted out—and interpretation (even when given a jargony, snazzy name like “hermeneutics”) quickly begins to seem too seat-of-the-pants, too ad hoc, to really qualify as science.

The alternative is to try to explain how and why “interpretation” is different from “explanation.”  For starters, interpretation is not trying to explain how this thing we are perceiving was produced.  (There are other branches of humanistic inquiry that do try to answer that question.)  Interpretation is trying to suss out the meaning conveyed to the perceiver.  The movement, we might say, is forward not backwards.  The interest is not in the causes of this artistic artifact or event, but in its effects. 

I don’t want to get into the tangles of trying to differentiate “explanation” from “interpretation.”  This is mostly from cowardice.  I do think there is a methodological distinction to be drawn between the sciences and the humanities, but I have not been able to draw that distinction in a way that is even minimally plausible or satisfactory.  So I have nothing ready for prime time on that topic.

Instead, I want to end this post with two observations.  The first is that the humanities, I think, are always pulling art works back into the realm of the meaningful even in cases where the artists themselves are determined to escape the nets of meaning.  In such cases, the humanities will often then give us the meaning of the attempt to escape meaning.  And it is worth adding here that history is one of the humanities when it considers the effects of events as opposed to trying to trace the causes of events.  That history is pulled toward both causes and effects is why it is often considered one of the social sciences.  But, then again, it would be silly to say the natural sciences don’t, at times, pay attention to effects as well as causes.  And, as I have already said, some branches of literary criticism (although not very prominent) do attend to causes.  So the difference here can’t be grounded on whether causes or effects are the focus.  (This is a taste of the muddle I am in about these things.)

Instead, perhaps the key difference is meaningfulness itself.  The natural scientist tracing causes and effects of a natural process does not have to assign that process meaningfulness apart from what transpires.  But the humanities, it seems to me, always consider the further question: how is this event or object meaningful for some group of humans?  The “uptake” by some human community is almost always part of the humanities’ account of that event/artifact; that community’s paying attention to and its ways of elaborating, playing out, its relationship to the event/artifact and to the humans involved in the making of that event/artifact, is a central concern for the humanities.

The second point need not be belabored since I have already made it above.  It seems to me only a short step (and one almost impossible not to make) from considering the meanings that people have made of an event or an artifact to considering what things are or should be valuable.  At the very least, the humanities declare: this is what these people valued.  But to look at what they valued is to think about what can have value, and to consider what I value.  Furthermore, for many devotees of the humanities, that reflection on values is precisely what is valuable about novels, historical narratives, anthropological investigations. 

This interest in questions of value can be formal or substantive.  I think most teachers of literature (just to stick to that limited domain for the moment) pursue both.  They are committed to what usually gets called “critical thinking,” which means a mode of reflection on received ideas and values, a way of questioning them in order to examine what I will still believe after doing that reflecting.  The examined life and all that.  But literary works often advocate for specific substantive values: sympathy, justice, the alleviation of suffering and/or of inequalities (or, on the conservative side, reverence for tradition and established authority).  And teachers often choose to have students read works that promote values the teacher values. 

I don’t think the humanities can get out of the values business (even if some of the arts can).  There is no fact/value divide in the humanities—and, thus, the humanities are going to be embroiled in endless controversies so long as values themselves are a site of dispute.  You can’t, I believe, wipe clean the substantive bits of the humanities, leaving only a formal method that has no concrete implications.  As current controversies demonstrate, “critical thinking” and “open-mindedness” are themselves deemed threatening in certain quarters because they imply that various sacred cows are not sacred, are open to dissent.  Any approach that refuses to take things on authority is suspect. The arts may (although usually don’t) sidestep issues of authority by just saying this is one person’s take on things—and you can ignore it as you wish.  But the humanities don’t have that escape route; they are committed to the view that only things and beliefs that have been examined are worthy of authority and credence—and they, in their practice, are inevitably involved in considering which things, activities, and beliefs—which accounts of what is meaningful and what is valuable—one should adopt. Every formal method, in other words, has substantive consequences, so formalism of any sort is never going to be value-free.