Attention Deficits

I am, it seems, about to embark on a long, convoluted journey into the mysteries of meaning.  I’ve been mulling over this topic for some years now, but didn’t think I was going to write another book.  But it seems that I am.  I have agreed to give two talks next year that will force me to get my thoughts on the subject into some kind of coherent form.  Basically, I want to distinguish questions of meaning from questions of causation/explanation—and make the old-time Dilthey case that the humanities and the arts are more inclined to investigate questions of meaning.  But this all involves actually thinking through “the meaning of meaning.”  To that end, I have just started reading C. K. Ogden and I. A. Richards’s classic of that title.  My pole stars in this investigation will be, to no surprise, the pragmatists and Wittgenstein.

So the William James interest in “attention” will be one focus.  What do we attend to, what do we note, in any situation?  Clearly there is always “more” to be seen and taken in than any single observer manages to process.  What we attend to would seem to have some connection to what we find meaningful.  We notice those things we are predisposed to notice, which is a way of defining our “interests,” of identifying what are matters of concern and care to us, as contrasted to things of indifference.  (We are highly likely to notice things that inspire hostility or disgust, so it is the intensity of the engagement, not its positive or negative valence, that seems determinative here.)

Psychology since James’s day has paid a lot of attention (pun intended) to the oddities and pathologies of what we notice and what we fail to perceive.  I grabbed this little survey of some of that work from the academic blog Crooked Timber—and lodge it here because I will want to chase down its links at some future date.

“If stereotypes are the cause, why don’t we just eradicate them? Stereotypes arise, in part, because they must.  They belong to a broader category of cognitive attention biases which arise because we simply cannot pay attention to all of the particulars.  We take shortcuts. We bin people into categories.  We lump to live.  That lumping may be the result of rational calculations – it’s not worth our time to consider every particular (rational inattention bias). It may be that we lack the cognitive capacity (behavioral attention bias) or the time or experience to draw precise inferences (categorical cognition bias).   Regardless of the cause, we could not navigate the world without categorizing reality and therefore stereotyping.”

From Scott E. Page, Stephen M. Ross School of Business, University of Michigan, and the Santa Fe Institute; taken from a blog post on Crooked Timber, August 14th, 2019.

Plus Ça Change . . .

Offered without comment.  From Flaubert’s 1869 Sentimental Education (the Penguin edition of 1964, translated by Robert Baldick).

“‘All the same,’ protested Martinon, ‘poverty exists, and we have to admit it.  But neither Science nor Authority can be expected to apply the remedy.  It is purely a matter for individuals.  When the lower classes make up their minds to rid themselves of their vices, they will free themselves from their wants.  Let the common people be more moral and they will be less poor!’

According to Monsieur Dambreuse, nothing useful could be done without enormous capital.  So the only possible way was to entrust, ‘as was suggested, incidentally, by Saint-Simon’s disciples (oh, yes, there was some good in them!  Give the devil his due) to entrust, I say, the cause of Progress to those who can increase the national wealth.’ Imperceptibly, the conversation moved on to the great industrial undertakings, the railways and the mines” (238).

“Most of the men there had served at least four governments; and they would have sold France or the whole human race to safeguard their fortune, to spare themselves the slightest feeling of discomfort or embarrassment, or even out of mere servility and instinctive worship of strength.  They all declared that political crimes were unpardonable. . . . One high official even proclaimed, ‘For my part, Monsieur, if I found out my brother was involved in a plot, I should denounce him!’” (240).

Legacy

Some years back, when I was planning to step down as director of UNC’s Institute for the Arts and Humanities, someone asked me what I wanted my legacy to be.  My predecessor in the job (and the founder of the Institute), Ruel Tyson, who died recently at the age of 89, and whose funeral was this week, was very legacy conscious.  He wanted his name associated with the Institute—and cared deeply about the direction the Institute, and the University more generally, took even after his retirement.  He took pains not only to continue being involved with the Institute, but also to put into writing materials relevant to the Institute’s history and to its ongoing evolution.

Ruel’s death has me dwelling on such things, along with Martin Hägglund’s assertion in This Life (Pantheon Books, 2019) that “It is a central feature of our spiritual life that we remember the dead, just as it is a central feature of our spiritual life that we seek to be remembered after our death. This importance of memory—or recollection—is inseparable from the risk of forgetting.  Our fidelity to past generations is animated by the sense that they live only insofar as we sustain their memory, just as we will live on only insofar as future generations sustain the memory of us” (181-82).  Elsewhere he states baldly: there is “no afterlife apart from those who care to remember” us (167).  And continues: “The death of the beloved is irrevocable—it is a loss that cannot be recuperated—since there is no life other than this life” (167-68).

I fully believe that there is no life other than this life.  But I find myself uninterested in, unattached to, the idea of an afterlife in the memories of others.  Why should I care?  I will be beyond caring.  I have never thought of the books I have written as messages sent to some future.  I wrote them to address my contemporaries and desired a response from those contemporaries: to stir up their thoughts, to change their minds, to win their praise.  I wanted to be part of the conversation of my time—not a part of some conversation from which I was absent because dead.  Similarly, in my work at the university, I wanted to enable all kinds of intellectual adventures for my colleagues and students.  I was not at all focused on building the conditions for things that would happen after I was gone.  The here and now was all.

Yes, I care about my children’s lives—and the world they will inherit to live those lives in.  I want to give them the wherewithal to have good lives.  That wherewithal involves money, but plenty of other things, all I hope given as a gift of love.  But my children are not my “legacy.”  They are people with their own lives, albeit people I care deeply about.

I certainly don’t think of them as under any obligation to remember me or (worse) to memorialize me.  I have little filial piety myself (a fact I intend to ponder as I go along), but am made more uneasy by the thought of my children having filial piety than I am by their lacking it.  (Only a coincidence that I am writing this on Father’s Day, a day not celebrated in our household.)  I want my children’s love, not their reverence or piety.  And I want them to take the gifts I have given them (of all sorts) for granted, as the daily and completely unexceptional manifestation of a love that is like the air they breathe, simply an unquestioned fact of daily existence, sustaining but unremarkable.

It is not simply that I will be dead—and thus in no position to know that I am being remembered or to care.  It is also that memory is abstract.  It is leagues away from the full experience of being alive, in all its blooming, buzzing confusion, its welter of emotions, desires, hopes, and activities.  Those who knew you while you were alive know some of that concrete you, but soon enough you are just a name on a family tree, with only the slightest hints, some bare facts, maybe a photograph, maybe some letters, suggesting the actual person.  Such attenuated selfhood is nothing like life—and holds no appeal to me.  It seems a mug’s game to care about that time to come—just another way of not attending to the present, of not focusing on “this life.”

Jane and I went to another funeral yesterday (after Ruel’s on Tuesday).  This memorial service was for a lovely man who taught both of our children at the Quaker high school they attended.  Conducted as a Quaker meeting, the service gave us 90 minutes of people sharing their memories of Jamie—and those memories did capture him to a remarkable degree.  He was concretely witnessed and imagined in what people had to say.  His lived reality, his personality, was caught and conveyed.

Now that is a memory process I can endorse; we were all filled with his spirit during and after those ninety minutes.  But I don’t think it helps—and I don’t find myself wishing—to claim that this kind of specific memory will last for more than ten or so years, or to think that even the fullest kind of memory counts in any way as a satisfactory afterlife.  To paraphrase Woody Allen (this catches the spirit, though nowhere close to the letter, of his comment): I want the kind of afterlife where I am worrying about the mortgage that is due next Tuesday and what gift to buy for my beloved, whose birthday is next week.  In short, not an afterlife, but this life is what I want.  And I am perfectly happy to let the time after my death take care of itself, without its occupants feeling any obligation to keep me in mind.

The United States and the History and Fate of Liberal Democracy

I have just finished reading Sheri Berman’s Democracy and Dictatorship in Europe: From the Ancien Régime to the Present Day (Oxford University Press, 2019).  For much of the book, I was disappointed by what Berman has to say.  She lays out the histories of France, Britain, Germany, Italy, and Spain (with a more truncated account of the Eastern European countries of Poland, Hungary, and Czechoslovakia) to describe their transition to liberal democracy (or failure to make that transition) from their starting points: monarchical dictatorship in the cases of France, Britain, and Spain; non-statehood in the cases of Germany and Italy; and the muddled, colonized situations of Poland, Hungary, and Czechoslovakia.  The disappointment came from the fact that she offers non-revisionist history in what, even in a long book of 400-plus pages, must necessarily be fairly quick narratives of each country’s story.  It is nice to have all of this history within the covers of a single book, but I learned nothing new.  And the stories told are so conventional that I found myself suspicious of them.  Surely more recent work (my knowledge base for this material is at least twenty years old) has troubled the received accounts.

But Berman’s final chapter takes her story in a different direction.  She develops what has been hinted at throughout her narratives: a set of enabling conditions for the achievement of liberal democracy.  Basically, she sees six types of governments in European nation-states since 1650: monarchical dictatorship (Louis XIV; attempted unsuccessfully by the Stuart kings in England); military (conservative) dictatorship (Franco, Bismarck, other more short-lived versions; Napoleon Bonaparte’s rule is, in certain ways, a liberal military dictatorship, thus rather different); fascist dictatorship (Italy and Germany; crucially not Franco); totalitarian communism (Eastern Europe after WW II); illiberal democracy (Napoleon III, Berlusconi, Hungary and Poland right now); and liberal democracy.

Today, it seems clear, illiberal and liberal democracy are pretty much the only games in town, at least in what used to be called the First World.  Military coups and their follow-up, military dictatorships, are still possibilities, especially outside of Europe, but not all that likely in Europe.  More ominous, perhaps, are the authoritarian regimes now in place in Russia and China—regimes that don’t fit into the six types listed above and represent some kind of new development that responds to the aftermath of disastrous totalitarian communist regimes.  Again, the appearance of such regimes in Western Europe seems unlikely, although they remain a real possibility in Eastern Europe and one is perhaps already installed in Turkey.

Here’s Berman on what makes a democracy “liberal.”  “[L]iberal democracy requires governments able to enforce the democratic rules of the game, guarantee the rule of law, protect minorities and individual liberties, and, of course, implement policies.  Liberal democracy requires, in other words, a relatively strong state.  Liberal democracy also requires that citizens view their government as legitimate, respect the democratic rules of the game, obey the law, and accept other members of society as political equals.  Liberal democracy also requires, in other words, a consensus on who belongs to the national community—who ‘the people’ are—and is therefore entitled to participate in the political process and enjoy the other rights and responsibilities of citizenship.  Reflecting this, throughout European history liberal democracy—but not illiberal or electoral democracy—has consolidated only in countries possessing relatively strong states and national unity” (392).

Berman thus insists that liberal democracy is dependent upon the nation-state—where a shared sense of national identity underwrites (makes possible) the existence of a strong central state.  There are three major obstacles to the achievement of national unity: regionalism, ethnic differences, and the “old order.”  For the most part, Berman focuses on the “old order.”  She adopts Eric Hobsbawm’s assertion that “since 1789 European and indeed world politics has been a struggle for and against the principles of the French Revolution” (49 in Berman).  For Berman, that means that the old order which straightforwardly granted “privileges” to a certain segment of society (the aristocracy and the clergy in ancien régime France) must be destroyed to create the political equality of full participation and the general equality before the law that are the sine qua non of liberal democracy.  The story of European history since 1650 is of the very slow destruction of the old order—and of the ways that elites fiercely resisted the movement toward democracy and toward liberalism.  (Crucially, democracy and liberalism are not the same and do not inevitably appear together.  Napoleon Bonaparte arguably was a liberal dictator, whereas his nephew Louis Napoleon was an illiberal democratic leader.)

A key part of that story is Berman’s claim that the “sequencing” of the moves toward democracy is crucial to actually getting there.  Three things must happen: 1. a strong central state must be created, which means breaking the power of the regions and of local elites and, crucially, creating institutions that can govern the whole territory; 2. a strong sense of national identity (again opposed to more local loyalties) must be created; and 3. building upon that strong state and strong sense of shared identity, liberal democracy can be securely established.  Berman notes that in post-colonial situations, where the new state begins without possessing a strong central government or a strong sense of national identity, the attempt to establish liberal democracy almost never succeeds.  Doing all three things at the same time is just about impossible.

“European political development makes clear, in short, that sequencing matters: without strong states and national identities, liberal democracy is difficult if not impossible to achieve.  It is important to remember, however, that regardless of how sequencing occurred, there was no easy or peaceful path to liberal democracy.  The difference between Western and Southern and East-Central Europe was not whether violence and instability were part of the back-story of liberal democracy, but when and over how long a period they occurred.  In Western Europe state- and nation-building were extremely violent and coercive, involving what today would be characterized as colonization and ethnic cleansing, that is, the destruction and absorption of weaker political entities into stronger ones (for example, Brittany, Burgundy, and Aquitaine into France; Scotland, Wales, and especially Ireland into Britain) and the suppression or elimination of traditional communities, loyalties, languages, traditions, and identities in the process of creating new, national ones.  But in much of Western Europe these processes occurred or at least began during the early modern period (but not, notably, in Italy or Germany), and so unlike Southern and Central Europe, Western Europe did not experience the violence and coercion associated with state- and nation-building during the modern era at the same time the challenge of democratization appeared on the political agenda.  By the nineteenth century in France and England, and by the second half of the twentieth century in the rest of Western Europe, states were strong and legitimate enough to advance nation-building without overt coercion but instead via education, promoting national culture, language, and history, improved transport and communication networks, and by supporting a flourishing civil society within which potentially cross-cutting cleavages and networks could develop, strengthening the bonds among citizens” (394-95).  East-Central Europe did not have this long time span—and had to cram all three projects (state building, nation building, and democratization) into the same period, which makes success much less likely (where success is establishing a stable liberal democracy).

Berman also argues that, in the aftermath of World War II, Western Europe adopted “social democracy” (aka the welfare state) in order to demonstrate the state’s commitment to the well-being of all its citizens after the sacrifices of the war and the sufferings of the depression.  National solidarity, she argues, is heightened by this responsiveness of the state to the needs of all its citizens—an antidote to the 1930s conviction in much of Europe that liberal regimes could not protect citizens from the depredations of capitalism.  She quotes Henry Morgenthau, American Secretary of the Treasury, in his opening remarks at the 1944 Bretton Woods Conference: “All of us have seen the great economic tragedy of our time.  We saw the worldwide depression of the 1930s . . . . We saw bewilderment and bitterness become the breeders of fascism and finally of war.  To prevent a recurrence of this phenomenon, national governments would have to be able to do more to protect people from capitalism’s ‘malign effects’” (Berman, 284).  Berman is a firm believer in Habermas’s “constitutional patriotism”; she thinks that national solidarity is best reinforced by a welfare state that extends benefits and protection to all its citizens.  (See pages 296-297.)  She also is a strong proponent of “the primacy of politics” (the title of her excellent earlier book, which I discussed in this blog post), meaning that governments should take management of the economy as one of their essential political projects.

How might all this relate to US history?  It certainly offers an interesting way to think about the American South.  To even create a national state, the South had to be granted the privilege of continued slavery.  Without slavery, there would have been no United States in 1787.  The founder of my university (the University of North Carolina), William Davie, is recorded as speaking only once at the Constitutional Convention.  “At a critical point in the deliberations, however, William Davie spoke up for the interests of the Southern slaveholders. In his pivotal statement, Davie asserted that North Carolina would not join the federal union under terms that excluded slaves from being counted for representation. Unlike other Southern delegates, Davie was flexible and willing to negotiate, because he was committed to the realization of the union. Indeed, once the three-fifths compromise was reached, Davie became an enthusiastic advocate of the United States Constitution. He spent two years campaigning for the document’s ratification.” (Source)

Hence slavery was akin to the privileges (the bribes) French kings had to grant the nobility in order to create a strong central French state.  Similarly, the regions (i.e. the separate colonies) had to be granted the privilege of equal representation in the Senate in order to yield sovereignty to the national government.  Thus the American state was compromised from the start.  It took violence to end slavery, and then the South was bribed again in the aftermath of the Civil War when a blind eye was turned to Jim Crow.  The elites of the South, in other words, never had to submit to democratization; they barely had to maintain any kind of national allegiance or identity.  The South was allowed to go its own way for the most part.  Yet the Dixiecrat South, because of the Senate, held the balance of power in Roosevelt’s New Deal, guaranteeing that the first steps toward social democracy in the US were not open to all citizens.  Blacks were excluded from most of the New Deal programs.  The non-democratic Senate (made even less democratic by its extra-constitutional adoption of the “filibuster”) served anti-democratic elites well.

Arguably, World War II created a stronger sense of national identity through participation in a mass army.  (The war, of course, also made the federal government immensely bigger and stronger.)  That mass participation opened the way toward the civil rights movement—both because the national government felt more secure in its power and because the justice of rewarding blacks for their military service appealed strongly to Harry Truman (among others), even as service overseas gave black veterans a taste of dignity and freedom.  It is not an accident that the first significant integration mandated by the national government was of the military (by Truman in 1948).

It is also no accident that Strom Thurmond ran against Truman in the 1948 presidential election, winning four Southern states and beginning the slow process of the South moving from being solidly Democratic to becoming solidly Republican.  Even though Republicans (the party of Lincoln) were crucial to the passage of the 1964 Civil Rights Act, the party’s presidential standard-bearer in that year was Barry Goldwater, who opposed the civil rights bill—and carried the South even as he was defeated in a landslide.  The “Southern strategy” was born.  The long-impotent right-wing opposition to the New Deal could gain power if the national solidarity created by World War II and the welfare state could be overcome by selling a significant portion of the general populace on the notion that welfare was exploited by lazy, sexually promiscuous, and potentially violent blacks.  Throw in fear of communism and a religious-tinged moral panic about “permissiveness” among the unwashed, drugged-out hippies protesting the Vietnam War, and the scene was set for the conservative roll-back of America’s (always less than generous or fully established) social democracy.

American Conservatism from 1964 on was not simply Southern, but took its playbook from the South.  That is (to recall Berman’s list of the requirements of liberal democracy above), the Republican party embraced positions that denied the full equality of all citizens in terms of political participation and demonized the opposition as unfit to govern, as an existential threat to the nation, as not “real” Americans.  The two Democratic presidents post-Reagan were condemned as illegitimate and criminal by the right-wing media and by Republican congresses, with Clinton impeached and Obama subjected to everything from the “birther” fantasies to deliberate obstruction and the refusal to even vote on his Supreme Court nominee.

In short, Berman’s analysis suggests that the South was never integrated into the American nation—and has successfully resisted that integration to this day.  Furthermore, one of the national political parties has allied itself with that Southern resistance, using it to further its own resistance to democracy.  That resistance to democracy has multiple sources, but certainly includes the business elites’ desire to prevent government management of the economy—including environmental regulations, support of labor’s interests against employers, aggressive deployment of anti-trust and anti-discrimination laws, and strong enforcement of financial regulations and tax laws.  Just as the South had to be bribed to even nominally be part of the Union, so the economic elite has also been bribed to accept grudgingly even the attenuated democracy and welfare state in place in the US.  The bribery, we might say, goes both ways; the plutocrats bribe the politicians by financing their campaigns, and the politicians bribe the plutocrats by keeping the state out of their hair.

Berman’s story is that liberal democracy collapses when people become convinced that it cannot serve their needs.  Only “a socioeconomic order capable of convincing its citizens that liberal democracy could and would respond to their needs” (295) stands between us and the illiberal alternatives that offer themselves when liberal democracy appears incapable of delivering the goods. The failures of liberal democracy since 1970 are manifest; its corruption and its slide into plutocracy in the United States are plainly evident.

In the United States today, we live in a cruel society.  The right-wing solution is to say “Yes, life is cruel.  There are winners and losers—and we are offering you a chance to be on the side of the winners, while also giving you a way to justify the fate of the losers.  They are the lazy, the weak-willed (drug addicts), the ungodly, the illegal (criminal or undocumented), or the otherwise unworthy of full citizenship or full compassion.”  The left tries to hold on to the vision of social democracy.  An anti-democratic left is not a strong force in present-day America the way it was in Europe from 1900 to 1935.  The mushy center wants to hold on to existing civil liberties and to the existing rules of the game even as the emboldened right ignores both with impunity.

It is possible that the 2020 presidential election will present a clear choice between a robust re-assertion of social democracy and the divide-and-conquer rightism that also aligns itself with ruthless capitalism.  (We could also get a Democratic candidate like Biden who represents the mushy center.)  I have friends who are convinced that the right will not accept the election results if it loses by a fairly small margin.  I find that scenario implausible; I don’t think the stability of American democracy is that precarious.  But a recent conversation with one friend made me less sure.  And Berman’s book puts the question rather starkly: If the Trumpists refuse to accept the election results, is there enough commitment to liberal democracy to lead to the kind of large-scale public response that would make a coup fail?  Or has faith in liberal democracy been so eroded by its gridlock and its impotence over the past decade (ever since the feeble and inadequate response to the 2008 financial crisis) that the response to another stolen election would echo the shrug of December 2000, when the Supreme Court handed the presidency to Bush?  A scary thought.  But it would certainly seem, in light of the history Berman outlines, that a complacent faith in the persistence of our (even attenuated) liberal democracy is probably unfounded.

Secular Ethics

I am about one-third of the way through Martin Hägglund’s This Life: Secular Faith and Spiritual Freedom (Pantheon Books, 2019), of which more anon.

But I have been carrying around in my head for over seven months now my own build-it-from-scratch notion of ethics without God.  The impetus was a student pushing me in class last fall to sketch out the position—and then the book on Nietzsche’s “religion of life” that I discussed in my last post (way too long ago; here’s the link).

So here goes.  The starting point is: it is better to be alive than dead.  Ask one hundred people if they would rather live than die and ninety-nine will choose life.

A fundamental value: to be alive.

First Objection:

Various writers have expressed the opinion that it is best not to have been born, since this life is just a constant tale of suffering and woe.  Life’s a bitch and then you die.

Here’s Ecclesiastes, beginning of Chapter 4:

“Next, I turned to look at all the acts of oppression that make people suffer under the sun. Look at the tears of those who suffer! No one can comfort them. Their oppressors have all the power. No one can comfort those who suffer. I congratulate the dead, who have already died, rather than the living, who still have to carry on. But the person who hasn’t been born yet is better off than both of them. He hasn’t seen the evil that is done under the sun.”

Here’s Sophocles’ version of that thought, from Oedipus at Colonus:

“Not to be born is, beyond all estimation, best; but when a man has seen the light of day, this is next best by far, that with utmost speed he should go back from where he came. For when he has seen youth go by, with its easy merry-making, [1230] what hard affliction is foreign to him, what suffering does he not know? Envy, factions, strife, battles, [1235] and murders. Last of all falls to his lot old age, blamed, weak, unsociable, friendless, wherein dwells every misery among miseries.”

And here is Nietzsche’s version, which he calls the “wisdom of Silenus” in The Birth of Tragedy:

“The best of all things is something entirely outside your grasp: not to be born, not to be, to be nothing. But the second best thing for you is to die soon.”

Second Objection:

As Hägglund argues, many religions are committed to the notion that being alive on earth is not the most fundamental good.  There is a better life elsewhere—a different thought than the claim that non-existence (not to have been born) would be preferable to life.

Response to Objections:

The rejoinder to the first two objections is that few people actually live in such a way that their conduct demonstrates an actual belief that non-existence or an alternative existence is preferable to life on this earth.  Never say never.  I would not argue that no one has ever preferred an alternative to this life.  But the widespread commitment to life and its continuance on the part of the vast majority seems to me enough to go on.  I certainly don’t see how that commitment can appear a weaker starting plank than belief in a divine prescriptor of moral rules.  I would venture to guess that the number of people who do not believe in such a god is greater than the number who would happily give up this life for some other state.

Third Objection:

There are obvious—and manifold—reasons to choose death over life under a variety of circumstances.  I think there are two different paths to follow in thinking about this objection.

Path #1:

People (all the time) have things that they value more than life.  They are willing (literally—it is crucial that it is literally) to die for those things.  Hence the problem of establishing “life” as the supreme value.  Rather, what seems to be the case is that life is an understood and fundamental value—and that we demonstrate the truly serious value of other things precisely by being willing to sacrifice life for those other things.  To put one’s life on the line is the ultimate way of showing where one’s basic commitments reside.  This is my basic take-away from Peter Woodford’s The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (U of Chicago P, 2018; the book discussed in my last post).  To use Agamben’s terms, “bare life” is not enough; it will always be judged in relation to other values.  A standard will be applied to any life; its worth will be judged.  And in some cases, some value will be deemed of more worth than life—and life will be sacrificed in the name of that higher value.  In other words, “life” cannot be the sole value.

I am resolutely pluralist about what those higher values might be that people are willing to sacrifice life for.  My only point is that an assumed value of life provides the mechanism (if you will) for demonstrating the value placed on that “other” and “higher” thing.  In other words, the fact (gift?) of life—and the fact of its vulnerability and inevitable demise (a big point for Hägglund, to be discussed in the next post)—establishes a fundamental value against which other values can be measured and displayed.  Without life, no value.  (A solecism in one sense.  Of course, if no one were alive, there would be no values.  But the point is also that there would be no values if life itself was not valued, at least to some extent.)  Placing life in the balance enables the assertion of a hierarchy of values, a reckoning of what matters most.

Path #2:

It is possible not only to imagine, but also to put into effect, conditions that make death preferable to life.  As Hannah Arendt put it, chillingly, in The Origins of Totalitarianism, the Nazis, in the concentration camps and elsewhere, were experts in making life worse than death.  Better to be dead than to suffer various forms of torture and deprivation.

I want to give this fact a positive spin.  If the first plank of a secular ethics is “it is better to be alive than dead,” then the second to twentieth planks attend to the actual conditions on the ground required to make the first plank true.  We can begin to flesh out what “makes a life worth living,” starting with material needs like sufficient food, water, and shelter, and moving on from there to things like security, love, education, health care, etc.  We have various versions of the full list, from the UN’s Universal Declaration of Human Rights to Martha Nussbaum’s list of “capabilities.”

“Bare life” is not sufficient; attending to life leads quickly to a consideration of “quality” of life.  A secular ethics is committed, it seems to me, to bringing about a world in which the conditions for a life worth living are available to all.  The work of ethics is the articulation of those conditions.  That articulation becomes fairly complex once some kind of base-line autonomy—i.e. the freedom of individuals to decide for themselves what a life worth living looks like—is made a basic condition of a life worth living.  [Autonomy is where the plurality of “higher values” for which people are willing to sacrifice life comes in.  My argument would be 1) no one should be able to compel you to sacrifice life for their “higher value” and 2) you are not allowed to compel anyone to sacrifice life for your “higher value.”  But what about sacrificing your goods—through taxes, for example?  That’s much trickier and raises thorny issues of legitimate coercion.]

It seems to me that a secular ethics requires one further plank.  Call it the equality principle.  Simply stated: no one is more entitled to the basic conditions of a life worth living than anyone else.  This is the minimalist position I have discussed at other times on this blog.  Setting a floor to which all are entitled is required for this secular ethics to proceed.

What can be the justification for the equality principle?  Some kind of Kantian universalism seems required at this juncture.  To state it negatively: nothing in nature justifies the differentiation of access to the basic enabling conditions of a life worth living.  To state it positively: to be alive is to possess an equal claim to the means for a life worth living.

Two complications immediately arise: 1. Is there any way to justify inequalities above the floor?  After everyone has the minimal conditions met, must there be full equality above that floor?  2. Can there be any justification for depriving some people, in certain cases, of the minimum?  (The obvious example would be imprisonment or other deprivations meted out as punishments.)

Both of these complications raise the issue of responsibility and accountability.  To what extent is the life that people have, including the quality of that life, a product of their prior choices and actions?  Once we grant that people have the freedom to make consequential choices, how do we respond to those consequences?  And when is society justified in imposing consequences that agents themselves would strive to evade?

No one said ethics was going to be easy.  Laws and punishments are not going to disappear.  Democracy is meant to provide a deliberative process for the creation of laws and sanctions—and to provide the results of those deliberations with legitimacy.

All I have tried to do in this post is to show where a secular ethics might begin its deliberations—without appealing to a divine source for our ethical intuitions or for our ethical reasonings.

The Tree of Life

I have just finished reading Richard Powers’s latest novel, The Overstory (Norton, 2018).  Powers is his own distinctive cross between a sci-fi writer and a realist.  His novels (of which I have read three or four) almost always center on an issue or a problem—and that problem is usually connected to a fairly new technological or scientific presence in our lives: DNA, computers, advanced “financial instruments.”  As with many sci-fi writers, his characters and his dialogue are often stilted, lacking the kind of psychological depth or witty interchanges (“witty” in the sense of clever, off-beat, unexpected rather than funny) that tend to hold my interest as a reader.  I find most sci-fi unreadable because too “thin” in character and language, while too wrapped up in elaborate explanations (that barely interest me) of the scientific/technological “set-up.”  David Mitchell’s novels have the same downside for me as Powers’s: too much scene setting and explanation, although Mitchell is a better stylist than Powers by far.

So is The Overstory Powers’s best novel?  Who knows?  It actually borrows its structure (somewhat) from Mitchell’s Cloud Atlas, while the characters feel a tad less mechanical to me.  But I suspect that’s because the “big theme” (always the driving force of Powers’s novels) was much more compelling to me in this novel, with only Gain among the earlier ones holding my interest so successfully.

The big theme: how forests think (the title of a book that is clearly situated behind Powers’s work even though he does not acknowledge it, or any other sources).  We are treated to a quasi-mystical panegyric to trees, while being given the recent scientific discoveries that trees communicate with one another; they do not live in accordance with the individualistic struggle for existence imagined by a certain version of Darwinian evolution, but (rather) exist within much larger eco-systems on which their survival and flourishing depend.  The novel’s overall message—hammered home repeatedly—is that humans are also part of that same eco-system—and that competition for the resources that sustain life, as contrasted to cooperation in producing and maintaining those resources, can only lead to disaster.  Those disasters are not just ecological (climate change and depletion of things necessary to life), but also psychological.  The competitive, each-against-each mentality is no way to live.

I am only fitfully susceptible to mystical calls to experience some kind of unity with nature.  I am perfectly willing to embrace rationalistic arguments that cooperation, rather than competition, is the golden road to flourishing.  And, given Powers’s deficiencies as a writer, I would not have predicted that the mysticism of his book would move me.  But it did.  That we—the human race, the prosperous West and its imitators, the American rugged individualists—are living crazy and crazy-making lives comes through loud and clear in the novel.  That the alternative is some kind of tree-hugging is less obvious to me most days—but seems a much more attractive way to go when reading this novel.

I have said Powers is a realist.  So his tree-huggers in the novel ultimately fail in their efforts to protect forests from logging.  The forces of the crazy world are too strong for the small minority who uphold the holistic vision.  But he does have an ace up his sleeve; after all, it is “life” itself that is dependent on interlocking systems of dependency. So he does seem to believe that, in the long run, the crazies will be defeated, that the forces of life will overwhelm the death-dealers.  Of course, how long that long run will be, and what the life of the planet will look like when the Anthropocene comes to an end (and human life with it?) is impossible to picture.

Life will prevail.  That is Powers’s faith—or assertion.  Is that enough?  I have also read recently an excellent book by Peter J. Woodford: The Moral Meaning of Nature: Nietzsche’s Darwinian Religion and its Critics (University of Chicago Press, 2018).  Woodford makes the convincing argument that Nietzsche takes from Darwin the idea that “life” is a force that motivates and compels.  Human behavior is driven by “life,” by what life needs.  Humans, like other living creatures, are puppets of life, blindly driven to meet its demands.  “When we speak of values, we speak under the inspiration, under the optic of life; life itself forces us to establish values; when we establish values, life itself values through us” (Nietzsche, Twilight of the Idols).

Here is Woodford’s fullest explanation of Nietzsche’s viewpoint:

“The concept that allows for the connection between the biological world, ethics, aesthetics, and religion is the concept of a teleological drive that defines living activity.  This drive is aimed at its own satisfaction and at obtaining the external conditions of its satisfaction. . . . Tragic drama reenacts the unrestricted, unsuppressed expression of [the] inexhaustible natural eros of life for itself. . . . Nietzsche conceived life as autotelic—that is, directed at itself as the source of its own satisfaction.  It was this autotelic nature of life that allowed Nietzsche to make the key move from description of a natural drive to discussion of the sources and criteria of ethical value and, further, to the project of a ‘revaluation of value’ that characterized his final writings.  Life desires itself, and only life itself is able to satisfy this desire.  So the affirmation of life captures what constitutes the genuine fulfillment, satisfaction, and flourishing of a biological entity.  Nietzsche’s appropriation of Darwinism transformed his recovery of tragedy into a project of recovering nature’s own basic affirmation of itself in a contemporary culture in which this affirmation appeared, to him at least, to be absent.  His project was thus inherently evaluative at the same time that it was a description of a principle that explained the nature and behavior of organic forms” (38).

Here’s my takeaway.  Both Powers and Nietzsche believe that they are describing the way that “life” operates.  Needless to say, they have very different visions of how life does its thing, with Powers seeing human competitiveness as a perverted deviation from the way life really works, while Nietzsche (at least at times) sees life as competition, as the struggle for power, all the way down.  (Cooperative schemes for Nietzsche are just subtle mechanisms to establish dominance—and submission to such schemes generates the sickness of ressentiment.)

What Woodford highlights is that this merger of the descriptive with the evaluative doesn’t really work.  How are we to prove that life is really this way when there are life forms that don’t act in the described way?  Competition and cooperation are both in play in the world.  What makes one “real life,” and the other some form of “perversion”?  Life, in other words, is a normative term, not a descriptive one.  Or, at the very least, there is no clean fact/value divide here; our biological descriptions are shot through and through with evaluation right from the start.  We could say that the most basic evaluative statement is that it is better to be alive than to be dead.  Which in Powers quickly morphs into the statement that it is better to be connected to other living beings within a system that generates a flourishing life, while in Nietzsche it becomes the statement that it is better to assume a way of living that gives fullest expression to life’s vital energies.

[An aside: the Nazis, arguably, were a death cult—and managed to get lots and lots of people to value death over life.  What started with dealing out death to the other guy fairly quickly moved into embracing one’s own death, not—it seems to me—in the mode of sacrifice but in the mode of universal destruction for its own sake.  A general auto-da-fé.]

In short, to say that life will always win out says nothing about how long “perversions” can persist or about what life actually looks like.  And the answer to the second question—what life looks like—will always be infected by evaluative wishes, with what the describer wants life to look like.

That conclusion leaves me with two issues.  The first is pushed hard by Woodford in his book.  “Life” (it would seem) cannot be the determiner of values; we humans (and Powers’s book makes a strong case that other living beings besides humans are in on this game) evaluate different forms of life in terms of other goods: flourishing, pleasure, equality/justice.  This is an argument against “naturalism.”  Life (or nature) is not going to dictate our values; we are going to reserve the right/ability to evaluate what life/nature throws at us.  Cancer and death are, apparently, natural, but that doesn’t mean we have to value them positively.

The second issue is my pragmatist, Promethean one.  To what extent can human activity shape what life is?  Nietzsche has always struck me as a borderline masochist.  For all his hysterical rhetoric of activity, he positions himself to accept whatever life dishes out.  Amor fati and all that.  But humans and other living creatures alter the natural environment all the time to better suit their needs and desires.  So “life” is plastic—and, hence, a moving target.  It may speak with a certain voice, but it is only one voice in an ensemble.  I have no doubt that it is a voice to which humans currently pay too little heed.  But it is not a dictator, not a voice to which we owe blind submission.  That’s because 1) we evaluate what life/nature dishes out and 2) we have powers on our side to shape the forms life takes.

Finally, all of this means that if humans are currently shaping life/nature in destructive, life-threatening ways, we cannot expect life itself to set us on a better course.  The trees may win in the long run—but we all remember what Keynes said about the long run.  In the meantime, the trees are dying and we may not be very far behind them.

Money and Babies

Since I got onto money in my last post, I am going to continue that line of thought (briefly).  I worried a lot about money between the ages of 29 and 44 (roughly); it’s strange how hard it is for me to remember my feelings.  Sure, I forget events as well.  But the main outlines of my life’s history are there to be remembered.  What I can’t pull up is how I felt, what I was thinking, at various points.  My sense now that I was somehow not present at much of my life stems from this inability to reconstruct, even in imagination, who I was at any given moment.  I did all these things—but don’t remember how I did them or what I felt as I was doing or even exactly what I thought I was doing.  Getting through each day was the focus, and somehow I made it from one day to the next.  But there was no overall plan—and no way to settle into some set of coherent, sustaining emotions.  It was a blur then and it’s a blur now.

All of which is to say that I worried about money, about the relative lack of it, without having any idea about how to get more of it.  I didn’t even have money fantasies—like winning the lottery or (just as likely) writing a best-selling book.  What I did for a living, including writing the books that my academic career required, was utterly disconnected emotionally and intellectually from the need to have more money.  When I made my first academic move (from the University of Rochester’s Eastman School of Music to the University of North Carolina) the motive was purely professional, not monetary.  I wanted to teach in an English department and be at a university where my talents would not be underutilized.  That it would involve a substantial raise in pay never occurred to me until I got the offer of employment.  And when I went and got that outside offer in order to boost my UNC salary (as mentioned in the last post), it was the inequity of what I was being paid that drove me, not the money itself.  In fact, despite worrying about money for all those years, I never actually imagined having more than enough.  It was as if I just accepted that financial insecurity was my lot in life.  I could worry about it, but I didn’t have any prospects of changing it.

Certainly, my worries did make me into a cheapskate.  And undoubtedly those niggardly habits are the reason we now have more than enough each month.  Habits they certainly are, since at this point they don’t even pinch.  They just are the way I live in the world—and allow me to feel like I am being extravagant when (fairly often now) I allow myself luxuries others wouldn’t even give a second thought.

My main theme, however: the worries about money were utterly separate from the decision to have children.  That this was so now amazes me.  It is simply true that when Jane and I decided the time had come to have children, the financial implications of that decision never occurred to me.  We made a very conscious decision to have children.  Our relationship was predicated, in fact, on the agreement that we would have children.  And when that pre-nuptial agreement was made, the issue of having money enough to have kids was raised.  But four years later, when we decided to have the anticipated child, money was not considered at all.  And when we decided to have a second child after another two years, once again money was not an issue.  I don’t know why not.  Why—when I worried about having enough money for all kinds of other necessities—did I not worry about having enough money to raise our two children?  That’s the mystery.

I have no answer.  And I can’t say if that was true generally for those of us having our children in the 1980s, although it seems to have been true for most of my friends.  On the other hand, as my wife often notes, I do have a fairly large number of married friends (couples who have been together forty years now) who do not have children.  Very likely, a mixture of professional and financial reasons led to their not having kids.

I do, however, feel that financial considerations play a large role now (in the 2010s) in the decision to have children.  That’s part of the cultural sea-change around winners and losers, the emptying out of the middle class, and the ridiculous price of “private” and quasi-private education.  Most conspicuous to me is the increasing number of single-child families among the upper middle class.  Yes, that is the result of a late start for women who take time to establish themselves in a profession.  But it is also an artifact of worrying about the cost of child-care and of education.

I come from a family of seven children.  And my parents were, relatively speaking, less well-off when they had us than Jane and I were when we had our children.  (That statement is a bit complicated since my parents had access to family money in case of emergency that was not there for me to tap.  But, in fact, my parents didn’t rely heavily on that reserve until much later in their lives.)  Was my not following my parents’ footsteps toward a large family financially motivated?  A bit, I guess.  But it really seems more a matter of style—plus the fact that my wife was 34 when she got pregnant with our first.  But even if she had been 24 (as my mother was at her first pregnancy), it is highly unlikely we would have had more than two kids (perhaps three).  The idea was unthinkable by 1987; it just wasn’t done.

It is also hard to see how we could have done it (even though that fact didn’t enter into our thinking).  Certainly, it would have been done very differently.  We paid $430,000 for our two children’s educations: three years of private high school and four years of private university (with a $15,000 scholarship each year) for my son, and four years of private high school and four years of private university for my daughter. And that figure is just the fees paid to the schools; it doesn’t include all the other costs. We would certainly have relied much more heavily on public education if we had more than two children.

Once again, I have no moral to draw.  I am just trying to track what seem to me particularly significant shifts in cultural sensibility.