Category: moral philosophy

Fact/Value Divide

It’s been a long hiatus.  But I want to pick up where I left off.  I have three issues on the table:

1. Cognitive versus non-cognitive theories of art.

2. The very distinction between cognitive and non-cognitive appears to motivate a fact/value divide—as shown most dramatically in the emotivist, non-cognitive theories of ethics/morals developed by the logical positivists in the mid-20th century.

3. I am still angling to get eventually to a consideration of the connection of art to meaning—with the corollary of considering if meaning differs substantially from information and/or causal explanation. On this last point, I am courting, it would seem, my own dichotomy.  I, for the most part, have no commitment to proving the arts “distinctive” in some absolute way.  I don’t feel a need to show that the arts do something that other activities we would not consider artistic do not.  But I do suspect that a focus on or concern with meaning leads in different directions than a focus on explanation.  To explain how hydrogen and oxygen combine to create water says little to nothing about the “meaning” that interaction might have. At least, that’s my intuition.

But today’s post focuses in on #2, the fact/value divide.  I think I am stealing my basic insight here from my friend Allen Dunn, but will follow a path derived from Wittgenstein and Dewey to make my case.

Consider the following sentences, all of which (except the last two) use some form of the verb “to be,” and take the form of assertions.

1.  There is a red house. [The speaker points at a yellow house.]

2.  There is a dog.  [The speaker points at a cat.]

3.   Henry is taller than John.

4.  Abraham Lincoln was the 16th president of the United States.

5.  Abraham Lincoln was the greatest president in American history.

6.  Incest is wrong.

7.  “All men are created equal.”

8.  Hitting your child is wrong.

9.  Moby Dick is the greatest American novel.

10.  Moby Dick is a chaotic mess.

11.  Moby Dick is about a milkman who loses his job.

12.  James has Parkinson’s Disease.

13.  William has prostate cancer.

The usual, intuitive, reasons for believing in a fact/value distinction are 1. The belief that values are human and are added on top of natural facts and 2. The notion that facts, generally, are verifiable and thus non-controversial, while disagreements over values are rife and irresolvable.  It is easy to agree that this copy of Moby Dick has a red cover, but it is harder to reach agreement over the artistic value of Moby Dick.

On number one: my inclination would be to say facts are as human as values—insofar as facts are fabricated (in the ways Bruno Latour’s work has made familiar to us) and that the mobilization of facts, their use in rhetorics of persuasion that aim to achieve agreement, is a human enterprise.  (Let’s leave speculation about the consciousness of animals and plants to one side for the moment.  I am a complete agnostic on this topic.  We learn more and more every year about animal and plant consciousness.  So I do not deny out of hand that these non-human creatures might have their ways of ascertaining facts and, crucially, bringing apprehended facts to bear in subsequent behavior, and communicating with others.)  For humans, the key for me is that facts are understood as pieces of information that have been produced, and then mobilized in processes of deliberation and the formation of individual and collective intentions. 

In other words, once a fact has been fabricated in the Latourian fashion, it then becomes something that is used in making plans and in trying to persuade others to assist with those plans.  Thus, a hurricane is not a fact until it has been made into one by an assemblage of the symptoms and consequences and causes gathered under the name “hurricane” by meteorologists—and then that name (with all that is associated with it) is used (for example) to justify an evacuation order.  In short, on this account, there is no reason to think the creation of values differs significantly from the creation of facts.  Both facts and values are assemblages that bring together various factors to designate something as the case (i.e. hurricanes cause damage; incest is wrong).

What I take from Allen refers to number two, the idea that facts are non-controversial while values generate endless disagreements.  Allen’s point was that we have many value statements that are almost universally accepted.  Very few people insist that incest is just fine.  Far fewer people call incest OK than believe that alien abduction happens.  We cannot sort things into the fact bin and the value bin on the basis of agreement over the truth of fact assertions as contrasted to value assertions.  The American experience of the past four years has, if nothing else, brought the idea that facts are incontrovertible to its knees.

At this point, the temptation is to throw up one’s hands and say “anything goes.”  This is where Wittgenstein and Dewey can prove helpful—even though they will not “solve” the problem of disagreement.  But they can help us think about it more clearly. 

The sentences I offer at the top of this post are Wittgenstein-like.  For sentence one, where a speaker calls a yellow house red, we would first ask him to look again.  If he repeated his assertion, we could only conclude that he is color-blind (and would arrange for him to be tested for that condition), or that he doesn’t understand how the word “red” is applied in English (and would proceed to try to teach him the color terms and their application in English.) 

For sentence two, where the speaker calls a dog a cat, we don’t even have a known medical condition to appeal to.  Now it is simply a matter of telling him that “we” (the speakers of English) call that animal a “dog” not a “cat.”  This looks like sheer compulsion.  There is no underlying reason or fact that justifies using the word “dog” instead of the word “cat.”  It is just the way we do things in English.  The agreement is motivated (perhaps) by its usefulness in facilitating communication, but nothing else underwrites the convention.  For Wittgenstein, reasons stop at a certain point.  This is where my spade turns, he writes.  Reasons come to an end, and there is just the bald statement: this is what we do, this is how we think and act, this is what we believe. 

What Dewey adds to this Wittgensteinian picture is the notion of “warrants.”  Where there are disagreements over an assertion, there are reasons I mobilize in an effort to convince another that my assertion should be credited.  It is important to recognize that the warrants vary widely depending on the nature of the assertion.  The warrants for sentences one and two are, from a positivist point of view, pretty feeble.  The only “verification” is to show that this use of the words “red” and “dog” is actually what English speakers do.  There is no connection to natural facts involved.

When we get to sentence three, Henry is taller than John, we can stand the two boys next to each other.  Here there appears to be a fact of the matter that can be “shown.”  Agreement still depends on both parties understanding the term “taller” in the same way, but there is also (as the positivist sees it) “direct” evidence for the assertion.

Sentence four shows how quickly the positivists’ view of facts falls apart.  There is no “direct” proof that Lincoln was the 16th president.  In a very real way, we must take that fact on faith, placing credence in various documents.  We in 2020 can have no first-hand knowledge of Lincoln having been president.  In part, we take that fact on authority.  But we also believe that fact because questioning it would undermine all kinds of other beliefs we have.  Our beliefs “hang together” to establish a holistic picture of our world and our place in it.  To discredit a single assertion can, in many cases, threaten to unravel a whole web of beliefs.  We are, for this reason, “conservatives” in the matter of beliefs, William James says.  We want to conserve, to not upset the apple cart.  We have to have very strong reasons (of interest, or argument) to give up a settled belief.  Saying this, however, indicates the extent to which we believe what it is “comfortable” to believe—and thus points to the ways we can believe things that, to others, seem to patently disregard compelling evidence to the contrary.  On the other hand, “confirmation bias” points to the ways we credit anything that seems to shore up our current beliefs.  We humans can be remarkably resistant to what others will claim are “the plain facts of the matter.”

Dewey’s notion of “warrants” tells us that what will “count” as evidence will vary from case to case.  The evidence brought to back up the assertion that Lincoln was the 16th president is different from the evidence called up to claim he was the greatest American president.  Of the asking for and giving of reasons there is no end.  But the kinds of reasons offered must be deemed pertinent to the matter at hand. 

Thus, when we get to the statements “incest is wrong,” or “all men are created equal,” we might be tempted to argue in terms of consequences.  There are harmful biological consequences for inbreeding, and there might be harmful social consequences (violence, resentment, various other forms of conflict) from treating some as inferior to others.  But incest was considered wrong long before there was any understanding of genetics, and the harmful consequences of inequality are uncertain.  In the case of incest, there is not much (if any) disagreement.  Certain persons might violate the injunction against it, but they recognize the force of the assertion in their keeping its violations secret.  I am tempted to say that the assertion “incest is wrong” is akin to saying “that animal is a dog.”  It is just the way this community does things.  It is foundational to our being a community (we share a language; we share a belief that incest is wrong and that it should be forbidden).  Our spade turns there.

The equality assertion is more debatable (as is the assertion that hitting a child is wrong).  Arguments (reasons) are offered for both sides.  Disagreements over consequences (instituting equality breeds mediocrity; sparing the rod spoils the child) will be rife.  Kant, of course, offers a different argumentative strategy, one that depends on seeing the contradiction in making an exception of oneself.  You, Kant says, don’t want to be treated as an inferior.  So why should you think that it is right for another to be so treated?  Kantian arguments have proven no more decisive than consequential ones.  But the point is that these two kinds of reasons are typical of the “warrants” offered in cases of moral assertions.  Where they fail, we can only say “I have nothing more to offer. Here my spade turns.”  The kinds of evidence/reasons offered are different than the kinds I offer against claims of alien abduction or that Donald Trump really won the 2020 election, but there comes a point where what I deem more than sufficient reason to believe something does not work for others.  At that point, there is nothing further to be done.

 The Moby Dick sentences make the point that in debates over aesthetic values different kinds of assertions will call for different “warrants.”  The assertion that the novel is about a milkman is akin to someone calling a dog a cat.  There is no place to go with such a disagreement; the parties to it are literally not speaking the same language.  Wittgenstein’s point is that only where there is a fundamental agreement—we can call it the minimum required to be part of a community—can a disagreement then unfold.  I can’t play a game of tennis if my opponent says balls that go into the net are do-overs.  Unless we both stand within the constitutive rules of the game, the competition of an actual match cannot unfold.  I can’t have a conversation about Moby Dick with someone who thinks it is the story of a milkman.

But the other two sentences about the novel require different warrants.  To talk about it being the greatest American novel (just as any talk of Lincoln as the greatest American president) requires some kind of articulation of what makes a novel great and some attempt at comparison with other American novels my interlocutor might consider great.  To say Moby Dick is a chaotic mess need not involve any comparison to other novels, while the criteria for “chaotic mess” will be different from the criteria for “greatness.”  In both cases, I will presumably appeal to features of the novel, perhaps quoting from the text.  In the Latour vision, these appeals to the text are acts of assemblage, of putting together my case, calling into presence various available sources—features of the text, the opinions of prior readers and critics, my own responses to the novel’s shifts of tone and topic etc.—to make my assertion credible.

I have included the last two sentences, which provide a diagnosis of Parkinson’s and prostate cancer, to indicate how warrants in medical science also differ from one case to another.  In prostate cancer, we have blood tests and various forms of imaging that establish the “fact of the matter.”  In Parkinson’s there are no such definitive tests.  A patient is judged to have Parkinson’s on the basis of a bundle of symptoms (not all of which will be present in the majority of cases) and on the basis of how they respond to certain drugs or other treatments. 

There are two conclusions to draw, I think.  The first is pluralism.  Our reasons for believing assertions vary; we offer to ourselves and others different arguments/reasons/evidence to undergird (or justify) what we do assert, what we do hold to be true or to be good to believe.  (I recognize that I am going to have to think about the word “true” and the work it does when I take up the questions of meaning.) 

The second is that the kind of reasons offered in different cases do not separate out along lines that coincide with the traditional way of understanding the fact/value divide.  A consequential argument about the damage hurricanes produce takes a similar form to a consequential argument about the damage done by physically punishing a child.  In both cases, we might very well point to previous instances to show what those consequences might be—or we may point to larger-scale studies of multiple instances to show the odds of bad consequences, even while admitting that in some cases not much harm ensues.  And if, in the two cases of the hurricane and of child-rearing, we argue that humans should do all they can to mitigate the possible damage, we are asserting that suffering is a bad thing and to be averted wherever possible—an argument that can probably only be underwritten by some kind of Kantian reasoning about the good (the right) of all beings to avoid unnecessary pain.

Various writers—of the ones I know, Kenneth Burke and Hilary Putnam prominent among them—argue that fact and value are inextricably intertwined.  That the two come packaged together in our apprehension of the world.  In Burke’s account, we have an “attitude” toward things and situations embedded within our apprehension of them.  But this Burke position still accepts the analytic distinction between fact and value, even if that “analysis” comes after the moment of their combination in actual experience.  The pluralism that Dewey points us toward suggests the distinction between fact and value misleads us altogether by suggesting that some things (Facts) are given and incontrovertible while others (Values) are human contrivances.  Better to look at how both facts and values are “made” (just as William James asks us to consider how “truths” are made), while paying attention to the plurality of ways of making that humans (and other creatures) deploy.

Non-Cognitive Theories of Art (1)

Enough of this election anxiety.  Back to the airy heights of theories of the aesthetic.

My four posts on cognitive theories of the aesthetic were really just a prelude to considering non-cognitive theories.  And I am going to start with Martha Nussbaum (although she can be seen as just the latest in a long line that would include David Hume and George Eliot).

Basically, Nussbaum believes that art works activate sympathy.  A novel can portray the sufferings of Oliver Twist and children like him.  Such a novel may serve to bring to our attention facts about orphans and workhouses, thus adding to our knowledge.  But more crucial is the way the story inspires fellow-feeling, a new sympathy for the plight of orphans.  It is one thing to know that orphans are often underfed; it is another thing to respond to that fact feelingly, to experience it as something that should be rectified.  The moral emotions of indignation and sympathy are brought into play through the power of the story, a power that a simple recitation of the facts does not have.

Such a way of explaining what is going on rests on a fairly stark fact/value divide, Hume’s worry about deriving an “ought” from an “is.”  One can see that an orphan does not have enough to eat.  But that seeing does not entail the judgment that the orphan’s hunger is “wrong” (or “unjust”) and that it should be rectified.  Rationalist theories of moral value (Kant or Mill, one deontological, the other utilitarian) believe that reason provides the basis for moral judgments.  But the Humean school hands that job over to feelings.  Our moral judgments come from those moral emotions, from our indignation at suffering felt (perceived?) as unnecessary or cruelly inflicted, from our sympathy with those who suffer.  

Some may be able to see the hungry child and feel no sympathy, may even be able to claim the child is getting what he deserves.  Those seeking to convert such a person to their sympathetic view need to find a way to pull on the heartstrings, to call forth the needful feelings.  Arguments and reasons will not do the trick.  We don’t know something is heinous simply by looking at it.  Thus it is unlike knowing something is red.  We don’t need some particular “feeling state” to judge the thing is red.  But we do need the appropriate feelings to judge something is unjust, should be condemned and, if possible, rectified.

This is philosophy, so of course it gets complicated.  My own theoretical and moral commitments mean that I really would like to avoid such a sharp fact/value divide.  There are, as far as I can see, two pathways to lessening the gap between fact and value.  Neither, I think, closes that gap completely.

The first path is one I think Nussbaum takes.  She is very committed to the assertion that feeling and cognition are not distinct—that, in fact, a feeling-less cognition is monstrous and mostly impossible.  For her, sympathy enhances understanding.  The story of Oliver Twist increases our understanding of the plight of orphans. (George Eliot would make this claim as well.) If we define “empathy” as the ability to get a sense of another’s experience, then sympathy is the gateway to empathy.  We know more about others when we are able to sympathize with them—and that ability is feeling-dependent.  No amount of simple or “rational” looking will do the job.  The feelings must be activated for the most adequate knowledge to be accessed. 

Thus, Nussbaum (ultimately) is a cognitivist when it comes to (at least) literature. (What she would have to say about non-literary artistic forms is not clear; she seldom writes of them.)  But there still lingers the difference between explanation and understanding, or determinative and reflective judgment.  To know that the house is red is a determinative judgment (in Kant’s terms).  We don’t claim to “understand” the house; we just state what its color is, and would presumably “reduce” that judgment to the physics of wavelengths and the semantic facts about English if we had to explain to someone the basis for the judgment. 

[A digression: I continue to struggle with the possibility that there is a significant difference between “explanation” and “understanding.” To “understand” the orphan’s plight is not to “explain” it; to understand can mean either I now see that he is hungry or now empathize with, have a sense of, his suffering. To explain his hunger would, presumably, be to trace its causes, what factors have deprived him of enough food, or what physiological processes lead to hunger. Since Dilthey (at least) there has been an effort to see “explanation” as characteristic of the sciences, and “understanding” as characteristic of the humanities. My problem–shared with many others–is not being able to work out a clear distinction between explanation and understanding. Plus there is the problem that making such a clear distinction threatens to create another gap similar to the fact/value divide. Do I really want to see the sciences and the humanities as doing fundamentally different things, with fundamentally different goals and methods? How drastic a dualism do I want to embrace–even when a thoroughgoing collapse of all distinctions between the sciences and the humanities is also unattractive? The trouble with many aesthetic theories, in my eyes, is their desperate commitment to finding something that renders the aesthetic distinct from every other human practice and endeavor. I don’t think the aesthetic is so completely distinctive–and I don’t see what’s gained (in any case) if one could prove it unique. So my struggle in this long series of posts on the aesthetic is to find some characteristics of the aesthetic that do seem to hold over a fairly large set of aesthetic objects and practices–while at the same time considering how those characteristics also operate in other domains of practice, domains that we wouldn’t (in ordinary language as well as for analytical reasons) deem aesthetic. And, to name once again the golden fleece I am chasing, I think some account of meaning-creating and meaning-conferring practices is the best bet to provide the theory I am questing for.]

To return to the matter at hand: The judgment that the plight of orphans is unjust or outrageous is a reflective judgment in Kant’s sense.  Reflective judgments have two features that distinguish them from determinative judgments:

1. The category to which this instance is being assigned is itself not fixed.  Thus, for Kant, “beauty” is not a stable standard.  A new work of art comes along and is beautiful in a way we have never experienced before and/or had hardly expected.  But we judge that the term “beauty” is appropriate in this case, even though it is novel—and even though our judgment revises our previous senses of the category “beauty.” 

2. Kant is also very clear that reflective judgments originate in subjective feelings.  He is concerned, of course, to find a way to move from that subjectivism to “universal validity” and “universal communicability.”  But the starting point is individual feeling in a way that it is not for determinative judgments.  My feeling about the house plays no role in my assertion that it is red.  But my feelings about the Matisse painting are necessary, although not necessarily sufficient, to my judging it “beautiful.” (Not necessarily sufficient because my judgment also takes the sensus communis into account. I judge, as Arendt puts it, in the company of others. Reflective judgment is neither entirely personal nor entirely social. Its public character comes from the fact that it will be stated for/to others.)

Thus, even if we (as Nussbaum wants to do) say our aesthetic and moral judgments count as knowledge, as assertions that we make with confidence and expect others to understand (at least) and agree with (at best), those judgments still arise from a different basis than judgments of fact. (N.B.: I am following Arendt here in taking Kant’s aesthetics as a more plausible basis for morality than Kant’s own moral theory.)

To summarize: aesthetic judgments (“this is beautiful”) and moral judgments (“this is unjust”) would still be seen as “cognitive.”  Such judgments are assertions about how some thing in the world (an art work, an orphan’s hunger) should be understood, should be labeled—and purport to say something substantial about that thing in the world.  But the process by which that judgment is reached—and the process by which I would get others to assent to it—is distinct (in certain ways) from the processes that underwrite statements of fact. A key feature of that difference is the role feelings play in reaching the two different kinds of judgment.

So maybe Nussbaum’s approach is not non-cognitive; instead, it is committed to there being different forms/processes of cognition.  Then we would just get into a fight over what we are willing to label “cognitive.”  How capacious are we willing to let that term be? Is calling the Matisse painting “beautiful” a knowledge claim or not? The positivists, of course, pronounced aesthetic and moral judgments non-cognitive in the 1930s–and philosophers (of whom Nussbaum is prominently one) have been pushing against that banishment ever since. The only stake (it seems to me) would be whether being deemed “cognitive” is also seen as conferring some kind of advantage over things deemed “non-cognitive.”  Nussbaum certainly seems to think so. She is very committed to expanding the realm of the cognitive and the rational to include feeling-dependent judgments—and seems to believe that enhancing the status of such feeling-dependent judgments will increase the respect and credence they command.

But the alternative would be to say credence does not rest on something being cognitive; that we should look elsewhere for what leads people to make judgments and to assent to the judgments that others make. Standard understandings of cognition are just too simple, too restrictive, to account for the complexities of how people actually judge and come to have beliefs. Better to abandon the cognitive/non-cognitive distinction altogether–and provide an alternative story about how we come to think and feel about things.

I am going to leave it here for today—and discuss in my next post an alternative way to lessen the fact/value gap, one that does move toward abandoning the characterization of judgments and beliefs as either cognitive or non-cognitive.

Arendt Contra “Life”

Hannah Arendt famously insisted that any politics that attended to the demands of “life” was doomed to descend into factional strife.  How to understand her argument on these matters has troubled her readers ever since she first articulated this view in 1958’s The Human Condition and, more forcefully, in 1963’s On Revolution. It doesn’t help matters that the critique of a life-based politics in the former book is replaced (augmented) by a differently inflected argument in On Revolution: namely, that politics must avoid addressing “the social question.”  Just how Arendt’s disdain for “the social” connects to her insistence that “life” should never be the principal motive for “action” is hard to parse.

Let me start with life.  Arendt’s argument (derived from Aristotle in ways that resonate with Agamben’s adoption of the distinction between “zoe”—bare life—and “bios”—a qualified, cultivated form of life) is that life belongs to the realm of “necessity.”  What is needed to sustain life (food, shelter, etc.) must be produced and consumed.  The daily round of that production and consumption is inescapable—but the very opposite of freedom. 

Politics exists in order to provide freedom, to provide a space for action that is not tied to necessity.  As countless readers have pointed out, Aristotle’s polity relies on slaves to do the life-sustaining work tied to necessity—and Arendt seems nowhere more mandarin than in her contempt for that work.  While it is going too far to say that she endorses slavery, there is more than a little of Hegel’s master/slave dialectic in Arendt.  She seems at times to accept that the price of freedom, the price of escaping slavery, is an heroic, aristocratic disdain for life that allows the master to achieve his (it’s almost always a “he”) position of mastery in the life/death struggle that creates slavery in the Hegelian story.  Those tied to “life” are slavish in disposition; they have bargained away their freedom because they have valued life too highly—have, in fact, taken life (not freedom or mastery) as the highest (perhaps even the sole) value.  This contempt gets carried over into Arendt’s deeply negative views of “the masses.” 

Arendt’s disdain for “life” has often been seen as a critique of bourgeois sensibility.  The bourgeoisie is focused on “getting and spending” which it deems “private”—and is, consequently, uninterested in politics.  That’s one way of interpreting Arendt’s lament that politics is in danger of disappearing altogether in the modern world.  In a liberal society, all the focus is on “private” pursuits—the religion of personal salvation, economic pursuits, family and friends.  It is reductive, but not altogether inaccurate, to link Arendt to figures like Tocqueville who lament the loss of an aristocratic focus on “honor” even as they both admit that aristocratic virtues are lost forever.  If the triumph of “life” is to be overcome, it won’t be through a revival of either Aristotle’s or Machiavelli’s worlds. 

Arendt’s prescription (especially in The Human Condition) appears to be the attempt to substitute amor mundi (a love of the world) for the love of life.  My student Martin Caver wrote a superb dissertation on the concept of amor mundi in Arendt—and had to contend mightily with how slippery and vague that notion is in her work.  Pushed into thinking about this all again by Matt Taylor’s essay—and by a subsequent email he wrote to me in response to my post on his essay—here is how I would pose the world/life contrast today.

The problem with “life” from Arendt’s point-of-view is that life is monolithic.  Its demands appear to be everywhere the same: sustenance.  To maintain a life is a repetitive grind that Arendt depicts as a relentless “process” that never allows for individuation.  There are no distinctions within life.  Every living thing is the same in terms of possessing what we can call “bare life.”  Paradoxically, life renders everyone the same even as it also renders everyone selfish. Unlike politics, which for Arendt offers the possibility of individuation, selfishness just makes everyone alike. The bourgeois self is focused on “getting his”—which is why “life” is antithetical to amor mundi.  We humans are in a sorry condition unless we can generate some care (think of Heidegger on Sorge at this point) for the world that we share.  When everyone is pursuing only his own interest, the world falls apart. (Certainly sounds like a pretty good description/diagnosis of American society in 2020.)

What is this “world” that Arendt calls us to love?  She insists that it is the fact of “plurality” (the fact that we are with others on this planet) and that it is what lies “between” the various actors who inhabit it.  The modern retreat into the private is making the world recede.  We no longer (at least as intensely) live and act together in a shared world, in a public space.  That public space is the scene of politics for Arendt.  And politics is where one distinguishes oneself (i.e. where one can achieve a distinctive identity).  Politics is also where the world is produced through “acting in concert.”  The notion here (although Arendt never articulates it in this way and is way too vague about the particulars of “acting in concert”) is that a public space is created and maintained by the interactions of people within that space—just as a language is created and maintained by people using it to communicate.  The ongoing health and existence of the language is a beneficial, but not directly intended, by-product of its daily use by a community of speakers.  Our common world is similarly produced.

Love of that world thus seems to mean two things: caring for its upkeep, its preservation, and a taste, even a love, for plurality.  I must cherish the fact that it is “men,” not just me, who constitute this world.  In Iris Murdoch’s formulation: “Love is the extremely difficult realization that something other than oneself is real.”

To understand Arendt’s critique of “life” in these terms leads almost too smoothly into her work on Eichmann and, then, to The Life of the Mind.  To be thoughtless (as Arendt accuses Eichmann of being) is precisely to be incapable of comprehending otherness, the fact that “something other than oneself is real.”  Selfishness is thoughtless, a failure of imagination, a failure to grasp the fact of plurality in its full significance.  Soul-blindness. And she reads Eichmann’s blindness in terms of his being entirely focused on climbing the ladder in the bureaucracy within which he works.  That’s why his evil is “banal.”  It’s the product of his daily round of making his way, not a product of any deeply-held convictions or ideology.  He was, in her view, quite literally just doing his job with an eye toward promotion, without any conception of how his actions were affecting other people.  (Whether this is a plausible reading of Eichmann is neither here nor there for the more general argument that the modern mind-set, along with the bureaucracies—among which we must count large corporations—in which so many moderns are embedded, generates soul-blindness, the thoughtless inability to see the consequences of one’s actions apart from how those actions contribute to one’s “getting ahead.”)

No wonder, then, that Arendt grasps onto the passage in the Critique of Judgment where Kant calls for “enlarged thinking”—and ties judgment to the capacity to see something from the other’s point of view.  I must go “visiting,” Arendt says, in order to make a judgment.  The person who is focused solely on gaining a “good life” for himself will never encounter “the world,” never grasp plurality.

The problem comes when the critique of “life” in The Human Condition is paired with a critique of “the social”—and that problem becomes a crisis when the full implications of banning the social from politics are articulated in On Revolution.  Even Arendt’s most adept readers—Seyla Benhabib, Bonnie Honig, Hanna Pitkin—barely try to defend her position at this juncture.  Bluntly put, Arendt says that the polity should never attempt to address or alleviate poverty or material inequities.  The necessities of life—and how to secure them—should never be seen as a matter appropriate to politics.  To make that mistake is simply to make politics itself impossible while leading to endless strife. 

The puzzle has always been how a thinker of Arendt’s power could have been so blind, so stupid, so thoughtless (she is never so close to her caricature of Eichmann as at this point) on this score.  How could she think 1) that banishing the endless strife over material resources to “the social” somehow solves the problem of that strife, and 2) that “politics” could somehow (by fiat?) be separated from allocation of resources (where those resources include power and status as well as material goods)?  I can only suspect that she harbors the old aristocratic disdain of “trade” and imagines she can erect a field of contention where only distinction, honor, and virtuosity are at stake—and nothing so vulgar as monetary reward.  Arendt’s ideal politics are, after all, agonistic.  She is not against strife.  But she wants a “pure” strife focused exclusively on excellence, unsullied by irrelevant considerations of money or status.  She hates “society” because she deplores the standards by which it confers distinction.  No surprise that her politics seem so aesthetic—and that she goes to Kant’s Critique of Judgment to discover his politics.  What matters in the idealized aesthetic space is the quality of the performance—and nothing else. 

So the question Arendt poses for us is: Is it harmful to have this ideal of a practice (or practices) that are divorced (by whatever means are effective) from questions of material necessity and reward?  At a time when utilitarian considerations seem everywhere triumphant, the desire to carve out a protected space has a deep appeal.  Reduction of everything to what avails life (Ruskin’s formula) very quickly becomes translated into what can produce an income.  Various defenses of the university are predicated on fighting back against the utilitarian calculus.

But the danger of taking the anti-utilitarian line (the aestheticist position, if you will) is that it reinforces the bourgeois/classical liberal assertion that “the economic” is its own separate sphere—one that should be understood as “private.”  Arendt may be a sharp critic of bourgeois selfishness and how that selfishness diminishes what a life can be even as it blithely denies the necessities of life to others, but she seems to be reinforcing the liberal idea of “private enterprise.” 

It is not clear how (or where) economic activities exist at all in the “world” she wants us to love.  And we have ample evidence by now that leaving the economic to itself is not a formula for keeping the economic in its place, in preventing its colonizing other spheres of human activity.  Just the opposite.  Laissez-faire is a sure-fire formula for ensuring that the economic swallows up everything else.  It accumulates power as relentlessly as it accumulates capital—and thus distorts everything in the world.

In the realms of theory, then, Matt’s instinct that a monolithic, overarching concept like “life” would be better replaced by a pluralistic reckoning of the needs and desires of “living” seems promising.  The thought is that “life” requires (in order for it to be defined) a contrast with “not life” (the world fills that role in Arendt)—and thus leads to a designation of the enemies of life (or, in Arendt’s mirror image, to a denigration of “life” in favor of another value, amor mundi).  In either case, the logic leads to a desire to eliminate something because it threatens what is desired. 

The alternative path of pluralism disarms such categorical condemnations.  That path returns us to the “rough ground” (Wittgenstein) of tough judgments about what to do in particular cases where we have to attend to the particulars—and not think that generalized formulas are going to be of much (if any) use.  There are always going to be multiple goods and moral intuitions in play, with painful trade-offs, and messy compromises.  No overarching commitment or slogan—like “reverence for life”—is going to do the work. Similarly, we cannot successfully separate things into separate spheres—the aesthetic in that bin, the economic in another one, and politics in a third. It is just going to be messier than that even as we also struggle to prevent any one type of motive from swamping the others.  Pluralism is about (among other things) giving multiple motives some room to operate.  Which is why I remain so attracted to some version of a universal basic income, some version of supplying the minimal resources required to “flourish” to all.  Only when the material necessities can be taken for granted because secured (not disdained because they are bestial or vulgar) can other motives take wing.

One can also expect that others will disagree with, castigate her for, the course of action she does pursue, the positions for which she advocates.  Plurality comes with a price—which is why it is hard to love.  And why thinkers keep imagining formulas that will enable our escape from it. 

Ontological Egalitarianism, Or, Can We Derive an Ethics from “Life”?

My colleague and friend Matthew Taylor has a terrific essay in the current issue of PMLA (Vol. 135, No. 3: 474-491 [May 2020]).  His topic is the “new materialism,” aka “the ontological turn,” although it also crops up under various other aliases.

Most simply put, the “new materialism” declares that all matter is animate; humans live surrounded by other entities that should be recognized as having agency, as possessing “life.” Specifically, all things act to sustain themselves, perhaps even to better themselves (William James’ meliorism).  One version is Latour’s “trajectories of subsistence” contrasted to a more static notion of “substance.”   The idea is a) to reduce any qualitative distinction between humans and other entities; and b) to introduce a dynamic interactive web of relationships in which both humans and non-humans are entangled to replace the more traditional subject/object split where activity resides in the human subject who works upon passive material objects.  In that traditional view, all the entities have their stable identities, their essences, their abiding substance.

Matt’s essay ties current thinking along these lines back to the “philosophies of life” current in the post-Darwinian intellectual world of (roughly) 1870 to 1920.  I am more familiar with the characterization of Bergson, Nietzsche, James, Pater, and Whitehead as champions of “life.”  Matt shows how “hylozoism” or “panpsychism” (basically, the assertion that all matter is “alive”) was the prevailing view of late-nineteenth and early twentieth-century biology as well.  From this point of view, Nietzsche does not look like an outlier, a lonely rebel (as he loved to portray himself), but very much in tune with the dominant intellectual orthodoxies of his time.

Current day versions of hylozoism often think there is an ethical pay-off.  There are two ways to go in an ethical direction from the assertion that all matter is alive.  First, you can preach a deontological respect for “life,” basically extending the Kantian “kingdom of ends” to include everything—thus erasing the privilege of “the human” to arrive at “posthumanism.”  Second, you can use life (as Ruskin wants to do in “Unto This Last”) as your ethical standard.  Whatever promotes life is good; whatever harms life is bad. 

In both cases, it is easy to see that the ethicists among the new materialists are driven by a concern about climate change.  The “respect” position addresses the massive extinctions of our era and bemoans an exclusionary focus on what is good for humans. 

The “promotion of life” position is basically utilitarian.  We judge actions in terms of whether they serve the interests of life—or not.  Since climate change will be a disaster (is already a disaster) for many varieties of life (human and non-human), it is ethically wrong to perform actions that fail to work against that change.

Matt is having none of it.  He does not think you can derive an ethics from an allegiance to life.  I want to consider his reasons for this conclusion—some of which I agree with and others that I want to resist.

He presents four major arguments (as I understand the essay).

1.  There is a central—and fatal—imprecision lurking in the term “life.”  No one is ever able to nail down just what “life” means or entails.  It is hard to deploy something so vague as a standard.  I don’t quite know what to do with this argument, so will leave it be.

A different, but related, argument along these lines seems to me to have real bite.  If you say mountains are alive as are protozoa as are human beings, you obviously need to have a very capacious (and perhaps vacuous) notion of life.  However, at the same time, you can’t simply ignore the differences between mountains, protozoa, and humans.  Inevitably (in other words), forms of life are going to be differentiated within the overarching category of life.  And Matt argues that this differentiation will lead to a hierarchy; some things will be deemed “more alive” than others; there will be “degrees” of life. 

This is the familiar post-structuralist insistence that wherever there is difference, there will be the privileging of one term over the others.  Humans just aren’t equipped (mentally? in terms of the deep structures of thought?) to be egalitarians.  I have always been suspicious of this transcendental move—transcendental because it posits a fundamental form that is endemic to all human mental processes.  I always suspect “false necessity” at such junctures.  Why can’t we equally value things that we recognize to be different?  I don’t see any logical or ontological or psychological impediment to that possibility.

2.  But Matt has a much better argument for the inevitability of hierarchy.  Ethics, he says, requires judgments about better and worse.  You don’t have an ethics if you have a pure egalitarianism.  If you value life, then you must declare some actions harmful to life, even as you applaud others as life-sustaining or promoting.  What is our stance going to be toward the mosquitos that carry malaria, the ticks that carry Lyme disease, and the virus that causes COVID-19, not to mention white supremacists?  How are we going to avoid valuing some forms of life over others when some agents pose a threat to other agents?  In other words, the new ontology repeats the classic liberal mistake of imagining a conflict-free world.  But ethics is precisely about conflict—about choosing between competing visions of the good.  The mosquito who infects me is pursuing life; from its point of view, its actions are not harmful. 

This insistence that ethics must take sides, cannot be universally affirmative, is deeply troubling.  For one thing, this insistence is at the root of many tragic and conservative worldviews.  The tragic version is highlighted in Freud’s Civilization and its Discontents.  Freud expresses outrage in that text at the Christian injunction to love one’s enemies.  Such an injunction takes away the very meaning of love, Freud says.  As Yeats puts it, “hearts are to be earned, not had.”  But Freud adds that bestowing our love only in some cases goes hand-in-hand with our aggressive feelings (and actions) toward those we cannot (or will not) love.  And numbered among those we cannot love is our own self.  The superego’s aggression is directed at myself—as well as at my “enemies.” 

Ethics—the self-righteous attempt to justify our aggressions—hoists us on our own petard even as it stands as the crippling condition of an unending and inescapable tragedy: the tragedy of our uncontrolled and uncontrollable aggression.

Conservative thought holds onto the self-righteousness that the tragic vision (which deems all humans trapped in the same play) eschews.  Conservatives hold onto a strong version of the righteous few and the reprobate many; they scorn the idea of “social justice” precisely because it would bestow benefits on the unworthy.  Justice is about getting what you deserve—and thus the equal distribution of any good (whether it be health care, a decent education, or a basic income) is an outrage against morality. 

The liberal/left tries to use the notion of “social justice” to place some things out of the conflict zone.  The liberal must avoid the mistake of wishing away conflict, even as she tries to develop strategies for its mitigation.  More on that later in this post.  For now, Matt’s point against the new ontologists is well-taken.  A universalist ethos of respect for all forms of life sounds wonderful, but it is so general, so vague, that it can’t stand up for very long when actually encountering facts on the ground.  “Life” pits some forms of life against others, so “life” itself can’t be the standard for adjudicating those conflicts.

3.  This last point—that “life” can’t be the standard—leads Matt to adopt a strict fact/value dichotomy.  You can’t read values out of “life” (or “nature”) is his fairly explicit position.  “Justice” or “equality” or even “reverence for life” are human notions; there is no evidence at all (in Matt’s view) that the world or nature or some basic “life force” cares for any of those human values.  Life carelessly and prodigally deals out death. 

Life, we might say, is deaf and mute.  It has nothing to say to us—and cannot hear anything we say to it.  Humans, like the other life forms identified/celebrated by the new ontology, are the random, utterly contingent, result of long evolutionary processes that were not aiming to produce what ended up being produced.  If ethical ideals are going to get any purchase in this evolutionary production, then it will be because humans act to make their ethical values effective. 

      I want to be careful about adopting fact/value canyons.  I am going to skip that can of worms here, only gesturing toward my intuition that the dichotomy functions differently in different contexts, and should be resisted in some of those contexts.  But in this ontological context, I am inclined to accept a fairly drastic nature/human split.  I am uncomfortable doing so, but don’t see a good alternative.

     Two observations underline my willingness to accept that nature and life are amoral, while the human is the realm of value and moral judgments.  The first is that we humans are not inclined to morally condemn hurricanes or animals for their destruction of life.  We will bemoan the fact that the grizzly bear killed a person, but will not be morally indignant.  In other words, we do not hold nature accountable for life-harming actions the way that we do human beings. 

     The second is the point made so forcefully in Plato’s Euthyphro—and in the scene in Genesis where Abraham bargains with Yahweh about saving Sodom from destruction if a certain number of just inhabitants can be identified there.  In both cases, the point is that humans have self-generated standards that they wish/hope/try to get the non-human to adhere to.  “Innocence” is a human concept—and the gods and nature are to be condemned when they inflict suffering on the innocent.  The ethical standard is being imposed on the non-human—rather than the standard being derived from the non-human.  Oedipus at Colonus thus becomes an attempt to save the gods from human condemnation.

The upshot would be a kind of humanism that is hard to evade as long as you want to maintain ethics.  Nietzsche, of course, saw this clearly.  To escape humanism, you had to go “beyond good and evil” and simply embrace the ruthless indifference of the non-human to human values and to life itself.  Wanton destructive indifference, nature red in tooth and claw, is the fact of the matter—and you might as well join ‘em rather than trying to convert them over to (pathetically weak and sentimental) human values.  (Of course, there is also plenty of cooperation among living creatures as well, a fact Nietzsche neglects.  Sometimes, cooperation proves better than competition in advancing one’s life chances.)

4.  Matt also argues that hylozoism almost always leads to a form of Platonism.  He doesn’t put it that way.  But I think it a fair account of the argument.  Basically, the idea is that the general standard (or “form” if we use Platonic vocabulary) of “life” renders every actual instantiation of life an inadequate copy of that ideal.  The logic here is endemic to versions of evolution that see each novelty as an improvement on what went before.  (For that reason, hylozoism in the 1870-1920 period was very, very often tied to eugenics, as Matt demonstrates.)  Nietzsche’s “uber-mensch” displays this kind of thinking.  The “true” or “ideal” embodiment of life is always out in front of us, which renders current forms unsatisfactory—perhaps even suitable for sacrifice in order to usher in the better future, just as Stalin and Mao murdered millions in the name of a world to come.  (But, then again, Christianity committed similar murders long before the justification of a warped Darwinism.)

“Life” thus becomes the bringer of, the justification for, death—an argument found in Foucault and Agamben, but perhaps lurking as well in Arendt’s emphatic contempt for “life.”  Certainly, Nietzsche (in another of his guises) points the way here.  Platonism and Christianity preach a disregard for, a nihilistic rejection of, the here and now.  With Christianity, we get the added hope that a non-human force will “redeem” the human—and the whole world.  Against that nihilism, Nietzsche wants to find his way to “affirmation.”  How can we affirm what is here before us, instead of whoring after strange gods and wish-fulfilling futures? 

I am not convinced that an affirmation of “life” necessarily leads to a denigration of the life currently available.  I don’t, in other words, buy the paradox that a stated commitment to life in fact generates a murderous aggression against actually existing life.  I am, however, convinced by Matt’s other argument, i.e. that a bland egalitarianism cannot do the ethical work that needs doing.

So how would I propose going forward?  At this point, I actually think pushing hard at the fact/value dichotomy might prove productive.  We (everything that exists) are not going to be redeemed from the natural (and evolutionary) conditions that set the stage for singular life spans.  But there is a social/cultural world that humans construct in their efforts to respond/adapt to that natural setting.  That social world develops notions of what a “good” or “flourishing” life looks like (where the notion of flourishing in no way needs to be confined to only human life forms).  Life (“bare life”) is a good, but a very minimal one if the means for “flourishing” are not available. 

Egalitarianism is tied to ideals of “social justice” when we define what resources are required to afford the possibility of flourishing—and the political/ethical imperative is to work toward social arrangements where those resources are afforded to all. 

This is a minimalist position.  What goods are needed—clean water and air, enough food, a decent education, health care, security from violence, etc.—to have a life that escapes the sufferings that social arrangements can alleviate?  What tribulations are remediable—not in terms of a redemption from the terms of existence, but in terms of having what is needed to cope with those terms?  These are questions that can only be answered through political processes of deliberation and negotiation. 

The liberal gambit is that providing those necessities to all would mitigate conflict.  Yes, there is conflict now over doing such providing.  But for many countries the idea of providing health care is no longer a live issue.  Constitutionalism is a strategy for removing certain questions from the realm of conflict, of deciding them once and for all.  Not a fool-proof strategy, but it works some of the time for certain issues.  And some seemingly dead issues can rise again, zombie fashion. 

But the liberal social democrat has this basic agenda: to increasingly make the provision of “basic goods” to all a matter of settled social practice.  That is a way to serve “life” without promoting the death of those currently alive.  But it is serving “life” in relation to human standards of what a “good” or “flourishing” life requires.  So, in that sense, Matt is right to say you can’t derive those standards from life itself.

What about non-human forms of life?  What about climate change?  I do think that comes back to where I started.  We can take the position that respect for all life forms is an ethical imperative—although that will run us into the kinds of problems Matt identifies (namely, that such universal respect is not possible where some life forms actively harm others).  The utilitarian position seems more plausible.  The new ontology can help cement the lesson that human flourishing is dependent in various ways on the larger ecological network of relations in which humans are embedded.  Destroying the planet for short term gain is suicidal.  Still, utilitarianism also has its limits.  It is not utterly convincing to say humans could not flourish if the snow leopard went extinct.  That’s why the deontological argument of respect gets trotted out so often. 

Such puzzles remind us that ethical positions—despite the hopes of philosophers like Kant, Bentham, and Rawls—are never logically air-tight.  Much more important, in my view, is ethical sensibility.  What things outrage us?  What things do we admire?  Unless unnecessary deaths and lives lived in abject poverty strike us as unacceptable, as demeaning to our human capacities to make life well worth the living, we humans cannot expect either rational arguments or non-human entities (like “life” or “god”) to generate the ethically affirmable life we claim to desire.  Similarly, unless the extinction of the snow leopard strikes us emotionally as a diminishment of the world, we are unlikely to be argued into caring.