Tag: moral philosophy

The Perplexities of Violence

I have been gnawing at this issue for some forty years now and am no nearer to a formulation that satisfies me.  I think it’s because there are no generalizations about violence and its effects that stand up to even the most cursory encounter with historical examples.  I would love to believe that violence is always (in the long run) counter-productive.  Certainly in any utilitarian calculus that measures whether people (in the aggregate) are better off as a result of violence, the answer most usually will be a clear-cut NO.  Apart from its immediate (in the moment) victims, violence breeds violence. The definition of violence from which I work is “physical harm done to a person by another person.”  To perpetuate violence is to ensure that physical harm will be done—either to others or to oneself.

But before I even try to consider how violence leads to more violence, let me dwell a moment on my definition.  I do not intend to deny the extended use of the term “violence” to denote psychological or material (destruction of a person’s goods or livelihood) harm done to people. And destruction of non-human entities (whether human built, like cities, or non-human built, like forests) could also be covered by the word violence.  It is also the case that physical harm is done by earthquakes and the like. But, for simplicity’s sake, I want to stick with direct physical harm done by some human to another human in trying to come to grips with how violence is deployed in various cases, and what that violence causes or does not cause to happen.  In other words, what motivates the action of inflicting physical harm on others?  What benefits does the perpetrator of violence believe the violence will give him?  What are the actual consequences of acts of violence? (This question indicates my belief that perpetrators of violence are routinely mistaken about violence’s effects.) And, finally, how does violence either underwrite or undermine power?  (The relation of violence to power is an ongoing puzzle.)

Again, let’s be simple about it for starters.  People deploy violence either 1) to force others to do things they would, if left to themselves, not do or 2) to eliminate people who are actual (or are perceived to be) obstacles to what the agent of violence desires.  Violence as intimidation and/or coercion (1) or violence as the means to winning a competition that is understood as either/or (2).  Either I win or you do—therefore, I will use whatever means necessary to assure that I win.  And violence appears the most compelling strategy to assure victory.  There can be no compromise.  It is, as we say, a fight to the death.  As long as you still are present in the field, I am threatened.  You must be eliminated for me to be at peace (the term “peace” used ironically here to indicate a sense of security that is impossible as long as my opponent lives). 

In short, violence is, one, the great persuader (in the coercion case) or, two, the surest means for victory in a competition.  The argument against claiming violence is always counter-productive is that it can secure submissive obedience and the absence of competitors over very long stretches of time.  Terror deployed by either state or non-state actors can subdue whole populations. (Definition of terror:  the use of sporadic violence against one’s opponents. Many opponents can be left unharmed, but the key is that they know themselves subject to violence at all times and that acts of violence are unpredictable.  When and where violence will be inflicted cannot be calculated; thus, violence is ever present as a threat that is then actuated sometimes.)  Historical examples abound: the killing and corralling of Native American populations is an instance of the “elimination” path, while the reign of Jim Crow in the American South offers a case of terror’s effectiveness when deployed over a one-hundred-year span. 

The reductionist view of the relation of violence to power is that power is, at bottom, just violence.  Or, to put it differently, power’s ultimate recourse is always violence (the ability of the state—or of other actors—to physically harm with impunity).  The knowledge that the powerful can harm you is what keeps those who would resist power in line.  Power can inflict harms short of physical destruction to keep resistors in line (including economic destitution and incarceration), but it remains the fact that harm done to bodies is the ultimate threat—and power remains dependent on that threat. Inevitably, power will act upon that threat at times. 

The problem is that the reductionist view does not work—or, at least, not in all cases.  When power resorts to violence to secure obedience is precisely when it is weakest, Arendt argued.  Her generalization is as false as the reductionist generalization.  But she was on to something.  Any law (or other device to govern behavior) is only effective if the vast majority obey it voluntarily.  The power of the law resides (in this analysis) in the governed’s acceptance of the law as binding.  One classic case is the American experiment with prohibition of alcohol.  And history offers many examples of seemingly powerful regimes that simply collapsed without much in the way of a battle.  The French and Russian revolutions are cases in point; the governments in both instances were “taken over” very quickly and with very little bloodshed.  It was only after the revolution had occurred that reactionary forces gathered themselves together and instigated civil wars. 

So, it would seem, power based solely on violence follows the Hemingway description of bankruptcy: the power seeps away slowly until it suddenly collapses.  Again, to be clear: the seeping away period can be very long indeed, and collapse (if we take the very long view) is inevitable and multi-caused since nothing human lasts forever. 

What interests me in thinking about the relation between power and violence is the extent to which power’s resorting to violence is delegitimizing.  When and where power relies on violence, it admits that its edicts are not acceptable to those who can only be compelled by violence.  On the one hand, that admission necessitates the creation of the category “criminal.”  Power must insist that there are deviants who simply (for whatever perverse or self-interested reasons) will not obey the law.  On the other hand, extensive reliance on violence will indicate the law’s unreasonableness, its inability to win voluntary consent.  Violence may cow many, but it will not win their respect.  (There are exceptions to this assertion, of course.  There will always be those who are impressed by violence, who aspire to be enlisted in the ranks of its foot soldiers.  I will get back to my thoughts on this sub-section of any population.)

The “on the one hand and on the other hand” of the previous paragraph reveals how completely acts of violence are entangled in speech acts.  The act of violence itself is a speech act.  It can only have its effect if the act is publicly known and the message it is meant to convey is somewhat unambiguous.  Thus, a Mafia killing must clearly indicate this is the result of encroaching on our territory.  Revenge killings must make the fact that this “was for revenge” obvious.  State violence must say “this kind of behavior/disobedience” will not be tolerated.  And, in a secondary speech act, the state creates the category “criminal” to justify its violence against those who disobey.  Exceptions to violence being public (as it must be if it is to send a message) are private murders where the perpetrator hopes to get away with the act never being ascribed to him.  Such murders only make sense if there is one victim, without any future intention to deploy violence—and hence no audience to whom a message needs to be sent.  Even serial killers, it seems to me, are message senders.  They get off on the terror they inspire among a certain population.

Because violence is embedded in message sending, the meaning of any act of violence inevitably becomes a contested field.  Violence is rhetoric.  Acts of violence are intended to persuade.  The regime (Romans against Christians; South Africa against black dissenters) that creates martyrs aims to dissuade others from acting as the martyr did; the martyr’s peers hold up his death as an inspiration to further acts of resistance.  War aims to persuade another country to bend to my country’s will just as violence against the “criminal” aims to persuade others to follow the law.  But just as violence often inspires violent resistance, the meanings attached to any act of violence will also generate resistance.  There will be competing interpretations.

I think that all of this means that acts of violence always need to be justified.  That is, every act of violence will be accompanied by a set of speech acts that strive to justify that act.  This is hardly to say that such justifications are equally plausible.  Some will be downright risible, but I daresay few acts of violence go unspoken.  This, admittedly, is tricky.  There are black holes, and people who are simply “disappeared.”  And regimes (or the Mafia) are rarely explicit about the kinds of torture they deploy.  Similarly, the Nazi concentration camps were (sort of) secret, while what was going on in those camps was even more secret.  Still, in all these cases it was generally known that “enemies” (of the state, of the people, of our clan) were targets, even if the details were left to the imagination or only whispered in various quarters. And leaving things to the imagination might even be a more effective way to instigate terror.

If I am right that all acts of violence need to be justified, that suggests there is a prima facie assumption that violence is wrong.  It can only be justifiable if compelling reasons as to its necessity are offered.  Violence is “moralized” (made moral) when it is claimed that only its deployment can ensure the health of morality against the threats posed by the immoral.  Wherever an attempt to justify violence is made, the term “necessary” will almost invariably appear.  The perpetrator of violence will almost always express regret that violence had to be resorted to.  But his victim left him no choice.  It was a species of self-defense; without the recourse to violence, some horrible consequence would have unfolded.

To appeal to self-defense is always an attractive option because self-defense is almost universally accepted as the one obvious, incontrovertible, justification for violence.  No one currently thinks the Ukrainians are engaging in unjustified violence against the Russian invaders—unless they buy Russian propaganda in all its absurdity.  But even here matters are not simple.  Firstly, because self-defense gets entangled with questions of revenge, which may explain why the desire for revenge is so powerful.  But revenge notoriously generates cycles of violence and, thus, is not (in many cases) a successful remedy to inflicted violence.  It just keeps violence going.

Secondly, self-defense gets tangled up in notions of “proportionate violence.”  There is some sense that violence inflicted as a response to a prior act of violence should be proportionate.  To escalate the scale of violence, even in cases of self-defense, is usually seen as morally dubious.  The obvious current example is Israel’s response to the attacks by Hamas on October 7, 2023.  The whole notion of “proportionate violence” is bizarre.  Who is doing the measuring?  Yet the moral intuition underlying the notion is real and strongly felt.  Even in a no-holds-barred war (such as World War II) some limitations on violence are still respected.  The Germans did not kill downed Allied airmen wholesale, or non-Russian prisoners of war.  How to understand where and how some limitations are imposed on possible acts of violence is extremely difficult.  There is no formula; there is a tendency toward escalation; and yet since 1945 no belligerent with nuclear arms has used them.  Whether that restraint is solely a result of a rational fear of retaliation is an open question.  In any case, whether with the notion of proportion in violent responses to acts of violence or in self-imposed limitations on the means of violence deployed in conflicts, there is a shaky, unenforceable, yet real set of constraints.  When those constraints are ignored, the violent actors lose any plausible grounds for justification.  And it proves both difficult and rare for any person or any regime to say “fuck it” to all attempts at justification.  The rule does seem to apply even in the most egregious cases: those engaged in violence will attempt to justify their actions. Violent actors will try to win the rhetorical battle in the court of public opinion. (In international affairs currently, that court is often the United Nations. Its lack of enforcement powers makes it seem absurd in many cases, yet state actors still care about its verdicts.)

Because self-defense is almost always accepted as a justification, those who initiate violence have a much harder row to hoe.  For that reason, preemptive violence is most often justified in the name of preventing an even greater harm than the violence itself. The speech acts here are counter-factual; if I don’t act violently, these things will happen.

Presumably, violence could be deployed to bring a better world into existence (Soviet violence was perhaps an instance), but much more usually violence is justified as overcoming the threat certain others pose to the current state of affairs.  Still, preventive violence can morph into (or be merged with) creative violence.  The Nazis offer an example of such intertwining.  They preached (and practiced) violence against the threat posed by Jews and communists, but they also used the violence to create a whole new political order, one they claimed would be strong enough to combat those threats. In its own way, the current Trump administration is following that path.  It has designated a set of enemies (including the “deep state”) fit to be punished while also attempting to create a whole new form of government (rule by executive) justified as the only means to overcome the enemies.

Since the revulsion against violence, the prima facie assumption of its being morally wrong, is so prevalent, the demonization of enemies is required.  Such enemies must be deemed outside the moral pale.  This gets complicated, of course, in the modern state system, with its distinction between citizens and non-citizens.  Even the Nazis felt compelled to strip people of citizenship first before making them the victims of violence.  It remains to be seen how much the Trump administration will refrain from violence against citizens.  Or if it will begin to strip citizenship from current citizens. For now, Trump has declared open season on non-citizens, while only (?) depriving citizens of employment rather than sending them off to prison. (But his “lock ’em up” fantasies might lead to that next step.)

But what about those for whom violence is not wrong, but actually to be celebrated as a sign of strength?  Easy enough for Arendt to make fun of such losers for mistaking a capacity for violence for real power.  Those losers still can cause severe havoc in the world.  And it’s also easy to pathologize these incels, spending hours and hours “gaming,” and frustrated by their lack of access to good jobs, sexual partners, or social respect.  It remains the fact that for some people (mostly men) violence is the means to self-esteem, to showing that they are here and can make a difference in (an impact on) the world.  The recruits for para-military and state thuggery are standing by.  And, as Christopher Browning’s work has shown, just the need to go along, to be accepted as a member of a group, can facilitate violence once someone else instigates it.  Fear of ostracism from the only group that is offering one membership can be sufficient motive to participate in acts of violence in good conscience.  The point: any attempt to come to grips with violence that appeals only to its rationality or to the justifications offered to render it compatible with morality will miss the non-rational and non-moral motivations that enable much violence.  From sadism and crimes of unreflective passion to conformism and ecstatic participation in group actions, the sources of violence are multiple and defy calculation along cost/benefit lines, or in terms of what can be morally justified.

To be continued. These musings are, in large part, only the preliminaries to considering the use of violence as a tactic of resistance to established regimes.  I will take up that question of strategy in subsequent posts.

Alexandre Lefebvre’s Human Rights as a Way of Life: On Bergson’s Political Philosophy

I recently finished Alexandre Lefebvre’s Human Rights as a Way of Life (Stanford UP, 2013). It was a great read!  Maybe that’s just me in recoil from all the consciousness stuff I’ve been reading—glad to be back in more familiar territory: political philosophy.  Not just that, however.  It is just enthralling to read a closely reasoned, carefully constructed, argument.  There just are too few well-written and well-thought (if I can coin that adjective) books. 

Interestingly, when I think through what Lefebvre has to say in order to offer up the gist in this post, it’s not all that startling.  It is the care with which he makes his case that is exhilarating, not the substance (although it is hardly shabby. Just not all that startling either.)

So here’s the summary.

Bergson is a follower of Darwin. His reliance on evolutionary explanations for human phenomena (like religion and morality) is quirky because he is a vitalist.  He believes in a fundamental “life force” that drives evolution, so is prone to 1) ascribe intention to evolution and 2) think evolution has a single, dominating force (instead of resulting from a multitude of random—and unrelated—genetic mutations).

In addition, Bergson is a dualist.  He believes that there exist spiritual entities that are distinct from material ones—and that the failure to give the spiritual its due is disastrous for human beings.  Bergson quite cheerfully declares himself a “mystic” and asserts that the spiritual is ineffable even as humans have various intimations of its existence (and importance!).

How do these basic commitments on Bergson’s part play into an account of human rights?  It all stems from the paradoxes built into morality.  For Bergson, human morality is a product of evolution.  “The evolutionary function of moral obligation is to hold society together. Its function is to ‘ensure the cohesion of the group.’” (page 25; quoted passage is from Bergson).  Unlike other theorists of morals, Bergson is adamant that morality is “natural,” is produced by evolution, as opposed to something that humans add on top of evolution.  Morality is not a human contrivance that tries to counteract natural impulses; instead, morality itself is a natural impulse.  Humans are social animals, utterly dependent on social relations to stay alive and to reproduce (the Darwinian imperatives).  Morality, insofar as it makes sociality possible, is thus produced by evolution as are other human capacities essential to survival and reproduction.

The paradox comes from the fact that morality is exclusive.  Societies are “closed,” non-infinite, groupings.  One of the things essential to a society’s and its members’ survival and flourishing is protection from external threats.  Morality performs its service to life in part by distinguishing between friend (insider, fellow member) and enemy (outsider, threat, non-member). 

“Closure is essential to moral obligation because its evolutionary purpose is to ensure the cohesion of the group in the face of an adversary.  It is this feature of exclusivity that Bergson brings to the fore with the concept of the closed society.  The purpose of this concept is not to claim that this or that society is closed.  Instead, it designates a tendency toward closure on the part of all societies” (25).

For this reason, war seems inevitable—and certainly human history appears to demonstrate that war is ineradicable.  Morality is good for the survival of particular societies—but is not conducive to the survival of human beings as a whole (especially once technology has given humans the means to mass annihilation) or to the survival of individuals (even the “winning side” in a war has many of its members killed in the contest).  To put it most bluntly: human morality generates not only cooperation and fellow-feeling with insiders, but also aggression toward outsiders.  For all the sophistication of his argument, Lefebvre ends up in a very familiar place: the claim that exclusion justifies doing harm to those designated as “other,” as beyond the pale.

Human rights, then, are an attempt to counteract the tendency of morality to sanction violence.  “Human rights are . . . an effort . . . that seeks to counteract our evolved moral nature. . . . Bergson [offers] a vision not just of what human rights must protect us from (i.e., morality) but also why (i.e., because of its [morality’s] biological origins)” (54, 57). 

The standard way to address this paradox—that we need morality and that we also need something to counteract morality—depends on two planks.  The first recognizes that morality (the closed society) at least in the so-called Western world post 1700 functions most powerfully in the form of the nation-state.  Wars take place between nation-states—and the brutalities inflicted upon “enemies” have only increased since that time.  (The bombing of cities, the murder of refugees.)  Even in times of peace between nation-states, a particular state can identify certain people who live within its boundaries as “enemies within” and treat them differently and harshly in distinction from fully admitted members (citizens).

In response, there have been repeated efforts to create supra-national institutions that could rein in the aggressions of nation-states.  Such institutions have proved mostly ineffective.  When it comes to actually wielding power—and in securing the affective consent of people—the nation-state stands supreme, only minimally beholden to efforts to establish (and enforce) international law.  The institutionalization of human rights has mostly been a failure. Human rights are most fully protected when and where the state’s power has been used to uphold them.  But that’s useless in cases where it is the state itself that is abusing the human rights of some peoples living in its territory, not to mention its abuse of the human rights of enemies during wartime.

The second plank is to widen morality in such a way that it is no longer exclusive.  The relevant “in group” would be all human beings—or, as proponents of animal rights desire—all animals.  Lefebvre demonstrates convincingly that the idea of “widening the circle” to be more inclusive is a prevalent call in much of contemporary political and moral philosophy.  Human rights are meant to apply “universally” and thus stand in direct opposition to any and all distinctions that would justify treating some people (or some groups) differently from others. 

Philosophers calling for expanding the circle offer different accounts of how that might be achieved.  Basically, the Humeans call for extending sympathy outwards.  Fellow feeling for those who can suffer—humans and animals—will underwrite our extending our consideration to them.  Kantians rely on reason to bring us to the recognition that only universalism keeps us from self-contradiction.  Utilitarians ask us to admit that suffering is a wrong—and then to avoid all actions that would increase the amount of suffering in the world. 

Lefebvre’s most original contribution to such debates is to deny (forcefully) that expanding the circle is possible or adequate.  Morality, he insists, must be exclusive.  That is its whole modus operandi.  It only performs its natural function by being exclusive.  So it’s simply wrong to think it can be transformed into something non-exclusive. 

Human rights, therefore, must be something utterly different from morality, not an extension of it.  Lefebvre expresses this point by contrasting a difference in quantity with a difference in quality.  We run into Bergson’s dualism here (although I doubt whether we have to embrace that dualism in order to adopt the distinction between a difference in quantity and a difference in quality).  In any case, Bergson thinks “intelligence” deals in quantities and that we need another faculty (intuition or insight) to handle qualities.  Here’s Lefebvre’s account of Bergson’s view:

“[I]ntelligence does some things very well but not others.  It has a natural affinity with space and quantity and a natural aversion to time and quality.  More to the point, given its aptitude for quantity and number, intelligence views all forms of change in terms of (quantitative) differences of degree rather than (qualitative) differences in kind.  This includes moral change, of course.  It is no accident or simple error, therefore, which leads us to consider the evolution of morality in terms of expansion, growth, and continuous progress. . . . Intelligence is by its nature driven to picture the evolution of morality as the extension of a selfsame core (i.e., moral obligation) to more and more people” (49-50).

Bergson, then, wants to introduce an entirely different principle, one not based on moral obligation, as the underpinning of a human rights regime. Bergson wants to provide the basis for an “open society” that contrasts with closed societies that standard morality creates.  He strives to point his readers toward “a qualitatively different kind of morality, irreducible to obligation.  It [intelligence] struggles to conceive of a moral tendency that is not object attached.  And it struggles, as Bergson will come to say, to imagine a way to love that does not grow out of exclusive attachments” (50).

Before getting to a description of this “different kind of morality,” a morality of love, one other preliminary point must be made.  Bergson doubts the motivational power of reason.  He does not think that practical reason of the Kantian sort can move people to action.  Instead, he thinks morality must be a matter of habitus, of practice. 

“It is helpful to observe what Bergson has in common with an important strand of practical philosophy—call it antirationalism.  As Carl Power puts it, ‘Bergson might be said to join a counter-tradition that begins with Aristotle and includes more recent names such as Dewey, Heidegger, Wittgenstein, Bourdieu, and Taylor.  What these disparate figures share is a propensity to see the human agent . . . as a being who is immediately engaged in the world and whose understanding of self and other is first and foremost expressed in practice.’ Broadly speaking, for these thinkers moral life is not primarily a matter of concepts and principles but of concrete durable practices that integrate moral obligations into the texture of everyday life. On that view, morality is not primarily a matter of weighing the purity of one’s intentions or assessing the partiality of one’s judgments.  Certainly these can be part of moral life; but they are not its backbone.  Instead, most of the time the performance of our moral obligations is prereflexive and embedded in the habits and activities of day-today life” (57-8).

It is precisely this emphasis on “practice” that explains Lefebvre’s title: human rights as a way of life.  Only through practice, through the embedding of human rights into the fabric of daily existence, can they take up a place in our world.  The “love” that Bergson advocates must be habitual for humans, must, in a concrete way, become routine.  It’s worth quoting Lefebvre a bit more on what a reliance on “habit” means.

“With his focus on habit, Bergson . . . wants to shift the attention of moral philosophy away from its preoccupation with the rational self-present agent.  Only on rare occasions does the performance of duty involve a conscious or deliberative process.  By and large, it is automatic, second nature, and unconscious. As he says, we ordinarily ‘conform to our obligations rather than think of them.’ Hence the importance of habits, which for Bergson are the true fabric of moral life.  In fact, moral or social life . . . is nothing other than an interlocking web of habits that connect the individual to a variety of groups.  But they don’t merely join the individual to different groups, as if he or she were pre-formed.  Rather, habits constitute the very stuff of our personalities.  They are what make us into parents, professionals, citizens, and the like” (58-9).

We are in recognizably Aristotelean territory here.  Character (personality, selfhood) is created through what we do—and our doings quickly become habits.  Humans are creatures, mostly, of regularity.  Which is not entirely a good thing.  “Habit seems to favor not only passivity and acquiescence but also conformity and laziness” (59).

The would-be moral reformer, the preacher, must lead the audience to become aware of their habits and to consider whether they are desirable or not.  Bergson “repeatedly characterizes love and openness as an ‘effort.’ Love [of the kind he advocated] does not extend moral obligation and it does not follow the habits of everyday life.  It defies them” (60).

So, Bergson wants to enlist the power of habit by making this open love habitual, but he must first break through the habits that make standard closed morality the default mode for most people.

OK!  Finally, what is this open love?  How to describe it, how to experience it, how to incorporate it into one’s way of being in the world, how to make it “a way of life”?

Lefebvre cannot—and does not aim to—offer definitive answers to these questions.  The very idea (better: the very experience) of open love grows out of Bergson’s self-proclaimed “mysticism.”  Intelligence has nothing of use to say on this topic.  What Lefebvre wants to show is that “Human rights are works of love that initiate us into love” (89).  We can only proceed by way of examples—and of practices.  Examples “disclose love; they bring it into the world” (88).

As mostly practiced in contemporary society (the human rights practices and discourse most familiar to us), human rights attempt to regulate our world of closed societies, aiming to prevent (or at least mitigate) the abuses to which closed societies are prone.  Normal human rights strive to protect us from hate.

Open human rights aim not to protect, but to convert.  “Human rights are the best-placed institution for the open tendency to gain traction in the world” (89).  They offer a pathway toward a conversion to love, to taking up love as our way of life.

Lefebvre offers four examples of this way of life.  I don’t think they are meant to convince as much as meant to appeal. The first example is the person who says “yes” to the world and to existence, someone who radically affirms that this life is good and a source of joy. “In this sense, love is a disposition or a mood.  It is a way of being in the world, rather than a direct attachment to any particular thing in it” (93).

The second example is a radical indifference (i.e. making no distinctions, and hence an “open” justice), “according no preference to any of the beings in our path, in giving everyone our entire presence, and responding with precise faithfulness to the call they utter to us. . . . Yet this glance is the opposite of an insensitive glance; it is a loving glance which distinguishes, within each individual being, precisely what he or she needs: the words that touch him, and the treatment he deserves” (94 in Lefebvre; he is quoting Louis Lavelle). 

The third example comes from Deleuze’s description of the moment in Dickens’ Our Mutual Friend when the on-looking crowd is deeply invested in Rogue Riderhood’s recovery from an apparent drowning.  That crowd is rooting for the life in Riderhood, not attuned to his specific person, personality, or history.  It extends those good wishes to everything that has life, while remaining attuned to life’s manifestation in this singular instance, which provides the specific occasion for this affirmation of life.

Finally, Lefebvre considers Elizabeth Costello, the main character of J. M. Coetzee’s novel of that name.  Elizabeth refuses the “insensibility to the pain of outsiders”(97) that, for her, must accompany the complicity with the slaughter of animals that all eating of meat entails.  She opens herself up to that pain—and in the process offends any number of human beings, to the extent that she doesn’t quite feel herself part of the human race any longer. 

In summary, Lefebvre tells us that “all four portraits are preoccupied with the care of others.  Or more precisely, each presents a mode of care made possible only once love ceases to be dedicated to a specific object. [With the first example] it is radiant joy and welcome; with Lavelle it is the responsiveness of indifference; with Deleuze it is attentiveness to singularity; and with Coetzee it is empathy not bound with the group” (100).

Obviously, just how moving these examples will prove to different readers will vary.  Lefebvre is offering, in a different key admittedly, his version of the argument about where to place one’s political efforts: in reforming laws and institutions or in reforming hearts and minds.  To his credit, he refuses to make this an either/or.  We need to do both; he resists the temptation (familiar in various leftist critiques) to see the discourse and institutions of human rights as corrupt and/or positively harmful. 

But, clearly, his focus is on conversion, on change at the individual level.  He struggles (in my view) to connect his perspective to Foucault’s (and the ancients’) idea of “care for the self.”  I find this the least convincing move in his book—and I don’t think he really nails the connection he is trying to establish.  For me, even if I buy the idea of human rights as a “way of life,” that way of life has much more to do with my relation to others than with my relation to my self.  The “care” that Lefebvre focuses on in the passage quoted in the previous paragraph is not “care for the self” but “care of others.”  Both morality and love are about relations to what is beyond the self.  So I think it a mistake to try to bring them into the purview of the self.

I have undertaken to write a review of Lefebvre’s follow-up book, Liberalism as a Way of Life (Princeton UP, 2024).  I haven’t started reading it yet, but am eager to get into it since I enjoyed reading this human rights book so much.  More on Lefebvre once I do finish the new book.

Fact/Value

I noted in my last post that many twentieth-century artists aspired to an “innocent” perception of the world; they wanted to see (and sense) the world’s furniture outside of the “concepts” by which we categorize things.  We don’t know if babies enjoy such innocence in the first few months of life—or if they only perceive an undifferentiated chaos.  It is certainly true that, by six months at the latest, infants have attached names to things.  Asked to reach for the cup, the six-month-old will grasp the cup, not the plate.

If the modernist artist (I have no idea what 21st century artists are trying to do) wants to sever the tight bond between percept and concept, it has been the scientists who want to disentangle fact from value.  The locus classicus of the fact/value divide is Hume’s insistence that we cannot derive an “ought” from an “is.”  For humanists, that argument appears to doom morality to irreality, to merely being something that humans make up.  So the humanists strive to reconnect fact and value.  But, for many scientists, the firewall between fact and value is exactly what underlies science’s ability to get at the “truth” of the way things are.  Only observations and propositions (assertions) shorn of value have any chance of being “objective.”  Values introduce a “bias” into accounts of what is the case, of what pertains in the world.

Thus it has been the artists, the humanists, and the philosophers friendly to aesthetics and qualia who have argued that fact and value cannot be disentangled.  Pragmatism offers the most aggressive of these philosophical assaults on the fact/value divide.  The tack pragmatism takes in these debates is not to argue against Hume’s logic, his “demonstration” that you can’t deduce an “ought” from an “is.” 

Instead, pragmatism offers a thoroughly Darwinian account of human (and not just human) being in the world.  Every living creature is always and everywhere “evaluating” its environment.  There are no passive perceivers.  Pragmatism denies what James and Dewey both labeled “the spectator view of knowledge.”  Humans (and other animals) are not distanced from the world, looking at it from afar, and making statements about it from that position of non-involvement.  Rather, all organisms are immersed in an environment, acting upon it even as they are being acted upon by it.  The organism is, from the start, engaged in evaluating what in that environment might be of use to it and what might be a threat.  The pursuit of knowledge (“inquiry” in the pragmatist jargon) is 1) driven by this need to evaluate the environment in terms of resource/threat and 2) an active process of doing things (experiments; trial and error) that will better show if what the environment offers will serve or should be avoided.

If, in this Darwinian/pragmatist view, an organism were to encounter anything that was neutral, that had no impact one way or the other on the organism’s purposes, that thing would most likely not be noticed at all, or would quickly disappear as a subject of interest or attention.  As I mentioned in the last post, this seems a flaw in pragmatist psychology.  Humans and other animals display considerable curiosity, prodding at things to learn about them even in the absence of any obvious or direct utility.  There are, I would argue, instances of “pure” research, where the pay-off is not any discernible improvement in an organism’s ability to navigate the world.  Sometimes we just want to know something to satisfy that particular kind of itch.

So maybe the idea is that scientists aspire to that kind of purity, just as so many 20th century artists aspired to the purity of a non-referential, non-thought-laden art.  And that scientific version of the desire for purity gets connected to an epistemological claim that only such purity can guarantee the non-biased truth of the conclusions the scientist reaches.  The pragmatist will respond: 1) there is still the desire for knowledge driving your inquiry, so you have not achieved a purity that removes the human agent and her interests; and 2) the very process of inquiry, which is interactive, means that the human observer has influenced what the world displays to her observations (which is why Heisenberg’s work was so crucial to Dewey—and seemed to Dewey a confirmation of what pragmatism had been saying for thirty years before Heisenberg articulated his axioms about observation and uncertainty.)  The larger point: since action (on the part of humans and for other organisms) is motivated, and because knowledge can only be achieved through action (not passively), there is no grasping of “fact” that has not been driven by some “value” being attached to gaining (seeking out) that knowledge. 

Even if we accept this pragmatist assault on the fact/value divide, we are left with multiple problems. One is linguistic.  Hilary Putnam, in The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard UP, 2002), basically argues that there are no neutral words, at least no neutral nouns or verbs.  (I have written about Kenneth Burke’s similar argument in my book, Pragmatist Politics [University of Minnesota P, 2012].)  Every statement about the world establishes the relation of the speaker to that world (and to the people to whom the statement is addressed).  In other words, every speech act is a way of adjusting the speaker’s relation to the content and the audience of that utterance.  Speech acts, like all actions, are motivated—and thus can be linked back to what the speaker values, what the speaker strives to accomplish.  Our words are always shot through and through with values—from the start.  And those values cannot be drained from our words (or from our observations) to leave only a residue of “pure” fact.  Fact and value are intertwined from the get go—and cannot be disentangled. 

Putnam, however (like James, Dewey, and Kenneth Burke), is a realist.  Even if, as James memorably puts it, “the trail of the human serpent is over all,” the entanglement of human aspirations with observations about the world and others does not mean the non-self must forego its innings.  There is feedback.  Reality does make itself known in the ways that it offers resistance to attempts to manipulate it.  James insists that we don’t know how plastic “reality” is until we have tried to push the boundaries of what we deem possible.  But limits will be reached, will be revealed, at certain points.  Pragmatism’s techno-optimism means that James and Dewey thought that today’s limits might be overcome tomorrow.  That’s what generates the controversial pragmatist “theory of truth.”  Truth is what the experiments, the inquiries, of today have revealed.  But those truths can only be reached as a result of a process of experimentation, not passively observed, and those truths are not “final,” because future experiments may reveal new possibilities in the objects that we currently describe in some particular way.  Science is constantly upending received notions of the way things are.  If the history of science tells us anything, it should be that “certainties” continually yield to new and different accounts.  Truth is “made” through the process of inquiry—and truth is provisional.  Truth is, in Popper’s formulation, always open to disconfirmation.

In short, pragmatism destabilizes “fact” even as it proclaims “value” is ineliminable.

I have suggested that “fact” is best understood as what in the external world frustrates (or, at least, must be navigated by) desire.  Wishes are not horses.  Work must be done to accomplish some approximation of what one desires.  The point is simply that facts are not stable and that our account of facts will be the product of our interaction with them, an interaction that is driven by the desires that motivate that engagement.

What about “value”?  I have been using the term incredibly loosely.  If we desire something, then we value it.  But we usually want to distinguish between different types of value—and morality usually wants to gain a position from which it can endorse some desires and condemn others.  In short, value is a battleground, where there are constant attempts to identify what is “truly” valuable alongside attempts to banish imposters from the field.  There is economic value, eudemonic value, moral value, and health value.  So, for example, one can desire to smoke tobacco, but in terms of “value for health,” that desire will be deemed destructive. 

Any attempt to put some flesh on the bare bones term “value” will immediately run into the problem of “value for” and “pure” (or intrinsic) value.  Some values are instrumental; they are means toward a specific end.  If you want to be healthy, it is valuable not to smoke.  If you want to become a concert pianist, it is valuable to practice. 

The search for “intrinsic” values can quickly become circular—or lead to infinite regress. Is the desire to become a concert pianist “intrinsic”? It certainly seems to function as an end point, as something desired that motivates and organizes a whole set of actions.  But it is easy to ask “why” do I value becoming a concert pianist so highly?  For fame, for love of music, to develop what I seem to have talent for (since, given my inborn talents, I couldn’t become a professional baseball player)?  Do we—could we ever—reach rock bottom here? 

The Darwinians, of course, think they have hit rock bottom.  Survival to the point of being able to reproduce.  That’s the fundamental value that drives life.  The preservation of life across multiple generations.  When organisms are, from the get go, involved in “evaluation,” in assessing what in the environment is of value to them, that evaluation is in terms of what avails life.  (The phrase “wealth is what avails life” comes from a very different source: John Ruskin’s Unto this Last, his screed against classical liberalism’s utilitarian economics.) 

One problem for the Darwinians is that humans (at least, among animals) so often value things, and act in ways, that thwart or even contradict the Darwinian imperatives.  Daniel Dennett argues that such non-Darwinian desires are “parasites”; they hitch a ride on the capacities that the human organism has developed through natural selection’s overriding goal of making a creature well suited to passing on its genes.  Some parasites, Dennett writes, “surely enhance our fitness, making us more likely to have lots of descendants (e.g. methods of hygiene, child-rearing, food preparation); others are neutral—but may be good for us in other, more important regards (e.g., literacy, music, and art), and some are surely deleterious to our genetic future, but even they may be good for us in other ways that matter more to us (the techniques of birth control are an obvious example)” (Freedom Evolves, p. 177).

Whoa!  I love that Dennett is not a Darwinian fundamentalist. (In particular, it’s good to see him avoid the somersaults other Darwinians perform in their effort to reduce music and art to servants of the need to reproduce.) The Darwinian imperative does not drive all before it.  But surely it is surprising that Dennett would talk of things that “matter more to us” than the need to ensure our “genetic future.”  He has introduced a pluralism of values into a Darwinian picture that is more usually deployed to identify an overriding fundamental value.

Other candidates for a bedrock intrinsic value run into similar difficulties.  Too much human behavior simply negates each candidate.  For example, we might, with Kant, want to declare that each individual human life is sacred, an end in itself, not to be used or violated.  But every society has articulated conditions under which the killing of another human being is acceptable.  And if we attempt to find “the” value that underwrites this acceptance of killing, nothing emerges.  So it does seem that we are left with a pluralism of values.  Different humans value different things.  And that is true within a single society as well as across cultures.  Values, like facts, are in process—continually being made and re-made.  And, as with facts, there is feedback—in terms of the praise/blame provided by others, but also in terms of the self-satisfaction achieved by acting in accordance with one’s values.  Does becoming a concert pianist satisfy me?  Does it make me happy?  Does it make me respect myself?  Am I full of regrets about the missed opportunities that came with practicing five hours a day?  Would I do it all over again (that ultimate Nietzschean test)?

When it comes to entrenched values, things get even trickier.  Here’s the dilemma: we usually condemn actions that are “interested.”  We don’t trust the word of someone who is trying to sell us something.  We want independent information from that which the seller provides.  The seller has an “interest” in distorting the facts.  Now we are back to the urge to find an “objective” viewpoint.  In cases of lying, the issue is straightforward.  The interested party knows all the relevant information, but withholds some of it in order to deceive.

But what if one’s “interest” distorts one’s view of things?  What if the flaw is epistemological more than it is moral?  I see what I want to see. Confirmation bias.  My “values” dictate how I understand the circumstances within which I dwell.  My very assessment of my environment is to a large extent a product of my predilections.  Feedback is of very limited use here.  Humans seem extraordinarily impervious to feedback, able to doggedly pursue counterproductive actions for long periods of time.  In this scenario, “interest” can look a lot like what some theorists call “ideology.”  The question is how to correct for the distortions that are introduced by the interested viewer.  Isn’t there some “fact” of the matter that can settle disputes?

The despairing conclusion is that, in many instances, there is no settling of such disputes.  What would it take to convince Trumpian partisans that the 2020 election was not stolen?  Or that Covid vaccines do not cause cancer?  All the usual forms of “proof” have been unavailing.  Instead of having “fact” drive “value” out of the world as the humanists feared (that fear motivated Kant’s whole philosophy), here we have “value” driving “fact” to the wall.  A world of pluralistic values creates, it now appears, a world of pluralistic facts.  No wonder that we get a call for bolstering the bulwark of “fact.” 

As I have already made clear, I don’t think we can get back to a world of unsullied facts (or even that we were ever really there).  Our understandings of the world have always been “interested” in the ways that pragmatism identifies.  The only safeguard against untrammeled fantasy is feedback—and the 2020 stolen election narrative shows how successfully feedback can be avoided.  We have the various rhetorical moves in our toolbox—the presentation of evidence, the making of arguments, the outlining of consequences, emotional appeals to loyalties, sympathies, and indignation—for getting people to change their minds. These techniques are the social forms of feedback that go along with the impersonal feedback provided by the world at large.  But that’s it.  There is no definitive clincher, no knock-down argument or proof that will get everyone to agree.  It’s pluralism all the way down.

Here’s something else that is deeply troubling—and about which I don’t know exactly where I stand.  Is there any difference between “interest” and moral value?  Usually the two are portrayed as opposed.  Morality tries to get individuals to view things from a non-personal point of view (Thomas Nagel’s famous “view from nowhere” or Rawls’ “veil of ignorance”).  “Interest” is linked to what would personally benefit me—with nary a care for how it might harm you.  Some philosophers try to bridge this gap with the concept of “enlightened self-interest.”  The idea is that social interactions are an iterative game; I am in a long-term relation with you, so cheating you now may pay off in the short-term, but actually screws up the possibility of a sustained, and mutually beneficial, relationship over the long haul.  So it is not really in my interest to harm you in this moment.  Morality, then, becomes prudential; it is the wisest thing to do given that we must live together.  Humans are social animals and the basic rules of morality (which encode various forms of consideration of the other) make social relations much better for all involved.
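
(A toy simulation can make the iterated-game point concrete. The sketch below is entirely my own illustration, not anything argued in these posts; the payoff numbers and strategy names are standard textbook assumptions for a repeated prisoner's dilemma.)

```python
# A minimal iterated prisoner's dilemma, sketched only to illustrate the
# "enlightened self-interest" point above. Payoffs are hypothetical but
# conventional: cheating wins a single round, cooperation wins the long run.

PAYOFFS = {  # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the partner's last move."""
    return "cooperate" if not history else history[-1][1]

def always_defect(history):
    """Cheat every round: the biggest one-round payoff."""
    return "defect"

def always_cooperate(history):
    """Never cheat."""
    return "cooperate"

def play(strategy_a, strategy_b, rounds=20):
    """Run the repeated game and return cumulative scores for both players."""
    history_a, history_b = [], []  # each entry: (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print("defector vs. reciprocator:", play(always_defect, tit_for_tat))       # (24, 19)
print("cooperator vs. reciprocator:", play(always_cooperate, tit_for_tat))  # (60, 60)
```

Over twenty rounds the constant defector ends with 24 points to the reciprocating partner's 19, while the steady cooperator ends with 60: cheating wins the moment and loses the relationship, which is the prudential case for enlightened self-interest.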

In this scenario, then, “interest” and “moral value” are the same if (and only if) the individual takes a sufficiently “long view” of his interest.  The individual’s “interests” are what he values—and among the things he values is acting in ways his society deems “moral.”  There will remain a tension between what seems desirable in the moment and the longer term interest served by adhering to morality’s attempt to instill the “long view,” but that tension does not negate the idea that the individual is acting in his “interest.”

A more stringent view, one that would drive a deeper wedge between morality and interest, would hold that morality always calls for some degree of self-abnegation.  Morality requires altruism or sacrifice, the voluntary surrender of things I desire to others.  I must act against selfishness, against self-interest, in order to become truly moral.  This is the view of morality that says it entails the curbing of desire.  I must renounce some of my desires to be moral.  Thus, morality is not merely prudential, not just the most winning strategy for self-interest in the long run.  Morality introduces a set of values that are in contradiction with the values that characterize self-interest.  Morality brings along with it prohibitions, not just recommendations of the more prudent way to handle relations with one’s fellows.  It’s not simply practice justice if you want peace.  It’s practice justice even if there is no pay-off, even if you are only met with ingratitude and resentment.

To go back to the Darwinian/pragmatist basic scenario.  We have an organism embedded in an environment.  That organism is always involved in evaluating that environment in terms of its own needs and interests.  Thus values are there from the very start and inextricably involved in the perception of facts.  The question is whether we can derive the existence of morality in human societies from the needs that arise from the fact of human sociality.  That’s the Darwinian account of morality offered by Philip Kitcher among others, which understands morality in terms of its beneficial consequences for the preservation and reproduction of human life.  Morality in that account aligns with the fundamental interest identified in Darwinian theory.

Or is morality a Dennett-like parasite?  An intervention into the Darwinian scheme that moves away from a strict pursuit of interest, of what enhances the individual’s survival and ability to reproduce. 

To repeat: I don’t know which alternative I believe.  And am going to leave it there for now.

Philosophy and How One Acts

A friend with whom I have been reading various philosophical attempts to come to terms with what consciousness is and does writes to me about “illusionism,” the claim that we do not have selves. We are simply mistaken in thinking the self exists. The basic argument is the classic empiricist case against “substance.” There are various phenomena (let’s call them “mental states” in this case), but no stuff, no thing, no self, to which those mental states adhere, or in which they are collected. Thomas Metzinger is one philosopher who holds this position and in an interview tells us that his position has no experiential consequences. It is not clear to me whether Metzinger thinks (in a Nietzschean way) that the self is an unavoidable illusion or if he thinks that all the phenomena we attribute to the self would just continue to be experienced in exactly the same way even if we dispensed with the notion (illusion) of the self. In either case, accepting or denying Metzinger’s position changes nothing. Belief or non-belief in the self is not a “difference that makes a difference,” to recall William James’s formula in the first chapter of his book, Pragmatism.

The issue, then, seems to be what motivates a certain kind of intellectual restlessness, a desire to describe the world (the terms of existence) in ways that “get it right”–especially if the motive does not seem to be any effect on actual behavior. It’s “pure” theory, abstracted from any consequences in how one goes about the actualities of daily life.

There does exist, for some people, a certain kind of restless questioning.  I have had a small number of close friends in my life, and what they share is that kind of restlessness.  A desire to come up with coherent accounts of why things are the way they are, especially of why people act the ways they do. People are endlessly surprising and fascinating. Accounting for them leads to speculations that are constantly being revised and restated because each account seems, in one way or another, to fail to “get things right.”  There is always the need for another round of words, of efforts to grasp the “why” and “how” of things.  Most people, in my experience, don’t feel this need to push at things.  I was always trying to get my students to push their thinking on to the next twist—and rarely succeeded in getting them to do so. And for myself this restless, endless inquiry generates a constant stream of words, since each inadequate account means a new effort to try to state it more accurately this time.

Clearly, since I tried to get my students to do this, I think of such relentless questioning as an intellectual virtue. But what is it good for?  I take that to be the core issue of your long email to me.  And I don’t have an answer.  Where id is, ego shall be.  But it seems very clear that being able to articulate one’s habitual ways of (for example) relating to one’s lover, to know what triggers anger or sadness or neediness, does little (if anything) to change the established patterns.  Understanding (even if there were any way to show that the understanding was actually accurate) doesn’t yield much in the way of behavioral results.

This gets to your comment that if people really believed Darwin was right (as many people do believe), then they wouldn't eat animals. William James came to believe that we have our convictions first—and then invent the intellectual accounts/theories that we say justify the convictions. In other words, we mistake the causal sequence. We take our convictions to be the effect of our theories, when really the theories are the effect of our convictions. Nietzsche was prone to say the very same thing.

One way to say this: we have Darwin, but we will use him to justify exactly opposite behaviors. You say if we believed Darwin we wouldn't eat animals. I assume that the logic is that Darwin reveals animals as our kin, so eating them is a kind of cannibalism. We don't eat dogs because they feel "too close" to us; that feeling should be extended to all animals, not just fellow humans and domestic pets. (The French eat horse meat although Americans won't.) But many people use Darwin to rationalize just the opposite. We humans evolved as protein-seeking omnivores, and we took to domesticating the animals we eat just as we developed agriculture to grow the plants we eat. Even if we argue that domestication and agriculture were disasters, proponents of so-called "paleo diets" include meat eating in their attempt to get back to something thought basic to our evolved requirements. So even if Darwin is absolutely right about how life—and specifically human life—emerged, people will use the content of his theory to justify completely contradictory behaviors.

This analysis, of course, raises two questions. 1) What is the cause of our convictions if it is not some set of articulable beliefs about how the world is? James's only answer is "temperament," an in-built sensibility, a predilection to see the world in a certain way. (Another book I have just finished reading, Kevin Mitchell's Free Agents [Princeton UP, 2023], says about 50% of our personality is genetically determined and that less than 10% is derived from family environment. Mitchell has an earlier book, titled Innate [Princeton UP, 2018], where he goes into detail about how such a claim is supported.) Nietzsche, in some places, posits an in-built will to power. All the articulations and intellectualisms are just after-the-fact rationalizations. In any case, "temperament" is obviously no answer at all. We do what we do because we are who we are—and how we got to be who we are is a black box. Try your damnedest; it's just about impossible to make sure your child ends up heterosexual or with some other set of desires.

2) So why are James and Nietzsche still pursuing an articulated account of "how it really works"? Is there no consequence at all to "getting it right"? Shouldn't their theories also be understood as just another set of "after the fact" rationalizations? In other words, reason is always late to the party—which suggests that consciousness is not essential to behavior, just an after-effect.

That last statement, of course, is the conclusion put forward by the famous Libet experiments. The ones that say the brain initiates the hand movement milliseconds before we consciously order our hand to move. Both Dennett (in Freedom Evolves [Penguin, 2003]) and Mitchell (in Free Agents) have to claim the Libet experiment is faulty in order to save any causal power for consciousness. For the two of them, who want to show that humans actually possess free will, consciousness must be given a role in the unfolding of action. There has to be a moment of deliberation, of choosing between options—and that choosing is guided by reason (by an evaluation of the options and a decision made between those options) and beliefs (some picture of how the world really is). I know, from experience, that I have trouble sleeping if I drink coffee after 2pm. I reason that I should not drink coffee after 2pm if I want to sleep. So I refrain from doing so. A belief about a fact that is connected to a reasoned account of a causal sequence and a desire to have one thing happen rather than another: presto! I choose to do one thing rather than another based on that belief and those reasons. To make that evaluation certainly seems to require consciousness—a consciousness that observes patterns, that remembers singular experiences that can be assembled into those patterns, that can have positive forward-looking desires for some outcomes rather than others (hence evaluation of various possible bodily and worldly states of affairs), and that can reason about what courses of action are most likely to bring those states of affairs into being. (In short, the classical account of "rationality" and of "reason-based action.")

If this kind of feedback loop actually exists, if I can learn that some actions produce desirable results more dependably than others, then the question becomes (it seems to me): at what level of abstraction does “knowledge” no longer connect to action?  Here’s what I am struggling to see.  Learned behavior, directed by experiences that provide concrete feedback, seems fairly easy to describe in terms of very concrete instances.  But what happens when we get to belief in God—or Darwin?  With belief in God, we seem to see that humans can persist in beliefs without getting any positive feedback at all.  I believe in a loving god even as my child dies of cancer and all my prayers for divine intervention yield no result.  (The classic overdramatized example.)  Faced with this fact, many theologians will just say: it’s not reasonable, so your models of reasoned behavior are simply irrelevant at this point.  A form of dualism.  There’s another belief-to-action loop at play.  Another black box.

On Darwin it seems to me a question of intervention. Natural selection exists entirely apart from human action/intention/desire, etc. It does its thing whether there are humans in the world or not. That humans can "discover" the fact of natural selection's existence and give detailed accounts of how it works is neither here nor there to natural selection itself. This is science (in one idealized version of what science is): an accurate description of how nature works. The next step seems to be: is there any way for humans to intervene in natural processes to either 1) change them (as when we try to combat cancer) or 2) harness the energies or processes of nature to serve specific human ends? (This is separate from how human actions inadvertently, unintentionally, alter natural processes–as is the case in global warming. I am currently reading Kim Stanley Robinson's The Ministry for the Future–and will discuss it in a future post.)

In both cases (i.e., intentionally changing a natural process or harnessing the energies of a natural process toward a specifically human-introduced end), what's driving the human behavior are desires for certain outcomes: health in the case of combating cancer, any number of possible desires in the case of harnessing. I don't think the scientific explanation has any direct relation to those desires. In other words, nothing about the Darwinian account of how the world is dictates how one should desire to stand in relation to that world. Darwin's theory of evolution, I am saying, has no obvious, necessary, or univocal ethical consequences. It does not tell us how to live—even if certain Darwinian fundamentalists will bloviate about "survival of the fittest" and gender roles in hunter-gatherer societies.

I keep trying to avoid it, but I am a dualist when it comes to ethics.  The non-human universe has no values, no meanings, no clues about how humans should live.  Hurricanes are facts, just like evolution is a fact.  As facts, they inform us about the world we inhabit—and mark out certain limits that it is very, very useful for us to know.  But the use we put them to is entirely human generated, just as the uses the mosquito puts his world to are entirely mosquito driven.  To ignore the facts, the limits, can be disastrous, but pushing against them, trying to alter them, is also a possibility.  And the scientific knowledge can be very useful in indicating which kinds of intervention will prove effective.  But it has nothing to say about what kinds of intervention are desirable.

I am deeply uncomfortable in reaching this position. Like most of the philosophers I read, I do not want to be a dualist. I want to be a naturalist—where "naturalism" means that everything that exists is a product of natural forces. Hence all the efforts out there to offer an evolutionary account of "consciousness" (thus avoiding any kind of Cartesian dualism) and the complementary efforts to provide an evolutionary account of morality (for example, Philip Kitcher, The Ethical Project [Harvard UP, 2011]). I am down with the idea that morality is an evolutionary product—i.e., that it develops out of the history and "ecology" of humans as social animals. But there still seems to me a discontinuity between the morality that humans have developed and the lack of morality of cancer cells, gravity, hurricanes, photosynthesis, and the laws of thermodynamics. Similarly, there seems to me a gap between the non-consciousness of rocks and the consciousness of living beings. So I can't get down with panpsychism even if I am open to evolutionary accounts of the emergence of consciousness from more primitive forms to full-blown self-consciousness.

Of course, some Darwinians don't see a problem. Evolution does provide all living creatures with a purpose—to survive—and a meaning—to pass on one's genes. Success in life (satisfaction) derives from those two master motives—and morality could be derived from serving those two motives. Human sociality is a product of those motives (driven in particular by the long immaturity, the non-self-sustaining condition, of human children)—and morality is just the set of rules that makes sociality tenable. So the theory of evolution gives us morality along with an account of how things are. The fact/value gap overcome. How to square this picture of evolution with its randomness, its not having any end state in view, is unclear. The problem of attributing purposes to natural selection, of personifying it, has bedeviled evolutionary theory from the start.

For Dennett, if I am reading him correctly, the cross-over point is "culture"—and, more specifically, language. Language provides a storage device, a way of accumulating knowledge of how things work and of successful ways of coping in this world. Culture is a natural product, but once in place it offers a vantage point for reflection upon and intervention in natural processes. Humans are the unnatural animal, the ones who can perversely deviate from the two master motives of evolution (survival and procreation) even as they strive to submit nature to their whims. It's an old theme: humans appear more free from natural drivers, but even as that freedom is a source of their pride and glory, it often is the cause of their downfall. (Hubris, anyone?) Humans are not content with the natural order as they find it. They constantly try to change it—with sometimes marvelous, sometimes disastrous, results.

But that only returns us to the mystery of where this restless desire to revise the very terms of existence comes from.  To go back to James and Nietzsche: it doesn’t seem like our theories, our abstract reasonings and philosophies, are what generate the behavior.  Instead, the restlessness comes first—and the philosophizing comes after as a way of explaining the actions.  See, the philosophers say, the world is this particular way, so it makes sense for me to behave in this specific way.  But, says James, the inclination to behave that way came first—and then the philosophy was tailored to match. 

So, to end this overlong wandering, back where I began. Bertrand Russell (in his A History of Western Philosophy) said that Darwin's theory is the perfect expression of rapacious capitalism—and thus it is no surprise that it was devised during the heyday of laissez-faire. That analysis troubles me because it offers a plausible suspicion of Darwin's theory along the William James line. The theory just says the "world is this way" in a manner that justifies the British empire and British capitalism in 1860. But I really do believe Darwin is right, that he has not just transposed a capitalist world view into nature. I am, however, having trouble squaring this circle. That is, I can't say how much our philosophizing, our theories, just offer abstract versions of our pre-existing predilections—and how much those theories offer us genuine insights about the world we inhabit, insights that will then affect our behavior on the ground. A very long-winded way of saying I can't come up with a good answer to the questions your email posed.