Tag: Darwin

Fact/Value

I noted in my last post that many twentieth-century artists aspired to an “innocent” perception of the world; they wanted to see (and sense) the world’s furniture outside of the “concepts” by which we categorize things.  We don’t know if babies enjoy such innocence in the first few months of life—or if they only perceive an undifferentiated chaos.  It is certainly true that, by six months at the latest, infants have attached names to things.  Asked to reach for the cup, the six-month-old will grasp the cup, not the plate.

If the modernist artist (I have no idea what 21st century artists are trying to do) wants to sever the tight bond between percept and concept, it is the scientists who have wanted to disentangle fact from value.  The locus classicus of the fact/value divide is Hume’s insistence that we cannot derive an “ought” from an “is.”  For humanists, that argument appears to doom morality to irreality, to merely being something that humans make up.  So the humanists strive to reconnect fact and value.  But, for many scientists, the firewall between fact and value is exactly what underlies science’s ability to get at the “truth” of the way things are.  Only observations and propositions (assertions) shorn of value have any chance of being “objective.”  Values introduce a “bias” into accounts of what is the case, of what obtains in the world.

Thus it has been artists, the humanists, and philosophers friendly to aesthetics and qualia who have argued that fact and value cannot be disentangled.  Pragmatism offers the most aggressive of these philosophical assaults on the fact/value divide.  The tack pragmatism takes in these debates is not to argue against Hume’s logic, his “demonstration” that you can’t deduce an “ought” from an “is.”

Instead, pragmatism offers a thoroughly Darwinian account of human (and not just human) being in the world.  Every living creature is always and everywhere “evaluating” its environment.  There are no passive perceivers.  Pragmatism denies what James and Dewey both labeled “the spectator view of knowledge.”  Humans (and other animals) are not distanced from the world, looking at it from afar, and making statements about it from that position of non-involvement.  Rather, all organisms are immersed in an environment, acting upon it even as they are being acted upon by it.  The organism is, from the start, engaged in evaluating what in that environment might be of use to it and what might be a threat.  The pursuit of knowledge (“inquiry” in the pragmatist jargon) is 1) driven by this need to evaluate the environment in terms of resource/threat and 2) an active process of doing things (experiments; trial and error) that will better show if what the environment offers will serve or should be avoided.

If, in this Darwinian/pragmatist view, an organism were to encounter anything that was neutral, that had no impact one way or the other on the organism’s purposes, that thing would most likely not be noticed at all, or would quickly disappear as a subject of interest or attention.  As I mentioned in the last post, this seems a flaw in pragmatist psychology.  Humans and other animals display considerable curiosity, prodding at things to learn about them even in the absence of any obvious or direct utility.  There are, I would argue, instances of “pure” research, where the pay-off is not any discernible improvement in an organism’s ability to navigate the world.  Sometimes we just want to know something to satisfy that particular kind of itch.

So maybe the idea is that scientists aspire to that kind of purity, just as so many 20th century artists aspired to the purity of a non-referential, non-thought-laden art.  And that scientific version of the desire for purity gets connected to an epistemological claim that only such purity can guarantee the non-biased truth of the conclusions the scientist reaches.  The pragmatist will respond: 1) there is still the desire for knowledge driving your inquiry, so you have not achieved a purity that removes the human agent and her interests; and 2) the very process of inquiry, which is interactive, means that the human observer has influenced what the world displays to her observations (which is why Heisenberg’s work was so crucial to Dewey—and seemed to Dewey a confirmation of what pragmatism had been saying for thirty years before Heisenberg articulated his axioms about observation and uncertainty).  The larger point: since action (on the part of humans and other organisms) is motivated, and because knowledge can only be achieved through action (not passively), there is no grasping of “fact” that has not been driven by some “value” being attached to gaining (seeking out) that knowledge.

Even if we accept this pragmatist assault on the fact/value divide, we are left with multiple problems. One is linguistic.  Hilary Putnam, in The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard UP, 2002), basically argues that there are no neutral words, at least no neutral nouns or verbs.  (I have written about Kenneth Burke’s similar argument in my book, Pragmatist Politics [University of Minnesota P, 2012].)  Every statement about the world establishes the relation of the speaker to that world (and to the people to whom the statement is addressed).  In other words, every speech act is a way of adjusting the speaker’s relation to the content and the audience of that utterance.  Speech acts, like all actions, are motivated—and thus can be linked back to what the speaker values, what the speaker strives to accomplish.  Our words are always shot through and through with values—from the start.  And those values cannot be drained from our words (or from our observations) to leave only a residue of “pure” fact.  Fact and value are intertwined from the get-go—and cannot be disentangled.

Putnam, however (like James, Dewey, and Kenneth Burke), is a realist.  Even if, as James memorably puts it, “the trail of the human serpent is over all,” the entanglement of human aspirations with observations about the world and others does not mean the non-self must forgo its innings.  There is feedback.  Reality does make itself known in the ways that it offers resistance to attempts to manipulate it.  James insists that we don’t know how plastic “reality” is until we have tried to push the boundaries of what we deem possible.  But limits will be reached, will be revealed, at certain points.  Pragmatism’s techno-optimism means that James and Dewey thought that today’s limits might be overcome tomorrow.  That’s what generates the controversial pragmatist “theory of truth.”  Truth is what the experiments, the inquiries, of today have revealed.  But those truths can only be reached as a result of a process of experimentation, not passively observed, and those truths are not “final,” because future experiments may reveal new possibilities in the objects that we currently describe in some particular way.  Science is constantly upending received notions of the way things are.  If the history of science tells us anything, it should be that “certainties” continually yield to new and different accounts.  Truth is “made” through the process of inquiry—and truth is provisional.  Truth is, in Popper’s formulation, always open to disconfirmation.

In short, pragmatism destabilizes “fact” even as it proclaims “value” is ineliminable.

I have suggested that “fact” is best understood as what in the external world frustrates (or, at least, must be navigated by) desire.  Wishes are not horses.  Work must be done to accomplish some approximation of what one desires.  The point is simply that facts are not stable and that our account of facts will be the product of our interaction with them, an interaction that is driven by the desires that motivate that engagement.

What about “value”?  I have been using the term incredibly loosely.  If we desire something, then we value it.  But we usually want to distinguish between different types of value—and morality usually wants to gain a position from which it can endorse some desires and condemn others.  In short, value is a battleground, where there are constant attempts to identify what is “truly” valuable alongside attempts to banish imposters from the field.  There is economic value, eudemonic value, moral value, and health value.  So, for example, one can desire to smoke tobacco, but in terms of “value for health,” that desire will be deemed destructive. 

Any attempt to put some flesh on the bare bones term “value” will immediately run into the problem of “value for” and “pure” (or intrinsic) value.  Some values are instrumental; they are means toward a specific end.  If you want to be healthy, it is valuable not to smoke.  If you want to become a concert pianist, it is valuable to practice. 

The search for “intrinsic” values can quickly become circular—or lead to infinite regress. Is the desire to become a concert pianist “intrinsic”? It certainly seems to function as an end point, as something desired that motivates and organizes a whole set of actions.  But it is easy to ask: why do I value becoming a concert pianist so highly?  For fame, for love of music, to develop what I seem to have talent for (since, given my inborn talents, I couldn’t become a professional baseball player)?  Do we—could we ever—reach rock bottom here?

The Darwinians, of course, think they have hit rock bottom.  Survival to the point of being able to reproduce.  That’s the fundamental value that drives life.  The preservation of life across multiple generations.  When organisms are, from the get-go, involved in “evaluation,” in assessing what in the environment is of value to them, that evaluation is in terms of what avails life.  (The phrase “wealth is what avails life” comes from a very different source: John Ruskin’s Unto this Last, his screed against classical liberalism’s utilitarian economics.)

One problem for the Darwinians is that humans (at least, among animals) so often value things, and act in ways, that thwart or even contradict the Darwinian imperatives.  Daniel Dennett argues that such non-Darwinian desires are “parasites”; they hitch a ride on the capacities that the human organism has developed through natural selection’s overriding goal of making a creature well suited to passing on its genes.  Some parasites, Dennett writes, “surely enhance our fitness, making us more likely to have lots of descendants (e.g. methods of hygiene, child-rearing, food preparation); others are neutral—but may be good for us in other, more important regards (e.g., literacy, music, and art), and some are surely deleterious to our genetic future, but even they may be good for us in other ways that matter more to us (the techniques of birth control are an obvious example)” (Freedom Evolves, p. 177).

Whoa!  I love that Dennett is not a Darwinian fundamentalist. (In particular, it’s good to see him avoid the somersaults other Darwinians perform in their effort to reduce music and art to servants of the need to reproduce.) The Darwinian imperative does not drive all before it.  But surely it is surprising that Dennett would talk of things that “matter more to us” than the need to ensure our “genetic future.”  He has introduced a pluralism of values into a Darwinian picture that is more usually deployed to identify an overriding fundamental value.

Other candidates for a bedrock intrinsic value run into similar difficulties.  Too much human behavior simply negates each candidate.  For example, we might, with Kant, want to declare that each individual human life is sacred, an end in itself, not to be used or violated.  But every society has articulated conditions under which the killing of another human being is acceptable.  And if we attempt to find “the” value that underwrites this acceptance of killing, nothing emerges.  So it does seem that we are left with a pluralism of values.  Different humans value different things.  And that is true within a single society as well as across cultures.  Values, like facts, are in process—continually being made and re-made.  And, as with facts, there is feedback—in terms of the praise/blame provided by others, but also in terms of the self-satisfaction achieved by acting in accordance with one’s values.  Does becoming a concert pianist satisfy me?  Does it make me happy?  Does it make me respect myself?  Am I full of regrets about the missed opportunities that came with practicing five hours a day?  Would I do it all over again (that ultimate Nietzschean test)?

When it comes to entrenched values, things get even trickier.  Here’s the dilemma: we usually condemn actions that are “interested.”  We don’t trust the word of someone who is trying to sell us something.  We want information independent of that which the seller provides.  The seller has an “interest” in distorting the facts.  Now we are back to the urge to find an “objective” viewpoint.  In cases of lying, the issue is straightforward.  The interested party knows all the relevant information, but withholds some of it in order to deceive.

But what if one’s “interest” distorts one’s view of things?  What if the flaw is epistemological more than it is moral?  I see what I want to see. Confirmation bias.  My “values” dictate how I understand the circumstances within which I dwell.  My very assessment of my environment is to a large extent a product of my predilections.  Feedback is of very limited use here.  Humans seem extraordinarily impervious to feedback, able to doggedly pursue counterproductive actions for long periods of time.  In this scenario, “interest” can look a lot like what some theorists call “ideology.”  The question is how to correct for the distortions that are introduced by the interested viewer.  Isn’t there some “fact” of the matter that can settle disputes?

The despairing conclusion is that, in many instances, there is no settling of such disputes.  What would it take to convince Trumpian partisans that the 2020 election was not stolen?  Or that Covid vaccines do not cause cancer?  All the usual forms of “proof” have been unavailing.  Instead of having “fact” drive “value” out of the world as the humanists feared (that fear motivated Kant’s whole philosophy), here we have “value” driving “fact” to the wall.  A world of pluralistic values creates, it now appears, a world of pluralistic facts.  No wonder that we get a call for bolstering the bulwark of “fact.”

As I have already made clear, I don’t think we can get back to a world of unsullied facts (or even that we were ever really there).  Our understandings of the world have always been “interested” in the ways that pragmatism identifies.  The only safeguard against untrammeled fantasy is feedback—and the 2020 stolen election narrative shows how successfully feedback can be avoided.  We have the various rhetorical moves in our toolbox—the presentation of evidence, the making of arguments, the outlining of consequences, emotional appeals to loyalties, sympathies, and indignation—for getting people to change their minds. These techniques are the social forms of feedback that go along with the impersonal feedback provided by the world at large.  But that’s it.  There is no definitive clincher, no knock-down argument or proof that will get everyone to agree.  It’s pluralism all the way down.

Here’s something else that is deeply troubling—and about which I don’t know exactly where I stand.  Is there any difference between “interest” and moral value?  Usually the two are portrayed as opposed.  Morality tries to get individuals to view things from a non-personal point of view (Thomas Nagel’s famous “view from nowhere” or Rawls’ “veil of ignorance”).  “Interest” is linked to what would personally benefit me—with nary a care for how it might harm you.  Some philosophers try to bridge this gap with the concept of “enlightened self-interest.”  The idea is that social interactions are an iterative game; I am in a long-term relation with you, so cheating you now may pay off in the short term, but actually screws up the possibility of a sustained, and mutually beneficial, relationship over the long haul.  So it is not really in my interest to harm you in this moment.  Morality, then, becomes prudential; it is the wisest thing to do given that we must live together.  Humans are social animals and the basic rules of morality (which encode various forms of consideration of the other) make social relations much better for all involved.

In this scenario, then, “interest” and “moral value” are the same if (and only if) the individual takes a sufficiently “long view” of his interest.  The individual’s “interests” are what he values—and among the things he values is acting in ways his society deems “moral.”  There will remain a tension between what seems desirable in the moment and the longer term interest served by adhering to morality’s attempt to instill the “long view,” but that tension does not negate the idea that the individual is acting in his “interest.”

A more stringent view, one that would drive a deeper wedge between morality and interest, would hold that morality always calls for some degree of self-abnegation.  Morality requires altruism or sacrifice, the voluntary surrender of things I desire to others.  I must act against selfishness, against self-interest, in order to become truly moral.  This is the view of morality that says it entails the curbing of desire.  I must renounce some of my desires to be moral.  Thus, morality is not merely prudential, not just the most winning strategy for self-interest in the long run.  Morality introduces a set of values that are in contradiction with the values that characterize self-interest.  Morality brings along with it prohibitions, not just recommendations of the more prudent way to handle relations with one’s fellows.  It’s not simply “practice justice if you want peace.”  It’s “practice justice even if there is no pay-off, even if you are only met with ingratitude and resentment.”

To go back to the Darwinian/pragmatist basic scenario.  We have an organism embedded in an environment.  That organism is always involved in evaluating that environment in terms of its own needs and interests.  Thus values are there from the very start and inextricably involved in the perception of facts.  The question is whether we can derive the existence of morality in human societies from the needs that arise from the fact of human sociality.  That’s the Darwinian account of morality offered by Philip Kitcher among others, which understands morality in terms of its beneficial consequences for the preservation and reproduction of human life.  Morality in that account aligns with the fundamental interest identified in Darwinian theory.

Or is morality a Dennett-like parasite?  An intervention into the Darwinian scheme that moves away from a strict pursuit of interest, of what enhances the individual’s survival and ability to reproduce. 

To repeat: I don’t know which alternative I believe.  And am going to leave it there for now.

Percept/Concept

I tried to write a post on the distinction between cognitive and non-cognitive and got completely tangled up.  So, instead, I am taking a step backward and addressing the relation between percept and concept, where I feel on surer ground.  I will follow up this post with another on fact/value.  And then, with those two pairings sorted out, I may be able to say something coherent about the cognitive/non-cognitive pairing.

So here goes.  A percept is what is offered to thought by one of the five senses.  I see or smell something.  The stimulus for the percept is, in most but not all cases, something external to myself. Let’s stick to perception of external things for the moment.  I see a tree, or smell a rose, or hear the wind whistling through the trees.  I have what Hume calls “an impression.”

I have always wanted to follow the lead of J. L. Austin in his Sense and Sensibilia.  In that little book, Austin takes on the empiricist tradition that has insisted, since Locke, that one forms a “representation” or an “idea” (that is the term Locke uses) of the perceived thing. (In the philosophical tradition, this gets called “the way of ideas.”) In other words, there is an intermediary step.  One perceives something, then forms an “idea” of it, and then is able to name, think about, or otherwise manipulate that idea.  The powers of thought and reflection depend upon this translation of the impression, of the percept, into an idea (some sort of mental entity).  Austin, to my mind, does a good job of destroying that empiricist account, opting instead for direct perception, dispensing with the intermediary step of forming an idea, and thus with any appeal to some kind of “mental state,” to understand perception.

But Kevin Mitchell in Free Agents (Princeton UP, 2023) makes a fairly compelling case for raw percepts being transformed into “representations.”  First, there are the differences in perceptual capabilities from one species to another, not to mention differences among members of the same species.  If I am more far-sighted than you, I will see something different from you.  True, that doesn’t necessarily entail indirection as contrasted to direct perception.  But it does mean that the “thing itself” (the external stimulus) does not present itself in exactly the same guise to every perceiving being.  What is perceived is a co-production, created out of the interaction between perceiver and perceived.  There is no “pure” perception.  Perception is always an act that is influenced by the sensory equipment possessed by the perceiver along with the qualities of the thing being perceived. Descriptions of how human sight works make it clear how much “work” is done upon the raw materials of perception before the “thing” is actually seen. And, of course, we know that there are colors that the color-blind cannot perceive and noises that are in most cases beyond human perceptual notice.

Second, the experiences of both memory and language speak to the existence of “representations.”  We are able to think about a perceived thing even in its absence.  To say the word “elephant” is to bring an elephant into the room even when that elephant is not physically present.  Similarly, memory represents to us things that are absent.  Thus, even if we deny that the perception of present things has an intermediary step of transforming the percept into a “representation,” it seems indubitable that we then “store” the immediate impressions in the form of representations that can be called to mind after the moment of direct impression. 

Finally, the existence of representations, of mental analogues to what has been experienced in perception, opens the door for imagination and reflection.  I can play around with what perception has offered once I have a mental representation of it.  I can, in short, think about it.  The sheer weight of facticity is sidestepped once I am inside my head instead of in direct contact with the world.  A space, a distance, is opened up between perceiver and perceived that offers the opportunity to explore options, to consider possible actions upon, manipulations of, what the world offers.  Representation provides an ability to step back from the sensory manifold and take stock.

So it would seem that Austin’s appealing attempt to dispense with the elaborate machinery of empiricist psychology won’t fly.  As accounts of how human vision works show, too much is going on for a “direct” account of perception to be true to how perception actually operates. Stimuli are “processed” before being registered, not directly apprehended.

So the next issue is what “registering” or “apprehending” consist of.  But first a short digression.  We typically think of perception as the encounter with external things through one of the five senses.  But we can also perceive internal states, like a headache or sore muscle.  In those cases, however, perception does not seem to be tied to one of the five senses, but to some sort of ability to monitor one’s internal states.  Pain and pleasure are the crudest terms for the signals that trigger an awareness of internal states.  More broadly, I think it fair to say that the emotions in their full complex panoply are the markers of internal well-being (or its opposite or the many way stations between absolute euphoria and abject despair).  Emotions are both produced by the body (sometimes in relation to external stimuli, sometimes in relation to internal stimuli) and serve as the signal for self-conscious registering of one’s current states.  It’s as if a tree were not just a tree, but also a signal of “tree-ness.”  Anger is both the fact of anger and a signal to the self of its state of mind in response to some stimuli.  Certain things in the world or some internal state triggers an emotion—and then the emotion offers a path to self-awareness.  So there appears to be an “internal sense capacity,” a way of monitoring internal states and “apprehending” them that is parallel to the ways the five traditional senses provide for apprehending the external world.

What, then, does it mean to “apprehend” something once the senses have provided the raw materials of an encounter with that thing?  Following Kant, apprehension requires a “determinate judgment.”  The percept is registered by the self when the percept is conceptualized.  Percept must become concept in order to be fully received.  To be concrete: I see the various visual stimuli that the tree offers, but I don’t apprehend the tree until I subsume this particular instance of a tree into the general category/concept “tree.”  I “recognize” the tree as a tree when I declare “that’s a tree.”  The tree in itself, standing there in the forest, does not know itself as a tree.  The concept “tree” is an artifact of human language and human culture.  Percepts only become occasions for knowledge when they are married to concepts.  Pure, non-conceptualized, percepts are just raw material—and cannot be used by human thought.  In other words, back to the representation notion.  Until what perception offers is transformed into a representation, it is unavailable for human apprehension, for being taken up by the human subject as part of its knowledge of the world. (Of course, famously in Kant, this yields the distinction between our representations and the “thing in itself.” The cost of “the way of ideas”—the cost that Austin was trying to overcome—is this gap in our knowledge of the world, our inability to see things independently of the limitations of human perceptual and conceptual equipment. Science attempts to overcome these limitations by using non-human instruments of perception (all those machines in our hospitals), but even science must acknowledge that what a machine registers, just like what a human registers, is a representation that is shaped by the nature of the representing apparatus.)

Determinate judgment appears to be instantaneous.  At least in the case of the encounter with most external things.  I see a tree and, without any discernible time lapse, identify it as a tree.  I have no awareness of processing the sensory signals and then coming to a judgment about what category the seen thing belongs to.  Percept and concept are cemented together.  Of course, there are some cases where I can’t at first make out what it is before me.  The lighting is bad, so I see a shape, but not enough to determine what the thing is.  Such cases do indicate there is a distinction between percept and concept.  But in the vast majority of cases it is just about impossible to pry them apart.

For many artists from Blake on, the effort to pry the two apart is a central ambition.  The basic idea is that we see the world through our conceptual lenses—and thus fail to apprehend it in its full richness, its full sensual plenitude.  We filter out the particulars of this tree as we rush to assimilate the singular instance to the general category.  Thus painters strive to make us see things anew (cubism) or to offer ambiguous items that can’t be immediately or readily identified (surrealism).  They try to drive a wedge between percept and concept. “No ideas but in things,” proclaims William Carlos Williams—and this hostility to ideas, to their preeminence over things (over percepts), is shared by many modern artists.

One of the mysteries of the percept/concept pairing is the relative poverty of our linguistic terms to describe percepts.  We can in most cases quickly identify the tree as a tree, and we can certainly say that the tree’s leaves are green in the spring and rust-colored in the fall.  But more precise linguistic identification of colors eludes us.  We can perceive far more variations in colors than we can describe.  Hence the color chips at any hardware store, which offer 45 variants of the color we crudely call “blue,” with fanciful invented names to distinguish each shade from the rest.  The same, of course, holds for smells and for emotions.  We have a few, very crude terms for smell (pungent, sharp) but mostly can only identify smells in terms of the objects that produce such smells.  It smells flowery, or like hard-boiled egg.  The same with taste.  Aside from sweet, sour, sharp, we enter the world of simile, so that descriptions of wine notoriously refer to things that are not wine. Notes of black currant, leather, and tobacco.  And when it comes to emotions we are entirely at sea—well aware that our crude generalized terms (love, anger, jealousy) get nowhere near to describing the complexities of what one feels.  Thus some artists (Updike comes to mind) specialize in elaborating on our descriptive vocabularies for physical and emotional percepts.  Thus a whole novel might be devoted to tracing the complexities of being jealous, striving to get into words the full experience of that emotional state.

In any case, the paucity of our linguistic resources for describing various percepts, even in cases where the distinction between the percepts is obvious to us (as in the case of gradients of color), shows (I believe) that there are ordinary cases where percept and concept are distinct.  We don’t immediately leap to judgment in every case.  Now, it is true that I conceptualize the various shades of blue as “color” and even as “blue.”  But I do not thereby deny that the various shades on the color chip are also different, even though I have no general category to which I can assign those different shades.

Two more puzzles here.  The first is Wittgensteinian.  I had forgotten, until going through this recently with my granddaughter, how early children master color.  By 18 months, she could identify the basic colors of things.  Multiple astounding things here.  How did she know we were referring to color and not to the thing itself when we call a blue ball “blue”?  What were we pointing out to her: the color or the thing?  Yet she appeared to have no trouble with that possible confusion.  Except.  For a while she called oranges “apples.”  It would seem that she could not wrap her head around the fact that the same word could name both a color and a thing.  She knew “orange” as a color, so would not use that word to name a thing.  Even more amazing than sorting colors from things was her accuracy in identifying a thing’s color.  Given sky blue and navy blue, she would call both “blue.”  A little bit later on (two or three months) she learned to call one “light blue” and the other “dark blue.”  But prior to that distinction, she showed no inclination to think the two were two different colors.  And she didn’t confuse them with purple or any other adjacent color.  So how is it that quite different percepts get tossed into the same category with just about no confusion (in relation to common usage) at all? It would seem more obvious to identify sky blue and navy blue as two different colors.

The second puzzle might be called the “good enough” conundrum.  I walk in the forest and see “trees.”  The forester sees a number of specific species—and very likely also singles out specific trees as “sick” or of a certain age.  His judgments are very, very different from mine—and do not suffer from the paucity of my categorical terms.  Similarly, the vintner may rely on almost comical similes to describe the taste of the wine, but I do not doubt that his perceptions are more intense and more nuanced than mine.  A chicken/egg question here about whether having the concepts then sharpens the percepts—or if sharper percepts then generate a richer vocabulary to describe them.  Or the prior question: do we only perceive with the acuity required by our purposes?  My walk in the woods is pleasant enough for me without my knowing which specific types of trees and ferns I am seeing.  What we “filter out,” in other words, is not just a function of the limitations of our perceptual equipment, or the paucity of our concepts/vocabulary, but also influenced by our purposes.  We attend to what we need to notice to achieve something.

Push this last idea just a bit and we get “pragmatism” and its revision of the empiricist account of perception and the “way of ideas.”  The pragmatist maxim says that our “conception” of a thing is our understanding of its consequences.  That is, we perceive things in relation to the futures that thing makes possible.  Concepts are always dynamic, not static.  They categorize what perception offers in terms of how one wants to position oneself in the world.  Percept/concept is relational—and at issue is the relations I wish to establish (or maintain) between myself and what is “out there” (which includes other people).

Back to the artists.  The repugnance many artists (as well as other people) feel toward pragmatism stems from this narrowing down of attention, of what might be perceived.  Focused (in very Darwinian fashion) upon what avails toward the organism’s well-being, the pragmatist self only perceives, only attends to, that which it can turn to account.  It thereby misses much of what is in the world out there.  The artists want to fling open the “doors of perception” (to quote Blake)—and see pragmatism as a species of utilitarianism, a philosophy that notoriously reduces the range of what “matters” to humans as well as reducing the motives for action to a simple calculus of avoiding pain and maximizing pleasure.  To categorize percepts immediately into two bins—these things might benefit me, these things are potentially harmful—is to choose to live in a diminished, perversely impoverished world.

Of course, Dewey especially among the “classic” pragmatists worked hard to resist the identification of pragmatism with a joyless and bare-bones utilitarianism.  The key to this attempt is “qualia”—a term that is also central in the current philosophical debates about consciousness.  “Qualia” might be defined as the “feel of things.”  I don’t just see trees as I walk in the woods.  I also experience a particular type of pleasure—one that mixes peacefulness, the stimulus/joy of physical exertion, an apprehension of beauty, a diffuse sense of well-being etc.  “Consciousness” (as understood in everyday parlance) registers that pleasure. Consciousness entails that I not only feel the pleasure but can also say to myself that I am feeling this pleasure.  Percepts, in other words, are accompanied by specific feelings that are those percepts’ “qualia.” And through consciousness we can register the fact of experiencing those feelings.

The relation of concepts to “qualia” is, I think, more complex—and leads directly to the next post on the fact/value dyad.  A concept like “fraud” does seem to me to have its own qualia.  Moral indignation is a feeling—and one very likely to be triggered by the thought of fraud.  Perhaps (I don’t know about this) only a specific instance of fraud, not just the general concept of it, is required to trigger moral indignation.  But I don’t think so.  The general idea that American financiers often deploy fraudulent practices seems to me enough to make me feel indignant.

On the other hand, the general concept of “tree” does not seem to me to generate any very specific qualia.  Perhaps a faint sense of approval.  Who doesn’t like trees?  But pretty close to neutral.  The issues, in short, are whether “neutral” percepts or concepts are possible.  Or do all percepts and concepts generate some qualia, some feelings that can be specified?  And, secondly, are all qualia related to judgments of value?  If we mostly and instantaneously make a judgment about what category a percept belongs to (what concept covers this instance), do we also in most cases and instantaneously judge the “value” of any percept?  That’s what my next post on fact/value will try to consider.

Philosophy and How One Acts

A friend with whom I have been reading various philosophical attempts to come to terms with what consciousness is and does writes to me about “illusionism,” the claim that we do not have selves. We are simply mistaken in thinking the self exists. The basic argument is the classic empiricist case against “substance.” There are various phenomena (let’s call them “mental states” in this case), but no stuff, no thing, no self, to which those mental states adhere, or in which they are collected. Thomas Metzinger is one philosopher who holds this position and in an interview tells us that his position has no experiential consequences. It is not clear to me whether Metzinger thinks (in a Nietzschean way) that the self is an unavoidable illusion or if Metzinger thinks that all the phenomena we attribute to the self would just continue to be experienced in exactly the same way even if we dispensed with the notion (illusion) of the self. In either case, accepting or denying Metzinger’s position changes nothing. Belief or non-belief in the self is not a “difference that makes a difference,” to recall William James’s formula in the first chapter of his book, Pragmatism.

The issue, then, seems to be what motivates a certain kind of intellectual restlessness, a desire to describe the world (the terms of existence) in ways that “get it right”—especially if the motive does not seem to be any effect on actual behavior. It’s “pure” theory, abstracted from any consequences in how one goes about the actualities of daily life.

There does exist, for some people, a certain kind of restless questioning.  I have had a small number of close friends in my life, and what they share is that kind of restlessness.  A desire to come up with coherent accounts of why things are the way they are, especially of why people act the ways they do. People are endlessly surprising and fascinating. Accounting for them leads to speculations that are constantly being revised and restated because each account seems, in one way or another, to fail to “get things right.”  There is always the need for another round of words, of efforts to grasp the “why” and “how” of things.  Most people, in my experience, don’t feel this need to push at things.  I was always trying to get my students to push their thinking on to the next twist—and rarely succeeded in getting them to do so. And for myself this restless, endless inquiry generates a constant stream of words, since each inadequate account means a new effort to state things more accurately this time.

Clearly, since I tried to get my students to do this, I think of such relentless questioning as an intellectual virtue. But what is it good for?  I take that to be the core issue of your long email to me.  And I don’t have an answer.  Where id is, ego shall be.  But it seems very clear that being able to articulate one’s habitual ways of (for example) relating to one’s lover, to know what triggers anger or sadness or neediness, does little (if anything) to change the established patterns.  Understanding (even if there were any way to show that the understanding was actually accurate) doesn’t yield much in the way of behavioral results.

This gets to your comment that if people really believed Darwin was right (as many people say they do), then they wouldn’t eat animals.  William James came to believe that we have our convictions first—and then invent the intellectual accounts/theories that we say justify the convictions.  In other words, we mistake the causal sequence.  We treat our theory as the cause and our convictions as the effect, when it is really the other way around.  Nietzsche was prone to say the very same thing.

One way to say this: we have Darwin, but will use him to justify exactly opposite behaviors.  You say if we believed Darwin we wouldn’t eat animals.  I assume that the logic is that Darwin reveals animals as our kin, so eating them is a kind of cannibalism.  We don’t eat dogs because they feel “too close” to us; that feeling should be extended to all animals, not just fellow humans and domestic pets.  (The French eat horse meat although Americans won’t.)  But many people use Darwin to rationalize just the opposite.  We humans have evolved as protein-seeking omnivores, and we domesticated the animals we eat just as we developed agriculture to grow the plants we eat.  Even if we argue that domestication and agriculture were disasters, proponents of so-called “paleo diets” include meat eating in their attempt to get back to something thought basic to our evolved requirements.  So even if Darwin is absolutely right about how life—and specifically human life—emerged, people will use the content of his theory to justify completely contradictory behaviors.

This analysis, of course, raises two questions.  1) What is the cause of our convictions if it is not some set of articulable beliefs about how the world is?  James’s only answer is “temperament,” an in-built sensibility, a predilection to see the world in a certain way.  (Another book I have just finished reading, Kevin Mitchell’s Free Agents [Princeton UP, 2023], says about 50% of our personality is genetically determined and that less than 10% is derived from family environment.  Mitchell has an earlier book, titled Innate [Princeton UP, 2018], where he goes into detail about how such a claim is supported.)  Nietzsche, in some places, posits an in-built will to power.  All the articulations and intellectualisms are just after-the-fact rationalizations.  In any case, “temperament” is obviously no answer at all.  We do what we do because we are who we are—and how we got to be who we are is a black box.  Try your damnedest, it’s just about impossible to make sure your child ends up heterosexual or with some other set of desires.

2) So why are James and Nietzsche still pursuing an articulated account of “how it really works”?  Is there no consequence at all to “getting it right”?  Shouldn’t their theories also be understood as just another set of “after the fact” rationalizations?  In other words, reason is always late to the party—which suggests that consciousness is not essential to behavior, just an after-effect.

That last statement, of course, is the conclusion put forward by the famous Libet tests.  The ones that say our brains initiate the hand’s movement milliseconds before we consciously order our hand to move.  Both Dennett (in Freedom Evolves [Penguin, 2003]) and Mitchell (in Free Agents) have to claim the Libet experiment is faulty in order to save any causal power for consciousness.  For the two of them, who want to show that humans actually possess free will, consciousness must be given a role in the unfolding of action.  There has to be a moment of deliberation, of choosing between options—and that choosing is guided by reason (by an evaluation of the options and a decision made between those options) and beliefs (some picture of how the world really is).  I know, from experience, that I have trouble sleeping if I drink coffee after 2pm.  I reason that I should not drink coffee after 2pm if I want to sleep.  So I refrain from doing so.  A belief about a fact that is connected to a reasoned account of a causal sequence and a desire to have one thing happen rather than another: presto! I choose to do one thing rather than another based on that belief and those reasons.  To make that evaluation certainly seems to require consciousness—a consciousness that observes patterns, that remembers singular experiences that can be assembled into those patterns, that can have positive forward-looking desires to have some outcomes rather than others (hence evaluation of various possible bodily and worldly states of affairs), and that can reason about what courses of action are most likely to bring those states of affairs into being.  (In short, the classical account of “rationality” and of “reason-based action.”)

If this kind of feedback loop actually exists, if I can learn that some actions produce desirable results more dependably than others, then the question becomes (it seems to me): at what level of abstraction does “knowledge” no longer connect to action?  Here’s what I am struggling to see.  Learned behavior, directed by experiences that provide concrete feedback, seems fairly easy to describe in terms of very concrete instances.  But what happens when we get to belief in God—or Darwin?  With belief in God, we seem to see that humans can persist in beliefs without getting any positive feedback at all.  I believe in a loving god even as my child dies of cancer and all my prayers for divine intervention yield no result.  (The classic overdramatized example.)  Faced with this fact, many theologians will just say: it’s not reasonable, so your models of reasoned behavior are simply irrelevant at this point.  A form of dualism.  There’s another belief-to-action loop at play.  Another black box.

On Darwin it seems to me a question of intervention.  Natural selection exists entirely apart from human action/intention/desire etc.  It does its thing whether there are humans in the world or not.  That humans can “discover” the fact of natural selection’s existence and give detailed accounts of how it works is neither here nor there to natural selection itself.  This is science (in one idealized version of what science is): an accurate description of how nature works.  The next step seems to be: is there any way for humans to intervene in natural processes to either 1) change them (as when we try to combat cancer) or 2) harness the energies or processes of nature to serve specific human ends. (This is separate from how human actions inadvertently, unintentionally, alter natural processes—as is the case in global warming. I am currently reading Kim Stanley Robinson’s The Ministry for the Future—and will discuss it in a future post.)

In both cases (i.e., intentionally changing a natural process or harnessing the energies of a natural process toward a specifically human-introduced end), what’s driving the human behavior are desires for certain outcomes (health in the case of the cancer patient), or any number of possible desires in the cases of intervention.  I don’t think the scientific explanation has any direct relation to those desires.  In other words, nothing about the Darwinian account of how the world is dictates how one should desire to stand in relation to that world.  Darwin’s theory of evolution, I am saying, has no obvious, necessary, or univocal ethical consequences.  It does not tell us how to live—even if certain Darwinian fundamentalists will bloviate about “survival of the fittest” and gender roles in hunter-gatherer societies.

I keep trying to avoid it, but I am a dualist when it comes to ethics.  The non-human universe has no values, no meanings, no clues about how humans should live.  Hurricanes are facts, just like evolution is a fact.  As facts, they inform us about the world we inhabit—and mark out certain limits that it is very, very useful for us to know.  But the use we put them to is entirely human generated, just as the uses the mosquito puts its world to are entirely mosquito-driven.  To ignore the facts, the limits, can be disastrous, but pushing against them, trying to alter them, is also a possibility.  And the scientific knowledge can be very useful in indicating which kinds of intervention will prove effective.  But it has nothing to say about what kinds of intervention are desirable.

I am deeply uncomfortable in reaching this position.  Like most of the philosophers I read, I do not want to be a dualist.  I want to be a naturalist—where “naturalism” means that everything that exists is a product of natural forces.  Hence all the efforts out there to offer an evolutionary account of “consciousness” (thus avoiding any kind of Cartesian dualism) and the complementary efforts to provide an evolutionary account of morality (for example, Philip Kitcher, The Ethical Project [Harvard UP, 2011]). I am down with the idea that morality is an evolutionary product—i.e. that it develops out of the history and “ecology” of humans as social animals.  But there still seems to me a discontinuity between the morality that humans have developed and the lack of morality of cancer cells, gravity, hurricanes, photosynthesis, and the laws of thermodynamics.  Similarly, there seems to me a gap between the non-consciousness of rocks and the consciousness of living beings.  So I can’t get down with panpsychism even if I am open to evolutionary accounts of the emergence of consciousness from more primitive forms to full-blown self-consciousness.

Of course, some Darwinians don’t see a problem.  Evolution does provide all living creatures with a purpose—to survive—and a meaning—to pass on one’s genes.  Success in life (satisfaction) derives from those two master motives—and morality could be derived from serving those two motives.  Human sociality is a product of those motives (driven in particular by the long immaturity, the non-self-sustaining condition, of human children)—and morality is just the set of rules that makes sociality tenable.  So the theory of evolution gives us morality along with an account of how things are.  The fact/value gap overcome.  How to square this picture of evolution with its randomness, its not having any end state in view, is unclear.  The problem of attributing purposes to natural selection, of personifying it, has bedeviled evolutionary theory from the start.

For Dennett, if I am reading him correctly, the cross-over point is “culture”—and, more specifically, language.  Language provides a storage device, a way of accumulating knowledge of how things work and of successful ways of coping in this world.  Culture is a natural product, but once in place it offers a vantage point for reflection upon and intervention in natural processes.  Humans are the unnatural animal, the ones who can perversely deviate from the two master motives of evolution (survival and procreation) even as they strive to submit nature to their whims.  It’s an old theme: humans appear more free from natural drivers, but even as freedom is a source of their pride and glory, it often is the cause of their downfall. (Hubris anyone?) Humans are not content with the natural order as they find it.  They constantly try to change it—with sometimes marvelous, with other times disastrous, results.

But that only returns us to the mystery of where this restless desire to revise the very terms of existence comes from.  To go back to James and Nietzsche: it doesn’t seem like our theories, our abstract reasonings and philosophies, are what generate the behavior.  Instead, the restlessness comes first—and the philosophizing comes after as a way of explaining the actions.  See, the philosophers say, the world is this particular way, so it makes sense for me to behave in this specific way.  But, says James, the inclination to behave that way came first—and then the philosophy was tailored to match. 

So, to end this overlong wandering, back where I began.  Bertrand Russell (in his A History of Western Philosophy) said that Darwin’s theory is the perfect expression of rapacious capitalism—and thus it is no surprise that it was devised during the heyday of laissez-faire.  That analysis troubles me because it offers a plausible suspicion of Darwin’s theory along the William James line.  The theory just says the “world is this way” in a manner that justifies the British empire and British capitalism in 1860.  But I really do believe Darwin is right, that he has not just transposed a capitalist world view into nature.  I am, however, having trouble squaring this circle.  That is, how much does our philosophizing, our theories, just offer abstract versions of our pre-existing predilections—and how much do those theories offer us genuine insights about the world we inhabit, insights that will then affect our behavior on the ground?  A very long-winded way of saying I can’t come up with a good answer to the questions your email posed.

Arguing with the Darwinians

In the consciousness literature, Darwin is king.  Whatever consciousness is (and there is plenty of disagreement about that), everyone in the conversation accepts that consciousness must be a product of a Darwinian process of evolution.  There are various competing versions of an evolutionary narrative for the arrival of consciousness on the scene.

In its most extreme versions, panpsychism tries to avoid the sticky problem of distinguishing the “before” from the “after” moment.  The problem for any Darwinian account: once consciousness did not exist, but at a certain point in time it emerged, it arrived.  By saying something (just what is unclear) fundamental to consciousness was always already there, the panpsychist tries to sidestep the before/after conundrum; but even the panpsychist has to have a story about how the originating germ of consciousness develops into the full-blown consciousness of humans, higher primates, other mammals, and other creatures.  No one is claiming mollusks have a consciousness as fully elaborated as that in chimpanzees. Or that chimpanzees were there at the origins. There still needs to be a story about the elaboration of consciousness from its primitive beginnings into the sophisticated forms displayed in life forms that arrive on the scene at a much later date.

There are two things it would seem any plausible Darwinian account must provide.  First, it must offer a bio-chemical account of a) how variants are produced for a process of selection to choose among, and b) the physiological/chemical processes that create consciousness itself.  Genetics (most directly, random genetic mutation) is assumed to provide most of what is needed to answer the variant question.  As for b, a bio-chemical explanation for the phenomenon of consciousness, that such an explanation must exist is generally assumed in the literature, although everyone (with a few exceptions, as always) agrees that we are still a long way away from possessing anything like a complete and satisfactory bio-chemical understanding of consciousness.

I am going to leave these bio-chemical questions aside in this discussion—as do Veit and Humphrey in their books.  [Nicholas Humphrey, Sentience: The Invention of Consciousness (MIT Press, 2023) and Walter Veit, A Philosophy for the Science of Animal Consciousness (Routledge, 2023).]  Instead, I want to focus my attention on the second requirement of any Darwinian account.

Namely, such an account must identify the “advantage(s)” that consciousness would provide to an organism.  Only if there are such advantages would consciousness be “selected” for.  Since (of course) we don’t get to see the competition between creatures possessing different types and/or degrees of consciousness in real time, we must (as both Veit and Humphrey explicitly state in their books) “reverse engineer” the account of how one possible variant is “selected” over another.  Those more skeptical of Darwinian explanations call “reverse engineering” “just-so stories.”  Lacking direct evidence, a narrative is produced that assumes the mechanism of natural selection. 

Essential to such accounts is functionalism.  The writer must identify what consciousness does, what functions it performs, and from that basis argue for those functions as providing advantages in a struggle for existence understood in Darwinian terms.  Veit is what might be called a Darwinian fundamentalist.  He flatly states “the goal of biological systems is ultimately reproduction” (54), and identifies “the real purpose of the organism, which is to maximize its representation in future populations” (55).  Armed with this fundamental and overriding purpose, he can then assess how consciousness would provide a leg up in the effort to “maximize” an organism’s chances for reproductive success.  That such an account of “purpose” seems awfully reductive in the face of the wide variety of behavior exhibited by animals and humans does not seem to bother him a bit.

More interesting is Veit’s holistic understanding of the dynamic nature of the organism.  He calls his approach “teleonomic”: “organisms are goal-directed systems” that “evolve to value states and behaviors that increase their own fitness and avoid those that are detrimental to their health” (9).  This approach falls in with other recent accounts that rehabilitate an Aristotelian notion of teleological or final causes.  The organism is directed toward something—and that something acts as one cause in the action of natural selection.  As Veit strongly puts it: “the external factors that matter to the evolutionary trajectory of the organism are themselves causally dependent on the organism” (8).  In other words, the organism is not a merely passive recipient of what external environmental and genetic factors produce.  The organism’s active pursuit of reproductive advantage guides its own selection process; the organism evaluates what the larger ecological scene makes available and works to exploit the elements in that scene that will serve its purpose(s).

Of course, this approach still must posit an overriding purpose present in (innate to) all organisms: the drive for reproductive success.  So one complaint I have about such Darwinian theorizing is this: natural selection does not get off the ground unless there are randomly produced variants.  If we only have clones, then there is no range of actual variants for selection to choose among.  Yet the theory allows for no variance in the fundamental purpose of organisms.  Everything walks in lockstep to the Darwinian command to maximize reproductive success.  And that leads to the absurdities of the endless worries about altruism, music, laughter, and play.  More and more implausible stories must be told about all of these behaviors to make them serve the overriding Darwinian purpose.  Reductionism doesn’t simply haunt Darwinian thought; it is absolutely central to such thinking.

Veit cheerfully accepts the hard-core utilitarianism of his approach.  “It is thus not unreasonable to treat them [organisms] as economic agents maximizing their utility (i.e. fitness).  Each individual within a species is fundamentally faced with a resource allocation problem. . . . This is the economy of nature” (19).  Casually swept aside are all the frivolous and downright counter-productive behaviors on view every day in the natural world.  The failure of humans and other animals to be reasonable resource maximizers must be ignored or subjected to tortuous (and implausible) explanations.  Surely the simplest approach here would be to concede that the struggle for life and reproductive success doesn’t consume all of the organism’s time and energy—and that the surplus is devoted to activities that don’t further Darwinian goals.  But very few of the Darwinian advocates take this easy way out.  Of course, it has been a commonplace that rich societies (Renaissance Florence, Elizabethan England, etc.) witness a flourishing of the arts.  The problem is that, in fact, we don’t know of any societies without music and dance.  Such variety, the kinds of activities that are hard to account for by a strict Darwinian logic, appears baked in from the beginning.

I am hardly conversant with the vast literature about Darwinian evolution.  But I do know that whole books get written about the “problem” of altruism, whereas, as far as I know, no one takes up what seems to me the much more glaring problem of war.  After all, from the standpoint of reproductive success, war (which also seems endemic to all human societies of which we have any record) is a real puzzler.  Here’s an activity that places in danger precisely the (male) members of society who are in their prime years for passing on their genes.  And it would take a lot of ignoring of facts on the ground to claim that wars are primarily about securing resources necessary to life.  Despite the intuitive appeal of the Marxist notion that economics drives everything, wars are more often driven by pique, status, and out-group hostility than by the scarcity of resources; moreover, wars destroy resources rather than augment them.  Wars are costly—more like potlatches dedicated to the wholesale destruction of goods and lives than a reasonable way to secure resources.  (A side note: Engels realized that Marx and Darwin are at one in their appeals to economic reasoning and in identifying the pursuit of “interests” as the primary motive of “life.”)  In short, Darwinian reductionism (like its Marxist counterpart) offers little in the way of a plausible explanation for this all too frequent form of human behavior.

I want to end by moving on to more technical concerns about evolutionary accounts.  I will quote here from Humphrey’s book. “Since we are discussing evolution, we can assume three guiding principles.  First, there must have been a continuous sequence of stages with no unaccountable gaps.  Second, every stage must have been viable, at the time, on its own terms.  Third, the transition from one stage to the next must always have been an upgrade, adding to the chances of biological survival” (101).

I have trouble with the first and third assumptions.  This may come down to semantics, to what exactly is meant by a “new stage.”  But let me state my worries.

On number one, I don’t see how unexpected and random genetic mutations (which are the engine of change, of movement from one stage to the next) are “continuous.”  It’s the discontinuity of mutations that seems much more obvious.  As I say, Humphrey might very well reply that I am misunderstanding the sense in which he is using “continuous.”  And, of course, various evolutionary theorists (most famously Stephen Jay Gould) talk of “punctuated equilibrium” to describe evolutionary histories in which long periods of stasis are broken by relatively sudden transformations.  Still, Humphrey seems to think there will be an orderly “sequence” from one evolutionary stage to the next.  And that assumption will guide his “reverse engineering” account of the emergence of consciousness.  I think this confidence in an orderly sequence is misplaced—and that it makes the strategy of reverse engineering much more problematic.  It is harder to tell a continuous story when the actual sequence contains “gaps” that are not easily bridged, gaps of just the sort Humphrey’s first principle rules out.

The third assumption we might call the “things get better” thesis.  Each stage is an improvement over the last in terms of enhancing “chances for biological survival.”  But that way of stating things dismisses variants.  I had thought one goal of Darwinian theory is to explain diversity.  Variants are produced in the course of reproduction—and some of the variants (hardly all of them, but not only one of them) prove viable (to use Humphrey’s term).  They are viable either because they exploit different ecological niches to secure the necessary resources to sustain life or because they are “good enough” to sustain life even as they differ from other variants.  Another way to say this: various traits get produced along the evolutionary track that do not undermine the ability to sustain life drastically enough to cause the extinction of the lineage carrying that trait.  A fairly obvious example is myopia.  Hardly a great asset in the “struggle” to survive, but apparently not enough of a deficit to have been discarded along the evolutionary pathway that leads to today’s humans.  In short, variants get produced all the time that are not “upgrades.”  They are simply not strong enough “downgrades” to prove fatal.

By a similar logic, just as not all features of the organism necessarily make fully positive contributions to the effort to survive, not all features of the organism need be devoted to that effort.  The issue, once more, is surplus.  Just as the arts (on one reading) can seem irrelevant to the effort to sustain life, so various features of complex organisms may not be contributors to that effort.  A hard-core Darwinian can consider them “free riders”—and that way of addressing the issue is fairly common in the literature.  Knowing how to read is not strictly necessary to survival—but humans get that extra ability because it develops from cognitive abilities that are essential to survival.  The problem becomes how we identify (except by some seat-of-the-pants appeal to what is necessary as opposed to what is “extra”) the abilities required for survival and those that free ride upon them.  Going back to war: perhaps we would want to argue that, just as reading is a beneficial free rider, war is a disadvantageous one.  War, in other words, is an offshoot of some fundamentally necessary component of human physiology/psychology and thus can’t be jettisoned as evolution moves us toward “stages” where our chances of survival are enhanced, even though war itself decreases the chances of reproductive success.

I am hardly claiming that Darwinian evolution is a fundamentally mistaken theory.  But I am saying that ham-handed, reductionist accounts of evolution overlook a number of puzzles that should give pause to anyone with a mono-causal bias.  Furthermore, these puzzles should at least disturb any blithe confidence in “reverse engineering” stories.  Introducing a wider array of possible causes into such accounts, along with recognizing the random and disruptive effects of genetic mutations, would certainly complicate matters greatly.  But it also might yield more plausible accounts of how an evolutionary history produced the diversity and complexity of the animal and human worlds, with the wide variety of behavior they display in the present.

Addendum (added January 19, 2024).  I have just come across this relevant statement by Daniel Dennett on the error of thinking that evolution always produces enhancements of the chances for biological success (quoted from Just Deserts: Debating Free Will [Polity, 2021], p. 162):

“It is a fundamental mistake in evolutionary thinking to suppose that whatever ways (ideas, practices, concepts, policies) survived this process must have proven fitness-enhancing for the human species, the lineage, or even the individuals (or groups of individuals) who adopted them. Some, even many, of the established ways (of thinking, of acting) may have been cultural parasites, in effect, exploiting weaknesses in the psychology of their hosts.”