Category: Darwin

Richard Rorty and Qualia

I have recently finished reading Richard Rorty’s Pragmatism as Anti-Authoritarianism (Harvard UP, 2021).  The book is basically a series of lectures Rorty gave in Spain in 1996, but which (although published in Spanish in the late 1990s) were only published in English in 2021.  Some of the material from the lectures, however, was incorporated into essays Rorty did publish in English.

The occasion for reading the book was to participate in a conversation with my friends Graham Culbertson and Meili Steele—a conversation that will be published as a podcast on Graham’s site, Everyday Anarchism.  I will post a link when the episode goes live.

We did not, in our conversation, get deep into the weeds about Rorty’s epistemology.  We were more interested in his politics and, more directly, in his “anti-authoritarianism” (as befits the context of an interest in anarchism).  Rorty’s claim in the lectures is that a “realist” epistemology (one that aspires to a “true” account of how the world really is) is another version of theology.  That is, realism off-loads “authority” to some non-human entity, in this case “the world” instead of “God or gods.”  We must bow before the necessity of the way things are.

For Rorty, there is no definitive way things are, only different ways of describing things.  He understands that his claim 1) goes against “common sense” as well as the views of realist philosophers like Peirce, John Searle, and Thomas Nagel; and 2) cannot itself be justified as a statement of how humans process the world (i.e. as any kind of claim about how humans “are”).  There are no knock-down arguments for his non-realist position as contrasted to the realist one.

The only basis for argument is comparative.  Try out my way of seeing things, says Rorty, and judge whether you find the consequences better.  I was surprised, reading Rorty this time, just how hard-core a consequentialist he is.  The only criterion for making choices (whether they be choices of philosophical positions or choices about how to act) is whether one choice leads to more desirable outcomes than another.

Better for what?  That depends entirely on what one’s purposes are.  I can piss in the urinal or I can put it up on the museum wall and call it “Fountain.”  Neither use (or description) gets the urinal “right” (i.e. gets to some “truth” or “essence” or core identity about it).  Both are possible descriptions/uses of it amid a myriad of other possible uses/descriptions—and no one description/use is more fundamental or real than any other one.  Rorty’s position is anti-reductionist and anti-depth.  There is no basic “stuff” that underlies appearances, no essence lurking in the depths.  The physicist’s description makes sense within the language game of physics just as Duchamp’s description makes sense within the language game of modernist art.  But neither one can claim to tell the “real truth” about the urinal; each only illuminates another way the urinal can be described in relation to languages that humans deploy in pursuing different endeavors.

Along with being anti-reductionist (no description is more fundamental than any other or offers a way of comprehending other descriptions) and anti-depth, Rorty’s position is that identity is always relational and only activated (valid) in specific contexts.  Hence the appeal to Wittgensteinian “language games.”

What a thing “is” is only a product of its relation to the human who is describing it.  Rorty names his position “pan-relationalism.”  (Title of Chapter Five.)  His position is “that nothing is what it is under every and any description of it.  We can make no use of what is undescribed, apart from its relations to the human needs and interests which have generated one or another description. . . . A property is simply a hypostatized predicate, so there are no properties which are incapable of being captured in language.  Predication is a way of relating things to other things, a way of hooking up bits of the universe with other bits of the universe, or, if you like, of spotlighting certain webs of relationships rather than other webs.  All properties, therefore, are hypostatizations of webs of relationships” (85-86).

As a fellow pragmatist, I am inclined to accept pan-relationalism.  I very much want to be an anti-reductionist and an “appearances are all we get” kind of guy.  Many years ago I coined a slogan that I think captures this pragmatist view.  To wit:  “nothing is necessarily anything, but every thing is necessarily some thing.”  What this slogan says is that (to go back to the previous two posts) the things we encounter are plastic; they can be described—and related to—in a variety of ways.  Those things underdetermine our responses to, understandings of, and descriptions of them.  We can adopt any number of relational stances toward what experience offers.  So that’s the denial that anything is necessarily some particular, definitive, inescapable thing.

The other side of the slogan says: we do adopt a stance toward, we do make a judgment about, we do describe what we encounter.  We characterize it.  And in the Putnam/Burke manner of denying the fact/value divide, that adoption of a stance or a mode of relationship is dependent on the assessment we make of what experience (or the “situation”) offers.  We don’t just perceive or encounter something; we assess it, enter into a relationship to it (even if that relationship is indifference; relationships come in degrees of intensity, from judging this is something I needn’t attend to all the way to passionate involvement).  The claim is that we necessarily adopt some stance; there are multiple possibilities, but not the possibility of having no relation to the thing at all, of leaving it utterly uncategorized.  It will be “some thing,” although not necessarily any one thing.

Rorty offers his own version of this denial of the fact/value divide.  “To be a pan-relationalist means never using the terms ‘objective’ and ‘subjective’ except in the context of some well-defined expert culture in which we can distinguish between adherence to the procedures which lead the experts to agree and refusal to so adhere.  It also means never asking whether a description is better suited to an object than another description without being able to answer the question ‘what purpose is this description supposed to serve?’” (87).  Since all relations to objects are relative to purposes, there is no such thing as a non-relational observation that allows one to “represent the object accurately” (87) as it is in itself.

So I am down with Rorty’s pan-relationalism.  But where he and I part company—and what generates the title of this blog post—is his denial of any relations that are non-linguistically mediated.  What Rorty wants to jettison from the pragmatism that he inherits from his hero Dewey is the concept of “experience.”  To Rorty, “experience” is just another version of the realist’s desire to establish a direct contact with the stuff of the universe. 

“Pragmatists agree with Wittgenstein that there is no way to come between language and its objects.  Philosophy cannot answer the question: Is our vocabulary in accord with the way the world is?  It can only answer the question: Can we perspicuously relate the various vocabularies we use to one another, and thereby dissolve the philosophical problems which seem to arise at the places where we switch over from one vocabulary into another? . . . . If our awareness of things is always a linguistic affair, if Sellars is right that we cannot check our language against our non-linguistic awareness, then philosophy can never be anything more than a discussion of the utility and compatibility of beliefs—and, more particularly, of the vocabularies in which those beliefs are formulated” (165).

In the vocabulary of my last two posts, Rorty writes: “Sellars and Davidson can be read as saying that Aristotle’s slogan, constantly cited by the empiricists, ‘Nothing in the intellect which was not previously in the senses,’ was a wildly misleading way of describing the relation between the objects of knowledge and our knowledge of things. . . . [W]e [should] simply forget about sense-impressions, and other putative mental contents which cannot be identified with judgments” (160).  No percepts in the absence of concepts.  No sensual experiences or emotional states that have not already been judged, already been subsumed under a concept.

This position—that there is no “non-linguistic” experience against which our words are measured—leads to Rorty’s denial of “qualia.”  He accepts Daniel Dennett’s assertion that “there are no such things as qualia” (113), a position Rorty must defend (as Dennett also attempts to do in his work) against “the most effective argument in favor of qualia,” namely “Frank Jackson’s story of Mary the Color Scientist—the history of a woman blind from birth who acquires all imaginable ‘physical’ information about the perception of color, and whose sight is thereafter restored.  Jackson claims that at the moment she can see she learns something that she didn’t know before—namely what blue, red, etc. are like” (113).

In his usual maddening fashion, Rorty tells us the debate between Jackson and Dennett is not resolvable; it just comes down to “intuitions.”  “The antinomies around which philosophical discussions cluster are not tensions built into the human mind but merely reflections of the inability to decide whether to use an old tool or a new one.  The inability to have an argument which amounts to manning one or another intuition pump results from the fact that either tool will serve most of our purposes equally well” (115).  I always think of Rorty as the Alfred E. Neuman of philosophers.  “What, me worry?” as he makes his characteristic deflationary move: nothing much hangs on this disagreement, and there is no way to rationally adjudicate it.  You talk your way, I’ll talk mine, and let a thousand flowers bloom.

I do believe, along with William James (in the first lecture of Pragmatism), that we should be concerned about disagreements that actually generate consequential differences in behavior and practices.  Perhaps it’s because I come from the literary side of thinking about how and why one writes that I do find this difference consequential.  From the literary viewpoint, there is a persistent experience of the inadequacy of words, and a resultant attempt to “get it right,” to capture the “raw feel” of love, jealousy, envy—or of the perceptions and/or emotions that arise during a walk down a crowded city street.  Rorty’s only approach to this sense of what a writer is striving to accomplish seems to me singularly unhelpful.  He tells us that “the alternative is to take them [our linguistic descriptions] as about things, but not as answering to anything, either objects or opinions.  Aboutness is all you need for intentionality.  Answering to things is what the infelicitous notion of representing adds to the harmless notion of aboutness. . . . Aboutness, like truth, is ineffable, and none the worse for that” (171).  So, it seems, Rorty accepts that we talk and write “about” things, but denies that worrying about the “accuracy” of our talk is, in any way, useful.  And he tells us that there really is nothing we can say about “aboutness.”  Not helpful.

Note how this passage trots in the notions of “things” and of “the ineffable.”  The problem with positions that eschew common sense (i.e. the prevailing way we think about things) is that they must strive to revise the “errors” built into the very ways we talk.  Think of Nietzsche’s fulminating against the idea of “selfhood” as generated by grammatical form.  In any case, it’s awfully hard to jettison the hypostatization of “things”—which possess a kind of permanence and relative stability in their modes of manifestation—in favor of a purely relational account of them.  (Of what exactly are we talking if things have no existence except in relationship?  Is it impossible to identify separate entities in any relationship, or are all boundaries dissolved?)  Rorty, in fact, smuggles the world that has been well lost back in when he tells us that “we are constantly interacting with things as well as persons, and one of the ways we interact with both is through their effects upon sensory organs.  But [this view dispenses with] the notion of experience as a mediating tribunal.  [We] can be content with an account of the world as exerting control on our inquiries in a merely causal way” (178-79).

That causation merits the adjective “merely” follows from Rorty’s insistence that the world’s (or others’) causal powers upon us are distinct from the practices of (the language games of) “justification.”  We should “avoid a confusion between justification and causation, [which] entails claiming that only a belief can justify a belief” (179).  Justification is not based on an appeal to some way the world is, but to the warrants I offer for my belief that a certain stance or a certain course of action is preferable in this context.  Understanding justification in this linguistic, practice-oriented way means “drawing a sharp line between experience as cause of the occurrence of a justification, and experience as itself justificatory.  It means reinterpreting ‘experience’ as the ability to acquire belief non-inferentially as a result of neurologically describable causal transactions with the world” (179-180).

At this point, I think I must totally misunderstand Rorty because it seems to me he has reintroduced everything that he claimed to be excluding when he declares “I see nothing worth saving in empiricism” (189).  If the world generates beliefs through some causal process—and, even worse, if that generation of beliefs is “non-inferential”—then how have we escaped from the “myth of the given”?  Here’s what Rorty writes immediately following the passage just quoted: “One can restate this reinterpretation of ‘experience’ as the claim that human beings’ only ‘confrontation’ with the world is the sort which computers also have.  Computers are programmed to respond to certain causal transactions with input devices by entering certain program states.  We humans program ourselves to respond to causal transactions between the higher brain centers and the sense organs with dispositions to make assertions.  There is no epistemologically interesting difference between a machine’s program state and our dispositions, and both may equally well be called ‘beliefs’ or ‘judgments’” (180).  Generously, we can translate “We humans program ourselves” to mean “natural selection” has done that work—since (surely) we have been “given” the neurological equipment required to be sensitive to (to register) the world’s causal inputs.  I am pretty sure I didn’t program myself—at least not consciously.  Then again, Rorty is inclined to deny the whole notion of “conscious experience” (see page 121).

To repeat: I must be missing something here, since Rorty thinks appealing to what the world “causally” provides is radically different from appeals to qualia, or conscious experience, or non-linguistic percepts.  And I just don’t see the difference.

More directly to the point, however, is Meili’s brilliant observation in our podcast conversation that Rorty’s politics is based upon sensitivity to suffering, which it is hard to claim is linguistic.  Do computers feel pain?  Presumably not, which does seem to introduce an “epistemologically interesting distinction” between the computer’s processes and human dispositions.  In Contingency, Irony, and Solidarity (Cambridge UP, 1989), Rorty characterizes “moral progress” as “the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared with similarities with respect to pain and humiliation” (192).  His politics works from the hope of fostering solidarity through “imaginative identification with the details of others’ lives” (190), with an understanding of others’ vulnerability to pain and humiliation central to that identification, which can produce “a loathing for cruelty—a sense that it is the worst thing we do” (190).  Meili’s point was that “pain and humiliation” are distinctly non-linguistic.  We are being gas-lighted when someone tries to convince us we are not feeling pain, have not been humiliated.  Thus, Rorty’s political commitments seem to belie his insistence that it’s all linguistic, that there are no percepts, no experiences apart from linguistic categories/concepts.  Is a dog or a newborn incapable of feeling pain?  Surely not.  Maybe incapable of feeling humiliated—but even that seems an open question.

Still, I don’t want to end by suggesting some kind of full-scale repudiation of Rorty’s work, from which I have learned a lot, and I remain sympathetic to much of it.  So I want to close with the passage in the book where I think Rorty makes his case most persuasively.  It is also where he is most Darwinian in precisely the ways that James and Dewey were Darwinian—namely, in viewing humans as “thrown” (Heidegger’s term) into a world with which they must cope.  Humans are constantly interacting with their environment, assessing its affordances and its constraints/obstacles, adapting to that world (which includes others and social institutions/practices as well as “natural” things), learning how to negotiate it in ways that further their purposes, acting to change it where possible, suffering what it deals out when changes are not feasible.  In this view, there can never be a “neutral” view of something or some situation; things and situations are “always already” assessed and characterized in relation to needs and purposes.  “The trail of the human serpent is over all” as James memorably put it; there is no “innocent” seeing.

Here’s Rorty’s version of that way of understanding how humans are situated in the world.  “Brandom wants to get from the invidious comparison made in such de re ascriptions as ‘She believes of a cow that it is a deer,’ to the traditional distinction between subjective appearance and objective reality.  It seems to me that all such invidious comparisons give one is a distinction between better and worse tools for handling the situation at hand—the cow, the planets, whatever.  They do not give us a distinction between more or less accurate descriptions of what the thing really is, in the sense of what it is all by itself, apart from the utilities of human tools for human purposes. . . . I can restate my doubts by considering Brandom’s description of ‘intellectual progress’ as ‘making more and more true claims about the things that are really out there to be talked and thought about.’  I see intellectual progress as developing better and better tools for better and better purposes—better, of course, by our lights” (172).

We are in the Darwinian soup, always navigating our way through an environment that provides opportunities and poses threats.  There is no way to abstract ourselves from that immersion.  And Rorty thinks we will be better off if we make common cause with the others in the same predicament.

Fact/Value

I noted in my last post that many twentieth-century artists aspired to an “innocent” perception of the world; they wanted to see (and sense) the world’s furniture outside of the “concepts” by which we categorize things.  We don’t know if babies enjoy such innocence in the first few months of life—or if they only perceive an undifferentiated chaos.  It is certainly true that, by six months at the latest, infants have attached names to things.  Asked to reach for the cup, the six-month-old will grasp the cup, not the plate.

If the modernist artist (I have no idea what 21st century artists are trying to do) wants to sever the tight bond between percept and concept, it has been the scientists who want to disentangle fact from value.  The locus classicus of the fact/value divide is Hume’s insistence that we cannot derive an “ought” from an “is.”  For humanists, that argument appears to doom morality to irreality, to merely being something that humans make up.  So the humanists strive to reconnect fact and value.  But, for many scientists, the firewall between fact and value is exactly what underlies science’s ability to get at the “truth” of the way things are.  Only observations and propositions (assertions) shorn of value have any chance of being “objective.”  Values introduce a “bias” into accounts of what is the case, of what pertains in the world.

Thus it has been artists, humanists, and philosophers friendly to aesthetics and qualia who have argued that fact and value cannot be disentangled.  Pragmatism offers the most aggressive of these philosophical assaults on the fact/value divide.  The tack pragmatism takes in these debates is not to argue against Hume’s logic, his “demonstration” that you can’t deduce an “ought” from an “is.”

Instead, pragmatism offers a thoroughly Darwinian account of human (and not just human) being in the world.  Every living creature is always and everywhere “evaluating” its environment.  There are no passive perceivers.  Pragmatism denies what James and Dewey both labeled “the spectator view of knowledge.”  Humans (and other animals) are not distanced from the world, looking at it from afar, and making statements about it from that position of non-involvement.  Rather, all organisms are immersed in an environment, acting upon it even as they are being acted upon by it.  The organism is, from the start, engaged in evaluating what in that environment might be of use to it and what might be a threat.  The pursuit of knowledge (“inquiry” in the pragmatist jargon) is 1) driven by this need to evaluate the environment in terms of resource/threat and 2) an active process of doing things (experiments; trial and error) that will better show if what the environment offers will serve or should be avoided.

If, in this Darwinian/pragmatist view, an organism were to encounter anything that was neutral, that had no impact one way or the other on the organism’s purposes, that thing would most likely not be noticed at all, or would quickly disappear as a subject of interest or attention.  As I mentioned in the last post, this seems a flaw in pragmatist psychology.  Humans and other animals display considerable curiosity, prodding at things to learn about them even in the absence of any obvious or direct utility.  There are, I would argue, instances of “pure” research, where the pay-off is not any discernible improvement in an organism’s ability to navigate the world.  Sometimes we just want to know something to satisfy that particular kind of itch.

So maybe the idea is that scientists aspire to that kind of purity, just as so many 20th century artists aspired to the purity of a non-referential, non-thought-laden art.  And that scientific version of the desire for purity gets connected to an epistemological claim that only such purity can guarantee the non-biased truth of the conclusions the scientist reaches.  The pragmatist will respond: 1) there is still the desire for knowledge driving your inquiry, so you have not achieved a purity that removes the human agent and her interests; and 2) the very process of inquiry, which is interactive, means that the human observer has influenced what the world displays to her observations (which is why Heisenberg’s work was so crucial to Dewey—and seemed to Dewey a confirmation of what pragmatism had been saying for thirty years before Heisenberg articulated his axioms about observation and uncertainty.)  The larger point: since action (on the part of humans and for other organisms) is motivated, and because knowledge can only be achieved through action (not passively), there is no grasping of “fact” that has not been driven by some “value” being attached to gaining (seeking out) that knowledge. 

Even if we accept this pragmatist assault on the fact/value divide, we are left with multiple problems.  One is linguistic.  Hilary Putnam, in The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard UP, 2002), basically argues that there are no neutral words, at least no neutral nouns or verbs.  (I have written about Kenneth Burke’s similar argument in my book, Pragmatist Politics [University of Minnesota P, 2012].)  Every statement about the world establishes the relation of the speaker to that world (and to the people to whom the statement is addressed).  In other words, every speech act is a way of adjusting the speaker’s relation to the content and the audience of that utterance.  Speech acts, like all actions, are motivated—and thus can be linked back to what the speaker values, what the speaker strives to accomplish.  Our words are always shot through and through with values—from the start.  And those values cannot be drained from our words (or from our observations) to leave only a residue of “pure” fact.  Fact and value are intertwined from the get go—and cannot be disentangled.

Putnam, however (like James, Dewey, and Kenneth Burke), is a realist.  Even if, as James memorably puts it, “the trail of the human serpent is over all,” the entanglement of human aspirations with observations about the world and others does not mean the non-self must forego its innings.  There is feedback.  Reality does make itself known in the ways that it offers resistance to attempts to manipulate it.  James insists that we don’t know how plastic “reality” is until we have tried to push the boundaries of what we deem possible.  But limits will be reached, will be revealed, at certain points.  Pragmatism’s techno-optimism means that James and Dewey thought that today’s limits might be overcome tomorrow.  That’s what generates the controversial pragmatist “theory of truth.”  Truth is what the experiments, the inquiries, of today have revealed.  But those truths can only be reached as a result of a process of experimentation, not passively observed, and those truths are not “final,” because future experiments may reveal new possibilities in the objects that we currently describe in some particular way.  Science is constantly upending received notions of the way things are.  If the history of science tells us anything, it should be that “certainties” continually yield to new and different accounts.  Truth is “made” through the process of inquiry—and truth is provisional.  Truth is, in Popper’s formulation, always open to disconfirmation.

In short, pragmatism destabilizes “fact” even as it proclaims “value” is ineliminable.

I have suggested that “fact” is best understood as what in the external world frustrates (or, at least, must be navigated by) desire.  Wishes are not horses.  Work must be done to accomplish some approximation of what one desires.  The point is simply that facts are not stable and that our account of facts will be the product of our interaction with them, an interaction that is driven by the desires that motivate that engagement.

What about “value”?  I have been using the term incredibly loosely.  If we desire something, then we value it.  But we usually want to distinguish between different types of value—and morality usually wants to gain a position from which it can endorse some desires and condemn others.  In short, value is a battleground, where there are constant attempts to identify what is “truly” valuable alongside attempts to banish imposters from the field.  There is economic value, eudemonic value, moral value, and health value.  So, for example, one can desire to smoke tobacco, but in terms of “value for health,” that desire will be deemed destructive. 

Any attempt to put some flesh on the bare bones term “value” will immediately run into the problem of “value for” and “pure” (or intrinsic) value.  Some values are instrumental; they are means toward a specific end.  If you want to be healthy, it is valuable not to smoke.  If you want to become a concert pianist, it is valuable to practice. 

The search for “intrinsic” values can quickly become circular—or lead to infinite regress. Is the desire to become a concert pianist “intrinsic”? It certainly seems to function as an end point, as something desired that motivates and organizes a whole set of actions.  But it is easy to ask “why” do I value becoming a concert pianist so highly?  For fame, for love of music, to develop what I seem to have talent for (since, given my inborn talents, I couldn’t become a professional baseball player)?  Do we—could we ever—reach rock bottom here? 

The Darwinians, of course, think they have hit rock bottom.  Survival to the point of being able to reproduce.  That’s the fundamental value that drives life.  The preservation of life across multiple generations.  When organisms are, from the get go, involved in “evaluation,” in assessing what in the environment is of value to them, that evaluation is in terms of what avails life.  (The phrase “wealth is what avails life” comes from a very different source: John Ruskin’s Unto this Last, his screed against classical liberalism’s utilitarian economics.) 

One problem for the Darwinians is that humans (at least, among animals) so often value things, and act in ways, that thwart or even contradict the Darwinian imperatives.  Daniel Dennett argues that such non-Darwinian desires are “parasites”; they hitch a ride on the capacities that the human organism has developed through natural selection’s overriding goal of making a creature well suited to passing on its genes.  Some parasites, Dennett writes, “surely enhance our fitness, making us more likely to have lots of descendants (e.g. methods of hygiene, child-rearing, food preparation); others are neutral—but may be good for us in other, more important regards (e.g., literacy, music, and art), and some are surely deleterious to our genetic future, but even they may be good for us in other ways that matter more to us (the techniques of birth control are an obvious example)” (Freedom Evolves, p. 177).

Whoa!  I love that Dennett is not a Darwinian fundamentalist.  (In particular, it’s good to see him avoid the somersaults other Darwinians perform in their effort to reduce music and art to servants of the need to reproduce.)  The Darwinian imperative does not drive all before it.  But surely it is surprising that Dennett would talk of things that “matter more to us” than the need to ensure our “genetic future.”  He has introduced a pluralism of values into a Darwinian picture that is more usually deployed to identify an overriding fundamental value.

Other candidates for a bedrock intrinsic value run into similar difficulties.  Too much human behavior simply negates each candidate.  For example, we might, with Kant, want to declare that each individual human life is sacred, an end in itself, not to be used or violated.  But every society has articulated conditions under which the killing of another human being is acceptable.  And if we attempt to find “the” value that underwrites this acceptance of killing, nothing emerges.  So it does seem that we are left with a pluralism of values.  Different humans value different things.  And that is true within a single society as well as across cultures.  Values, like facts, are in process—continually being made and re-made.  And, as with facts, there is feedback—in terms of the praise/blame provided by others, but also in terms of the self-satisfaction achieved by acting in accordance with one’s values.  Does becoming a concert pianist satisfy me?  Does it make me happy?  Does it make me respect myself?  Am I full of regrets about the missed opportunities that came with practicing five hours a day?  Would I do it all over again (that ultimate Nietzschean test)?

When it comes to entrenched values, things get even trickier.  Here’s the dilemma: we usually distrust words and actions that are “interested.”  We don’t trust the word of someone who is trying to sell us something.  We want information independent from that which the seller provides.  The seller has an “interest” in distorting the facts.  Now we are back to the urge to find an “objective” viewpoint.  In cases of lying, the issue is straightforward.  The interested party knows all the relevant information, but withholds some of it in order to deceive.

But what if one’s “interest” distorts one’s view of things?  What if the flaw is epistemological more than it is moral?  I see what I want to see. Confirmation bias.  My “values” dictate how I understand the circumstances within which I dwell.  My very assessment of my environment is to a large extent a product of my predilections.  Feedback is of very limited use here.  Humans seem extraordinarily impervious to feedback, able to doggedly pursue counterproductive actions for long periods of time.  In this scenario, “interest” can look a lot like what some theorists call “ideology.”  The question is how to correct for the distortions that are introduced by the interested viewer.  Isn’t there some “fact” of the matter that can settle disputes?

The despairing conclusion is that, in many instances, there is no settling of such disputes.  What would it take to convince Trumpian partisans that the 2020 election was not stolen?  Or that Covid vaccines do not cause cancer?  All the usual forms of “proof” have been unavailing.  Instead of having “fact” drive “value” out of the world as the humanists feared (that fear motivated Kant’s whole philosophy), here we have “value” driving “fact” to the wall.  A world of pluralistic values creates, it now appears, a world of pluralistic facts.  No wonder that we get a call for bolstering the bulwark of “fact.”

As I have already made clear, I don’t think we can get back to a world of unsullied facts (or even that we were ever really there).  Our understandings of the world have always been “interested” in the ways that pragmatism identifies.  The only safeguard against untrammeled fantasy is feedback—and the 2020 stolen election narrative shows how successfully feedback can be avoided.  We have the various rhetorical moves in our toolbox—the presentation of evidence, the making of arguments, the outlining of consequences, emotional appeals to loyalties, sympathies, and indignation—for getting people to change their minds. These techniques are the social forms of feedback that go along with the impersonal feedback provided by the world at large.  But that’s it.  There is no definitive clincher, no knock-down argument or proof that will get everyone to agree.  It’s pluralism all the way down.

Here’s something else that is deeply troubling—and about which I don’t know exactly where I stand.  Is there any difference between “interest” and moral value?  Usually the two are portrayed as opposed.  Morality tries to get individuals to view things from a non-personal point of view (Thomas Nagel’s famous “view from nowhere” or Rawls’ “veil of ignorance”).  “Interest” is linked to what would personally benefit me—with nary a care for how it might harm you.  Some philosophers try to bridge this gap with the concept of “enlightened self-interest.”  The idea is that social interactions are an iterative game; I am in a long-term relation with you, so cheating you now may pay off in the short-term, but actually screws up the possibility of a sustained, and mutually beneficial, relationship over the long haul.  So it is not really in my interest to harm you in this moment.  Morality, then, becomes prudential; it is the wisest thing to do given that we must live together.  Humans are social animals and the basic rules of morality (which encode various forms of consideration of the other) make social relations much better for all involved.

In this scenario, then, “interest” and “moral value” are the same if (and only if) the individual takes a sufficiently “long view” of his interest.  The individual’s “interests” are what he values—and among the things he values is acting in ways his society deems “moral.”  There will remain a tension between what seems desirable in the moment and the longer term interest served by adhering to morality’s attempt to instill the “long view,” but that tension does not negate the idea that the individual is acting in his “interest.”

A more stringent view, one that would drive a deeper wedge between morality and interest, would hold that morality always calls for some degree of self-abnegation.  Morality requires altruism or sacrifice, the voluntary surrender of things I desire to others.  I must act against selfishness, against self-interest, in order to become truly moral.  This is the view of morality that says it entails the curbing of desire.  I must renounce some of my desires to be moral.  Thus, morality is not merely prudential, not just the most winning strategy for self-interest in the long run.  Morality introduces a set of values that are in contradiction with the values that characterize self-interest.  Morality brings along with it prohibitions, not just recommendations of the more prudent way to handle relations with one’s fellows.  It’s not simply practice justice if you want peace.  It’s practice justice even if there is no pay-off, even if you are only met with ingratitude and resentment.

To go back to the basic Darwinian/pragmatist scenario: we have an organism embedded in an environment.  That organism is always involved in evaluating that environment in terms of its own needs and interests.  Thus values are there from the very start and inextricably involved in the perception of facts.  The question is whether we can derive the existence of morality in human societies from the needs that arise from the fact of human sociality.  That’s the Darwinian account of morality offered by Philip Kitcher among others, which understands morality in terms of its beneficial consequences for the preservation and reproduction of human life.  Morality in that account aligns with the fundamental interest identified in Darwinian theory.

Or is morality a Dennett-like parasite?  An intervention into the Darwinian scheme that moves away from a strict pursuit of interest, of what enhances the individual’s survival and ability to reproduce. 

To repeat: I don’t know which alternative I believe.  And am going to leave it there for now.

Percept/Concept

I tried to write a post on the distinction between cognitive and non-cognitive and got completely tangled up.  So, instead, I am taking a step backward and addressing the relation between percept and concept, where I feel on surer ground.  I will follow up this post with another on fact/value.  And then, with those two pairings sorted out, I may be able to say something coherent about the cognitive/non-cognitive pairing.

So here goes.  A percept is what is offered to thought by one of the five senses.  I see or smell something.  The stimulus for the percept is, in most but not all cases, something external to myself. Let’s stick to perception of external things for the moment.  I see a tree, or smell a rose, or hear the wind whistling through the trees.  I have what Hume calls “an impression.”

I have always wanted to follow the lead of J. L. Austin in his Sense and Sensibilia.  In that little book, Austin takes on the empiricist tradition that has insisted, since Locke, that one forms a “representation” or an “idea” (that is the term Locke uses) of the perceived thing. (In the philosophical tradition, this gets called “the way of ideas.”) In other words, there is an intermediary step.  One perceives something, then forms an “idea” of it, and then is able to name, think about, or otherwise manipulate that idea.  The powers of thought and reflection depend upon this translation of the impression, of the percept, into an idea (some sort of mental entity).  Austin, to my mind, does a good job of destroying that empiricist account, opting instead for direct perception, dispensing with the intermediary step of forming an idea–and thus with any appeal to some kind of “mental state”–to understand perception.

But Kevin Mitchell in Free Agents (Princeton UP, 2023) makes a fairly compelling case for raw percepts being transformed into “representations.”  First, there are the differences in perceptual capabilities from one species to another, not to mention differences among members of the same species.  If I am more far-sighted than you, I will see something different from you.  True, that doesn’t necessarily entail indirection as contrasted to direct perception.  But it does mean that the “thing itself” (the external stimulus) does not present itself in exactly the same guise to every perceiving being.  What is perceived is a co-production, created out of the interaction between perceiver and perceived.  There is no “pure” perception.  Perception is always an act that is influenced by the sensory equipment possessed by the perceiver along with the qualities of the thing being perceived. Descriptions of how human sight works make it clear how much “work” is done upon the raw materials of perception before the “thing” is actually seen. And, of course, we know that there are colors that the color-blind cannot perceive and noises that are in most cases beyond human perceptual notice.

Second, the experiences of both memory and language speak to the existence of “representations.”  We are able to think about a perceived thing even in its absence.  To say the word “elephant” is to bring an elephant into the room even when that elephant is not physically present.  Similarly, memory represents to us things that are absent.  Thus, even if we deny that the perception of present things has an intermediary step of transforming the percept into a “representation,” it seems indubitable that we then “store” the immediate impressions in the form of representations that can be called to mind after the moment of direct impression. 

Finally, the existence of representations, of mental analogues to what has been experienced in perception, opens the door for imagination and reflection.  I can play around with what perception has offered once I have a mental representation of it.  I can, in short, think about it.  The sheer weight of facticity is sidestepped once I am inside my head instead of in direct contact with the world.  A space, a distance, is opened up between perceiver and perceived that offers the opportunity to explore options, to consider possible actions upon, manipulations of, what the world offers.  Representation provides an ability to step back from the sensory manifold and take stock.

So it would seem that Austin’s appealing attempt to dispense with the elaborate machinery of empiricist psychology won’t fly.  As accounts of how human vision works show, too much is going on to make a “direct” account of perception true to how perception actually works. Stimuli are “processed” before being registered, not directly apprehended.

So the next issue is what “registering” or “apprehending” consist of.  But first a short digression.  We typically think of perception as the encounter with external things through one of the five senses.  But we can also perceive internal states, like a headache or sore muscle.  In those cases, however, perception does not seem to be tied to one of the five senses, but to some sort of ability to monitor one’s internal states.  Pain and pleasure are the crudest terms for the signals that trigger an awareness of internal states.  More broadly, I think it fair to say that the emotions in their full complex panoply are the markers of internal well-being (or its opposite or the many way stations between absolute euphoria and abject despair).  Emotions are both produced by the body (sometimes in relation to external stimuli, sometimes in relation to internal stimuli) and serve as the signal for self-conscious registering of one’s current states.  It’s as if a tree were not just a tree, but also a signal of “tree-ness.”  Anger is both the fact of anger and a signal to the self of its state of mind in response to some stimulus.  Certain things in the world or some internal state triggers an emotion—and then the emotion offers a path to self-awareness.  So there appears to be an “internal sense capacity”: ways of monitoring internal states and “apprehending” them that parallel the ways the five traditional senses provide for apprehending the external world.

What, then, does it mean to “apprehend” something once the senses have provided the raw materials of an encounter with that thing?  Following Kant, apprehension requires a “determinate judgment.”  The percept is registered by the self when the percept is conceptualized.  Percept must become concept in order to be fully received.  To be concrete: I see the various visual stimuli that the tree offers, but I don’t apprehend the tree until I subsume this particular instance of a tree into the general category/concept “tree.”  I “recognize” the tree as a tree when I declare “that’s a tree.”  The tree in itself, standing there in the forest, does not know itself as a tree.  The concept “tree” is an artifact of human language and human culture.  Percepts only become occasions for knowledge when they are married to concepts.  Pure, non-conceptualized, percepts are just raw material—and cannot be used by human thought.  In other words, back to the representation notion.  Until what perception offers is transformed into a representation, it is unavailable for human apprehension, for being taken up by the human subject as part of its knowledge of the world. (Of course, famously in Kant, this yields the distinction between our representations and the “thing in itself.” The cost of “the way of ideas”–the cost that Austin was trying to overcome–is this gap in our knowledge of the world, our inability to see things independently of the limitations of human perceptual and conceptual equipment. Science attempts to overcome these limitations by using non-human instruments of perception (all those machines in our hospitals), but even science must acknowledge that what a machine registers, just like what a human registers, is a representation that is shaped by the nature of the representing apparatus.)

Determinate judgment appears to be instantaneous.  At least in the case of the encounter with most external things.  I see a tree and, without any discernible time lapse, identify it as a tree.  I have no awareness of processing the sensory signals and then coming to a judgment about what category the seen thing belongs to.  Percept and concept are cemented together.  Of course, there are some cases where I can’t at first make out what it is before me.  The lighting is bad, so I see a shape, but not enough to determine what the thing is.  Such cases do indicate there is a distinction between percept and concept.  But in the vast majority of cases it is just about impossible to pry them apart.

For many artists from Blake on, the effort to pry the two apart is a central ambition.  The basic idea is that we see the world through our conceptual lenses—and thus fail to apprehend it in its full richness, its full sensual plenitude.  We filter out the particulars of this tree as we rush to assimilate the singular instance to the general category.  Thus painters strive to make us see things anew (cubism) or to offer ambiguous items that can’t be immediately or readily identified (surrealism).  They try to drive a wedge between percept and concept. “No ideas but in things,” proclaims William Carlos Williams—and this hostility to ideas, to their preeminence over things (over percepts), is shared by many modern artists.

One of the mysteries of the percept/concept pairing is the relative poverty of our linguistic terms to describe percepts.  We can in most cases quickly identify the tree as a tree, and we can certainly say that the tree’s leaves are green in the spring and rust-colored in the fall.  But more precise linguistic identification of colors eludes us.  We can perceive far more variations in colors than we can describe.  Hence the color chips at any hardware store, which offer 45 variants of the color we crudely call “blue” and rely on fanciful invented names to distinguish each different shade from the rest.  The same, of course, holds for smells and for emotions.  We have a few, very crude terms for smell (pungent, sharp) but mostly can only identify smells in terms of the objects that produce such smells.  It smells flowery, or like a hard-boiled egg.  The same with taste.  Aside from sweet, sour, sharp, we enter the world of simile, so that descriptions of wine notoriously refer to things that are not wine. Notes of black currant, leather, and tobacco.  And when it comes to emotions we are entirely at sea—well aware that our crude generalized terms (love, anger, jealousy) get nowhere near to describing the complexities of what one feels.  Thus some artists (Updike comes to mind) specialize in elaborating on our descriptive vocabularies for physical and emotional percepts.  A whole novel might be devoted to tracing the complexities of being jealous, striving to get into words the full experience of that emotional state.

In any case, the paucity of our linguistic resources for describing various percepts, even in cases where the distinction between the percepts is obvious to us (as in the case of gradients of color), shows (I  believe) that there are ordinary cases where percept and concept are distinct.  We don’t immediately leap to judgment in every case.  Now, it is true that I conceptualize the various shades of blue as “color” and even as “blue.”  But I do not thereby deny that the various shades on the color chip are also different, even though I have no general category to which I can assign those different shades. 

Two more puzzles here.  The first is Wittgensteinian.  I had forgotten, until going through this recently with my granddaughter, how early children master color.  By 18 months, she could identify the basic colors of things.  Multiple astounding things here.  How did she know we were referring to color and not to the thing itself when we call a blue ball “blue”?  What were we pointing out to her: the color or the thing?  Yet she appeared to have no trouble with that possible confusion.  Except.  For a while she called the fruit we name the “orange” an “apple.”  It would seem that she could not wrap her head around the fact that the same word could name both a color and a thing.  She knew “orange” as a color, so would not use that word to name a thing.  Even more amazing than sorting colors from things was her accuracy in identifying a thing’s color.  Given sky blue and navy blue, she would call both “blue.”  A little bit later on (two or three months) she learned to call one “light blue” and the other “dark blue.”  But prior to that distinction, she showed no inclination to think the two were two different colors.  And she didn’t confuse them with purple or any other adjacent color.  So how is it that quite different percepts get tossed into the same category with just about no confusion (in relation to common usage) at all? It would seem more obvious to identify sky blue and navy blue as two different colors.

The second puzzle might be called the “good enough” conundrum.  I walk in the forest and see “trees.”  The forester sees a number of specific species—and very likely also singles out specific trees as “sick” or of a certain age.  His judgments are very, very different from mine—and do not suffer from the paucity of my categorical terms.  Similarly, the vintner may rely on almost comical similes to describe the taste of the wine, but I do not doubt that his perceptions are more intense and more nuanced than mine.  A chicken/egg question here about whether having the concepts then sharpens the percepts—or if sharper percepts then generate a richer vocabulary to describe them.  Or the prior question: do we only perceive with the acuity required by our purposes?  My walk in the woods is pleasant enough for me without my knowing which specific types of trees and ferns I am seeing.  What we “filter out,” in other words, is not just a function of the limitations of our perceptual equipment, or the paucity of our concepts/vocabulary, but also influenced by our purposes.  We attend to what we need to notice to achieve something.

Push this last idea just a bit and we get “pragmatism” and its revision of the empiricist account of perception and the “way of ideas.”  The pragmatist maxim says that our “conception” of a thing is our understanding of its consequences.  That is, we perceive things in relation to the futures that thing makes possible.  Concepts are always dynamic, not static.  They categorize what perception offers in terms of how one wants to position oneself in the world.  Percept/concept is relational—and at issue is the relations I wish to establish (or maintain) between myself and what is “out there” (which includes other people.) 

Back to the artists.  The repugnance many artists (as well as other people) feel toward pragmatism stems from this narrowing down of attention, of what might be perceived.  Focused (in very Darwinian fashion) upon what avails toward the organism’s well-being, the pragmatist self only perceives, only attends to, that which it can turn to account.  It thereby misses much of what is in the world out there.  The artists want to fling open the “doors of perception” (to quote Blake)—and see pragmatism as a species of utilitarianism, a philosophy that notoriously reduces the range of what “matters” to humans as well as reducing the motives for action to a simple calculus of avoiding pain and maximizing pleasure.  To categorize percepts immediately into two bins–these things might benefit me, these things are potentially harmful—is to choose to live in a diminished, perversely impoverished world.

Of course, Dewey especially among the “classic” pragmatists worked hard to resist the identification of pragmatism with a joyless and bare-bones utilitarianism.  The key to this attempt is “qualia”—a term that is also central in the current philosophical debates about consciousness.  “Qualia” might be defined as the “feel of things.”  I don’t just see trees as I walk in the woods.  I also experience a particular type of pleasure—one that mixes peacefulness, the stimulus/joy of physical exertion, an apprehension of beauty, a diffuse sense of well-being, etc.  “Consciousness” (as understood in everyday parlance) registers that pleasure. Consciousness entails that I not only feel the pleasure but can also say to myself that I am feeling this pleasure.  Percepts, in other words, are accompanied by specific feelings that are those percepts’ “qualia.” And through consciousness we can register the fact of experiencing those feelings.

The relation of concepts to “qualia” is, I think, more complex—and leads directly to the next post on the fact/value dyad.  A concept like “fraud” does seem to me to have its own qualia.  Moral indignation is a feeling—and one very likely to be triggered by the thought of fraud.  Perhaps (I don’t know about this) only a specific instance of fraud, not just the general concept of it, is required to trigger moral indignation.  But I don’t think so.  The general idea that American financiers often deploy fraudulent practices seems to me enough to make me feel indignant.

On the other hand, the general concept of “tree” does not seem to me to generate any very specific qualia.  Perhaps a faint sense of approval.  Who doesn’t like trees?  But pretty close to neutral.  The issues, in short, are whether “neutral” percepts  or concepts are possible.  Or do all percepts and concepts generate some qualia, some feelings that can be specified?  And, secondly, are all qualia related to judgments of value?  If we mostly and instantaneously make a judgment about what category a percept belongs to (what concept covers this instance), do we also in most cases and instantaneously judge the “value” of any percept?  That’s what my next post on fact/value will try to consider.

Philosophy and How One Acts

A friend with whom I have been reading various philosophical attempts to come to terms with what consciousness is and does writes to me about “illusionism,” the claim that we do not have selves. We are simply mistaken in thinking the self exists. The basic argument is the classic empiricist case against “substance.” There are various phenomena (let’s call them “mental states” in this case), but no stuff, no thing, no self, to which those mental states adhere, or in which they are collected. Thomas Metzinger is one philosopher who holds this position and in an interview tells us that his position has no experiential consequences. It is not clear to me whether Metzinger thinks (in a Nietzschean way) that the self is an unavoidable illusion or if Metzinger thinks that all the phenomena we attribute to the self would just continue to be experienced in exactly the same way even if we dispensed with the notion (illusion) of the self. In either case, accepting or denying Metzinger’s position changes nothing. Belief or non-belief in the self is not a “difference that makes a difference,” to recall William James’s formula in the first chapter of his book, Pragmatism.

The issue, then, seems to be what motivates a certain kind of intellectual restlessness, a desire to describe the world (the terms of existence) in ways that “get it right”–especially if the motive does not seem to be any effect on actual behavior. It’s “pure” theory, abstracted from any consequences in how one goes about the actualities of daily life.

There does exist, for some people, a certain kind of restless questioning.  I have had a small number of close friends in my life, and what they share is that kind of restlessness.  A desire to come up with coherent accounts of why things are the way they are, especially of why people act the ways they do. People are endlessly surprising and fascinating. Accounting for them leads to speculations that are constantly being revised and restated because each account seems, in one way or another, to fail to “get things right.”  There is always the need for another round of words, of efforts to grasp the “why” and “how” of things.  Most people, in my experience, don’t feel this need to push at things.  I was always trying to get my students to push their thinking on to the next twist—and rarely succeeded in getting them to do so. And for myself this restless, endless inquiry generates a constant stream of words, since each inadequate account means a new effort to try to get it more accurately this time.

Clearly, since I tried to get my students to do this, I think of such relentless questioning as an intellectual virtue. But what is it good for?  I take that to be the core issue of your long email to me.  And I don’t have an answer.  Where id is, ego shall be.  But it seems very clear that being able to articulate one’s habitual ways of (for example) relating to one’s lover, to know what triggers anger or sadness or neediness, does little (if anything) to change the established patterns.  Understanding (even if there were any way to show that the understanding was actually accurate) doesn’t yield much in the way of behavioral results.

This gets to your comment that if people really believed Darwin was right, as many people do, then they wouldn’t eat animals.  William James came to believe that we have our convictions first—and then invent the intellectual accounts/theories that we say justify the convictions.  In other words, we mistake the causal sequence.  We take the cause (our convictions) as the effect (our theory), when it is really the other way around.  Nietzsche was prone to say the very same thing. 

One way to say this: we have Darwin, but will use him to justify exactly opposite behaviors.  You say if we believed Darwin we wouldn’t eat animals.  I assume that the logic is that Darwin reveals animals as our kin, so eating them is a kind of cannibalism.  We don’t eat dogs because they feel “too close” to us; that feeling should be extended to all animals, not just fellow humans and domestic pets.  (The French eat horse meat although Americans won’t.)  But many people use Darwin to rationalize just the opposite.  We humans have evolved as protein-seeking omnivores, and we took to domesticating the animals we eat just as we developed agriculture to grow the plants we eat.  Even if we argue that domestication and agriculture were disasters, proponents of so-called “paleo diets” include meat eating in their attempt to get back to something thought basic to our evolved requirements.  So even if Darwin is absolutely right about how life—and specifically human life—emerged, people will use the content of his theory to justify completely contradictory behaviors.

This analysis, of course, raises two questions.  1) What is the cause of our convictions if it is not some set of articulable beliefs about how the world is?  James’s only answer is “temperament,” an in-built sensibility, a predilection to see the world in a certain way.  (Another book I have just finished reading, Kevin Mitchell’s Free Agents [Princeton UP, 2023], says about 50% of our personality is genetically determined and that less than 10% is derived from family environment.  Mitchell has an earlier book, titled Innate [Princeton UP, 2018], where he goes into detail about how such a claim is supported.)  Nietzsche, in some places, posits an in-built will to power.  All the articulations and intellectualisms are just after-the-fact rationalizations.  In any case, “temperament” is obviously no answer at all.  We do what we do because we are who we are—and how we got to be who we are is a black box.  Try your damnedest, it’s just about impossible to make sure your child ends up heterosexual or with some other set of desires.

2) So why are James and Nietzsche still pursuing an articulated account of “how it really works”?  Is there no consequence at all to “getting it right”?  Shouldn’t their theories also be understood as just another set of “after the fact” rationalizations?  In other words, reason is always late to the party—which suggests that consciousness is not essential to behavior, just an after-effect.

That last statement, of course, is the conclusion put forward by the famous Libet tests.  The ones that say we move our hand milliseconds before we consciously order our hand to move.  Both Dennett (in Freedom Evolves [Penguin, 2003]) and Mitchell (in Free Agents) have to claim the Libet experiment is faulty in order to save any causal power for consciousness.  For the two of them, who want to show that humans actually possess free will, consciousness must be given a role in the unfolding of action.  There has to be a moment of deliberation, of choosing between options—and that choosing is guided by reason (by an evaluation of the options and a decision made between those options) and beliefs (some picture of how the world really is.)  I know, from experience, that I have trouble sleeping if I drink coffee after 2pm.  I reason that I should not drink coffee after 2pm if I want to sleep.  So I refrain from doing so.  A belief about a fact that is connected to a reasoned account of a causal sequence and a desire to have one thing happen rather than another: presto! I choose to do one thing rather than another based on that belief and those reasons.  To make that evaluation certainly seems to require consciousness—a consciousness that observes patterns, that remembers singular experiences that can be assembled into those patterns, that can have positive forward-looking desires to have some outcomes rather than others (hence evaluation of various possible bodily and worldly states of affairs), and that can reason about what courses of action are most likely to bring those states of affairs into being.  (In short, the classical account of “rationality” and of “reason-based action.”)

If this kind of feedback loop actually exists, if I can learn that some actions produce desirable results more dependably than others, then the question becomes (it seems to me): at what level of abstraction does “knowledge” no longer connect to action?  Here’s what I am struggling to see.  Learned behavior, directed by experiences that provide concrete feedback, seems fairly easy to describe in terms of very concrete instances.  But what happens when we get to belief in God—or Darwin?  With belief in God, we seem to see that humans can persist in beliefs without getting any positive feedback at all.  I believe in a loving god even as my child dies of cancer and all my prayers for divine intervention yield no result.  (The classic overdramatized example.)  Faced with this fact, many theologians will just say: it’s not reasonable, so your models of reasoned behavior are simply irrelevant at this point.  A form of dualism.  There’s another belief-to-action loop at play.  Another black box.

On Darwin it seems to me a question of intervention.  Natural selection exists entirely apart from human action/intention/desire etc.  It does its thing whether there are humans in the world or not.  That humans can “discover” the fact of natural selection’s existence and give detailed accounts of how it works is neither here nor there to natural selection itself.  This is science (in one idealized version of what science is): an accurate description of how nature works.  The next step seems to be: is there any way for humans to intervene in natural processes to either 1) change them (as when we try to combat cancer) or 2) harness the energies or processes of nature to serve specific human ends.  (This is separate from how human actions inadvertently, unintentionally, alter natural processes—as is the case in global warming.  I am currently reading Kim Stanley Robinson’s The Ministry for the Future—and will discuss it in a future post.)

In both cases (i.e. intentionally changing a natural process or harnessing the energies of a natural process toward a specifically human-introduced end), what’s driving the human behavior are desires for certain outcomes (health in the case of the cancer patient), or any number of possible desires in the cases of intervention.  I don’t think the scientific explanation has any direct relation to those desires.  In other words, nothing about the Darwinian account of how the world is dictates how one should desire to stand in relation to that world.  Darwin’s theory of evolution, I am saying, has no obvious, necessary, or univocal ethical consequences.  It does not tell us how to live—even if certain Darwinian fundamentalists will bloviate about “survival of the fittest” and gender roles in hunter-gatherer societies.

I keep trying to avoid it, but I am a dualist when it comes to ethics.  The non-human universe has no values, no meanings, no clues about how humans should live.  Hurricanes are facts, just like evolution is a fact.  As facts, they inform us about the world we inhabit—and mark out certain limits that it is very, very useful for us to know.  But the use we put them to is entirely human generated, just as the uses the mosquito puts its world to are entirely mosquito driven.  To ignore the facts, the limits, can be disastrous, but pushing against them, trying to alter them, is also a possibility.  And the scientific knowledge can be very useful in indicating which kinds of intervention will prove effective.  But it has nothing to say about what kinds of intervention are desirable.

I am deeply uncomfortable in reaching this position.  Like most of the philosophers I read, I do not want to be a dualist.  I want to be a naturalist—where “naturalism” means that everything that exists is a product of natural forces.  Hence all the efforts out there to offer an evolutionary account of “consciousness” (thus avoiding any kind of Cartesian dualism) and the complementary efforts to provide an evolutionary account of morality (for example, Philip Kitcher, The Ethical Project [Harvard UP, 2011]).  I am down with the idea that morality is an evolutionary product—i.e. that it develops out of the history and “ecology” of humans as social animals.  But there still seems to me a discontinuity between the morality that humans have developed and the lack of morality of cancer cells, gravity, hurricanes, photosynthesis, and the laws of thermodynamics.  Similarly, there seems to me a gap between the non-consciousness of rocks and the consciousness of living beings.  So I can’t get down with panpsychism even if I am open to evolutionary accounts of the emergence of consciousness from more primitive forms to full-blown self-consciousness.

Of course, some Darwinians don’t see a problem.  Evolution does provide all living creatures with a purpose—to survive—and a meaning—to pass on one’s genes.  Success in life (satisfaction) derives from those two master motives—and morality could be derived from serving those two motives.  Human sociality is a product of those motives (driven in particular by the long immaturity, non-self-sustaining condition, of human children)—and morality is just the set of rules that makes sociality tenable.  So the theory of evolution gives us morality along with an account of how things are.  The fact/value gap overcome.  How to square this picture of evolution with its randomness, its not having any end state in view, is unclear.  The problem of attributing purposes to natural selection, to personifying it, has bedeviled evolutionary theory from the start.

For Dennett, if I am reading him correctly, the cross-over point is “culture”—and, more specifically, language.  Language provides a storage device, a way of accumulating knowledge of how things work and of successful ways of coping in this world.  Culture is a natural product, but once in place it offers a vantage point for reflection upon and intervention in natural processes.  Humans are the unnatural animal, the ones who can perversely deviate from the two master motives of evolution (survival and procreation) even as they strive to submit nature to their whims.  It’s an old theme: humans appear more free from natural drivers, but even as freedom is a source of their pride and glory, it often is the cause of their downfall.  (Hubris anyone?)  Humans are not content with the natural order as they find it.  They constantly try to change it—with sometimes marvelous, with other times disastrous, results.

But that only returns us to the mystery of where this restless desire to revise the very terms of existence comes from.  To go back to James and Nietzsche: it doesn’t seem like our theories, our abstract reasonings and philosophies, are what generate the behavior.  Instead, the restlessness comes first—and the philosophizing comes after as a way of explaining the actions.  See, the philosophers say, the world is this particular way, so it makes sense for me to behave in this specific way.  But, says James, the inclination to behave that way came first—and then the philosophy was tailored to match. 

So, to end this overlong wandering, back where I began.  Bertrand Russell (in his A History of Western Philosophy) said that Darwin’s theory is the perfect expression of rapacious capitalism—and thus it is no surprise that it was devised during the heyday of laissez-faire.  That analysis troubles me because it offers a plausible suspicion of Darwin’s theory along the William James line.  The theory just says the “world is this way” in a manner that justifies the British empire and British capitalism in 1860.  But I really do believe Darwin is right, that he has not just transposed a capitalist world view onto nature.  I am, however, having trouble squaring this circle.  That is, how much does our philosophizing, our theories, just offer abstract versions of our pre-existing predilections—and how much do those theories offer us genuine insights about the world we inhabit, insights that will then affect our behavior on the ground?  A very long-winded way of saying I can’t come up with a good answer to the questions your email posed.