Category: Judgment

Percept/Concept (3): The Power of Culture

Culture is a notoriously vague term, employed to designate groupings as small as a particular school or affinity group (sometimes labeled “sub-cultures”) and groupings as large as “the West.”  Despite its seemingly inevitable vagueness (there is no blood test for culture, and any culture one dares to identify will be riven with conflicts and dissenters that belie its coherence), it is also hard to deny culture’s “stickiness.”  Habits of daily practice, the ways people interact, and the beliefs/values they hold prove fairly difficult to alter.  Efforts to wipe out religion are a good example: over a thousand years of hostility to Jews across Europe and into Eurasia couldn’t kill Judaism off. 

I mentioned in my previous post, when commenting on the work of Andy Clark, that his understanding of expectations (pre-existing categories and projections of what any situation is likely to present to the self) seemed excessively individualistic.  So the following sentence from Nicola Raihani’s book, The Social Instinct (St. Martin’s Press, 2021), resonated with me:  “The idea that beliefs function more as signals of group membership than as vessels of epistemic truth might help us to understand why our brains seem to be chock-full of software that enables us to defend these ideas, even in the face of countervailing evidence” (218).  At the very least, our take on what the world presents to us is influenced by our need to establish solidarity with some particular others as much as it is tuned into the non-human elements of the situation.  Not only are our beliefs in many cases adopted from others, but we cling to those beliefs in order to remain in good standing with those others. 

The other side of this coin is what I have called the desire of many post-Romantic artists to see things straight off, free from any prior cultural designation.  Here is Nietzsche’s version of that desire (aphorism 261 of The Gay Science, quoted here in full from the Walter Kaufmann translation).  As we would expect from Nietzsche, he recognizes the paradoxes embedded in that desire—and how it runs straight into conflict with Kantian “communicability.”  Most “originals” bow, in the end, to the conditions imposed by communicability; these geniuses (to use Kant’s term) end up assigning names, bringing what they have apprehended back into culture’s warehouse.

“What is originality?  To see something that has no name as yet and hence cannot be mentioned although it stares us all in the face.  The way men usually are, it takes a name to make something visible for them.—Those with originality have for the most part also assigned names.”

Percept/Concept 2

I wrote a post some time back that tried to sort out the relation between percepts and concepts.

Here’s the link to that post: https://jzmcgowan.com/2024/03/13/percept-concept/

The issue in that post was the relation of percepts (information taken in via the five senses) to concepts (the categories by which we identify what a percept has given to us).  Mostly (although not entirely) that post assumes that the percept comes first, followed by the judgment (assignment) of the appropriate concept.  The puzzle, in part, was that this kind of temporal sequencing is not experienced in most cases.  The percept and the concept arrive together.  I see a tree.  I don’t see some amorphous set of sense impressions and then decide they form a tree.  The percept comes already conceptualized, categorized.

There are cases where percept and concept are pried apart.  And many artists, especially since the Romantics, strive hard to separate the two, to overcome our habitual associations and expectations.  To break the crust of habit, the received categorizations of culture, is one of their top artistic goals.  Thus, “difficult” poetry strives to slow the reader down, to use words in unfamiliar ways so that we have to puzzle out the meaning instead of simply swallowing it at one gulp.  The same goes for much modern painting and music.  A moment of confusion, of disorientation, is deliberately created.

However, according to what has become the reigning orthodoxy in current consciousness studies, I was working with the wrong model of perception.  The new orthodoxy says 1) that percept and concept cannot be pried apart, and, even more consequentially, 2) that concept always precedes percept.  Here is Andy Clark’s description of the current consensus: “the world we experience is to some degree the world we predict.  Perception itself, far from being a simple window onto the world, is permeated from the get-go by our own predictions and expectations.  It is permeated not simply in the sense that our own ideas and biases impact how we later judge things to be, but in some deeper, more primal, sense.  The perceptual process, the very machinery that keeps us in contact with the world, is itself fueled by a rich seam of prediction and expectation” (The Experience Machine: How Our Minds Predict and Shape Reality, Pantheon Books, 2023, p. 17).  “Every time we make sense of our worlds through perceptual encounters, we do so by means of both the incoming sensory signals and a rich invisible stream of knowledge-based predictions” (22).

This vision does seem very close to the classic pragmatism of Peirce and William James.  We move through the world in a kind of semi-somnolent habitual gliding.  Not closely attending, we walk, see trees, eat food, carry on conversations that move along predictable paths. In the ordinary course of events, very little surprises us, brings us up short.  All unfolds almost entirely as expected.  It takes pretty dramatic deviations from the expected to break through, to make us question what we have casually assumed to be the case. Inquiry (in the pragmatist parlance) begins from doubt. We must set about trying to figure out what we have seen, what is happening, when things don’t go as expected.

Andy Clark is sunnily optimistic about all this.  But it is easy to see how it could be given a pessimistic spin, as a writer like Flaubert (with his fierce hatred of received ideas) does.  That we process the world through our expectations explains confirmation bias and our bog-stupid inability to alter our expectations and prejudices (the latter word precisely the right one for this state of affairs) in light of new evidence, new percepts.  We quite literally do not see what is in front of us; we see what we expect to see.

In The Structure of Scientific Revolutions, Thomas Kuhn reports on an experiment in which the subjects are shown playing cards.  The trick is that the cards have the spades and clubs colored red, and the hearts and diamonds colored black.  Over 80% of the subjects will identify a 6 with black hearts as either a 6 of hearts or a 6 of spades.  Less than 20% will say: that’s a 6 with black hearts.  And increasing the time subjects were given to view the cards did not substantially change the results.  We look at something quickly, make our judgment of what it is, and move on.  Anomalies are hard for us to see.

Of course, it is hopeful that a certain percentage do recognize that something is awry, that what perception is offering does not match what was expected/predicted.  Clark sees humans as self-correcting animals, adjusting their judgments as they go along.  His model is basically one in which “errors” in prediction are registered—and then serve to alter expectations. 

Clark’s reliance on a virtuous feedback loop becomes clear when he turns to an account of action. (Again, his account chimes with classic pragmatism.)  “[S]uccessful action involves a kind of self-fulfilling prophecy.  Predicting the detailed sensory effects of a movement is what causes the movement to unfold.  By making prediction the common root of both perception and action, predictive processing (active inference) reveals a hidden unity in the workings of the mind.  Action and perception form a single whole, jointly orchestrated by the drive to eliminate errors in prediction.  . . . In other words, the idea of a completed action is what brings the actual action about” (70-71).  We are guided by our vision of consequences; we then act to bring the desired consequences about.

The feedback loop comes into play in terms of what Clark calls “precision weightings.”  “Various estimates of precision alter patterns of post-synaptic influence and so determine what (right here, right now) to rely on and what to ignore.  This is also the way brains balance the influence of sensory evidence against predictions.  In other words, precision variations control what bits of what we know and what we sense will be most influential, moment by moment, in bringing about further processing and actions.  Expressed like that, the intimacy of precision and attention is apparent.  Precision variation is what attention (a useful but somewhat nebulous concept) really is.  . . . [A]ttention is the brain adjusting its precision-weightings as we go about our daily tasks, using knowledge and sensing to their best effect.  By attending correctly, I become better able to spot and respond to whatever matters most for the task I am trying to perform.  Precision estimation is thus the heart and soul of flexible, fluid intelligence” (50). 

Although he doesn’t say this, Clark has here added “purposes” to expectations and predictions.  We attend to (notice) the elements of a situation relevant to our current purposes.  And we adjust our understanding of those situational elements (attain more precise readings of the situation) in response to the feedback received as our purposes are attained or thwarted.  So the senses (perceptions) do have their role to play; they do provide information about the situation.  But what information is taken in and how it is processed (evaluated) is guided by the prior purposes/expectations. 
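Clark’s talk of “precision weightings” can be given a concrete, if toy, form.  The sketch below is my own illustration, not anything from Clark’s book (the function name and numbers are invented): it treats the brain’s prior prediction and the incoming sensory evidence as two estimates of the same quantity, each weighted by its precision (inverse variance), in the manner of standard Gaussian cue combination.

```python
def precision_weighted_update(prior, prior_precision, evidence, evidence_precision):
    """Fuse a prior prediction with sensory evidence.

    Each estimate is weighted by its precision (inverse variance),
    so the combined estimate leans toward whichever source the
    system currently treats as more reliable.
    """
    total_precision = prior_precision + evidence_precision
    return (prior_precision * prior + evidence_precision * evidence) / total_precision

# "Trusting the senses": high evidence precision pulls the estimate toward the data.
print(precision_weighted_update(prior=0.0, prior_precision=1.0,
                                evidence=10.0, evidence_precision=9.0))  # 9.0

# "Seeing what we expect": low evidence precision leaves the prior dominant.
print(precision_weighted_update(prior=0.0, prior_precision=9.0,
                                evidence=10.0, evidence_precision=1.0))  # 1.0
```

When the weighting assigned to the senses is turned down, the prior dominates and we “see what we expect”; turning it up lets the evidence win—a crude rendering of Clark’s idea that attention just is the adjustment of these precision weightings.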

Clark invokes William James briefly at this point—and accepts that this account reverses common-sense notions of the causal sequence: i.e., that we see something first, then act upon it; here, by contrast, we see something by virtue of the fact that we are looking for it in relation to our purposes/expectations.  Attention is influenced more by what we expect to see or are specifically looking for than by what is actually present in the world we encounter.  But the ability to shift attention, to move toward a more precise apprehension of the actual situation, is “the heart and soul of flexible, fluid intelligence.”

A corollary of this view is that situations, in almost all cases, contain more elements than any perceiver/agent takes in.  Attention is selective; we simply do not see what is neither expected nor relevant to our purposes.  William James constantly stressed this “more”: the fact that our knowledge is almost always incomplete.  Again, it does seem that many artists are dedicated to bringing more of a situation into view—or, at least, to bringing what has not been seen, what had been neglected by common sense (a loaded term), to our attention.

Clark does not take up the issue of just what is required to break through the crust of expectations. He only briefly considers the problems of entrenched prejudices, of failures to apprehend the real—or even the problem of bad judgments about the actual affordances of a particular situation.  He does talk of “disordered attention” (51) and “aberrant attention” (52) and considers clinical ways of intervening to redirect attention in such cases.  To my mind, however, he is over-optimistic about the ability to shift our incoming biases.

Clark also takes a very individualistic stance on the nature of our preconceptions and expectations.  He sees them as the product of individual experience much more than of socialization (however you want to conceive of the process by which individuals are provided with a set of cultural expectations and beliefs).  And he doesn’t address the problem of the loss of flexibility as one ages.  At what point do an individual’s expectations harden to the point where they are very hard to revise, to un-fix? 

So I have posed two questions: 1) how strong does the disconfirmation of expectations have to be to actually break through and garner attention? What kinds of shocks actually move us to doubt and inquiry (as conceived in the pragmatist model)? And 2) at what age are expectations mostly entrenched and thus resistant to revision?  An open mind is a wonderful thing in part because it is so rare.

And just as open minds are rare, so are truly idiosyncratic individuals.  There is no reason to deny individual variations, but behavior and beliefs also, to a very large extent, clump.  We are all strongly influenced by our closest fellow humans, adopting their styles, beliefs, values, habits, etc. 

William James famously wrote “the trail of the human serpent is over all.”  I will admit that acknowledging the conceptual overlay through which all perceptions are processed depresses me.  I am enough of a Romantic that I want, along with William Blake, to throw open the doors of perception. I don’t deny that the available evidence speaks strongly in favor of the new orthodoxy about how we process the world.  I just want to be among the twenty percent who see red spades.

Richard Rorty and Qualia

I have recently finished reading Richard Rorty’s Pragmatism as Anti-Authoritarianism (Harvard UP, 2021).  The book is basically a series of lectures Rorty gave in Spain in 1996, which (although published in Spanish in the late 1990s) were only published in English in 2021.  Some of the material from the lectures, however, was incorporated into essays Rorty did publish in English.

The occasion for reading the book was to participate in a conversation with my friends Graham Culbertson and Meili Steele—a conversation that will be published as a podcast on Graham’s site, Everyday Anarchism.  I will post a link when the episode goes live.

We did not, in our conversation, get deep into the weeds about Rorty’s epistemology.  We were more interested in his politics and directly in his “anti-authoritarianism” (as befits the context of an interest in anarchism).  Rorty’s claim in the lectures is that a “realist” epistemology (that aspires to a “true” account of how the world really is) is another version of theology.  That is, realism off-loads “authority” to some non-human entity, in this case “the world” instead of “God or gods.”  We must bow before the necessity of the way things are.

For Rorty, there is no definitive way things are, only different ways of describing things.  He understands that his claim 1) goes against “common sense” as well as the views of realist philosophers like Peirce, John Searle, and Thomas Nagel; and 2) cannot be itself justified as a statement of how humans process the world (i.e. as any kind of claim about how humans “are”).  There are no knock-down arguments for his non-realist position as contrasted to the realist one. 

The only basis for argument is comparative.  Try out my way of seeing things, says Rorty, and judge whether you find the consequences better.  I was surprised, reading Rorty this time, by just how hard-core a consequentialist he is.  The only criterion for making choices (whether they be choices of philosophical positions or choices about how to act) is whether one choice leads to more desirable outcomes than another.

Better for what?  That depends entirely on what one’s purposes are.  I can piss in the urinal or I can put it up on the museum wall and call it “Fountain.”  Neither use (or description) gets the urinal “right” (i.e., gets to some “truth” or “essence” or core identity about it).  Both are possible descriptions/uses of it amid a myriad of other possible uses/descriptions—and no one description/use is more fundamental or real than any other.  Rorty’s position is anti-reductionist and anti-depth.  There is no basic “stuff” that underlies appearances, no essence lurking in the depths.  The physicist’s description makes sense within the language game of physics just as Duchamp’s description makes sense within the language game of modernist art.  But neither one can claim to tell the “real truth” about the urinal; each only illuminates another way the urinal can be described in relation to languages that humans deploy in pursuing different endeavors.

Along with being anti-reductionist (no description is more fundamental than any other or offers a way of comprehending other descriptions) and anti-depth, Rorty’s position is that identity is always relational and only activated (valid) in specific contexts.  Hence the appeal to Wittgensteinian “language games.”

What a thing “is” is only a product of its relation to the human who is describing it.  Rorty names his position “pan-relationalism.”  (Title of Chapter Five.)  His position is “that nothing is what it is under every and any description of it.  We can make no use of what is undescribed, apart from its relations to the human needs and interests which have generated one or another description. . . . A property is simply a hypostatized predicate, so there are no properties which are incapable of being captured in language.  Predication is a way of relating things to other things, a way of hooking up bits of the universe with other bits of the universe, or, if you like, of spotlighting certain webs of relationships rather than other webs.  All properties, therefore, are hypostatizations of webs of relationships” (85-86).

As a fellow pragmatist, I am inclined to accept pan-relationalism.  I very much want to be an anti-reductionist and an “appearances are all we get” kind of guy.  Many years ago I coined a slogan that I think captures this pragmatist view.  To wit:  “nothing is necessarily anything, but every thing is necessarily some thing.”  What this slogan says is that (to go back to the previous two posts) the things we encounter are plastic; they can be described—and related to—in a variety of ways.  Those things underdetermine our responses to, understandings of, and descriptions of them.  We can adopt any number of relational stances toward what experience offers.  So that’s the denial that anything is necessarily some particular, definitive, inescapable thing.

The other side of the slogan says: we do adopt a stance toward, we do make a judgment about, we do describe what we encounter.  We characterize it.  And in the Putnam/Burke manner of denying the fact/value divide, that adoption of a stance or a mode of relationship is dependent on the assessment we make of what experience (or the “situation”) offers.  We don’t just perceive or encounter something; we assess it, enter into a relationship with it (even if that relationship is indifference; relationships come in degrees of intensity, from judging this is something I needn’t attend to all the way to passionate involvement).  The claim is that we necessarily adopt some stance; there are multiple possibilities, but not the possibility of having no relation to the thing at all, of leaving it utterly uncategorized.  It will be “some thing,” although not necessarily any one thing.

Rorty offers his own version of this denial of the fact/value divide.  “To be a pan-relationalist means never using the terms ‘objective’ and ‘subjective’ except in the context of some well-defined expert culture in which we can distinguish between adherence to the procedures which lead the experts to agree and refusal to so adhere.  It also means never asking whether a description is better suited to an object than another description without being able to answer the question ‘what purpose is this description supposed to serve?’” (87).  Since all relations to objects are relative to purposes, there is no such thing as a non-relational observation that allows one to “represent the object accurately” (87) as it is in itself. 

So I am down with Rorty’s pan-relationalism.  But where he and I part company—and what generates the title of this blog post—is his denial of any relations that are non-linguistically mediated.  What Rorty wants to jettison from the pragmatism that he inherits from his hero Dewey is the concept of “experience.”  To Rorty, “experience” is just another version of the realist’s desire to establish a direct contact with the stuff of the universe. 

“Pragmatists agree with Wittgenstein that there is no way to come between language and its objects.  Philosophy cannot answer the question: Is our vocabulary in accord with the way the world is?  It can only answer the question: Can we perspicuously relate the various vocabularies we use to one another, and thereby dissolve the philosophical problems which seem to arise at the places where we switch over from one vocabulary into another? . . . If our awareness of things is always a linguistic affair, if Sellars is right that we cannot check our language against our non-linguistic awareness, then philosophy can never be anything more than a discussion of the utility and compatibility of beliefs—and, more particularly, of the vocabularies in which those beliefs are formulated” (165).

In the vocabulary of my last two posts, Rorty writes: “Sellars and Davidson can be read as saying that Aristotle’s slogan, constantly cited by the empiricists, ‘Nothing in the intellect which was not previously in the senses,’ was a wildly misleading way of describing the relation between the objects of knowledge and our knowledge of things. . . . [W]e [should] simply forget about sense-impressions, and other putative mental contents which cannot be identified with judgments” (160).  No percepts in the absence of concepts.  No sensual experiences or emotional states that have not already been judged, already been subsumed under a concept.

This position—that there is no “non-linguistic” experience against which our words are measured—leads to Rorty’s denial of “qualia.”  He accepts Daniel Dennett’s assertion that “there are no such things as qualia” (113), a position Rorty must defend (as Dennett also attempts to do in his work) against “the most effective argument in favor of qualia,” namely “Frank Jackson’s story of Mary the Color Scientist—the history of a woman blind from birth who acquires all imaginable ‘physical’ information about the perception of color, and whose sight is thereafter restored.  Jackson claims that at the moment she can see she learns something that she didn’t know before—namely what blue, red, etc. are like” (113).

In his usual maddening fashion, Rorty tells us the debate between Jackson and Dennett is not resolvable; it just comes down to “intuitions.”  “The antinomies around which philosophical discussions cluster are not tensions built into the human mind but merely reflections of the inability to decide whether to use an old tool or a new one.  The inability to have an argument which amounts to manning one or another intuition pump results from the fact that either tool will serve most of our purposes equally well” (115).  I always think of Rorty as the Alfred E. Neuman of philosophers.  “What, me worry?” as he makes his characteristic deflationary move: nothing much hangs on this disagreement, and there is no way to rationally adjudicate it.  You talk your way, I’ll talk mine, and let a thousand flowers bloom.

I do believe, along with William James (in the first lecture of Pragmatism), that we should be concerned about disagreements that actually generate consequential differences in behavior and practices.  Perhaps it’s because I come from the literary side of thinking about how and why one writes that I do find this difference consequential.  From the literary viewpoint, there is a persistent experience of the inadequacy of words, and a resultant attempt to “get it right,” to capture the “raw feel” of love, jealousy, envy—or of the perceptions and/or emotions that arise during a walk down a crowded city street.  Rorty’s only approach to this sense of what a writer is striving to accomplish seems to me singularly unhelpful.  He tells us that “the alternative is to take them [our linguistic descriptions] as about things, but not as answering to anything, either objects or opinions.  Aboutness is all you need for intentionality.  Answering to things is what the infelicitous notion of representing adds to the harmless notion of aboutness. . . . Aboutness, like truth, is ineffable, and none the worse for that” (171).  So, it seems, Rorty accepts that we talk and write “about” things, but denies that worrying about the “accuracy” of our talk is, in any way, useful.  And he tells us that there really is nothing we can say about “aboutness.”  Not helpful.

Note how this passage trots in the notions of “things” and of “the ineffable.”  The problem with positions that eschew common sense (i.e., the prevailing way we think about things) is that they must strive to revise the “errors” built into the very ways we talk.  Think of Nietzsche’s fulminating against the idea of “selfhood” as generated by grammatical form.  In any case, it’s awfully hard to jettison the hypostatization of “things”—which possess a kind of permanence and relative stability in their modes of manifestation—in favor of a purely relational account of them (of what exactly are we talking if things have no existence except in relationship?  Is it impossible to identify separate entities in any relationship, or are all boundaries dissolved?).  Rorty, in fact, smuggles the world that has been well lost back in when he tells us that “we are constantly interacting with things as well as persons, and one of the ways we interact with both is through their effects upon sensory organs.  But [this view dispenses with] the notion of experience as a mediating tribunal.  [We] can be content with an account of the world as exerting control on our inquiries in a merely causal way” (178-79). 

That causation merits the adjective “merely” follows from Rorty’s insistence that the world’s (or others’) causal powers upon us are distinct from the practices of (the language games of) “justification.”  We should “avoid a confusion between justification and causation, [which] entails claiming that only a belief can justify a belief” (179).  Justification is not based on an appeal to some way the world is, but to the warrants I offer for my belief that a certain stance or a certain course of action is preferable in this context.  Understanding justification in this linguistic, practice-oriented way means “drawing a sharp line between experience as cause of the occurrence of a justification, and experience as itself justificatory.  It means reinterpreting ‘experience’ as the ability to acquire belief non-inferentially as a result of neurologically describable causal transactions with the world” (179-180). 

At this point, I think I must totally misunderstand Rorty because it seems to me he has reintroduced everything that he claimed to be excluding when he declares “I see nothing worth saving in empiricism” (189).  If the world generates beliefs through some causal process—and, even worse, if that generation of beliefs is “non-inferential”—then how have we escaped from the “myth of the given”?  Here’s what Rorty writes immediately following the passage just quoted: “One can restate this reinterpretation of ‘experience’ as the claim that human beings’ only ‘confrontation’ with the world is the sort which computers also have.  Computers are programmed to respond to certain causal transactions with input devices by entering certain program states.  We humans program ourselves to respond to causal transactions between the higher brain centers and the sense organs with dispositions to make assertions.  There is no epistemologically interesting difference between a machine’s program state and our dispositions, and both may equally well be called ‘beliefs’ or ‘judgments’” (180).  Generously, we can translate “We humans program ourselves” to mean “natural selection” has done that work—since (surely) we have been “given” the neurological equipment required to be sensitive to (to register) the world’s causal inputs.  I am pretty sure I didn’t program myself—at least not consciously.  Then again, Rorty is inclined to deny the whole notion of “conscious experience” (see page 121).

To repeat: I must be missing something here, since Rorty thinks appealing to what the world “causally” provides is radically different than appeals to qualia, or conscious experience, or non-linguistic percepts.  And I just don’t see the difference.

More directly to the point, however, is Meili’s brilliant observation in our podcast conversation that Rorty’s politics is based upon sensitivity to suffering, which is hard to claim is linguistic.  Do computers feel pain?  Presumably not, which does seem to introduce an “epistemologically interesting distinction” between the computer’s processes and human dispositions.  In Contingency, Irony, and Solidarity (Cambridge UP, 1989), Rorty characterizes “moral progress” as “the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared with similarities with respect to pain and humiliation” (192).  His politics works from the hope of fostering solidarity through “imaginative identification with the details of others’ lives” (190), with an understanding of others’ vulnerability to pain and humiliation central to that identification, which can produce “a loathing for cruelty—a sense that it is the worst thing we do” (190).  Meili’s point was that “pain and humiliation” are distinctly non-linguistic.  We are being gaslighted when someone tries to convince us we are not feeling pain, have not been humiliated.  Thus, Rorty’s political commitments seem to belie his insistence that it’s all linguistic, that there are no percepts, no experiences apart from linguistic categories/concepts.  Is a dog or a newborn incapable of feeling pain?  Surely not.  Maybe incapable of feeling humiliated—but even that seems an open question.

Still, I don’t want to end by suggesting some kind of full-scale repudiation of Rorty’s work, from which I have learned a lot and to much of which I remain sympathetic.  So I want to close with the passage in the book where I think Rorty makes his case most persuasively.  It is also where he is most Darwinian in precisely the ways that James and Dewey were Darwinian—namely, in viewing humans as “thrown” (Heidegger’s term) into a world with which they must cope.  Humans are constantly interacting with their environment, assessing its affordances and its constraints/obstacles, adapting to that world (which includes others and social institutions/practices as well as “natural” things), learning how to negotiate it in ways that further their purposes, acting to change it where possible, suffering what it deals out when changes are not feasible.  In this view, there can never be a “neutral” view of something or some situation; things and situations are “always already” assessed and characterized in relation to needs and purposes.  “The trail of the human serpent is over all,” as James memorably put it; there is no “innocent” seeing.

Here’s Rorty’s version of that way of understanding how humans are situated in the world.  “Brandom wants to get from the invidious comparison made in such de re ascriptions as ‘She believes of a cow that it is a deer,’ to the traditional distinction between subjective appearance and objective reality.  It seems to me that all such invidious comparisons give one is a distinction between better and worse tools for handling the situation at hand—the cow, the planets, whatever.  They do not give us a distinction between more or less accurate descriptions of what the thing really is, in the sense of what it is all by itself, apart from the utilities of human tools for human purposes. . . . I can restate my doubts by considering Brandom’s description of ‘intellectual progress’ as ‘making more and more true claims about the things that are really out there to be talked and thought about.’  I see intellectual progress as developing better and better tools for better and better purposes—better, of course, by our lights” (172).

We are in the Darwinian soup, always navigating our way through an environment that provides opportunities and poses threats.  There is no way to abstract ourselves from that immersion.  And Rorty thinks we will be better off if we make common cause with the others in the same predicament.

Fact/Value

I noted in my last post that many twentieth-century artists aspired to an “innocent” perception of the world; they wanted to see (and sense) the world’s furniture outside of the “concepts” by which we categorize things.  We don’t know if babies enjoy such innocence in the first few months of life—or if they only perceive an undifferentiated chaos.  It is certainly true that, by six months at the latest, infants have attached names to things.  Asked to reach for the cup, the six-month-old will grasp the cup, not the plate.

If the modernist artist (I have no idea what 21st-century artists are trying to do) wanted to sever the tight bond between percept and concept, it is the scientists who have wanted to disentangle fact from value.  The locus classicus of the fact/value divide is Hume’s insistence that we cannot derive an “ought” from an “is.”  For humanists, that argument appears to doom morality to irreality, to being merely something that humans make up.  So the humanists strive to reconnect fact and value.  But, for many scientists, the firewall between fact and value is exactly what underlies science’s ability to get at the “truth” of the way things are.  Only observations and propositions (assertions) shorn of value have any chance of being “objective.”  Values introduce a “bias” into accounts of what is the case, of what pertains in the world.

Thus it has been artists, humanists, and philosophers friendly to aesthetics and qualia who have argued that fact and value cannot be disentangled.  Pragmatism offers the most aggressive of these philosophical assaults on the fact/value divide.  The tack pragmatism takes in these debates is not to argue against Hume’s logic, his “demonstration” that you can’t deduce an “ought” from an “is.”

Instead, pragmatism offers a thoroughly Darwinian account of human (and not just human) being in the world.  Every living creature is always and everywhere “evaluating” its environment.  There are no passive perceivers.  Pragmatism denies what James and Dewey both labeled “the spectator view of knowledge.”  Humans (and other animals) are not distanced from the world, looking at it from afar, and making statements about it from that position of non-involvement.  Rather, all organisms are immersed in an environment, acting upon it even as they are being acted upon by it.  The organism is, from the start, engaged in evaluating what in that environment might be of use to it and what might be a threat.  The pursuit of knowledge (“inquiry” in the pragmatist jargon) is 1) driven by this need to evaluate the environment in terms of resource/threat and 2) an active process of doing things (experiments; trial and error) that will better show if what the environment offers will serve or should be avoided.

If, in this Darwinian/pragmatist view, an organism were to encounter anything that was neutral, that had no impact one way or the other on the organism’s purposes, that thing would most likely not be noticed at all, or would quickly disappear as a subject of interest or attention.  As I mentioned in the last post, this seems a flaw in pragmatist psychology.  Humans and other animals display considerable curiosity, prodding at things to learn about them even in the absence of any obvious or direct utility.  There are, I would argue, instances of “pure” research, where the pay-off is not any discernible improvement in an organism’s ability to navigate the world.  Sometimes we just want to know something to satisfy that particular kind of itch.

So maybe the idea is that scientists aspire to that kind of purity, just as so many 20th-century artists aspired to the purity of a non-referential, non-thought-laden art.  And that scientific version of the desire for purity gets connected to an epistemological claim that only such purity can guarantee the non-biased truth of the conclusions the scientist reaches.  The pragmatist will respond: 1) there is still the desire for knowledge driving your inquiry, so you have not achieved a purity that removes the human agent and her interests; and 2) the very process of inquiry, which is interactive, means that the human observer has influenced what the world displays to her observations (which is why Heisenberg’s work was so crucial to Dewey—and seemed to Dewey a confirmation of what pragmatism had been saying for thirty years before Heisenberg articulated his axioms about observation and uncertainty).  The larger point: since action (on the part of humans and of other organisms) is motivated, and because knowledge can only be achieved through action (not passively), there is no grasping of “fact” that has not been driven by some “value” being attached to gaining (seeking out) that knowledge.

Even if we accept this pragmatist assault on the fact/value divide, we are left with multiple problems. One is linguistic.  Hilary Putnam, in The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard UP, 2002), basically argues that there are no neutral words, at least no neutral nouns or verbs.  (I have written about Kenneth Burke’s similar argument in my book, Pragmatist Politics [University of Minnesota P, 2012].)  Every statement about the world establishes the relation of the speaker to that world (and to the people to whom the statement is addressed).  In other words, every speech act is a way of adjusting the speaker’s relation to the content and the audience of that utterance.  Speech acts, like all actions, are motivated—and thus can be linked back to what the speaker values, what the speaker strives to accomplish.  Our words are always shot through and through with values—from the start.  And those values cannot be drained from our words (or from our observations) to leave only a residue of “pure” fact.  Fact and value are intertwined from the get-go—and cannot be disentangled.

Putnam, however (like James, Dewey, and Kenneth Burke), is a realist.  Even if, as James memorably puts it, “the trail of the human serpent is over all,” the entanglement of human aspirations with observations about the world and others does not mean the non-self must forgo its innings.  There is feedback.  Reality does make itself known in the ways that it offers resistance to attempts to manipulate it.  James insists that we don’t know how plastic “reality” is until we have tried to push the boundaries of what we deem possible.  But limits will be reached, will be revealed, at certain points.  Pragmatism’s techno-optimism means that James and Dewey thought that today’s limits might be overcome tomorrow.  That’s what generates the controversial pragmatist “theory of truth.”  Truth is what the experiments, the inquiries, of today have revealed.  But those truths can only be reached as a result of a process of experimentation, not passively observed, and those truths are not “final,” because future experiments may reveal new possibilities in the objects that we currently describe in some particular way.  Science is constantly upending received notions of the way things are.  If the history of science tells us anything, it should be that “certainties” continually yield to new and different accounts.  Truth is “made” through the process of inquiry—and truth is provisional.  Truth is, in Popper’s formulation, always open to disconfirmation.

In short, pragmatism destabilizes “fact” even as it proclaims “value” is ineliminable.

I have suggested that “fact” is best understood as what in the external world frustrates (or, at least, must be navigated by) desire.  Wishes are not horses.  Work must be done to accomplish some approximation of what one desires.  The point is simply that facts are not stable and that our account of facts will be the product of our interaction with them, an interaction that is driven by the desires that motivate that engagement.

What about “value”?  I have been using the term incredibly loosely.  If we desire something, then we value it.  But we usually want to distinguish between different types of value—and morality usually wants to gain a position from which it can endorse some desires and condemn others.  In short, value is a battleground, where there are constant attempts to identify what is “truly” valuable alongside attempts to banish imposters from the field.  There is economic value, eudemonic value, moral value, and health value.  So, for example, one can desire to smoke tobacco, but in terms of “value for health,” that desire will be deemed destructive. 

Any attempt to put some flesh on the bare bones term “value” will immediately run into the problem of “value for” and “pure” (or intrinsic) value.  Some values are instrumental; they are means toward a specific end.  If you want to be healthy, it is valuable not to smoke.  If you want to become a concert pianist, it is valuable to practice. 

The search for “intrinsic” values can quickly become circular—or lead to infinite regress. Is the desire to become a concert pianist “intrinsic”? It certainly seems to function as an end point, as something desired that motivates and organizes a whole set of actions.  But it is easy to ask “why” do I value becoming a concert pianist so highly?  For fame, for love of music, to develop what I seem to have talent for (since, given my inborn talents, I couldn’t become a professional baseball player)?  Do we—could we ever—reach rock bottom here? 

The Darwinians, of course, think they have hit rock bottom.  Survival to the point of being able to reproduce.  That’s the fundamental value that drives life.  The preservation of life across multiple generations.  When organisms are, from the get-go, involved in “evaluation,” in assessing what in the environment is of value to them, that evaluation is in terms of what avails life.  (The phrase “wealth is what avails life” comes from a very different source: John Ruskin’s Unto This Last, his screed against classical liberalism’s utilitarian economics.)

One problem for the Darwinians is that humans (at least, among animals) so often value things, and act in ways, that thwart or even contradict the Darwinian imperatives.  Daniel Dennett argues that such non-Darwinian desires are “parasites”; they hitch a ride on the capacities that the human organism has developed through natural selection’s overriding goal of making a creature well suited to passing on its genes.  Some parasites, Dennett writes, “surely enhance our fitness, making us more likely to have lots of descendants (e.g. methods of hygiene, child-rearing, food preparation); others are neutral—but may be good for us in other, more important regards (e.g., literacy, music, and art), and some are surely deleterious to our genetic future, but even they may be good for us in other ways that matter more to us (the techniques of birth control are an obvious example)” (Freedom Evolves, p. 177).

Whoa!  I love that Dennett is not a Darwinian fundamentalist. (In particular, it’s good to see him avoid the somersaults other Darwinians perform in their effort to reduce music and art to servants of the need to reproduce.) The Darwinian imperative does not drive all before it.  But surely it is surprising that Dennett would talk of things that “matter more to us” than the need to ensure our “genetic future.”  He has introduced a pluralism of values into a Darwinian picture that is more usually deployed to identify an overriding fundamental value.

Other candidates for a bedrock intrinsic value run into similar difficulties.  Too much human behavior simply negates each candidate.  For example, we might, with Kant, want to declare that each individual human life is sacred, an end in itself, not to be used or violated.  But every society has articulated conditions under which the killing of another human being is acceptable.  And if we attempt to find “the” value that underwrites this acceptance of killing, nothing emerges.  So it does seem that we are left with a pluralism of values.  Different humans value different things.  And that is true within a single society as well as across cultures.  Values, like facts, are in process—continually being made and re-made.  And, as with facts, there is feedback—in terms of the praise/blame provided by others, but also in terms of the self-satisfaction achieved by acting in accordance with one’s values.  Does becoming a concert pianist satisfy me?  Does it make me happy?  Does it make me respect myself?  Am I full of regrets about the missed opportunities that came with practicing five hours a day?  Would I do it all over again (that ultimate Nietzschean test)?

When it comes to entrenched values, things get even trickier.  Here’s the dilemma: we usually condemn actions that are “interested.”  We don’t trust the word of someone who is trying to sell us something.  We want independent information from that which the seller provides.  The seller has an “interest” in distorting the facts.  Now we are back to the urge to find an “objective” viewpoint.  In cases of lying, the issue is straightforward.  The interested party knows all the relevant information, but withholds some of it in order to deceive.

But what if one’s “interest” distorts one’s view of things?  What if the flaw is epistemological more than it is moral?  I see what I want to see. Confirmation bias.  My “values” dictate how I understand the circumstances within which I dwell.  My very assessment of my environment is to a large extent a product of my predilections.  Feedback is of very limited use here.  Humans seem extraordinarily impervious to feedback, able to doggedly pursue counterproductive actions for long periods of time.  In this scenario, “interest” can look a lot like what some theorists call “ideology.”  The question is how to correct for the distortions that are introduced by the interested viewer.  Isn’t there some “fact” of the matter that can settle disputes?

The despairing conclusion is that, in many instances, there is no settling of such disputes.   What would it take to convince Trumpian partisans that the 2020 election was not stolen?  Or that Covid vaccines do not cause cancer?  All the usual forms of “proof” have been unavailing.  Instead of having “fact” drive “value” out of the world as the humanists feared (that fear motivated Kant’s whole philosophy), here we have “value” driving “fact” to the wall.  A world of pluralistic values creates, it now appears, a world of pluralistic facts.  No wonder that we get a call for bolstering the bulwark of “fact.”

As I have already made clear, I don’t think we can get back to a world of unsullied facts (or even that we were ever really there).  Our understandings of the world have always been “interested” in the ways that pragmatism identifies.  The only safeguard against untrammeled fantasy is feedback—and the 2020 stolen election narrative shows how successfully feedback can be avoided.  We have the various rhetorical moves in our toolbox—the presentation of evidence, the making of arguments, the outlining of consequences, emotional appeals to loyalties, sympathies, and indignation—for getting people to change their minds. These techniques are the social forms of feedback that go along with the impersonal feedback provided by the world at large.  But that’s it.  There is no definitive clincher, no knock-down argument or proof that will get everyone to agree.  It’s pluralism all the way down.

Here’s something else that is deeply troubling—and about which I don’t know exactly where I stand.  Is there any difference between “interest” and moral value?  Usually the two are portrayed as opposed.  Morality tries to get individuals to view things from a non-personal point of view (Thomas Nagel’s famous “view from nowhere” or Rawls’ “veil of ignorance”).  “Interest” is linked to what would personally benefit me—with nary a care for how it might harm you.  Some philosophers try to bridge this gap with the concept of “enlightened self-interest.”  The idea is that social interactions are an iterative game; I am in a long-term relation with you, so cheating you now may pay off in the short-term, but actually screws up the possibility of a sustained, and mutually beneficial, relationship over the long haul.  So it is not really in my interest to harm you in this moment.  Morality, then, becomes prudential; it is the wisest thing to do given that we must live together.  Humans are social animals and the basic rules of morality (which encode various forms of consideration of the other) make social relations much better for all involved.

In this scenario, then, “interest” and “moral value” are the same if (and only if) the individual takes a sufficiently “long view” of his interest.  The individual’s “interests” are what he values—and among the things he values is acting in ways his society deems “moral.”  There will remain a tension between what seems desirable in the moment and the longer term interest served by adhering to morality’s attempt to instill the “long view,” but that tension does not negate the idea that the individual is acting in his “interest.”
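The “iterative game” logic behind enlightened self-interest can be made concrete with a toy simulation of a repeated Prisoner’s Dilemma. This is purely illustrative and not drawn from the text: the payoff numbers and the tit-for-tat partner are standard textbook assumptions, chosen only to show how cheating wins the first round but loses over the long haul.

```python
# Toy repeated Prisoner's Dilemma: illustrative payoffs (assumed, not
# from the text). "C" = cooperate, "D" = defect (cheat).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation: both do well
    ("C", "D"): 0,  # I cooperate, you cheat me
    ("D", "C"): 5,  # I cheat you: the short-term win
    ("D", "D"): 1,  # mutual defection: both do poorly
}

def play(my_strategy, rounds=100):
    """Total payoff from `rounds` of play against a tit-for-tat partner,
    who starts by cooperating and then copies my previous move."""
    partner_move = "C"
    total = 0
    for _ in range(rounds):
        my_move = my_strategy(partner_move)
        total += PAYOFF[(my_move, partner_move)]
        partner_move = my_move  # tit-for-tat mirrors my last move
    return total

always_cheat = lambda _: "D"  # pure short-term self-interest
cooperate = lambda _: "C"     # the "long view" of enlightened self-interest

# The cheater grabs the big payoff once, then gets mutual defection
# forever; the cooperator's steady payoff overtakes it quickly.
print(play(always_cheat))
print(play(cooperate))
```

The design point matches the paragraph above: once the relationship is iterated, the partner’s memory turns a one-shot temptation into a long-run cost, which is why prudence alone can recommend “moral” behavior.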

A more stringent view, one that would drive a deeper wedge between morality and interest, would hold that morality always calls for some degree of self-abnegation.  Morality requires altruism or sacrifice, the voluntary surrender of things I desire to others.  I must act against selfishness, against self-interest, in order to become truly moral.  This is the view of morality that says it entails the curbing of desire.  I must renounce some of my desires to be moral.  Thus, morality is not merely prudential, not just the most winning strategy for self-interest in the long run.  Morality introduces a set of values that are in contradiction with the values that characterize self-interest.  Morality brings along with it prohibitions, not just recommendations of the more prudent way to handle relations with one’s fellows.  It’s not simply: practice justice if you want peace.  It’s: practice justice even if there is no pay-off, even if you are only met with ingratitude and resentment.

To go back to the basic Darwinian/pragmatist scenario: we have an organism embedded in an environment.  That organism is always involved in evaluating that environment in terms of its own needs and interests.  Thus values are there from the very start and inextricably involved in the perception of facts.  The question is whether we can derive the existence of morality in human societies from the needs that arise from the fact of human sociality.  That’s the Darwinian account of morality offered by Philip Kitcher among others, which understands morality in terms of its beneficial consequences for the preservation and reproduction of human life.  Morality in that account aligns with the fundamental interest identified in Darwinian theory.

Or is morality a Dennett-like parasite?  An intervention into the Darwinian scheme that moves away from a strict pursuit of interest, of what enhances the individual’s survival and ability to reproduce. 

To repeat: I don’t know which alternative I believe.  And I am going to leave it there for now.