Category: philosophy

Richard Rorty and Qualia

I have recently finished reading Richard Rorty’s Pragmatism as Anti-Authoritarianism (Harvard UP, 2021).  The book is basically a series of lectures Rorty gave in Spain in 1996, but which (although published in Spanish in the late 1990s) were only published in English in 2021.  Some of the material from the lectures, however, was incorporated into essays Rorty did publish in English.

The occasion for reading the book was to participate in a conversation with my friends Graham Culbertson and Meili Steele—a conversation that will be published as a podcast on Graham’s site, Everyday Anarchism.  I will post a link when the episode goes live.

We did not, in our conversation, get deep into the weeds about Rorty’s epistemology.  We were more interested in his politics and specifically in his “anti-authoritarianism” (as befits the context of an interest in anarchism).  Rorty’s claim in the lectures is that a “realist” epistemology (one that aspires to a “true” account of how the world really is) is another version of theology.  That is, realism off-loads “authority” to some non-human entity, in this case “the world” instead of “God or gods.”  We must bow before the necessity of the way things are.

For Rorty, there is no definitive way things are, only different ways of describing things.  He understands that his claim 1) goes against “common sense” as well as the views of realist philosophers like Peirce, John Searle, and Thomas Nagel; and 2) cannot itself be justified as a statement of how humans process the world (i.e. as any kind of claim about how humans “are”).  There are no knock-down arguments for his non-realist position as contrasted to the realist one.

The only basis for argument is comparative.  Try out my way of seeing things, says Rorty, and judge whether you find the consequences better.  I was surprised, reading Rorty this time, just how hard-core a consequentialist he is.  The only criterion for making choices (whether they be choices of philosophical positions or choices about how to act) is whether one choice leads to more desirable outcomes than another.

Better for what?  That depends entirely on what one’s purposes are.  I can piss in the urinal or I can put it up on the museum wall and call it “Fountain.”  Neither use (or description) gets the urinal “right” (i.e. gets to some “truth” or “essence” or core identity about it).  Both are possible descriptions/uses of it amid a myriad of other possible uses/descriptions—and no one description/use is more fundamental or real than any other one.  Rorty’s position is anti-reductionist and anti-depth.  There is no basic “stuff” that underlies appearances, no essence lurking in the depths.  The physicist’s description makes sense within the language game of physics just as Duchamp’s description makes sense within the language game of modernist art.  But neither one can claim to tell the “real truth” about the urinal; each only illuminates another way the urinal can be described in relation to languages that humans deploy in pursuing different endeavors.

Along with being anti-reductionist (no description is more fundamental than any other or offers a way of comprehending other descriptions) and anti-depth, Rorty’s position is that identity is always relational and only activated (valid) in specific contexts.  Hence the appeal to Wittgensteinian “language games.”

What a thing “is” is only a product of its relation to the human who is describing it.  Rorty names his position “pan-relationalism.”  (Title of Chapter Five.)  His position is “that nothing is what it is under every and any description of it.  We can make no use of what is undescribed, apart from its relations to the human needs and interests which have generated one or another description. . . . A property is simply a hypostatized predicate, so there are no properties which are incapable of being captured in language.  Predication is a way of relating things to other things, a way of hooking up bits of the universe with other bits of the universe, or, if you like, of spotlighting certain webs of relationships rather than other webs.  All properties, therefore, are hypostatizations of webs of relationships” (85-86).

As a fellow pragmatist, I am inclined to accept pan-relationalism.  I very much want to be an anti-reductionist and an “appearances are all we get” kind of guy.  Many years ago I coined a slogan that I think captures this pragmatist view.  To wit:  “nothing is necessarily anything, but every thing is necessarily some thing.”  What this slogan says is that (to go back to the previous two posts) the things we encounter are plastic; they can be described—and related to—in a variety of ways.  Those things underdetermine our responses to, understandings of, and descriptions of them.  We can adopt any number of relational stances toward what experience offers.  So that’s the denial that anything is necessarily some particular, definitive, inescapable thing.

The other side of the slogan says: we do adopt a stance toward, we do make a judgment about, we do describe what we encounter.  We characterize it.  And in the Putnam/Burke manner of denying the fact/value divide, that adoption of a stance or a mode of relationship is dependent on the assessment we make of what experience (or the “situation”) offers.  We don’t just perceive or encounter something; we assess it, enter into a relationship to it (even if that relationship is indifference; relationships come in degrees of intensity, from judging this is something I needn’t attend to all the way to passionate involvement).  The claim is that we necessarily adopt some stance; there are multiple possibilities, but not the possibility of having no relation to the thing at all, of leaving it utterly uncategorized.  It will be “some thing,” although not necessarily any one thing.

Rorty offers his own version of this denial of the fact/value divide.  “To be a pan-relationalist means never using the terms ‘objective’ and ‘subjective’ except in the context of some well-defined expert culture in which we can distinguish between adherence to the procedures which lead the experts to agree and refusal to so adhere.  It also means never asking whether a description is better suited to an object than another description without being able to answer the question ‘what purpose is this description supposed to serve?’” (87).  Since all relations to objects are relative to purposes, there is no such thing as a non-relational observation that allows one to “represent the object accurately” (87) as it is in itself.

So I am down with Rorty’s pan-relationalism.  But where he and I part company—and what generates the title of this blog post—is his denial of any relations that are non-linguistically mediated.  What Rorty wants to jettison from the pragmatism that he inherits from his hero Dewey is the concept of “experience.”  To Rorty, “experience” is just another version of the realist’s desire to establish a direct contact with the stuff of the universe. 

“Pragmatists agree with Wittgenstein that there is no way to come between language and its objects.  Philosophy cannot answer the question: Is our vocabulary in accord with the way the world is?  It can only answer the question: Can we perspicuously relate the various vocabularies we use to one another, and thereby dissolve the philosophical problems which seem to arise at the places where we switch over from one vocabulary into another? . . . . If our awareness of things is always a linguistic affair, if Sellars is right that we cannot check our language against our non-linguistic awareness, then philosophy can never be anything more than a discussion of the utility and compatibility of beliefs—and, more particularly, of the vocabularies in which those beliefs are formulated” (165).

In the vocabulary of my last two posts, Rorty writes: “Sellars and Davidson can be read as saying that Aristotle’s slogan, constantly cited by the empiricists, ‘Nothing in the intellect which was not previously in the senses,’ was a wildly misleading way of describing the relation between the objects of knowledge and our knowledge of things. . . . [W]e [should] simply forget about sense-impressions, and other putative mental contents which cannot be identified with judgments” (160).  No percepts in the absence of concepts.  No sensual experiences or emotional states that have not already been judged, already been subsumed under a concept.

This position—that there is no “non-linguistic” experience against which our words are measured—leads to Rorty’s denial of “qualia.”  He accepts Daniel Dennett’s assertion that “there are no such things as qualia” (113), a position Rorty must defend (as Dennett also attempts to do in his work) against “the most effective argument in favor of qualia,” namely “Frank Jackson’s story of Mary the Color Scientist—the history of a woman blind from birth who acquires all imaginable ‘physical’ information about the perception of color, and whose sight is thereafter restored.  Jackson claims that at the moment she can see she learns something that she didn’t know before—namely what blue, red, etc. are like” (113).

In his usual maddening fashion, Rorty tells us the debate between Jackson and Dennett is not resolvable; it just comes down to “intuitions.”  “The antinomies around which philosophical discussions cluster are not tensions built into the human mind but merely reflections of the inability to decide whether to use an old tool or a new one.  The inability to have an argument which amounts to manning one or another intuition pump results from the fact that either tool will serve most of our purposes equally well” (115).  I always think of Rorty as the Alfred E. Neuman of philosophers.  “What, me worry?” as he makes his characteristic deflationary move: nothing much hangs on this disagreement, and there is no way to rationally adjudicate it.  You talk your way, I’ll talk mine, and let a thousand flowers bloom.

I do believe, along with William James (in the first lecture of Pragmatism), that we should be concerned about disagreements that actually generate consequential differences in behavior and practices.  Perhaps it’s because I come from the literary side of thinking about how and why one writes that I do find this difference consequential.  From the literary viewpoint, there is a persistent experience of the inadequacy of words, and a resultant attempt to “get it right,” to capture the “raw feel” of love, jealousy, envy—or of the perceptions and/or emotions that arise during a walk down a crowded city street.  Rorty’s only approach to this sense of what a writer is striving to accomplish seems to me singularly unhelpful.  He tells us that “the alternative is to take them [our linguistic descriptions] as about things, but not as answering to anything, either objects or opinions.  Aboutness is all you need for intentionality.  Answering to things is what the infelicitous notion of representing adds to the harmless notion of aboutness. . . . Aboutness, like truth, is ineffable, and none the worse for that” (171).  So, it seems, Rorty accepts that we talk and write “about” things, but denies that worrying about the “accuracy” of our talk is, in any way, useful.  And he tells us that there really is nothing we can say about “aboutness.”  Not helpful.

Note how this passage trots in the notions of “things” and of “the ineffable.”  The problem with positions that eschew common sense (i.e. the prevailing way we think about things) is that they must strive to revise the “errors” built into the very ways we talk.  Think of Nietzsche’s fulminating against the idea of “selfhood” as generated by grammatical form.  In any case, it’s awfully hard to jettison the hypostatization of “things”—which possess a kind of permanence and relative stability in their modes of manifestation—in favor of a purely relational account of them.  (Of what exactly are we talking if things have no existence except in relationship?  Is it impossible to identify separate entities in any relationship, or are all boundaries dissolved?)  Rorty, in fact, smuggles the world that has been well lost back in when he tells us that “we are constantly interacting with things as well as persons, and one of the ways we interact with both is through their effects upon sensory organs.  But [this view dispenses with] the notion of experience as a mediating tribunal.  [We] can be content with an account of the world as exerting control on our inquiries in a merely causal way” (178-79).

That causation merits the adjective “merely” follows from Rorty’s insistence that the world’s (or others’) causal powers upon us are distinct from the practices of (the language games of) “justification.”  We should “avoid a confusion between justification and causation, [which] entails claiming that only a belief can justify a belief” (179).  Justification is not based on an appeal to some way the world is, but to the warrants I offer for my belief that a certain stance or a certain course of action is preferable in this context.  Understanding justification in this linguistic, practice-oriented way means “drawing a sharp line between experience as cause of the occurrence of a justification, and experience as itself justificatory.  It means reinterpreting ‘experience’ as the ability to acquire belief non-inferentially as a result of neurologically describable causal transactions with the world” (179-180).

At this point, I think I must totally misunderstand Rorty because it seems to me he has reintroduced everything that he claimed to be excluding when he declares “I see nothing worth saving in empiricism” (189).  If the world generates beliefs through some causal process—and, even worse, if that generation of beliefs is “non-inferential”—then how have we escaped from the “myth of the given”?  Here’s what Rorty writes immediately following the passage just quoted: “One can restate this reinterpretation of ‘experience’ as the claim that human beings’ only ‘confrontation’ with the world is the sort which computers also have.  Computers are programmed to respond to certain causal transactions with input devices by entering certain program states.  We humans program ourselves to respond to causal transactions between the higher brain centers and the sense organs with dispositions to make assertions.  There is no epistemologically interesting difference between a machine’s program state and our dispositions, and both may equally well be called ‘beliefs’ or ‘judgments’” (180).  Generously, we can translate “We humans program ourselves” to mean that “natural selection” has done that work—since (surely) we have been “given” the neurological equipment required to be sensitive to (to register) the world’s causal inputs.  I am pretty sure I didn’t program myself—at least not consciously.  Then again, Rorty is inclined to deny the whole notion of “conscious experience” (see page 121).

To repeat: I must be missing something here, since Rorty thinks appealing to what the world “causally” provides is radically different from appeals to qualia, or conscious experience, or non-linguistic percepts.  And I just don’t see the difference.

More directly to the point, however, is Meili’s brilliant observation in our podcast conversation that Rorty’s politics is based upon sensitivity to suffering, which is hard to claim is linguistic.  Do computers feel pain?  Presumably not, which does seem to introduce an “epistemologically interesting distinction” between the computer’s processes and human dispositions.  In Contingency, Irony, and Solidarity (Cambridge UP, 1989), Rorty characterizes “moral progress” as “the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared with similarities with respect to pain and humiliation” (192).  His politics works from the hope of fostering solidarity through “imaginative identification with the details of others’ lives” (190), with an understanding of others’ vulnerability to pain and humiliation central to that identification, which can produce “a loathing for cruelty—a sense that it is the worst thing we do” (190).  Meili’s point was that “pain and humiliation” are distinctly non-linguistic.  We are being gas-lighted when someone tries to convince us we are not feeling pain, have not been humiliated.  Thus, Rorty’s political commitments seem to belie his insistence that it’s all linguistic, that there are no percepts, no experiences apart from linguistic categories/concepts.  Is a dog or a newborn incapable of feeling pain?  Surely not.  Maybe incapable of feeling humiliated—but even that seems an open question.

Still, I don’t want to end by suggesting some kind of full-scale repudiation of Rorty’s work, from which I have learned a lot and to much of which I remain sympathetic.  So I want to close with the passage in the book where I think Rorty makes his case most persuasively.  It is also where he is most Darwinian in precisely the ways that James and Dewey were Darwinian—namely, in viewing humans as “thrown” (Heidegger’s term) into a world with which they must cope.  Humans are constantly interacting with their environment, assessing its affordances and its constraints/obstacles, adapting to that world (which includes others and social institutions/practices as well as “natural” things), learning how to negotiate it in ways that further their purposes, acting to change it where possible, suffering what it deals out when changes are not feasible.  In this view, there can never be a “neutral” view of something or some situation; things and situations are “always already” assessed and characterized in relation to needs and purposes.  “The trail of the human serpent is over all,” as James memorably put it; there is no “innocent” seeing.

Here’s Rorty’s version of that way of understanding how humans are situated in the world.  “Brandom wants to get from the invidious comparison made in such de re ascriptions as ‘She believes of a cow that it is a deer,’ to the traditional distinction between subjective appearance and objective reality.  It seems to me that all such invidious comparisons give one is a distinction between better and worse tools for handling the situation at hand—the cow, the planets, whatever.  They do not give us a distinction between more or less accurate descriptions of what the thing really is, in the sense of what it is all by itself, apart from the utilities of human tools for human purposes. . . . I can restate my doubts by considering Brandom’s description of ‘intellectual progress’ as ‘making more and more true claims about the things that are really out there to be talked and thought about.’  I see intellectual progress as developing better and better tools for better and better purposes—better, of course, by our lights” (172).

We are in the Darwinian soup, always navigating our way through an environment that provides opportunities and poses threats.  There is no way to abstract ourselves from that immersion.  And Rorty thinks we will be better off if we make common cause with the others in the same predicament.

Percept/Concept

I tried to write a post on the distinction between cognitive and non-cognitive and got completely tangled up.  So, instead, I am taking a step backward and addressing the relation between percept and concept, where I feel on surer ground.  I will follow up this post with another on fact/value.  And then, with those two pairings sorted out, I may be able to say something coherent about the cognitive/non-cognitive pairing.

So here goes.  A percept is what is offered to thought by one of the five senses.  I see or smell something.  The stimulus for the percept is, in most but not all cases, something external to myself. Let’s stick to perception of external things for the moment.  I see a tree, or smell a rose, or hear the wind whistling through the trees.  I have what Hume calls “an impression.”

I have always wanted to follow the lead of J. L. Austin in his Sense and Sensibilia.  In that little book, Austin takes on the empiricist tradition that has insisted, since Locke, that one forms a “representation” or an “idea” (that is the term Locke uses) of the perceived thing.  (In the philosophical tradition, this gets called “the way of ideas.”)  In other words, there is an intermediary step.  One perceives something, then forms an “idea” of it, and then is able to name, think about, or otherwise manipulate that idea.  The powers of thought and reflection depend upon this translation of the impression, of the percept, into an idea (some sort of mental entity).  Austin, to my mind, does a good job of destroying that empiricist account, opting instead for direct perception—dispensing with the intermediary step of forming an idea and thus with any appeal to some kind of “mental state” to understand perception.

But Kevin Mitchell in Free Agents (Princeton UP, 2023) makes a fairly compelling case for raw percepts being transformed into “representations.”  First, there are the differences in perceptual capabilities from one species to another, not to mention differences among members of the same species.  If I am more far-sighted than you, I will see something different from you.  True, that doesn’t necessarily entail indirection as contrasted to direct perception.  But it does mean that the “thing itself” (the external stimulus) does not present itself in exactly the same guise to every perceiving being.  What is perceived is a co-production, created out of the interaction between perceiver and perceived.  There is no “pure” perception.  Perception is always an act that is influenced by the sensory equipment possessed by the perceiver along with the qualities of the thing being perceived.  Descriptions of how human sight works make it clear how much “work” is done upon the raw materials of perception before the “thing” is actually seen.  And, of course, we know that there are colors that the color-blind cannot perceive and noises that are in most cases beyond human perceptual notice.

Second, the experiences of both memory and language speak to the existence of “representations.”  We are able to think about a perceived thing even in its absence.  To say the word “elephant” is to bring an elephant into the room even when that elephant is not physically present.  Similarly, memory represents to us things that are absent.  Thus, even if we deny that the perception of present things has an intermediary step of transforming the percept into a “representation,” it seems indubitable that we then “store” the immediate impressions in the form of representations that can be called to mind after the moment of direct impression. 

Finally, the existence of representations, of mental analogues to what has been experienced in perception, opens the door for imagination and reflection.  I can play around with what perception has offered once I have a mental representation of it.  I can, in short, think about it.  The sheer weight of facticity is sidestepped once I am inside my head instead of in direct contact with the world.  A space, a distance, is opened up between perceiver and perceived that offers the opportunity to explore options, to consider possible actions upon, manipulations of, what the world offers.  Representation provides an ability to step back from the sensory manifold and take stock.

So it would seem that Austin’s appealing attempt to dispense with the elaborate machinery of empiricist psychology won’t fly.  As accounts of how human vision works show, too much is going on for a “direct” account of perception to be true to how perception actually works.  Stimuli are “processed” before being registered, not directly apprehended.

So the next issue is what “registering” or “apprehending” consist of.  But first a short digression.  We typically think of perception as the encounter with external things through one of the five senses.  But we can also perceive internal states, like a headache or sore muscle.  In those cases, however, perception does not seem to be tied to one of the five senses, but to some sort of ability to monitor one’s internal states.  Pain and pleasure are the crudest terms for the signals that trigger an awareness of internal states.  More broadly, I think it fair to say that the emotions in their full complex panoply are the markers of internal well-being (or its opposite or the many way stations between absolute euphoria and abject despair).  Emotions are both produced by the body (sometimes in relation to external stimuli, sometimes in relation to internal stimuli) and serve as the signal for self-conscious registering of one’s current states.  It’s as if a tree were not just a tree, but also a signal of “tree-ness.”  Anger is both the fact of anger and a signal to the self of its state of mind in response to some stimulus.  Certain things in the world or some internal state triggers an emotion—and then the emotion offers a path to self-awareness.  So there appears to be an “internal sense capacity”: ways of monitoring internal states and “apprehending” them that parallel the ways the five traditional senses provide for apprehending the external world.

What, then, does it mean to “apprehend” something once the senses have provided the raw materials of an encounter with that thing?  Following Kant, apprehension requires a “determinate judgment.”  The percept is registered by the self when the percept is conceptualized.  Percept must become concept in order to be fully received.  To be concrete: I see the various visual stimuli that the tree offers, but I don’t apprehend the tree until I subsume this particular instance of a tree into the general category/concept “tree.”  I “recognize” the tree as a tree when I declare “that’s a tree.”  The tree in itself, standing there in the forest, does not know itself as a tree.  The concept “tree” is an artifact of human language and human culture.  Percepts only become occasions for knowledge when they are married to concepts.  Pure, non-conceptualized, percepts are just raw material—and cannot be used by human thought.  In other words, back to the representation notion.  Until what perception offers is transformed into a representation, it is unavailable for human apprehension, for being taken up by the human subject as part of its knowledge of the world.  (Of course, famously in Kant, this yields the distinction between our representations and the “thing in itself.”  The cost of “the way of ideas”—the cost that Austin was trying to overcome—is this gap in our knowledge of the world, our inability to see things independently of the limitations of human perceptual and conceptual equipment.  Science attempts to overcome these limitations by using non-human instruments of perception (all those machines in our hospitals), but even science must acknowledge that what a machine registers, just like what a human registers, is a representation that is shaped by the nature of the representing apparatus.)

Determinate judgment appears to be instantaneous.  At least in the case of the encounter with most external things.  I see a tree and, without any discernible time lapse, identify it as a tree.  I have no awareness of processing the sensory signals and then coming to a judgment about what category the seen thing belongs to.  Percept and concept are cemented together.  Of course, there are some cases where I can’t at first make out what it is before me.  The lighting is bad, so I see a shape, but not enough to determine what the thing is.  Such cases do indicate there is a distinction between percept and concept.  But in the vast majority of cases it is just about impossible to pry them apart.

For many artists from Blake on, the effort to pry the two apart is a central ambition.  The basic idea is that we see the world through our conceptual lenses—and thus fail to apprehend it in its full richness, its full sensual plenitude.  We filter out the particulars of this tree as we rush to assimilate the singular instance to the general category.  Thus painters strive to make us see things anew (cubism) or to offer ambiguous items that can’t be immediately or readily identified (surrealism).  They try to drive a wedge between percept and concept. “No ideas but in things,” proclaims William Carlos Williams—and this hostility to ideas, to their preeminence over things (over percepts), is shared by many modern artists.

One of the mysteries of the percept/concept pairing is the relative poverty of our linguistic terms for describing percepts.  We can in most cases quickly identify the tree as a tree, and we can certainly say that the tree’s leaves are green in the spring and rust-colored in the fall.  But more precise linguistic identification of colors eludes us.  We can perceive far more variations in colors than we can describe.  Hence the color chips at any hardware store, which offer 45 variants of the color we crudely call “blue,” with fanciful names invented to distinguish each different shade from the rest.  The same, of course, holds for smells and for emotions.  We have a few, very crude terms for smell (pungent, sharp) but mostly can only identify smells in terms of the objects that produce them.  It smells flowery, or like hard-boiled eggs.  The same with taste.  Aside from sweet, sour, sharp, we enter the world of simile, so that descriptions of wine notoriously refer to things that are not wine.  Notes of black currant, leather, and tobacco.  And when it comes to emotions we are entirely at sea—well aware that our crude generalized terms (love, anger, jealousy) get nowhere near to describing the complexities of what one feels.  Thus some artists (Updike comes to mind) specialize in elaborating on our descriptive vocabularies for physical and emotional percepts; a whole novel might be devoted to tracing the complexities of being jealous, striving to get into words the full experience of that emotional state.

In any case, the paucity of our linguistic resources for describing various percepts, even in cases where the distinction between the percepts is obvious to us (as in the case of gradients of color), shows (I believe) that there are ordinary cases where percept and concept are distinct.  We don’t immediately leap to judgment in every case.  Now, it is true that I conceptualize the various shades of blue as “color” and even as “blue.”  But I do not thereby deny that the various shades on the color chip are also different, even though I have no general category to which I can assign those different shades.

Two more puzzles here.  The first is Wittgensteinian.  I had forgotten, until going through this recently with my granddaughter, how early children master color.  By 18 months, she could identify the basic colors of things.  Multiple astounding things here.  How did she know we were referring to color and not to the thing itself when we called a blue ball “blue”?  What were we pointing out to her: the color or the thing?  Yet she appeared to have no trouble with that possible confusion.  Except.  For a while she called the fruit “orange” “apples.”  It would seem that she could not wrap her head around the fact that the same word could name both a color and a thing.  She knew “orange” as a color, so would not use that word to name a thing.  Even more amazing than sorting colors from things was her accuracy in identifying a thing’s color.  Given sky blue and navy blue, she would call both “blue.”  A little bit later on (two or three months) she learned to call one “light blue” and the other “dark blue.”  But prior to that distinction, she showed no inclination to think the two were two different colors.  And she didn’t confuse them with purple or any other adjacent color.  So how is it that quite different percepts get tossed into the same category with just about no confusion (in relation to common usage) at all?  It would seem more obvious to identify sky blue and navy blue as two different colors.

The second puzzle might be called the “good enough” conundrum.  I walk in the forest and see “trees.”  The forester sees a number of specific species—and very likely also singles out specific trees as “sick” or of a certain age.  His judgments are very, very different from mine—and do not suffer from the paucity of my categorical terms.  Similarly, the vintner may rely on almost comical similes to describe the taste of the wine, but I do not doubt that his perceptions are more intense and more nuanced than mine.  A chicken/egg question here about whether having the concepts then sharpens the percepts—or if sharper percepts then generate a richer vocabulary to describe them.  Or the prior question: do we only perceive with the acuity required by our purposes?  My walk in the woods is pleasant enough for me without my knowing which specific types of trees and ferns I am seeing.  What we “filter out,” in other words, is not just a function of the limitations of our perceptual equipment, or the paucity of our concepts/vocabulary, but also influenced by our purposes.  We attend to what we need to notice to achieve something.

Push this last idea just a bit and we get “pragmatism” and its revision of the empiricist account of perception and the “way of ideas.”  The pragmatist maxim says that our “conception” of a thing is our understanding of its consequences.  That is, we perceive things in relation to the futures that thing makes possible.  Concepts are always dynamic, not static.  They categorize what perception offers in terms of how one wants to position oneself in the world.  Percept/concept is relational—and at issue are the relations I wish to establish (or maintain) between myself and what is “out there” (which includes other people).

Back to the artists.  The repugnance many artists (as well as other people) feel toward pragmatism stems from this narrowing down of attention, of what might be perceived.  Focused (in very Darwinian fashion) upon what avails toward the organism’s well-being, the pragmatist self only perceives, only attends to, that which it can turn to account.  It thereby misses much of what is in the world out there.  The artists want to fling open the “doors of perception” (to quote Blake)—and see pragmatism as a species of utilitarianism, a philosophy that notoriously reduces the range of what “matters” to humans as well as reducing the motives for action to a simple calculus of avoiding pain and maximizing pleasure.  To categorize percepts immediately into two bins—these things might benefit me, these things are potentially harmful—is to choose to live in a diminished, perversely impoverished world.

Of course, Dewey especially among the “classic” pragmatists worked hard to resist the identification of pragmatism with a joyless and bare-bones utilitarianism.  The key to this attempt is “qualia”—a term that is also central in the current philosophical debates about consciousness.  “Qualia” might be defined as the “feel of things.”  I don’t just see trees as I walk in the woods.  I also experience a particular type of pleasure—one that mixes peacefulness, the stimulus/joy of physical exertion, an apprehension of beauty, a diffuse sense of well-being, etc.  “Consciousness” (as understood in everyday parlance) registers that pleasure.  Consciousness entails that I not only feel the pleasure but can also say to myself that I am feeling this pleasure.  Percepts, in other words, are accompanied by specific feelings that are those percepts’ “qualia.”  And through consciousness we can register the fact of experiencing those feelings.

The relation of concepts to “qualia” is, I think, more complex—and leads directly to the next post on the fact/value dyad.  A concept like “fraud” does seem to me to have its own qualia.  Moral indignation is a feeling—and one very likely to be triggered by the thought of fraud.  Perhaps (I don’t know about this) only a specific instance of fraud, not just the general concept of it, is required to trigger moral indignation.  But I don’t think so.  The general idea that American financiers often deploy fraudulent practices seems to me enough to make me feel indignant.

On the other hand, the general concept of “tree” does not seem to me to generate any very specific qualia.  Perhaps a faint sense of approval.  Who doesn’t like trees?  But pretty close to neutral.  The issues, in short, are whether “neutral” percepts or concepts are possible.  Or do all percepts and concepts generate some qualia, some feelings that can be specified?  And, secondly, are all qualia related to judgments of value?  If we mostly and instantaneously make a judgment about what category a percept belongs to (what concept covers this instance), do we also in most cases and instantaneously judge the “value” of any percept?  That’s what my next post on fact/value will try to consider.

Philosophy and How One Acts

A friend with whom I have been reading various philosophical attempts to come to terms with what consciousness is and does writes to me about “illusionism,” the claim that we do not have selves. We are simply mistaken in thinking the self exists. The basic argument is the classic empiricist case against “substance.” There are various phenomena (let’s call them “mental states” in this case), but no stuff, no thing, no self, to which those mental states adhere, or in which they are collected. Thomas Metzinger is one philosopher who holds this position, and in an interview he tells us that his position has no experiential consequences. It is not clear to me whether Metzinger thinks (in a Nietzschean way) that the self is an unavoidable illusion or if he thinks that all the phenomena we attribute to the self would just continue to be experienced in exactly the same way even if we dispensed with the notion (illusion) of the self. In either case, accepting or denying Metzinger’s position changes nothing. Belief or non-belief in the self is not a “difference that makes a difference,” to recall William James’s formula in the first chapter of his book, Pragmatism.

The issue, then, seems to be what motivates a certain kind of intellectual restlessness, a desire to describe the world (the terms of existence) in ways that “get it right”–especially if the motive does not seem to be any effect on actual behavior. It’s “pure” theory, abstracted from any consequences in how one goes about the actualities of daily life.

There does exist, for some people, a certain kind of restless questioning.  I have had a small number of close friends in my life, and what they share is that kind of restlessness.  A desire to come up with coherent accounts of why things are the way they are, especially of why people act the ways they do.  People are endlessly surprising and fascinating.  Accounting for them leads to speculations that are constantly being revised and restated because each account seems, in one way or another, to fail to “get things right.”  There is always the need for another round of words, of efforts to grasp the “why” and “how” of things.  Most people, in my experience, don’t feel this need to push at things.  I was always trying to get my students to push their thinking on to the next twist—and rarely succeeded in getting them to do so.  And for myself this restless, endless inquiry generates a constant stream of words, since each inadequate account means a new effort to get it right this time.

Clearly, since I tried to get my students to do this, I think of such relentless questioning as an intellectual virtue.  But what is it good for?  I take that to be the core issue of your long email to me.  And I don’t have an answer.  Where id was, there ego shall be.  But it seems very clear that being able to articulate one’s habitual ways of (for example) relating to one’s lover, to know what triggers anger or sadness or neediness, does little (if anything) to change the established patterns.  Understanding (even if there were any way to show that the understanding was actually accurate) doesn’t yield much in the way of behavioral results.

This gets to your comment that if people really believed Darwin was right, as many people do, then they wouldn’t eat animals.  William James came to believe that we have our convictions first—and then invent the intellectual accounts/theories that we say justify the convictions.  In other words, we mistake the causal sequence.  We take the effect (our theory) to be the cause of our convictions, when it is really the other way around.  Nietzsche was prone to say the very same thing.

One way to say this: we have Darwin, but will use him to justify exactly opposite behaviors.  You say if we believed Darwin we wouldn’t eat animals.  I assume that the logic is that Darwin reveals animals as our kin, so eating them is a kind of cannibalism.  We don’t eat dogs because they feel “too close” to us; that feeling should be extended to all animals, not just fellow humans and domestic pets.  (The French eat horse meat although Americans won’t.)  But many people use Darwin to rationalize just the opposite.  We humans have evolved as protein-seeking omnivores, and we took to domesticating animals we eat just as we developed agriculture to grow plants we eat.  Even if we argue that domestication and agriculture were disasters, proponents of so-called “paleo diets” include meat eating in their attempt to get back to something thought basic to our evolved requirements.  So even if Darwin is absolutely right about how life—and specifically human life—emerged, people will use the content of his theory to justify completely contradictory behaviors.

This analysis, of course, raises two questions.  1) What is the cause of our convictions if it is not some set of articulable beliefs about how the world is?  James’s only answer is “temperament,” an in-built sensibility, a predilection to see the world in a certain way.  (Another book I have just finished reading, Kevin Mitchell’s Free Agents [Princeton UP, 2023], says about 50% of our personality is genetically determined and that less than 10% is derived from family environment.  Mitchell has an earlier book, titled Innate [Princeton UP, 2018], where he goes into detail about how such a claim is supported.)  Nietzsche, in some places, posits an in-built will to power.  All the articulations and intellectualisms are just after-the-fact rationalizations.  In any case, “temperament” is obviously no answer at all.  We do what we do because we are who we are—and how we got to be who we are is a black box.  Try your damnedest, it’s just about impossible to make sure your child ends up heterosexual or with some other set of desires.

2) So why are James and Nietzsche still pursuing an articulated account of “how it really works”?  Is there no consequence at all to “getting it right”?  Shouldn’t their theories also be understood as just another set of “after the fact” rationalizations?  In other words, reason is always late to the party—which suggests that consciousness is not essential to behavior, just an after-effect.

That last statement, of course, is the conclusion put forward by the famous Libet tests.  The ones that say our brains initiate the hand’s movement milliseconds before we consciously order our hand to move.  Both Dennett (in Freedom Evolves [Penguin, 2003]) and Mitchell (in Free Agents) have to claim the Libet experiment is faulty in order to save any causal power for consciousness.  For the two of them, who want to show that humans actually possess free will, consciousness must be given a role in the unfolding of action.  There has to be a moment of deliberation, of choosing between options—and that choosing is guided by reason (by an evaluation of the options and a decision made between those options) and beliefs (some picture of how the world really is).  I know, from experience, that I have trouble sleeping if I drink coffee after 2pm.  I reason that I should not drink coffee after 2pm if I want to sleep.  So I refrain from doing so.  A belief about a fact that is connected to a reasoned account of a causal sequence and a desire to have one thing happen rather than another: presto! I choose to do one thing rather than another based on that belief and those reasons.  To make that evaluation certainly seems to require consciousness—a consciousness that observes patterns, that remembers singular experiences that can be assembled into those patterns, that can have positive forward-looking desires to have some outcomes rather than others (hence evaluation of various possible bodily and worldly states of affairs), and that can reason about what courses of action are most likely to bring those states of affairs into being.  (In short, the classical account of “rationality” and of “reason-based action.”)

If this kind of feedback loop actually exists, if I can learn that some actions produce desirable results more dependably than others, then the question becomes (it seems to me): at what level of abstraction does “knowledge” no longer connect to action?  Here’s what I am struggling to see.  Learned behavior, directed by experiences that provide concrete feedback, seems fairly easy to describe in terms of very concrete instances.  But what happens when we get to belief in God—or Darwin?  With belief in God, we seem to see that humans can persist in beliefs without getting any positive feedback at all.  I believe in a loving god even as my child dies of cancer and all my prayers for divine intervention yield no result.  (The classic overdramatized example.)  Faced with this fact, many theologians will just say: faith is not reasonable, so your models of reasoned behavior are simply irrelevant at this point.  A form of dualism.  There’s another belief-to-action loop at play.  Another black box.

On Darwin it seems to me a question of intervention.  Natural selection exists entirely apart from human action/intention/desire etc.  It does its thing whether there are humans in the world or not.  That humans can “discover” the fact of natural selection’s existence and give detailed accounts of how it works is neither here nor there to natural selection itself.  This is science (in one idealized version of what science is): an accurate description of how nature works.  The next step seems to be: is there any way for humans to intervene in natural processes to either 1) change them (as when we try to combat cancer) or 2) harness the energies or processes of nature to serve specific human ends?  (This is separate from how human actions inadvertently, unintentionally, alter natural processes—as is the case in global warming.  I am currently reading Kim Stanley Robinson’s The Ministry for the Future—and will discuss it in a future post.)

In both cases (i.e., intentionally changing a natural process or harnessing the energies of a natural process toward a specifically human-introduced end), what’s driving the human behavior are desires for certain outcomes (health in the case of the cancer patient), or any number of possible desires in the cases of intervention.  I don’t think the scientific explanation has any direct relation to those desires.  In other words, nothing about the Darwinian account of how the world is dictates how one should desire to stand in relation to that world.  Darwin’s theory of evolution, I am saying, has no obvious, necessary, or univocal ethical consequences.  It does not tell us how to live—even if certain Darwinian fundamentalists will bloviate about “survival of the fittest” and gender roles in hunter-gatherer societies.

I keep trying to avoid it, but I am a dualist when it comes to ethics.  The non-human universe has no values, no meanings, no clues about how humans should live.  Hurricanes are facts, just like evolution is a fact.  As facts, they inform us about the world we inhabit—and mark out certain limits that it is very, very useful for us to know.  But the use we put them to is entirely human generated, just as the uses the mosquito puts its world to are entirely mosquito driven.  To ignore the facts, the limits, can be disastrous, but pushing against them, trying to alter them, is also a possibility.  And scientific knowledge can be very useful in indicating which kinds of intervention will prove effective.  But it has nothing to say about what kinds of intervention are desirable.

I am deeply uncomfortable in reaching this position.  Like most of the philosophers I read, I do not want to be a dualist.  I want to be a naturalist—where “naturalism” means that everything that exists is a product of natural forces.  Hence all the efforts out there to offer an evolutionary account of “consciousness” (thus avoiding any kind of Cartesian dualism) and the complementary efforts to provide an evolutionary account of morality (for example, Philip Kitcher, The Ethical Project [Harvard UP, 2011]).  I am down with the idea that morality is an evolutionary product—i.e. that it develops out of the history and “ecology” of humans as social animals.  But there still seems to me a discontinuity between the morality that humans have developed and the lack of morality of cancer cells, gravity, hurricanes, photosynthesis, and the laws of thermodynamics.  Similarly, there seems to me a gap between the non-consciousness of rocks and the consciousness of living beings.  So I can’t get down with panpsychism even if I am open to evolutionary accounts of the emergence of consciousness from more primitive forms to full-blown self-consciousness.

Of course, some Darwinians don’t see a problem.  Evolution does provide all living creatures with a purpose—to survive—and a meaning—to pass on one’s genes.  Success in life (satisfaction) derives from those two master motives—and morality could be derived from serving those two motives.  Human sociality is a product of those motives (driven in particular by the long immaturity, non-self-sustaining condition, of human children)—and morality is just the set of rules that makes sociality tenable.  So the theory of evolution gives us morality along with an account of how things are.  The fact/value gap overcome.  How to square this picture of evolution with its randomness, its not having any end state in view, is unclear.  The problem of attributing purposes to natural selection, to personifying it, has bedeviled evolutionary theory from the start.

For Dennett, if I am reading him correctly, the cross-over point is “culture”—and, more specifically, language.  Language provides a storage device, a way of accumulating knowledge of how things work and of successful ways of coping in this world.  Culture is a natural product, but once in place it offers a vantage point for reflection upon and intervention in natural processes.  Humans are the unnatural animal, the ones who can perversely deviate from the two master motives of evolution (survival and procreation) even as they strive to submit nature to their whims.  It’s an old theme: humans appear more free from natural drivers, but even as freedom is a source of their pride and glory, it often is the cause of their downfall.  (Hubris anyone?)  Humans are not content with the natural order as they find it.  They constantly try to change it—with sometimes marvelous, with other times disastrous, results.

But that only returns us to the mystery of where this restless desire to revise the very terms of existence comes from.  To go back to James and Nietzsche: it doesn’t seem like our theories, our abstract reasonings and philosophies, are what generate the behavior.  Instead, the restlessness comes first—and the philosophizing comes after as a way of explaining the actions.  See, the philosophers say, the world is this particular way, so it makes sense for me to behave in this specific way.  But, says James, the inclination to behave that way came first—and then the philosophy was tailored to match. 

So, to end this overlong wandering, back where I began.  Bertrand Russell (in his A History of Western Philosophy) said that Darwin’s theory is the perfect expression of rapacious capitalism—and thus it is no surprise that it was devised during the heyday of laissez-faire.  That analysis troubles me because it offers a plausible suspicion of Darwin’s theory along the William James line.  The theory just says the “world is this way” in a manner that justifies the British empire and British capitalism in 1860.  But I really do believe Darwin is right, that he has not just transposed a capitalist world view into nature.  I am, however, having trouble squaring this circle.  That is, deciding how much our philosophizing, our theories, just offer abstract versions of our pre-existing predilections—and how much those theories offer us genuine insights about the world we inhabit, insights that will then affect our behavior on the ground.  A very long-winded way of saying I can’t come up with a good answer to the questions your email posed.

Panpsychism and the Philosophers Pondering Consciousness

“In a 2019 essay David Chalmers notes that when he was in graduate school, there was a saying about philosophers. ‘One starts as a materialist, then one becomes a dualist, then a panpsychist, and one ends up an idealist.’ Although Chalmers cannot account for where the truism originated, he argues that its logic is more or less intuitive.  In the beginning one is impressed by the success of science and its ability to reduce everything to causal mechanism.  Then, once it becomes clear that materialism has not managed to explain consciousness, dualism begins to seem more attractive.  Eventually, the inelegance of dualism leads one to a greater appreciation for the inscrutability of matter, which leads to an embrace of panpsychism.  By taking each of these frameworks to their logical unsatisfying conclusions, ‘one comes to think that there is little reason to believe in anything beyond consciousness and that the physical world is wholly constituted by consciousness.’ This is idealism.” (Quoted from Meghan O’Gieblyn, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, first paragraph of Chapter 10, publication date: 2021).

I take Chalmers here to be joking, to be offering this little journey as a reductio.  What could be more absurd, more illustrative of human pretension, than to end up claiming consciousness is all there is?  We spin the whole universe out of our own brains. 

But Chalmers is a philosopher—so maybe I am wrong and he is dead serious.  (My last two posts have considered how seriously philosophers are addicted to being serious.)  Ever since Thales philosophers have been searching for that one definitive thing that everything else boils down to.  The perennial problem of the One and the Many.  The search for some fundamental stuff is on.

Note the language of the quoted passage.  Nothing about how what we encounter in experience contributes to the position a philosopher takes.  At issue is whether a claim is logical, if it avoids being “inelegant.”  Yes, there is a nod to that one mysterious phenomenon—consciousness—that throws a spanner in the works.  How can it be explained, accounted for?  Only, the joke goes, by asserting its dominance over all the rest.

My beef here has three prongs.  First, philosophers believe they can think their way to the answer.  Quantum theory alone should suggest that “logic” and “elegance” and “non-contradiction” are not likely to win the day when trying to explain “inscrutable matter.”

Second, the search for a “primitive.”  Most philosophers start out with a strong bias toward some version of monism.  It’s just a question of identifying the correct basic thing from which all else emanates.  Except for a few apostates (and it is one of the glories of the philosophic tradition that it constantly generates dissidents), most philosophers abhor dualism, not to mention the messy horror that is pluralism.  The inelegance of dualism just fries their order-seeking souls.

Here’s a wonderful William James passage on the philosopher’s avoidance of mess.  

“The world of concrete personal experiences to which the street belongs is multitudinous beyond imagination.  The world to which your philosophy-professor introduces you is simple, clean, and noble.  The contradictions of real life are absent from it.  Its architecture is classic.  Principles of reason trace its outlines, logical necessities cement its parts.  Purity and dignity are what it most expresses.  It is a kind of marble temple shining on a hill.  In point of fact it is less an account of the actual world than a clear addition built upon it, a classic sanctuary in which the rationalist may take refuge from the intolerably confused and gothic character which mere facts present.  It is no explanation of our concrete universe, it is another thing altogether, a substitute for it, a remedy, a way of escape” (Pragmatism, p. 15 in the Penguin edition.)

Third, the focus on origins, on the generative.  William James said that his pragmatism concerned itself not with “first things,” but with consequences, with the “fruits” of experience or situations.  He offers “an attitude of orientation: . . . the attitude of looking away from first things, principles, ‘categories,’ supposed necessities; and of looking towards last things, fruits, consequences, facts” (Pragmatism, Penguin edition, p. 29).

James, of course, was living in the aftermath of the Darwinian revolution—which should mean, among many other things, introducing a strong dose of temporality into our accounts of how we got to our current pass.  Panpsychism—the notion that consciousness is inherent in all matter—seems to me to have given up on the possibility of telling any kind of evolutionary story about how consciousness comes into existence.  Rather, since we have consciousness now, it must have always been there.  Yes, I understand that the panpsychist will say it was only there in potential, that everything necessary for its emergence was always already in place, but that it would take certain triggering events to bring it into full actual presence.  The principled claim here is that all the materials of life are there from the outset; there can be nothing new.  Matter can neither be created nor destroyed.  Take that claim as axiomatic and panpsychism stands on fairly solid ground. 

The panpsychist is then at one with Spinoza.  There is an eternal substance and all that history yields are modes in which that substance is expressed (or instantiated).  But everything can be traced back to that substance.

I am disinclined to this position.  I am agnostic as to whether new matter can come into existence or whether it was all there from the start.  Which means I want to avoid some kind of substance/mode distinction where “substance” is the real stuff and “mode” just some emanation of substance.  I want to avoid that kind of appearance/reality distinction altogether.  What appears is real—as is evidenced by one’s responses to and engagements with what appears.  To ask if what appears is “really real” seems fruitless to me (a deliberate echo of James’ injunction to look to the “fruits” of things, not their origins).  The key point is that the interactions of the “stuff” of which the world and experience are made keep yielding novelties that exceed the ability of our principles or theories to predict.  Theories, as what James calls “answers to enigmas” (Pragmatism, p. 28), always (finally) disappoint, always fail to answer our questions—because they reduce complexity and multitudinousness to simplicities that don’t, when push comes to shove, do the job.  The world exceeds the intellectual models we construct in an attempt to encompass it.  There is, James insists, always “more.”

All of which is to say that consciousness emerges in particular circumstances and through particular interactions.  Figuring out the actual form of that emergence has proved remarkably difficult—the “hard problem” that Chalmers famously identified nearly thirty years ago now.  I will be discussing some of these emergentist accounts in subsequent posts.

For now, I will just identify two tracks on which the hard problem has been approached.  The first is bio-chemical.  There has been remarkable progress in setting out the bio-chemical processes by which sight works.  What has proved more elusive (as Chalmers indicated) is nailing down the bio-chemical processes by which I self-consciously understand that I am currently seeing something.  It’s this experience of an experiencing self that continues to defy a bio-chemical account.  (With the consequence that some strict materialists then argue that this “self-consciousness” is an illusion.  If the science can’t account for it, then it must not exist.)

The second track is Darwinian, where the effort is focused on providing an evolutionary account of the emergence of consciousness.  The panpsychists are not necessarily opposed to a Darwinian account.  They are just committed to insisting that some rudimentary consciousness is always already present—or, even more minimally, that the necessary ingredients for the recipe of consciousness are.  What I am saying is that I don’t see 1) how it makes any difference whether those elements were there from the beginning or only emerge later on and 2) how we could ever adjudicate disputes about this assertion about origins.  Hence my agnosticism.  And my suspicion that it is one of the professional deformations of philosophy to spend so much time obsessing over getting a monistic account of origins.  The messy world unfolds irrespective of whether there is some “fundamental stuff” and whether that stuff has always been there.

My position: matter is multi-faceted; its components share no fundamental, essential quality (so, for example, living and non-living things are both, in some sense, matter, but the differences between rocks and giraffes are more significant than whatever very abstract similarity they share); and the interactions among elements of this multitudinous matter keep producing things that surprise us, that are novelties.  Probabilities are the best we can do by way of predicting the results of those interactions; there is no straightforward line of causation from origin point A to produced result E (with our being able to trace the path through B, C, and D that got us there).  Of course, after the fact we can often trace that path from A to E.  But before the fact, E was only one possible outcome of starting from A; we might be able to assess the probability of E’s occurrence, but we cannot guarantee it—unless we stringently control what interactions A enters into.  It is precisely the multiplicity of things with which A will interact that makes prediction so uncertain once we are beyond laboratory controls.

Does that leave me throwing up my hands and joining the likes of Colin McGinn and John Searle, who tend to believe we will never have a satisfactory account of consciousness?  Certainly the insistence that “you scientists are guilty of hubris when you set out to explain everything” is familiar ground for humanists.  “There are more things in heaven and earth than are dreamt of in your philosophy.”

I don’t think that’s where I want to take my stand.  There are, it seems to me, interesting things to say about consciousness, about its qualities, its emergence, its capabilities, about what enhances it and what disables it.  I will try to consider some of these in subsequent posts.  My point here is that panpsychism has just about nothing to say about these more concrete issues: that is, about the capabilities of consciousness and how they can be activated or thwarted.  Panpsychism exists on way too abstract a level, trying to answer global questions about origins and possibility that have little bite, little consequence, when more specific questions are taken up.  And I think the development of a theory like panpsychism comes from the philosopher’s bias toward identifying the ultimate building blocks of the universe, a bias that yields speculation about fairly unanswerable questions but (more crucially) yields answers that contribute little to more concrete engagements with the elements of experience.

To ask the William James question: what would be different about our experience of consciousness (and how we act as conscious beings) if panpsychism were true instead of dualism?  I don’t see how it would make any difference at all.  I might say: nice to know that consciousness is potentially out there in all matter (maybe I shouldn’t, like Dr. Johnson, kick rocks), but that offers me nothing with regard to how I think about my own consciousness and how I wish to activate it.  It doesn’t help in the least in addressing the perplexing questions of how consciousness varies among the wide range of living creatures, from molds through plants through insects through vertebrates to humans.

I set out writing this post with the title “Panpsychism and Moral Realism.”  But, obviously, I never got to moral realism.  So I will write about moral realism next, before eventually moving on to approaches to the enigmas of consciousness that seem to me more enlightening than panpsychism.