Author: John McGowan

Percept/Concept

I tried to write a post on the distinction between cognitive and non-cognitive and got completely tangled up.  So, instead, I am taking a step backward and addressing the relation between percept and concept, where I feel on surer ground.  I will follow up this post with another on fact/value.  And then, with those two pairings sorted out, I may be able to say something coherent about the cognitive/non-cognitive pairing.

So here goes.  A percept is what is offered to thought by one of the five senses.  I see or smell something.  The stimulus for the percept is, in most but not all cases, something external to myself. Let’s stick to perception of external things for the moment.  I see a tree, or smell a rose, or hear the wind whistling through the trees.  I have what Hume calls “an impression.”

I have always wanted to follow the lead of J. L. Austin in his Sense and Sensibilia.  In that little book, Austin takes on the empiricist tradition that has insisted, since Locke, that one forms a “representation” or an “idea” (that is the term Locke uses) of the perceived thing. (In the philosophical tradition, this gets called “the way of ideas.”) In other words, there is an intermediary step.  One perceives something, then forms an “idea” of it, and then is able to name, think about, or otherwise manipulate that idea.  The powers of thought and reflection depend upon this translation of the impression, of the percept, into an idea (some sort of mental entity).  Austin, to my mind, does a good job of destroying that empiricist account, opting instead for direct perception and dispensing with the intermediary step of forming an idea–and thus with any appeal to some kind of “mental state” to explain perception.

But Kevin Mitchell in Free Agents (Princeton UP, 2023) makes a fairly compelling case for raw percepts being transformed into “representations.”  First, there are the differences in perceptual capabilities from one species to another, not to mention differences among members of the same species.  If I am more far-sighted than you, I will see something different from you.  True, that doesn’t necessarily entail indirection as contrasted to direct perception.  But it does mean that the “thing itself” (the external stimulus) does not present itself in exactly the same guise to every perceiving being.  What is perceived is a co-production, created out of the interaction between perceiver and perceived.  There is no “pure” perception.  Perception is always an act that is influenced by the sensory equipment possessed by the perceiver along with the qualities of the thing being perceived. Descriptions of how human sight works make it clear how much “work” is done upon the raw materials of perception before the “thing” is actually seen. And, of course, we know that there are colors that the color-blind cannot perceive and noises that are in most cases beyond human perceptual notice.

Second, the experiences of both memory and language speak to the existence of “representations.”  We are able to think about a perceived thing even in its absence.  To say the word “elephant” is to bring an elephant into the room even when that elephant is not physically present.  Similarly, memory represents to us things that are absent.  Thus, even if we deny that the perception of present things has an intermediary step of transforming the percept into a “representation,” it seems indubitable that we then “store” the immediate impressions in the form of representations that can be called to mind after the moment of direct impression. 

Finally, the existence of representations, of mental analogues to what has been experienced in perception, opens the door for imagination and reflection.  I can play around with what perception has offered once I have a mental representation of it.  I can, in short, think about it.  The sheer weight of facticity is sidestepped once I am inside my head instead of in direct contact with the world.  A space, a distance, is opened up between perceiver and perceived that offers the opportunity to explore options, to consider possible actions upon, manipulations of, what the world offers.  Representation provides an ability to step back from the sensory manifold and take stock.

So it would seem that Austin’s appealing attempt to dispense with the elaborate machinery of empiricist psychology won’t fly.  As accounts of how human vision works show, too much is going on to make a “direct” account of perception true to how perception actually works. Sensory stimuli are “processed” before being registered, not directly apprehended.

So the next issue is what “registering” or “apprehending” consists of.  But first a short digression.  We typically think of perception as the encounter with external things through one of the five senses.  But we can also perceive internal states, like a headache or sore muscle.  In those cases, however, perception does not seem to be tied to one of the five senses, but to some sort of ability to monitor one’s internal states.  Pain and pleasure are the crudest terms for the signals that trigger an awareness of internal states.  More broadly, I think it fair to say that the emotions in their full complex panoply are the markers of internal well-being (or its opposite or the many way stations between absolute euphoria and abject despair).  Emotions are both produced by the body (sometimes in relation to external stimuli, sometimes in relation to internal stimuli) and serve as the signal for self-conscious registering of one’s current states.  It’s as if a tree were not just a tree, but also a signal of “tree-ness.”  Anger is both the fact of anger and a signal to the self of its state of mind in response to some stimuli.  Certain things in the world or some internal state triggers an emotion—and then the emotion offers a path to self-awareness.  So there appears to be an “internal sense capacity”: ways of monitoring internal states and “apprehending” them that parallel the ways the five traditional senses provide for apprehending the external world.

What, then, does it mean to “apprehend” something once the senses have provided the raw materials of an encounter with that thing?  Following Kant, apprehension requires a “determinate judgment.”  The percept is registered by the self when the percept is conceptualized.  Percept must become concept in order to be fully received.  To be concrete: I see the various visual stimuli that the tree offers, but I don’t apprehend the tree until I subsume this particular instance of a tree into the general category/concept “tree.”  I “recognize” the tree as a tree when I declare “that’s a tree.”  The tree in itself, standing there in the forest, does not know itself as a tree.  The concept “tree” is an artifact of human language and human culture.  Percepts only become occasions for knowledge when they are married to concepts.  Pure, non-conceptualized, percepts are just raw material—and cannot be used by human thought.  In other words, back to the representation notion.  Until what perception offers is transformed into a representation, it is unavailable for human apprehension, for being taken up by the human subject as part of its knowledge of the world. (Of course, famously in Kant, this yields the distinction between our representations and the “thing in itself.” The cost of “the way of ideas”–the cost that Austin was trying to overcome–is this gap in our knowledge of the world, our inability to see things independently of the limitations of human perceptual and conceptual equipment. Science attempts to overcome these limitations by using non-human instruments of perception (all those machines in our hospitals), but even science must acknowledge that what a machine registers, just like what a human registers, is a representation that is shaped by the nature of the representing apparatus.)

Determinate judgment appears to be instantaneous.  At least in the case of the encounter with most external things.  I see a tree and, without any discernible time lapse, identify it as a tree.  I have no awareness of processing the sensory signals and then coming to a judgment about what category the seen thing belongs to.  Percept and concept are cemented together.  Of course, there are some cases where I can’t at first make out what is before me.  The lighting is bad, so I see a shape, but not enough to determine what the thing is.  Such cases do indicate there is a distinction between percept and concept.  But in the vast majority of cases it is just about impossible to pry them apart.

For many artists from Blake on, the effort to pry the two apart is a central ambition.  The basic idea is that we see the world through our conceptual lenses—and thus fail to apprehend it in its full richness, its full sensual plenitude.  We filter out the particulars of this tree as we rush to assimilate the singular instance to the general category.  Thus painters strive to make us see things anew (cubism) or to offer ambiguous items that can’t be immediately or readily identified (surrealism).  They try to drive a wedge between percept and concept. “No ideas but in things,” proclaims William Carlos Williams—and this hostility to ideas, to their preeminence over things (over percepts), is shared by many modern artists.

One of the mysteries of the percept/concept pairing is the relative poverty of our linguistic terms to describe percepts.  We can in most cases quickly identify the tree as a tree, and we can certainly say that the tree’s leaves are green in the spring and rust-colored in the fall.  But more precise linguistic identification of colors eludes us.  We can perceive far more variations in colors than we can describe.  Hence the color chips at any hardware store, which offer 45 variants of the color we crudely call “blue,” with fanciful names invented to distinguish each shade from the rest.  The same, of course, holds for smells and for emotions.  We have a few, very crude terms for smell (pungent, sharp) but mostly can only identify smells in terms of the objects that produce such smells.  It smells flowery, or like a hard-boiled egg.  The same with taste.  Aside from sweet, sour, sharp, we enter the world of simile, so that descriptions of wine notoriously refer to things that are not wine. Notes of black currant, leather, and tobacco.  And when it comes to emotions we are entirely at sea—well aware that our crude generalized terms (love, anger, jealousy) get nowhere near to describing the complexities of what one feels.  Thus some artists (Updike comes to mind) specialize in elaborating on our descriptive vocabularies for physical and emotional percepts.  A whole novel might be devoted to tracing the complexities of being jealous, striving to get into words the full experience of that emotional state.

In any case, the paucity of our linguistic resources for describing various percepts, even in cases where the distinction between the percepts is obvious to us (as in the case of gradients of color), shows (I  believe) that there are ordinary cases where percept and concept are distinct.  We don’t immediately leap to judgment in every case.  Now, it is true that I conceptualize the various shades of blue as “color” and even as “blue.”  But I do not thereby deny that the various shades on the color chip are also different, even though I have no general category to which I can assign those different shades. 

Two more puzzles here.  The first is Wittgensteinian.  I had forgotten, until going through this recently with my granddaughter, how early children master color.  By 18 months, she could identify the basic colors of things.  Multiple astounding things here.  How did she know we were referring to color and not to the thing itself when we called a blue ball “blue”?  What were we pointing out to her: the color or the thing?  Yet she appeared to have no trouble with that possible confusion.  Except.  For a while she called oranges “apples.”  It would seem that she could not wrap her head around the fact that the same word could name both a color and a thing.  She knew “orange” as a color, so would not use that word to name a thing.  Even more amazing than sorting colors from things was her accuracy in identifying a thing’s color.  Given sky blue and navy blue, she would call both “blue.”  A little bit later on (two or three months) she learned to call one “light blue” and the other “dark blue.”  But prior to that distinction, she showed no inclination to think the two were two different colors.  And she didn’t confuse them with purple or any other adjacent color.  So how is it that quite different percepts get tossed into the same category with just about no confusion (in relation to common usage) at all? It would seem more obvious to identify sky blue and navy blue as two different colors.

The second puzzle might be called the “good enough” conundrum.  I walk in the forest and see “trees.”  The forester sees a number of specific species—and very likely also singles out specific trees as “sick” or of a certain age.  His judgments are very, very different from mine—and do not suffer from the paucity of my categorical terms.  Similarly, the vintner may rely on almost comical similes to describe the taste of the wine, but I do not doubt that his perceptions are more intense and more nuanced than mine.  A chicken/egg question here about whether having the concepts then sharpens the percepts—or if sharper percepts then generate a richer vocabulary to describe them.  Or the prior question: do we only perceive with the acuity required by our purposes?  My walk in the woods is pleasant enough for me without my knowing which specific types of trees and ferns I am seeing.  What we “filter out,” in other words, is not just a function of the limitations of our perceptual equipment, or the paucity of our concepts/vocabulary, but is also influenced by our purposes.  We attend to what we need to notice to achieve something. 

Push this last idea just a bit and we get “pragmatism” and its revision of the empiricist account of perception and the “way of ideas.”  The pragmatist maxim says that our “conception” of a thing is our understanding of its consequences.  That is, we perceive things in relation to the futures that thing makes possible.  Concepts are always dynamic, not static.  They categorize what perception offers in terms of how one wants to position oneself in the world.  Percept/concept is relational—and at issue is the relations I wish to establish (or maintain) between myself and what is “out there” (which includes other people.) 

Back to the artists.  The repugnance many artists (as well as other people) feel toward pragmatism stems from this narrowing down of attention, of what might be perceived.  Focused (in very Darwinian fashion) upon what avails toward the organism’s well-being, the pragmatist self only perceives, only attends to, that which it can turn to account.  It thereby misses much of what is in the world out there.  The artists want to fling open the “doors of perception” (to quote Blake)—and see pragmatism as a species of utilitarianism, a philosophy that notoriously reduces the range of what “matters” to humans as well as reducing the motives for action to a simple calculus of avoiding pain and maximizing pleasure.  To categorize percepts immediately into two bins–these things might benefit me, these things are potentially harmful—is to choose to live in a diminished, perversely impoverished world.

Of course, Dewey, especially among the “classic” pragmatists, worked hard to resist the identification of pragmatism with a joyless and bare-bones utilitarianism.  The key to this attempt is “qualia”—a term that is also central in the current philosophical debates about consciousness.  “Qualia” might be defined as the “feel of things.”  I don’t just see trees as I walk in the woods.  I also experience a particular type of pleasure—one that mixes peacefulness, the stimulus/joy of physical exertion, an apprehension of beauty, a diffuse sense of well-being, etc.  “Consciousness” (as understood in everyday parlance) registers that pleasure. Consciousness entails that I not only feel the pleasure but can also say to myself that I am feeling this pleasure.  Percepts, in other words, are accompanied by specific feelings that are those percepts’ “qualia.” And through consciousness we can register the fact of experiencing those feelings.

The relation of concepts to “qualia” is, I think, more complex—and leads directly to the next post on the fact/value dyad.  A concept like “fraud” does seem to me to have its own qualia.  Moral indignation is a feeling—and one very likely to be triggered by the thought of fraud.  Perhaps (I don’t know about this) a specific instance of fraud, not just the general concept of it, is required to trigger moral indignation.  But I don’t think so.  The general idea that American financiers often deploy fraudulent practices seems to me enough to make me feel indignant.

On the other hand, the general concept of “tree” does not seem to me to generate any very specific qualia.  Perhaps a faint sense of approval.  Who doesn’t like trees?  But pretty close to neutral.  The issues, in short, are whether “neutral” percepts  or concepts are possible.  Or do all percepts and concepts generate some qualia, some feelings that can be specified?  And, secondly, are all qualia related to judgments of value?  If we mostly and instantaneously make a judgment about what category a percept belongs to (what concept covers this instance), do we also in most cases and instantaneously judge the “value” of any percept?  That’s what my next post on fact/value will try to consider.

Philosophy and How One Acts

A friend with whom I have been reading various philosophical attempts to come to terms with what consciousness is and does writes to me about “illusionism,” the claim that we do not have selves. We are simply mistaken in thinking the self exists. The basic argument is the classic empiricist case against “substance.” There are various phenomena (let’s call them “mental states” in this case), but no stuff, no thing, no self, to which those mental states adhere, or in which they are collected. Thomas Metzinger is one philosopher who holds this position and in an interview tells us that his position has no experiential consequences. It is not clear to me whether Metzinger thinks (in a Nietzschean way) that the self is an unavoidable illusion or if he thinks that all the phenomena we attribute to the self would just continue to be experienced in exactly the same way even if we dispensed with the notion (illusion) of the self. In either case, accepting or denying Metzinger’s position changes nothing. Belief or non-belief in the self is not a “difference that makes a difference,” to recall William James’s formula in the first chapter of his book, Pragmatism.

The issue, then, seems to be what motivates a certain kind of intellectual restlessness, a desire to describe the world (the terms of existence) in ways that “get it right”–especially if the motive does not seem to be any effect on actual behavior. It’s “pure” theory, abstracted from any consequences in how one goes about the actualities of daily life.

There does exist, for some people, a certain kind of restless questioning.  I have had a small number of close friends in my life, and what they share is that kind of restlessness.  A desire to come up with coherent accounts of why things are the way they are, especially of why people act the ways they do. People are endlessly surprising and fascinating. Accounting for them leads to speculations that are constantly being revised and restated because each account seems, in one way or another, to fail to “get things right.”  There is always the need for another round of words, of efforts to grasp the “why” and “how” of things.  Most people, in my experience, don’t feel this need to push at things.  I was always trying to get my students to push their thinking on to the next twist—and rarely succeeded in getting them to do so. And for myself this restless, endless inquiry generates a constant stream of words, since each inadequate account means a new effort to state things more accurately this time.

Clearly, since I tried to get my students to do this, I think of such relentless questioning as an intellectual virtue. But what is it good for?  I take that to be the core issue of your long email to me.  And I don’t have an answer.  Where id was, there ego shall be.  But it seems very clear that being able to articulate one’s habitual ways of (for example) relating to one’s lover, to know what triggers anger or sadness or neediness, does little (if anything) to change the established patterns.  Understanding (even if there were any way to show that the understanding was actually accurate) doesn’t yield much in the way of behavioral results.

This gets to your comment that if people really believed Darwin was right (as many people do), then they wouldn’t eat animals.  William James came to believe that we have our convictions first—and then invent the intellectual accounts/theories that we say justify the convictions.  In other words, we mistake the causal sequence.  We take our theory to be the cause of our convictions, when it is really the other way around.  Nietzsche was prone to say the very same thing. 

One way to say this: we have Darwin, but will use him to justify exactly opposite behaviors.  You say if we believed Darwin we wouldn’t eat animals.  I assume that the logic is that Darwin reveals animals as our kin, so eating them is a kind of cannibalism.  We don’t eat dogs because they feel “too close” to us; that feeling should be extended to all animals, not just fellow humans and domestic pets.  (The French eat horse meat although Americans won’t).  But many people use Darwin to rationalize just the opposite.  We humans have evolved as protein-seeking omnivores, and we took to domesticating the animals we eat just as we developed agriculture to grow the plants we eat.  Even if we argue that domestication and agriculture were disasters, proponents of so-called “paleo diets” include meat eating in their attempt to get back to something thought basic to our evolved requirements.  So even if Darwin is absolutely right about how life—and specifically human life—emerged, people will use the content of his theory to justify completely contradictory behaviors.

This analysis, of course, raises two questions.  1) What is the cause of our convictions if it is not some set of articulable beliefs about how the world is?  James’s only answer is “temperament,” an in-built sensibility, a predilection to see the world in a certain way.  (Another book I have just finished reading, Kevin Mitchell’s Free Agents [Princeton UP, 2023], says about 50% of our personality is genetically determined and that less than 10% is derived from family environment.  Mitchell has an earlier book, titled Innate [Princeton UP, 2018], where he goes into detail about how such a claim is supported.)  Nietzsche, in some places, posits an in-built will to power.  All the articulations and intellectualisms are just after-the-fact rationalizations.  In any case, “temperament” is obviously no answer at all.  We do what we do because we are who we are—and how we got to be who we are is a black box.  Try your damnedest, it’s just about impossible to make sure your child ends up heterosexual or with some other set of desires. 

2) So why are James and Nietzsche still pursuing an articulated account of “how it really works”?  Is there no consequence at all to “getting it right”?  Shouldn’t their theories also be understood as just another set of “after the fact” rationalizations?  In other words, reason is always late to the party—which suggests that consciousness is not essential to behavior, just an after-effect.

That last statement, of course, is the conclusion put forward by the famous Libet tests.  The ones that say our brains initiate the movement of our hand milliseconds before we consciously order the hand to move.  Both Dennett (in Freedom Evolves [Penguin, 2003]) and Mitchell (in Free Agents) have to claim the Libet experiment is faulty in order to save any causal power for consciousness.  For the two of them, who want to show that humans actually possess free will, consciousness must be given a role in the unfolding of action.  There has to be a moment of deliberation, of choosing between options—and that choosing is guided by reason (by an evaluation of the options and a decision made between those options) and beliefs (some picture of how the world really is.)  I know, from experience, that I have trouble sleeping if I drink coffee after 2pm.  I reason that I should not drink coffee after 2pm if I want to sleep.  So I refrain from doing so.  A belief about a fact that is connected to a reasoned account of a causal sequence and a desire to have one thing happen rather than another: presto! I choose to do one thing rather than another based on that belief and those reasons.  To make that evaluation certainly seems to require consciousness—a consciousness that observes patterns, that remembers singular experiences that can be assembled into those patterns, that can have positive forward-looking desires to have some outcomes rather than others (hence evaluation of various possible bodily and worldly states of affairs), and that can reason about what courses of action are most likely to bring those states of affairs into being.  (In short, the classical account of “rationality” and of “reason-based action.”)

If this kind of feedback loop actually exists, if I can learn that some actions produce desirable results more dependably than others, then the question becomes (it seems to me): at what level of abstraction does “knowledge” no longer connect to action?  Here’s what I am struggling to see.  Learned behavior, directed by experiences that provide concrete feedback, seems fairly easy to describe in terms of very concrete instances.  But what happens when we get to belief in God—or Darwin?  With belief in God, we seem to see that humans can persist in beliefs without getting any positive feedback at all.  I believe in a loving god even as my child dies of cancer and all my prayers for divine intervention yield no result.  (The classic overdramatized example.)  Faced with this fact, many theologians will just say: it’s not reasonable, so your models of reasoned behavior are simply irrelevant at this point.  A form of dualism.  There’s another belief-to-action loop at play.  Another black box.

On Darwin it seems to me a question of intervention.  Natural selection exists entirely apart from human action/intention/desire, etc.  It does its thing whether there are humans in the world or not.  That humans can “discover” the fact of natural selection’s existence and give detailed accounts of how it works is neither here nor there to natural selection itself.  This is science (in one idealized version of what science is): an accurate description of how nature works.  The next step seems to be: is there any way for humans to intervene in natural processes to either 1) change them (as when we try to combat cancer) or 2) harness the energies or processes of nature to serve specific human ends? (This is separate from how human actions inadvertently, unintentionally, alter natural processes–as is the case in global warming. I am currently reading Kim Stanley Robinson’s The Ministry for the Future–and will discuss it in a future post.)

In both cases (i.e., intentionally changing a natural process or harnessing the energies of a natural process toward a specifically human-introduced end), what’s driving the human behavior are desires for certain outcomes (health in the case of the cancer patient), or any number of possible desires in the cases of intervention.  I don’t think the scientific explanation has any direct relation to those desires.  In other words, nothing about the Darwinian account of how the world is dictates how one should desire to stand in relation to that world.  Darwin’s theory of evolution, I am saying, has no obvious, necessary, or univocal ethical consequences.  It does not tell us how to live—even if certain Darwinian fundamentalists will bloviate about “survival of the fittest” and gender roles in hunter-gatherer societies. 

I keep trying to avoid it, but I am a dualist when it comes to ethics.  The non-human universe has no values, no meanings, no clues about how humans should live.  Hurricanes are facts, just like evolution is a fact.  As facts, they inform us about the world we inhabit—and mark out certain limits that it is very, very useful for us to know.  But the use we put them to is entirely human generated, just as the uses the mosquito puts his world to are entirely mosquito driven.  To ignore the facts, the limits, can be disastrous, but pushing against them, trying to alter them, is also a possibility.  And the scientific knowledge can be very useful in indicating which kinds of intervention will prove effective.  But it has nothing to say about what kinds of intervention are desirable.

I am deeply uncomfortable in reaching this position.  Like most of the philosophers I read, I do not want to be a dualist.  I want to be a naturalist—where “naturalism” means that everything that exists is a product of natural forces.  Hence all the efforts out there to offer an evolutionary account of “consciousness” (thus avoiding any kind of Cartesian dualism) and the complementary efforts to provide an evolutionary account of morality (for example, Philip Kitcher, The Ethical Project [Harvard UP, 2011].) I am down with the idea that morality is an evolutionary product—i.e. that it develops out of the history and “ecology” of humans as social animals.  But there still seems to me a discontinuity between the morality that humans have developed and the lack of morality of cancer cells, gravity, hurricanes, photosynthesis, and the laws of thermodynamics.  Similarly, there seems to me a gap between the non-consciousness of rocks and the consciousness of living beings.  So I can’t get down with panpsychism even if I am open to evolutionary accounts of the emergence of consciousness from more primitive forms to full-blown self-consciousness.

Of course, some Darwinians don’t see a problem.  Evolution does provide all living creatures with a purpose—to survive—and a meaning—to pass on their genes.  Success in life (satisfaction) derives from those two master motives—and morality could be derived from serving those two motives.  Human sociality is a product of those motives (driven in particular by the long immaturity, the non-self-sustaining condition, of human children)—and morality is just the set of rules that makes sociality tenable.  So the theory of evolution gives us morality along with an account of how things are.  The fact/value gap is overcome.  How to square this picture of evolution with its randomness, its not having any end state in view, is unclear.  The problem of attributing purposes to natural selection, of personifying it, has bedeviled evolutionary theory from the start.

For Dennett, if I am reading him correctly, the cross-over point is “culture”—and, more specifically, language.  Language provides a storage device, a way of accumulating knowledge of how things work and of successful ways of coping in this world.  Culture is a natural product, but once in place it offers a vantage point for reflection upon and intervention in natural processes.  Humans are the unnatural animal, the ones who can perversely deviate from the two master motives of evolution (survival and procreation) even as they strive to submit nature to their whims.  It’s an old theme: humans appear more free from natural drivers than other animals, but even as that freedom is a source of their pride and glory, it often is the cause of their downfall. (Hubris anyone?) Humans are not content with the natural order as they find it.  They constantly try to change it—with sometimes marvelous, other times disastrous, results.

But that only returns us to the mystery of where this restless desire to revise the very terms of existence comes from.  To go back to James and Nietzsche: it doesn’t seem like our theories, our abstract reasonings and philosophies, are what generate the behavior.  Instead, the restlessness comes first—and the philosophizing comes after as a way of explaining the actions.  See, the philosophers say, the world is this particular way, so it makes sense for me to behave in this specific way.  But, says James, the inclination to behave that way came first—and then the philosophy was tailored to match. 

So, to end this overlong wandering, back where I began.  Bertrand Russell (in his A History of Western Philosophy) said that Darwin’s theory is the perfect expression of rapacious capitalism—and thus it is no surprise that it was devised during the heyday of laissez-faire.  That analysis troubles me because it offers a plausible suspicion of Darwin’s theory along the William James line.  The theory just says the “world is this way” in a manner that justifies the British empire and British capitalism in 1860.  But I really do believe Darwin is right, that he has not just transposed a capitalist world view into nature.  I am, however, having trouble squaring this circle.  That is, I can’t determine how much our philosophizing, our theories, just offer abstract versions of our pre-existing predilections—and how much those theories offer us genuine insights about the world we inhabit, insights that will then affect our behavior on the ground.  A very long-winded way of saying I can’t come up with a good answer to the questions your email posed.

E. M. Forster’s Howards End

In our reading group discussion of Howards End, I called Henry Wilcox “despicable”—a comment that generated some pushback from other members of the group.  And in discussing our meeting afterwards with Jane, I realized that I had not made myself clear.  Straight out: I was not saying business people are despicable.  Henry is a fictional character; if he is despicable, that’s because Forster has portrayed him as such. Henry is Forster’s portrayal of a certain type, not the reality of how actual business people (who come in all stripes) are. That Forster did not portray a practical man we could respect and like is the central failure of the novel.  I believe Forster wanted to give “both sides” their due, but that he failed to do so.

To explain: I agree with Scott in our group that Forster wanted to write a novel that accepts (even “demonstrates”) that we need both the artsy intellectuals (who attend to the “inner life”) and the practical business people who get things done.  Forster endorses Meg’s insight that their artsy life is utterly dependent on the businessmen.  Meg’s income—and even more concretely the clothes she wears and the food she eats—exist only through the work done by other hands.  She does nothing herself to make those necessities appear—as they regularly do.  She is a parasite, feeding off what others produce. By extension, Forster–and yours truly–are also parasites.

Given that fact—and Forster’s desire to live up to that fact—the novel strives to provide a bridge, to forge a connection, between the intellectual world and the practical one.  It wants to pay homage to the strengths of each side, to show that each has a contribution to make to creating the “good life.”  And this notion of the “good life” is grounded in the somewhat more concrete question of national identity.  What should we, as a people, make of England?  The novel is shot through with a deep love for the English landscape and of English places—and suffused with a fear that modern speed (cars are a menace even as they move too fast to allow a true appreciation of the countryside through which they move) and modern rootlessness (loss of attachment to place) threaten any ability to retain identity.

But when it comes to executing his design, Forster (in my opinion) fails spectacularly.  The deck is just too stacked against Wilcox and in favor of Margaret.  Wilcox is obtuse and selfish.  What are his sins? He fails to understand his first wife and her attachment to Howards End.  He ignores the first wife’s desire for the house to go to Margaret, even though he has no love for or even use for the house himself.  He is unfaithful to his first wife—and thinks the sin is the sexual act, not the violation of his relationship to his wife.  He fails to “connect” his off-hand remark about Leonard Bast’s place of business to Leonard’s subsequent economic ruin (part of his more general inability to connect large-scale business decisions with their real impact on individual lives).  He is generally (and persistently) condescending to women, convinced of their inability to understand or handle anything, of their congenital hysteria.  This attitude culminates in his attempt to prevent Meg (now his wife) from going with him to see her sister Helen when she has finally reappeared after an eight-month absence.  His behavior toward women convinces Helen and Meg that men have absolutely no ability to understand women.  There is only the slim chance that, two thousand years from now, some connection (that key word in the novel) between the sexes will be achieved.

Finally, of course, there is the crucial moment when Meg challenges Henry to “connect,” to see that Helen (whom he condemns) has only done what Henry himself has done and Meg has forgiven him for doing.  To see that his real crime was the deceit and infidelity to the first Mrs. Wilcox (not the sexual act); to see that he is, at least in part, responsible for Leonard’s plight.  But Henry cannot see any of that. And, at that point, Meg’s forbearance comes to an end.  She cannot forgive this moral blindness—and there will now be a break-up of their marriage. 

So the novel does not portray the virtues and vices of each side (in the duality it sets up between the artsy and the practical) evenly.  Throughout the novel, Meg has had to discount Henry’s flaws in ways that he does not have to discount hers.  All the yielding and forbearance is on her side.  Forster has failed to make Henry likeable or to paint a convincing picture of his virtues.  In fact, the narrator from the start keeps making snide comments about the “Wilcox” way of thinking and of not feeling.  True, the narrator also offers some criticisms of the Schlegel girls—and the portrait of the selfish and ineffectual brother Tibby is devastating.  But (at least for this reader) I never lose sympathy for Meg, while I never develop any sympathy for Henry.  His flaws accumulate until their culmination in his failure to “connect” at the key moment in his history. The mystery of the book is why Meg would ever agree to marry Henry, knowing his general attitude to (and contempt for) women and his specific ill treatment of and disregard for the first Mrs. Wilcox.

And then Henry collapses when the crisis also entails his son Charles going to prison.  Henry becomes a child at the end, helpless and totally dependent on the care of the women.  Instead of paying equal homage to the virtues of practical men and artsy women, the novel makes it look like Forster really (in his heart of hearts?) can’t feel any respect or liking for men—and would, ideally, like to see a world run by women.  All his sympathies, it seems to me, lie on the feminine side.  Forster, too, if you will, fails to connect.  He knows, intellectually, that men are necessary—but that doesn’t mean he has to like that fact or the men who embody it.  He tries to write a book that overcomes some of his most basic feelings (prejudices).  That’s a noble enterprise (I think), but one I don’t believe he pulls off.  His emotional commitments override his intellectual effort to write against their grain.

Disparate Economies 4: Power

Warning: this post is even more essayistic than most. A lot of speculation as I drunkenly weave through a variety of topics and musings.

The previous posts on disparate economies have tried to consider how economies of status, love/sex, and fame are structured.  What is the “good” or “goods” that such markets make available, and what are the terms under which those goods are acquired, competed for, and exchanged?  Finally, what power enforces the structures and the norms that keep a market from being an anarchic free-for-all?  Markets (or specific economies among the multiple economies that exist—hence my overall heading “disparate economies”) are institutions, by which I mean they a) have discernible organizational shape, along with legitimated and non-legitimated practices by human agents within them; b) are not the product of any individual actor or even a small cadre of actors but are socially produced over a fairly long span of time; and c) change only through collective action (sometimes explicitly, as in the case of new laws, but much more often implicitly, as practices and norms shift almost imperceptibly through the repetitions of use).  Institutions exist on a different scale than individual actors—or even collective actors.  A sports team exists within the larger container of the institution that is the sport itself, just as a business corporation exists within the market in which it strives to compete.

It is a well-recognized fact that power is among the goods that humans compete for.  In one sense, this fact is very odd.  Here is one of Hobbes’ many reflections on power:

The signs by which we know our own power are those actions which proceed from the same; and the signs by which other men know it, are such actions, gesture, countenance and speech, as usually such powers produce: and the acknowledgment of power is called Honour; and to honour a man (inwardly in the mind) is to conceive or acknowledge, that that man hath the odds or excess of power above him that contendeth or compareth himself . . . and according to the signs of honour and dishonour, so we estimate and make the value or Worth of a man. (The Elements of Law: Natural and Politic, ed. Ferdinand Tönnies, London: Frank Cass and Co., 1969 [1640], 34–35)

Hobbes, sensibly it would seem, focuses on what power can “produce.”  For him, power is a means not an end.  Power is capacity.  We know someone is powerful when he is able to produce the ends toward which he aims.  This is power to, the possession of the resources and capabilities required for successful action.  Such power, Hobbes goes on to say, also produces, as a by-product, “honor.”  The powerful man is esteemed by others; in fact, Hobbes states, power is the ultimate measure by which we determine a person’s “value or worth.”  Since, presumably, we want others to esteem us and to think of us as worth something, as having value, it makes sense that we would seek power not only because it yields the satisfaction of accomplishing our aims, but also because it gains us the respect of our peers.

Still, power is instrumental here; it is valuable for what it enables one to get.  There is no sense of power as an end-in-itself.  In that respect, power is like money.  Human perversity is such that something (money or power) which has no intrinsic value of its own, but is only a means toward something else that is of intrinsic value, nonetheless becomes the object of one’s desires.  Power, like money, is stored capacity—and, like money, one can devote oneself to increasing one’s store.  Yes, spending power, like spending money, has its own pleasures, but there is an independent urge, an independent compulsion, to increase one’s holdings.  And that urge can become a dominant, even over-riding, compulsion.

Of course, money and power can be converted into one another. Still, the insanities of current-day American plutocracy illustrate that the conversion is not easy or straightforward.  Think of the Koch brothers (or any number of other megalomaniac billionaires).  The Kochs think their money should allow them to dictate public policy.  Why, as Garry Wills asked many years ago, are these rich people so angry?  Why are they so convinced that their country is in terribly bad shape—when they have done and are doing extremely well?  They don’t lack money, but they believe their will is being thwarted. Their money has been able to buy them power—but not the kind of absolute power they aspire to.  They meet obstacles at every turn, obstacles they can only partially overcome.  And from all appearances, it seems to drive them crazy.  They want to be able to dictate to the nation in the same way they can dictate to their employees.  The thrill of being able to say “you’re fired.” Donald Trump on The Apprentice. Apparently, just the thrill of watching someone else exercise that absolute power is a turn-on for lots of people.

Which reminds us that power is not only capacity, power to, but also domination, power over.  Returning to the issue of “an economy,” in this matter of power, the competition is over the resources necessary to possess power.  On the one hand, power to depends on assembling enough resources (time, money, health, opportunity, freedom) to set one’s own goals and accomplish them.  On the other hand, among the resources one can require, especially for complex enterprises, is the cooperation of others.  One person alone cannot accomplish many of the things humans find worth aiming for.  How to ensure the contributions of others to one’s projects?  Being the person who controls the flow of resources to those people is one solution.  Help me—or you won’t be given the necessities for pursuing your own projects.  Hegel famously reduces this dynamic to its most fundamental terms.  Your project is to live—and unless you do my bidding, you will not be given the means to live.  The calculus of power over, of mastery over another human being, is based on life being valued—and thus serving as the basic unit of exchange—in struggles for mastery.

I have in my previous posts on these different economies attempted to specify the norms (or rules in more formal economies) that structure competition and exchange in each case.  And I have tried to indicate the power(s) that enforce those norms/rules.  Thus in the sex/love market there is an ideal of reciprocity; the partners to an exchange freely and willingly give to each other.  Where that norm is violated (most frequently in male coercion of women) family and/or the state will, in some cases, intervene.  The deck is stacked against women because family and state intervention is imperfect and intermittent.  But there are still some mechanisms of enforcement, even if they are not terribly effective, just as there are recognized ideal norms even if they are frequently violated.  Similarly, the billionaire may have gained his wealth through shady means, but he has still operated in a structured market where violation of the rules can lead to prison (even if it seldom does).  Outright theft, just like rape in the sex/love market, is generally deemed a crime.

How to translate these considerations over into the competition for power? It would seem that slavery is the equivalent of rape and theft—something now universally condemned as beyond the pale.  But it seems significant to me that the condemnation of slavery is not even 200 years old—while slavery as a practice persists.  Of course, rape and theft persist as well.  And I guess we could say that minimum wage laws and various labor-protecting regulations/statutes also aim at limiting the kinds of resource withholding that allow one to gain power over another.  So there is some attempt to avoid a Hobbesian war of all against all, with no holds barred.  Still, within any economy that enables—and mostly allows—large inequalities, the ability of some to leverage those inequalities into power over others will go mostly unchecked. 

Where there is no structure and no norms, the result appears to be endless violence.  From Plato on, the insecurity of tyrants has often been noted.  Power might be accumulated as a means to warding off the threat that others will gain the upper hand.  In this free-for-all, no one is to be trusted.  Hence the endless civil wars in ancient Rome and late medieval England (as documented in Shakespeare’s plays among other places), along with the murders of one’s political rivals—and erstwhile allies.  From Stalin’s murderous paranoia to Mafia killings, we have ample evidence that struggles for power/dominance are very, very hard to bring to closure.  Competition simply breeds more competition—and the establishment of some kind of modus vivendi among the contenders that allows them to live is elusive.  Power does seem, at least to the most extreme competitors in this contest, a zero-sum game.  If my rival has any power at all, he is a threat. 

In his life of Mark Antony, Plutarch has this to say of Julius Caesar:  “The real motive which drove him to make war upon mankind, just as it had urged Alexander and Cyrus before him, was an insatiable love of power and an insane desire to be the first and greatest man in the world” (Makers of Rome, Penguin Classics, 1965, p. 277).  There’s a reason we think of men like Caesar—or like some of today’s billionaires—as megalomaniacs.  They harbor an “insane desire” for preeminence over all other humans. If power equals preeminence, then, in their case, it is an end-in-itself.  They desire that all bow before them—which is what power over entails.  There is still the suspicion, however, that power is the means to the “honor” of being deemed “the first and greatest man in the world.”  And there is certainly no doubt in Plutarch’s mind, as there was no doubt in Hegel’s, that killing others is a requirement for gaining such power.  Only a man who “makes war upon mankind” can ascend to that kind of preeminence.

For Nietzsche, of course, the desire for power is primary.  But even in his case, it’s not clear if power is an end or merely a means.  What is insufferable to Nietzsche is submission.  Life is a struggle among beings who each strive to make others submit to them.  It would seem that “autonomy” is the ultimate good in Nietzsche, the ability to be complete master over one’s own fate.  That’s what power means: having utter control over one’s self.  Except . . .  everything is always contradictory in Nietzsche.  At times he doesn’t even believe there is a self to gain mastery over.  And there is his insistence that one must submit completely to powers external to the self; amor fati is the difficult attitude one should strive to cultivate.  We are, he seems to say, ultimately powerless in the face of larger, nonhuman forces that dwarf us. In short, I don’t think Nietzsche is very helpful in thinking about power.  His descriptions of it and of the things that threaten it are just too contradictory.

Machiavelli is, I think, a better guide.  His work returns us to the issue of security.  When I teach Machiavelli, I always have some students who say he is absolutely right: it’s a dog-eat-dog world.  Arm yourself against the inevitable aggression of the other or you will be easily and ignominiously defeated.  I think this is a very prevalent belief system out there in the world—usually attached to a certain brand of right wing politics.  To ventriloquize this position: It is naïve to expect cooperation or good will from others, especially from others not part of your tribe.  They are out to get you—and you must arm yourself for self-protection (if nothing else).  Your good intentions or behavior is worth nothing because there are bad actors out there.  It is inevitable that you will have to fight to defend what is yours against these predators.  

This right wing attitude often goes hand-in-hand with a deeply felt acknowledgement that war is hell, the most horrible thing known.  But it’s sentimental and weak to think that war can be avoided.  It is necessary—and the clear-eyed, manly thing is to face that necessity squarely.  Attempts to sidestep that necessity, to come to accommodations that avoid it (appeasement!), are just liberal self-delusions, the liberal inability to believe in the existence of evil.  Power in this case is the only surety in an insecure world—and even the powerful will still get caught up in the tragedy of war, where the costs will be borne by one’s own side as well as by the evil persons one is trying to subdue.  Power cannot fully insulate you from harm. (I think John McCain embodied this view–along with the notions of warrior honor that often accompany it.)

It is a testament to the human desire (need? compulsion?) to structure our economies, our competitions, that there are also “rules” of war.  On the extreme right wing, there is utter contempt for that effort.  There are no rules for a knife fight, as we learn in Butch Cassidy.  It’s silly to attempt to establish rules of war—and crazy to abide by them since it only hands an advantage to your adversary. And certainly it is odd, on the face of things, to try to establish what counts as legitimate killing as contrasted to illegitimate killing when the enterprise is to kill so many people that your adversary can no longer fight against you, no longer having the human resources required to continue the fight. 

I don’t know what to think about this.  Except to say that the specter of completely unstructured competitions scares humans enough that they will attempt to establish rules of engagement even as they are involved in a struggle to the death.  But I guess this fact also makes clear how indispensable, how built in as a fundamental psychological/social fact, morality has become for humans.  I am on very tricky and speculative grounds here.  But it seems to me that any effort to distinguish between murder and non-murder means that some kind of system of morality is in play.  Murder will be punished, whereas non-murder will be deemed acceptable.  The most basic case, of course, is that soldiers are not deemed guilty of murder.  The killing they do falls into a different category.  What I am saying is that once you take the same basic action—killing someone—and begin to sort it into different categories, you have a moral system.  The rules of war offer one instance of the proliferation of such categories as moral systems get refined; differentiations between degrees of murder, manslaughter, self-defense and the like offer another example of such refinements.  My suspicion (although I don’t have all the evidence that would be required to justify the universal claim I am about to make) is that every society makes some distinction between murder (unsanctioned and punished) and non-murder (cases where killing is seen as justified and, then, non-punishable.)  At its most rudimentary, I suspect that distinction follows in-group and out-group lines.  That is, killing outsiders, especially in states of war, is not murder, whereas killing insiders often is.  The idea of a distinction between combatants and non-combatants comes along much later.

Similarly, worrying about “just” versus “unjust” wars also comes much later.  Morality is no slouch when it comes to generating endless complications.

I may seem to have wandered far from the issue of an economy in which the good that is competed for is power.  But not really.  War is the inevitable end game of struggles for power if Hegel is right to say that life is the ultimate stake in the effort to gain mastery over others.  If the economy of power is utterly anarchic, is not structured by any rules, then conquest is its only possible conclusion.  It is the ultimate zero-sum game.  The introduction of rules is an attempt to avoid that harsh zero-sum logic.  Putin out to conquer Ukraine and Netanyahu out to destroy Hamas are zero-sum logics in action.  As is the ancient Greek practice of killing all the male inhabitants of a conquered city while taking the women off into slavery.  The rules—like negotiated peace deals—try to leave both parties to the conflict some life, to avoid its being a fight to the total destruction of one party. 

The alternative (dare I say “liberal”) model is the attempt to distribute power (understood as the capacity to do things that one has chosen for oneself as worth doing) widely.  This is not just an ideology of individual liberty, of each person’s equal worth and right to self-determination free from the domination of others.  It is also about checks and balances, on the theory that power is only checked by other powers—and that all outsized accumulations of power lead to various abuses.  Various mechanisms (not the least of which is a constitution, but also some version of a “separation of powers”) are put in place to prevent power being gathered into one or into a small number of hands.  The problem, of course, in current-day America is that there are no parallel mechanisms to prevent the accumulation of wealth into a few hands—and there are no safeguards against using that wealth to gain power in other domains, including the political one.  That’s why we live in a plutocracy.  Our safeguards against accumulations of power are not capable of effectively counteracting the kinds of accumulation that are taking place in real time.

Recently, on the Crooked Timber blog, Kevin Munger offered this nugget (it appears to be a quote from somewhere, not Munger’s own formulation, but he does not give a source for it):

“There is a great gap between the overthrow of authority and the creation of a substitute. That gap is called liberalism: a period of drift and doubt. We are in it today.”

On this pessimistic reading, power, like nature, abhors a vacuum.  Any situation in which authority/power is dispersed (as it is in the ideal liberal polity) will be experienced as unstable, unsettling, and chaotic.  The desire for order will triumph over the liberties and capacities for self-determination that the “overthrow of authority” enables.  Authoritarianism, the concentration (centralization) of power into a few hands, will rise again. Liberalism is always only a temporary stop-gap between authoritarian regimes. Humans, in this pessimistic scenario, simply prefer the certainties of domination to the fluidity (“drift and doubt”) generated by less hierarchical social orders.  Just keep your head down and let those insane for power fight it out among themselves, hoping they will mostly leave you alone and let you focus on the struggles of your not very capaciously resourced life. 

Unfair as a characterization of a certain form of political quietism that skews rightward?  I don’t know.  But many people are content to not strive terribly hard for riches, power, or fame—and think their moderation of desire is the only sensible way to live.  They just want to be left in peace to make of life what they can with the extremely modest resources available to them.  Here we see yet another great divide in current-day American politics.  (It is hardly the only divide and not, I think, among even the three most important divides between left and right in our time.  But it still exists.) Namely, the idea that it is authoritarian government that will give them the peace they desire, get government off their backs, and curb the chaos of social mores that they feel threatens their children.  Liberal permissiveness, along with the liberal coddling of the unworthy, is the real danger to the country and to their “values”—and a healthy dose of authority is just the remedy we need.