Evil

I have finally (after more than three months) finished reading Raimond Gaita’s Good and Evil: An Absolute Conception (Routledge, 2nd edition, 2004).  It is a dense, brilliant, and in many ways wise book.  It was very much worth reading slowly—not just because of the intricacy of many of its arguments, but also to give myself time to ponder their implications before hurrying on.  The book is, in many ways, a mess; Gaita chases any number of hares, and their relevance to his central concerns is often tenuous at best.  Plus he is full of prejudices—some of which lead him into taking positions that are egregiously wrong and are simply asserted, not argued for.  In addition, he likes to argue against positions others are claimed to have taken without ever naming who these mistaken souls are, letting his representation of their views stand as accurate since they don’t get to speak for themselves.

All these flaws are readily forgiven because there is something to stir reflection on just about every page.  And now that evil (after the Hamas assault and Israel’s response) is being pondered as it was in the aftermath of 9/11, a book offering its “absolute conception” of that term should be welcome. 

On one reading, Gaita’s absolute conception is awfully thin.  His starting point (at least as I see it) is to accept the Kantian prescription that there is a non-negotiable obligation to respect the other; that human beings are only to be treated as ends in themselves, never as means to some other end.  What Gaita adds is that this obligation is “absolute”—and that philosophers are badly mistaken when they think they can argue their way to making that obligation rational, or binding, or some such substitute for its simply being obligatory. 

He presents this first assertion by way of an argument against Kant.  Basically, he says it is a travesty of the sort to which only a philosopher could subscribe to think our obligation to help a suffering human being is based on a rationally arrived at conclusion that I could not will that everyone neglect that suffering person (i.e. Kant’s categorical imperative).  The obligation runs in exactly the opposite direction.  The appeal to me to help that person is direct; it does not go through the detour of a rational calculation (of either a Kantian or a utilitarian—or even a virtue ethics—kind).  I don’t think of what the consequences of my helping will be, or what I owe myself as a rational being, or what action would reinforce my virtuous character.  I am called simply to respond to the need of another.  That call is absolute.  Nothing more to be said.

Except of course there are 300 plus pages of more to be said.  But let me first offer some of the ways Gaita strives to express this “absolute” notion of good and evil.  This vision of “goodness” is grounded on “the inalienable preciousness or the infinite preciousness of every human being” (xv).  “Sometimes I speak of seeing the full humanity of someone” (xv).  Moral probity entails “an understanding of the distinctive kind of limit another human being should be to our will” (xix). “When we say that we are treating someone as a means to our ends, we mean that his reality as a human being does not limit our will as it should. Or, to put it more accurately: it is part of our sense of the reality of another human being, that he be the kind of limit to our will that we express when we say that he must never be treated merely as a means to an end but as an end in himself. We express this more simply when we say we must treat him as a human being. To acknowledge the reality of another human being is to have our will engaged and limited” (278). Gaita is fond of recalling Iris Murdoch’s understanding of the “ethical task” as “seeing the world as it is,” with the primary requirement of “coming to see the reality of another person” (211), which means seeing that person as a “human being” with claims upon us. But an adequate undertaking of that task is not a matter of correct knowledge or correct principles or of following a rational procedure of either observation or decision-making. Rather, it “depends on what we attend to and on the quality of our attention” (269); such qualitative attention is best characterized as “love,” and best understood as “not prompted by love as an investigation might be prompted by curiosity, but . . . [as] itself an expression of love” (211). Goodness is a way of being in the world, a stance of careful (in every sense of the word) attention to all that occupies the world apart from one’s own self, especially attentive care of other humans.

Evil, then, is the failure to acknowledge the preciousness of the other, and the actions that follow upon that failure. A failure to attend to and to care for the other. “Because evil, as I understand it, requires a conception of preciousness violated, and because people can do evil for banal reasons, the concept of evil (that I develop) has little or no place in the characterization of people or their motives.  For that reason, people who say that the concept of evil does not help explain the actions of evildoers are right.  Sometimes, however, appeal to the concept is necessary to characterize adequately people’s responses—the person whose remorse is informed by a sense that his victim was infinitely precious, or a spectator who responds to wrongdoing in a way informed by that same sense” (xxvi).

We might conclude from this statement that Gaita’s whole project is hopelessly abstract since it will not offer any help in solving the “mystery” of evil (i.e. how it is that people can do evil things). But what Gaita does think long and hard about is how people might be brought to “an understanding” of how others are precious or should be a limit on their own will. He calls that realization “moral understanding” and is especially good on how such understanding does not coincide with what most philosophers would understand as “knowledge” or as “justified belief.” 

Thus, he wants to reject both sides of the cognitivist/non-cognitivist argument in meta-ethics.  To understand the preciousness of other human beings is not like knowing that water is H2O because moral understanding is not definitive or conclusive; it doesn’t end an inquiry but in fact opens one up.  How am I to act on that understanding in the almost infinite varieties of my encounters with other people?  And the way I do act on that understanding is constitutive of my own character, my own way of living a life.  The understanding, and how I act on it, is therefore individuating.  I have not gained some general truth in reaching that understanding; I have instead been given the puzzle of how to instantiate the understanding. The ethical “task is one that cannot be completed in the sense of issuing in results that could count as the realisation of its end(s)” (291). There is no recipe or formula that answers the relevant questions and gives me a blueprint for how to proceed. I can’t ever “know” all there is to know about how to act ethically. How to live a good life, one that eschews evil, means taking into account at every turn the obligation I have toward treating others as precious. What that means in different circumstances is something I need to discover in the specific instance. And there are other considerations besides avoiding evil that influence my choices about how to live—just as there are different circumstances that offer widely various options for actions that are “good.”

In short, Gaita is arguing that “true” means different things within different discourses or different “conceptual spaces” (a term he likes).  His point is derived from Wittgenstein.  The cognitive/noncognitive choice is forced upon us by a too rigid positivism; that false choice derives from an overly constrictive account of what counts as “true” or “real.”  Either we must join the cognitivist and say that the statement “murder is bad” is “true” in exactly the same way that 2 + 2 = 4 is true—or we are trapped into saying that “murder is bad” is not cognitive because it cannot meet that positivist standard of “true.”  Appeals to “ordinary language” do no good here; either they are used to say moral assertions come with a claim to truth and thus underwrite “moral realism,” or to say that people making moral assertions are just in “error” and need philosophers to show them that their truth claims are unjustified. Gaita is surely right (in my humble opinion) to say we should avoid this whole unproductive and wrongly framed debate.  The whole empirical tradition from Hume through to Dewey that aspired to articulate moral truths that would be as non-contestable as mathematical ones simply failed to see that the standards of truth internal to the edifice of mathematics could not be transferred wholesale over to the standards of truth for moral statements.  The canons of persuasiveness, evidence, argumentation etc. are very different in the two discursive domains.

Of course, Gaita’s “absolute” conception of good and evil means he can look very much like a non-rationalist.  That is, he does seem at times to be saying that the preciousness of each human being is not something open to argumentation, to refutation by way of rational or philosophical argument. His final chapter offers a very unconvincing (to me) claim that the moral skeptic cannot be “serious”–and therefore should not be argued with. To argue with the skeptic is to already cede the terms of debate to him. Instead, the “absolute” position of an a-rational or pre-rational preciousness of every human being must hold the floor since no one (Gaita implausibly states) really denies that position. (I will have more to say about this stance in subsequent posts, partly because it returns us, I think, to the “mystery” of evil.)

Oddly in light of this grounding claim about the preciousness of the human being, Gaita insists that his argument is non-foundational.  “[My] book is marked, on the one hand, by its strong opposition to foundationalism and, on the other, by its equally strong commitment to a version of the Socratic claim that an unexamined life—a life that does not rise to the requirement to be lucid about its meaning(s)—is unworthy of a human being” (xxii).  What he means is that “my affirmation [of preciousness] is as firm and unreserved as it is metaphysically groundless” (xxvii).  There is no philosophical demonstration available to prove that each of us is precious—just as there are no conclusive arguments to show that one fails to live a “worthy” human life if one is not “lucid” (a favorite Gaita word) about what one is doing with that life. And there is no ontological claim about the status of human beings apart from how human beings regard (and attend to) one another. Gaita calls his position “non-reductive humanism” (xxiv); that is, the assertion of human preciousness does not “reduce” to something else.  It is absolute in and of itself; self-standing, not resting upon something underneath or more fundamental than itself.  Morality, he is claiming, can only rest on this absolute; attempts to ground morality on other bases—reason, consequences, notions of virtue or of flourishing, or some metaphysical reality—obscure what is actually (and awfully, in the fullest sense of that word) at stake: our treating others and ourselves in a way that attends (in the deepest and fullest possible way) to our humanness, which is given to us absolutely (no rationale for why one exists instead of not existing, and no rationale for what humans are capable of doing with that existence they have been given).

One last point and I am going to leave it for today.  The result of all this can seem like Gaita spends much of his book hectoring us (in the fashion of his hero Socrates) for not living up to the full possibilities our humanity affords us.  It is true that it would be hard to read this book without feeling that one does not come up to snuff.  The other side of that coin, however, is that Gaita has an inspiring view of what a life worthy of being human could (should) look like.  Much of the book plays out this vision of what can seem like super-human virtue.  Far better, it seems to me, to take it as inspiring than to respond defensively to its portrait of one’s shortcomings.  I will try to take that approach in future posts on the book—even as I am afraid that I will also be arguing at points that he asks more of humans than they are capable of delivering.  And following that second line will bring Hamas and Israel back into focus.

Non-Cognitive Theories of Art (2): Sensibility Formation

A quick follow-up to the last post.  Nussbaum’s project is to show how the reason/feeling divide misrepresents how we actually come to know things.  Judgments are guided by feelings.  There is no way to separate out feelings when we come to “cognize.”  Rather, feelings are an indispensable component in our acts of knowing.

Still: kindness, grievance, tolerance, sympathy, envy, hatred, and the like are not themselves “knowledge.”  They are better characterized as “dispositions,” as feeling states that influence how we judge situations, people, ourselves, and events.  Because different people have different sensibilities, different sensitivities, they process the world differently.  They come to different conclusions, different judgments, not only about the significance of what they deem to be the case, but also about what is the case.

That different dispositions can lead to radically different assertions about the facts of the matter has become very apparent in 2020 America. 

The Trump cult has been created not simply by the man himself but by a right-wing media (led by Fox News) that has inculcated a sensibility best described as combining a perpetual sense of grievance with an openness to believing the worst about designated enemies.  (Immigrants are criminals, Democrats steal elections, liberals are socialists, and leftists are pederasts who are kidnapping massive numbers of children.) 

If cognition is so dependent on disposition, then it is no surprise that one theory of art would say that art is more directed to creating (fostering) certain sensibilities, certain predispositions, in its audience, than to making concrete assertions about what is.  The success of Fox News, or of the “lost cause” narrative in the American South, testifies to the power of word, image, story, anecdote, staged emotions (outrage, condemnation, fellow feeling for those on one’s side), ceremony, and ritual to shape how people understand the world and their place in it.

In our day, “culture” appears more and more intractable.  More than 150 years after the American Civil War, the set of shared feelings and grievances that ignited that war still shapes the American political and social landscape.

The creation of culture, of shared dispositions across a group of people, is, it can be argued, aesthetic.  It is a matter of shaping feelings, of shaping how we “sense” things, and what “sense” they have for us (to go back to the root meaning of “aesthetic”).

Thus, one non-cognitive theory of art would look not at any knowledge art might convey, but (instead) to the ways art fosters sensibilities.  If a novel, as Nussbaum claims, makes me more “sympathetic” with the sufferings of orphans, it is not primarily because it has given me new information about orphans.  It is because it has changed my general disposition toward suffering by making me “see” it, experience it, differently—and in a way that moves me beyond just responding to this particular case, this particular orphan, to a more general care for suffering orphans in the plural. 

I want to say more about “sensibility” and what that term might mean in subsequent posts.  And that discussion would connect up with Nick Gaskill’s interest in “aesthetic education” (a concern he shares with Joseph North).  Is the goal of aesthetic education to create certain kinds of sensibilities—and how might that creation be achieved?  I am inclined to think (as a teaser for where I think I am heading on this topic) that Kenneth Burke’s focus on “attitudes” will prove useful here.

But, first, I want to return to the effort to overcome (or, at least, mitigate) the fact/value divide—and that will be the subject of my next post.

Non-Cognitive Theories of Art (1)

Enough of this election anxiety.  Back to the airy heights of theories of the aesthetic.

My four posts on cognitive theories of the aesthetic were really just a prelude to considering non-cognitive theories.  And I am going to start with Martha Nussbaum (although she can be seen as just the latest in a long line that would include David Hume and George Eliot).

Basically, Nussbaum believes that art works activate sympathy.  A novel can portray the sufferings of Oliver Twist and children like him.  Such a novel may serve to bring to our attention facts about orphans and workhouses, thus adding to our knowledge.  But more crucial is the way the story inspires fellow-feeling, a new sympathy for the plight of orphans.  It is one thing to know that orphans are often underfed; it is another thing to respond to that fact feelingly, to experience it as something that should be rectified.  The moral emotions of indignation and sympathy are brought into play through the power of the story, a power that a simple recitation of the facts does not have.

Such a way of explaining what is going on rests on a fairly stark fact/value divide, Hume’s worry about deriving an “ought” from an “is.”  One can see that an orphan does not have enough to eat.  But that seeing does not entail the judgment that the orphan’s hunger is “wrong” (or “unjust”) and that it should be rectified.  Rationalist theories of moral value (Kant or Mill, one deontological, the other utilitarian) believe that reason provides the basis for moral judgments.  But the Humean school hands that job over to feelings.  Our moral judgments come from those moral emotions, from our indignation at suffering felt (perceived?) as unnecessary or cruelly inflicted, from our sympathy with those who suffer.  

Some may be able to see the hungry child and feel no sympathy, may even be able to claim the child is getting what he deserves.  Those seeking to convert such a person to their sympathetic view need to find a way to pull on the heartstrings, to call forth the needful feelings.  Arguments and reasons will not do the trick.  We don’t know something is heinous simply by looking at it.  Thus it is unlike knowing something is red.  We don’t need some particular “feeling state” to judge the thing is red.  But we do need the appropriate feelings to judge something is unjust, should be condemned and, if possible, rectified.

This is philosophy, so of course it gets complicated.  My own theoretical and moral commitments mean that I really would like to avoid such a sharp fact/value divide.  There are, as far as I can see, two pathways to lessening the gap between fact and value.  Neither, I think, closes that gap completely.

The first path is one I think Nussbaum takes.  She is very committed to the assertion that feeling and cognition are not distinct—that, in fact, a feeling-less cognition is monstrous and mostly impossible.  For her, sympathy enhances understanding.  The story of Oliver Twist increases our understanding of the plight of orphans. (George Eliot would make this claim as well.) If we define “empathy” as the ability to get a sense of another’s experience, then sympathy is the gateway to empathy.  We know more about others when we are able to sympathize with them—and that ability is feeling-dependent.  No amount of simple or “rational” looking will do the job.  The feelings must be activated for the most adequate knowledge to be accessed. 

Thus, Nussbaum (ultimately) is a cognitivist when it comes to (at least) literature. (What she would have to say about non-literary artistic forms is not clear; she seldom writes of them.)  But there still lingers the difference between explanation and understanding, or determinative and reflective judgment.  To know that the house is red is a determinative judgment (in Kant’s terms).  We don’t claim to “understand” the house; we just state what its color is, and would presumably “reduce” that judgment to the physics of wavelengths and the semantic facts about English if we had to explain to someone the basis for the judgment.

[A digression: I continue to struggle with the possibility that there is a significant difference between “explanation” and “understanding.” To “understand” the orphan’s plight is not to “explain” it; to understand can mean either I now see that he is hungry or now empathize with, have a sense of, his suffering. To explain his hunger would, presumably, be to trace its causes, what factors have deprived him of enough food, or what physiological processes lead to hunger. Since Dilthey (at least) there has been an effort to see “explanation” as characteristic of the sciences, and “understanding” as characteristic of the humanities. My problem–shared with many others–is not being able to work out a clear distinction between explanation and understanding. Plus there is the problem that making such a clear distinction threatens to create another gap similar to the fact/value divide. Do I really want to see the sciences and the humanities as doing fundamentally different things, with fundamentally different goals and methods? How drastic a dualism do I want to embrace–even when a thoroughgoing collapse of all distinctions between the sciences and the humanities is also unattractive? The trouble with many aesthetic theories, in my eyes, is their desperate commitment to finding something that renders the aesthetic distinct from every other human practice and endeavor. I don’t think the aesthetic is so completely distinctive–and I don’t see what’s gained (in any case) if one could prove it unique. So my struggle in this long series of posts on the aesthetic is to find some characteristics of the aesthetic that do seem to hold over a fairly large set of aesthetic objects and practices–while at the same time considering how those characteristics also operate in other domains of practice, domains that we wouldn’t (in ordinary language as well as for analytical reasons) deem aesthetic. And, to name once again the golden fleece I am chasing, I think some account of meaning-creating and meaning-conferring practices is the best bet to provide the theory I am questing for.]

To return to the matter at hand: The judgment that the plight of orphans is unjust or outrageous is a reflective judgment in Kant’s sense.  Reflective judgments have two features that distinguish them from determinative judgments:

1. The category to which this instance is being assigned is itself not fixed.  Thus, for Kant, “beauty” is not a stable standard.  A new work of art comes along and is beautiful in a way we have never experienced before and/or had hardly expected.  But we judge that the term “beauty” is appropriate in this case, even though it is novel—and even though our judgment revises our previous senses of the category “beauty.” 

2. Kant is also very clear that reflective judgments originate in subjective feelings.  He is concerned, of course, to find a way to move from that subjectivism to “universal validity” and “universal communicability.”  But the starting point is individual feeling in a way that it is not for determinative judgments.  My feeling about the house plays no role in my assertion that it is red.  But my feelings about the Matisse painting are necessary, although not necessarily sufficient, to my judging it “beautiful.” (Not necessarily sufficient because my judgment also takes the sensus communis into account. I judge, as Arendt puts it, in the company of others. Reflective judgment is neither entirely personal nor entirely social. Its public character comes from the fact that it will be stated for/to others.)

Thus, even if we (as Nussbaum wants to do) say our aesthetic and moral judgments count as knowledge, as assertions that we make with confidence and expect others to understand (at least) and agree with (at best), those judgments still arise from a different basis than judgments of fact. (N.B.: I am following Arendt here in taking Kant’s aesthetics as a more plausible basis for morality than Kant’s own moral theory.)

To summarize: aesthetic judgments (“this is beautiful”) and moral judgments (“this is unjust”) would still be seen as “cognitive.”  Such judgments are assertions about how some thing in the world (an art work, an orphan’s hunger) should be understood, should be labeled—and purport to say something substantial about that thing in the world.  But the process by which that judgment is reached—and the process by which I would get others to assent to it—is distinct (in certain ways) from the processes that underwrite statements of fact. A key feature of that difference is the role feelings play in reaching the two different kinds of judgment.

So maybe Nussbaum’s approach is not non-cognitive; instead, it is committed to there being different forms/processes of cognition.  Then we would just get into a fight over what we are willing to label “cognitive.”  How capacious are we willing to let that term be? Is calling the Matisse painting “beautiful” a knowledge claim or not? The positivists, of course, pronounced aesthetic and moral judgments non-cognitive in the 1930s–and philosophers (of whom Nussbaum is prominently one) have been pushing against that banishment ever since. The only stake (it seems to me) would be whether being deemed “cognitive” is also seen as conferring some kind of advantage over things deemed “non-cognitive.”  Nussbaum certainly seems to think so. She is very committed to expanding the realm of the cognitive and the rational to include feeling-dependent judgments—and seems to believe that enhancing the status of such feeling-dependent judgments will increase the respect and credence they command.

But the alternative would be to say credence does not rest on something being cognitive; that we should look elsewhere for what leads people to make judgments and to assent to the judgments that others make. Standard understandings of cognition are just too simple, too restrictive, to account for the complexities of how people actually judge and come to have beliefs. Better to abandon the cognitive/non-cognitive distinction altogether–and provide an alternative story about how we come to think and feel about things.

I am going to leave it here for today—and discuss in my next post an alternative way to lessen the fact/value gap, one that moves toward abandoning the characterization of judgments and beliefs as either cognitive or non-cognitive.

The Critique of Experience

Nick’s question about the status of Dewey’s concept of experience—and the preference for the term “practice” in writers like Latour—makes me feel like I have fallen into a deep well.  I will try to talk about “practice” and what that concept entails in future posts.  For now, I just want to consider the critique of experience.  I will start out with Joan Scott’s extremely influential 1991 essay “The Evidence of Experience” (Critical Inquiry, Summer 1991) and then move on to Richard Rorty’s explicit critique of Dewey’s reliance on experience (in the essay “Dewey’s Metaphysics” in Consequences of Pragmatism [University of Minnesota Press, 1982]: 72-89).

Here’s a long passage from Scott that lays out her argument (note her reliance on the term “practice” in making her case):

Michel de Certeau’s description is apt. “Historical discourse,” he writes, “gives itself credibility in the name of the reality which it is supposed to represent, but this authorized appearance of the ‘real’ serves precisely to camouflage the practice which in fact determines it. Representation thus disguises the praxis that organizes it.”

When the evidence offered is the evidence of “experience,” the claim for referentiality is further buttressed–what could be truer, after all, than a subject’s own account of what he or she has lived through? It is precisely this kind of appeal to experience as uncontestable evidence and as an originary point of explanation–as a foundation on which analysis is based–that weakens the critical thrust of histories of difference. By remaining within the epistemological frame of orthodox history, these studies lose the possibility of examining those assumptions and practices that excluded considerations of difference in the first place. They take as self-evident the identities of those whose experience is being documented and thus naturalize their difference. They locate resistance outside its discursive construction and reify agency as an inherent attribute of individuals, thus decontextualizing it.

When experience is taken as the origin of knowledge, the vision of the individual subject (the person who had the experience or the historian who recounts it) becomes the bedrock of evidence on which explanation is built. Questions about the constructed nature of experience, about how subjects are constituted as different in the first place, about how one’s vision is structured–about language (or discourse) and history–are left aside. The evidence of experience then becomes evidence for the fact of difference, rather than a way of exploring how difference is established, how it operates, how and in what ways it constitutes subjects who see and act in the world.

To put it another way, the evidence of experience, whether conceived through a metaphor of visibility or in any other way that takes meaning as transparent, reproduces rather than contests given ideological systems–those that assume that the facts of history speak for themselves and those that rest on notions of a natural or established opposition between, say, sexual practices and social conventions, between homosexuality and heterosexuality. Histories that document the “hidden” world of homosexuality, for example, show the impact [of] silence and repression on the lives of those affected by it and bring [to] light the history of their suppression and exploitation. But the project of making experience visible precludes critical examination of the workings of the ideological system itself, its categories of representation (homosexual/heterosexual, man/woman, black/white as fixed immutable identities), its premises about what these categories mean and how they operate, and of its notions of subjects, origin, and cause.

Homosexual practices are seen as the result of desire, conceived as a natural force operating outside or in opposition to social regulation. In these stories homosexuality is presented as a repressed desire (experience denied), made to seem invisible, abnormal, and silenced by a “society” that legislates heterosexuality as the only normal practice. Because this kind of (homosexual) desire cannot ultimately be repressed–because experience is there–it invents institutions to accommodate itself. These institutions are unacknowledged but not invisible; indeed, it is the possibility that they can be seen that threatens order and ultimately overcomes repression. Resistance and agency are presented as driven by uncontainable desire; emancipation is a teleological story in which desire ultimately overcomes social control and becomes visible. History is a chronology that makes experience visible, but in which categories appear as nonetheless ahistorical: desire, homosexuality, heterosexuality, femininity, masculinity, sex, and even sexual practices become so many fixed entities being played out over time, but not themselves historicized. Presenting the story in this way excludes, or at least understates, the historically variable interrelationship between the meanings “homosexual” and “heterosexual,” the constitutive force each has for the other, and the contested and changing nature of the terrain that they simultaneously occupy. (pages 777-778)

Scott’s position is clear enough.  Inspired by Foucault’s notion of “discursive power,” she is saying that there is no innocent experience.  Rather, what we experience is shaped by the categories through which we process and understand what happens to us, what we see, and whom/what we encounter.  Furthermore, the experiencing self has also been shaped by the culture/society of which it is a member.  A consequential analysis of an historical scene must take those shaping processes into account, must make evident that that scene is historical through and through, the contingent product of a construction that could have been otherwise.

Rorty’s critique of Dewey takes the same path.  “Experience” in Dewey is a metaphysical term—and belies Dewey’s more productive efforts to escape metaphysics altogether.  For Scott, experience “naturalizes” that which should be understood as historical and constructed.  Rorty makes much the same move.  He opens the essay by quoting, approvingly, a late letter from Dewey to Bentley in which Dewey says he is thinking of writing a new edition of Experience and Nature.  This time around, Dewey will “change the title as well as the subject matter . . . to Nature and Culture.  I was dumb not to have seen the need for such a shift when the old text was written.  I was still hopeful that the philosophic word ‘Experience’ could be redeemed by being returned to its idiomatic usages—which was a piece of historic folly, the hope I mean” (quoted in Rorty, 72).

For Rorty, it’s a choice between Kant and Hegel.  Rorty sees Dewey as accepting the break with Humean empiricism which recognizes “that intuitions without concepts [are] blind and that no data [are] ever ‘raw’” (83).  Once accepting that basic fact, the Kantian sees the concepts as universal, shared by all rational creatures, while the Hegelian sees the concepts as historically and culturally relative all the way down.  Rorty writes: “By being ‘Hegelian’ I mean here treating the cultural developments which Kant thought it was the task of philosophy to preserve and protect as simply temporary stopping-places for the World Spirit” (85).  Dewey, Rorty tells us, “agrees with Hegel that the starting point of philosophic thought is bound to be the dialectical situation in which one finds oneself caught in one’s own historical period—the problems of the men of one’s time” (81).

In his inimitable fashion, Rorty offers us a pocket-sized definition of metaphysics, utilizing a term from Dewey’s Experience and Nature.  Dewey’s metaphysics aim to designate “the generic traits of experience.” For Nick and me, Dewey’s metaphysics are most fully and fruitfully present in his interactionist account of human being-in-the-world.  It is that account, complete with its notion of “funded experience,” its unsettling of subject/object and other dualisms, and its dynamic picture of the ongoing production of identities, meanings, and novelty that we find attractive and see as adopted by Latour (and, presumably, Stengers, whose work I don’t know, but which Nick admires greatly).

Rorty is unimpressed.  “What Kant had called ‘the constitution of the empirical world by synthesis of intuitions under concepts,’ Dewey wanted to call ‘interactions in which both extra-organic things and organisms partake.’ But he wanted this harmless-sounding naturalistic phrase to have the same generality, and to accomplish the same epistemological feats, which Kant’s talk of the ‘constitution of objects’ had performed.  He wanted phrases like ‘transactions with the environment’ and ‘adaptation to conditions’ to be simultaneously naturalistic and transcendental—to be common-sense remarks about human perception and knowledge viewed as the psychologist views it and also to be expressions of ‘generic traits of existence.’  So he blew up notions like ‘transaction’ and ‘situation’ until they sounded as mysterious as ‘prime matter’ or ‘thing-in-itself’” (84).

It is the easiest thing in the world—and so is done constantly—to say Rorty himself cannot escape transcendental or metaphysical claims.  After all, to say all thinking starts from the historical position in which one finds oneself is to identify a generic trait.  But such a critique of Rorty misses the point—and would miss his very significant difference from Joan Scott.  Scott wants to replace one kind of historical claim—the kind that relies on the evidence of experience—with another kind of claim—one that analyzes what enables (serves as the transcendental conditions of) experience.  She is looking for a more accurate or more adequate way of understanding discursive, ideological forces and the way they construct how humans constitute and are constituted by history.  Rorty finds that enterprise just another way of remaining trapped within the wrong-headed set of metaphysical and epistemological questions that philosophy has obsessed over since Descartes.  Rorty thinks we should just walk away from that game.

Why?  What’s wrong with that game?  Rorty has a complicated, but coherent (if not utterly convincing) answer to that question.

That answer hinges on what I am fond of calling “transcendental blackmail.”  In most every case, the metaphysician is trying to sell his audience on something.  The tactic used is to get that audience to accept a seemingly neutral and irrefutable (and almost invariably universalistic) description of the human condition (“the generic facts of human existence”).  Once the writer thinks he has established that irrefutable fact, its consequences are unfolded.  I followed this strategy in my liberalism book.  I tried to begin with the most uncontroversial claims and then lead the reader down the primrose path to liberalism by showing that, if they bought in to the foundational claims, then they, as a matter of logic and consistency, should accept positions that didn’t seem as self-evident and attractive to them at first blush.  Thus, Scott’s critique of experience attempts to establish its constructed nature and is meant, eventually, to get her reader to question established powers and the categories that serve those powers’ ends.

Rorty, first of all, hates any kind of blackmail, any strategy for establishing an authority that deems itself irrefutable.  Everything is up for question in his preferred version of liberalism, just as everything could be constituted differently in a different historical period or culture.  There are no transcendentals, just historical contingencies.

Rorty would like that last sentence to be true.  But he often recognizes that it is not.  His talk about “common-sense naturalism” in the passage I quoted above is a nod to that recognition.  Let’s be concrete about this.  Here is a universalized, metaphysical statement of a generic fact of human experience:  All humans die.  Rorty would not deny that statement.  What he denies is that it has any necessary consequences beyond the brute fact of death.  How to face death, think about it, avoid or embrace it, respond to the death of others, etc. are all underdetermined by the brute fact.  We know that various cultures have established an incredibly wide range of practices in the face of the brute fact.

Thus, for Rorty, all humans die is a common-sense platitude that has no straight-forward or inevitable consequence for human beliefs, values, or behavior.  I think this position—while tied to Rorty’s resolute anti-authoritarianism—is also linked to his positivist origins.  Rorty maintains a strict fact/value dichotomy.  Facts are value-neutral.  How we understand and interpret them is radically disconnected from their existence.  In his metaphysics, Rorty tells us, “Dewey betrayed precisely the insight . . . that nothing is to be gained for an understanding of human knowledge by running together the vocabularies by which we describe the causal antecedents of knowledge with those in which we offer justifications of our claims to knowledge. . . . [W]hat Green and Hegel had seen, and Dewey himself saw perfectly well except when he was sidetracked into doing ‘metaphysics,’ was that we can eliminate epistemological problems by eliminating the assumption that justifications must rest on something other than social practices and human needs” (81-82).  What Rorty says about epistemology here is fully consistent with his position on value as articulated in other works.  Our commitments in terms of value rest on “social practices” and what we (and/or our society) understands to be “human needs” and not on any facts that transcend (or dictate) a humanly produced vocabulary. [Note that Rorty’s quick acknowledgement of “causal antecedents” of some statements belies the idea that he is an anti-realist.  He is perfectly willing to say that the statement, “all humans die,” is motivated—or caused—by the encounter with death.  He is not denying the fact’s existence, just its consequences, while he also—as I am about to discuss—does not think that the elaborate gymnastics of modern philosophy’s epistemologies do anything at all to either confirm or unsettle one’s beliefs about facts. Philosophic metaphysics and epistemology are unproductive games.]

An unproductive game, because metaphysics has no consequences—a position taken up aggressively by Knapp and Michaels in their essay “Against Theory” and in numerous works by Stanley Fish. Describe the facts of existence—and nothing necessarily follows. Scott’s essay seems to belie that conclusion. Surely, the practitioners of historical studies will proceed differently if convinced by her case that an appeal to experience is not sufficient.  I think that the pragmatist response (certainly it would be William James’s position and probably Rorty’s and Fish’s) is that my last sentence puts the cart before the horse.  What comes first is the commitment to a certain set of values—and then the theoretical (or transcendental) claim is constructed as a buttress for that commitment.  Certainly that was how my liberalism book germinated.  Rawls’ Theory of Justice offers a more grandiose and even comical case. The painstaking elaboration of his Rube Goldberg-like argument is clearly motivated by where he wants to end up.

Rorty certainly insisted that his philosophical commitments and arguments had no political consequences.  His liberalism was not a product of—or even connected in any way to—his pragmatism.  He described himself as mostly in tune with Habermas’ and Rawls’ versions of liberalism (characterized by equality, open unimpeded discussion/deliberation, and hostility to concentrations of power) while disagreeing with their conviction that liberalism required theoretical or transcendental underpinnings.  The first-order commitments to certain values—commitments generated by upbringing, by sensibility/temperament, social practices, and comparisons among varied ways of living on display in the world and in the historical record—were more than sufficient for taking a stand.

Philosophers are no different from anyone else in trying to persuade others to adopt a particular stand—while stories, images, emotional appeals, and displayed loyalties are very likely much more effective tools of persuasion than philosophical argument.  “[P]hilosophers’ criticisms of culture are not more ‘scientific,’ more ‘fundamental,’ or more ‘deep’ than those of labor leaders, literary critics, retired statesmen or sculptors” (87).  Philosophers just start out by working on a different set of materials—“the history of philosophy and the contemporary effects of those ideas called ‘philosophic’ upon the rest of the culture—the remains of past attempts to describe the ‘generic traits of existences’” (87).  And philosophers use different rhetorical means to persuade—means that are presumably effective for some people, that minority (?) which likes (prefers?) their commitments to be underwritten by a certain kind of argumentation instead of only by stories, images etc.  Rorty is guilty of unjustified metaphysical generalization when he claims (as he often does) that stories are always more effective than arguments.

I find this radical levelling—both of the hierarchy of thinkers and of planes of existence (no deep undergirding truth about our daily round)—attractive.  I find it harder to credit that understanding being-in-the-world and action-in-the-world in this way has no consequences.  Maybe adopting a particular attitude is not ratified by some set of metaphysical facts.  But the description itself is fruitful. How we understand the facts influences our reaction/adaptation to them.  [That’s simply straight-forward Peircean pragmatism.] Of course, it is not clear that Rorty would deny that.  He thinks the vocabulary we choose to work in does have consequences.  Those vocabularies (and the activities/practices that accompany them) just need to be recognized as ways of “adapting and coping rather than copying” (86).  Still, the adapting must be to something—like the COVID-19 virus.  Coping requires, it would seem at least in some cases, accurate modeling.

To conclude—and to set up the next posts on practice—it seems clear that the critique of experience is a resistance to the way the term takes as self-evident the naturalistic placement of an experiencing self in an environment.  The preference for the term “practice” is meant to introduce the social influences (determinants would be, for me but not I suspect for Scott, too strong a term) that shape what any individual might experience or might articulate as her experience.  Certainly, with his notion of “funded” experience, Dewey is not utterly naïve about the ways that experience has social and historical dimensions.  But the term “practices” tries (as I will discuss in the next posts) to put much more flesh on this idea of the ways in which experience is embedded within social settings that have prevailing norms, preferred ways of “going on,” and pre-established goals/ends.