Tag: moral philosophy

Disparate Economies 4: Power

Warning: this post is even more essayistic than most. A lot of speculation as I drunkenly weave through a variety of topics and musings.

The previous posts on disparate economies have tried to consider how economies of status, love/sex, and fame are structured. What is the “good” or “goods” that such markets make available, and what are the terms under which those goods are acquired, competed for, and exchanged? Finally, what power enforces the structures and the norms that keep a market from being an anarchic free-for-all? Markets (or specific economies among the multiple economies that exist—hence my overall heading “disparate economies”) are institutions, by which I mean they a) have discernible organizational shape, along with legitimated and non-legitimated practices by human agents within them; b) are not the product of any individual actor or even a small cadre of actors but are socially produced over a fairly long span of time; and c) change only through collective action (sometimes explicit, as in the case of new laws, but much more often implicit, as practices and norms shift almost imperceptibly through the repetitions of use). Institutions exist on a different scale than individual actors—or even collective actors. A sports team exists within the larger container of the institution that is the sport itself, just as a business corporation exists within the market in which it strives to compete.

It is a well-recognized fact that power is among the goods that humans compete for. In one sense, this fact is very odd. Here is one of Hobbes’ many reflections on power:

The signs by which we know our own power are those actions which proceed from the same; and the signs by which other men know it, are such actions, gesture, countenance and speech, as usually such powers produce: and the acknowledgment of power is called Honour; and to honour a man (inwardly in the mind) is to conceive or acknowledge, that that man hath the odds or excess of power above him that contendeth or compareth himself . . . and according to the signs of honour and dishonour, so we estimate and make the value or Worth of a man. (The Elements of Law: Natural and Politic, ed. Ferdinand Tönnies, London: Frank Cass and Co., 1969 [1640], 34–35)

Hobbes, sensibly it would seem, focuses on what power can “produce.”  For him, power is a means not an end.  Power is capacity.  We know someone is powerful when he is able to produce the ends toward which he aims.  This is power to, the possession of the resources and capabilities required for successful action.  Such power, Hobbes goes on to say, also produces, as a by-product, “honor.”  The powerful man is esteemed by others; in fact, Hobbes states, power is the ultimate measure by which we determine a person’s “value or worth.”  Since, presumably, we want others to esteem us and to think of us as worth something, as having value, it makes sense that we would seek power not only because it yields the satisfaction of accomplishing our aims, but also because it gains us the respect of our peers.

Still, power is instrumental here; it is valuable for what it enables one to get.  There is no sense of power as an end-in-itself.  In that respect, power is like money.  Human perversity is such that something (money or power) which has no intrinsic value of its own, but is only a means toward something else that is of intrinsic value, nonetheless becomes the object of one’s desires.  Power, like money, is stored capacity—and, like money, one can devote oneself to increasing one’s store.  Yes, spending power, like spending money, has its own pleasures, but there is an independent urge, an independent compulsion, to increase one’s holdings.  And that urge can become a dominant, even over-riding, compulsion.

Of course, money and power can be converted into one another. Still, the insanities of current-day American plutocracy illustrate that the conversion is not easy or straightforward. Think of the Koch brothers (or any number of other megalomaniac billionaires). The Kochs think their money should allow them to dictate public policy. Why, as Garry Wills asked many years ago, are these rich people so angry? Why are they so convinced that their country is in terribly bad shape—when they have done and are doing extremely well? They don’t lack money, but they believe their will is being thwarted. Their money has been able to buy them power—but not the kind of absolute power they aspire to. They meet obstacles at every turn, obstacles they can only partially overcome. And from all appearances, it seems to drive them crazy. They want to be able to dictate to the nation in the same way they can dictate to their employees. The thrill of being able to say “you’re fired”: Donald Trump on The Apprentice. Apparently, just the thrill of watching someone else exercise that absolute power is a turn-on for lots of people.

Which reminds us that power is not only capacity, power to, but also domination, power over. Returning to the issue of “an economy”: in this matter of power, the competition is over the resources necessary to possess power. On the one hand, power to depends on assembling enough resources (time, money, health, opportunity, freedom) to set one’s own goals and accomplish them. On the other hand, among the resources one may require, especially for complex enterprises, is the cooperation of others. One person alone cannot accomplish many of the things humans find worth aiming for. How to ensure the contributions of others to one’s projects? Being the person who controls the flow of resources to those people is one solution. Help me—or you won’t be given the necessities for pursuing your own projects. Hegel famously reduces this dynamic to its most fundamental terms. Your project is to live—and unless you do my bidding, you will not be given the means to live. The calculus of power over, of mastery over another human being, is based on life being valued—and thus serving as the basic unit of exchange—in struggles for mastery.

I have in my previous posts on these different economies attempted to specify the norms (or rules in more formal economies) that structure competition and exchange in each case.  And I have tried to indicate the power(s) that enforce those norms/rules.  Thus in the sex/love market there is an ideal of reciprocity; the partners to an exchange freely and willingly give to each other.  Where that norm is violated (most frequently in male coercion of women) family and/or the state will, in some cases, intervene.  The deck is stacked against women because family and state intervention is imperfect and intermittent.  But there are still some mechanisms of enforcement, even if they are not terribly effective, just as there are recognized ideal norms even if they are frequently violated.  Similarly, the billionaire may have gained his wealth through shady means, but he has still operated in a structured market where violation of the rules can lead to prison (even if it seldom does).  Outright theft, just like rape in the sex/love market, is generally deemed a crime.

How do these considerations translate into the competition for power? It would seem that slavery is the equivalent of rape and theft—something now universally condemned as beyond the pale. But it seems significant to me that the condemnation of slavery is not even 200 years old—while slavery as a practice persists. Of course, rape and theft persist as well. And I guess we could say that minimum wage laws and various labor-protecting regulations/statutes also aim at limiting the kinds of resource withholding that allow one to gain power over another. So there is some attempt to avoid a Hobbesian war of all against all, with no holds barred. Still, within any economy that enables—and mostly allows—large inequalities, the ability of some to leverage those inequalities into power over others will go mostly unchecked.

Where there is no structure and no norms, the result appears to be endless violence. From Plato on, the insecurity of tyrants has been often noted. Power might be accumulated as a means to warding off the threat that others will gain the upper hand. In this free-for-all, no one is to be trusted. Hence the endless civil wars in ancient Rome and late medieval England (as documented in Shakespeare’s plays among other places), along with the murders of one’s political rivals—and erstwhile allies. From Stalin’s murderous paranoia to Mafia killings, we have ample evidence that struggles for power/dominance are very, very hard to bring to closure. Competition simply breeds more competition—and the establishment of some kind of modus vivendi among the contenders that allows them to live is elusive. Power does seem, at least to the most extreme competitors in this contest, a zero-sum game. If my rival has any power at all, he is a threat.

In his life of Mark Antony, Plutarch has this to say of Julius Caesar: “The real motive which drove him to make war upon mankind, just as it had urged Alexander and Cyrus before him, was an insatiable love of power and an insane desire to be the first and greatest man in the world” (Makers of Rome, Penguin Classics, 1965, p. 277). There’s a reason we think of men like Caesar—or like some of today’s billionaires—as megalomaniacs. They harbor an “insane desire” for preeminence over all other humans. If power equals preeminence, then, in their case, it is an end-in-itself. They desire that all bow before them—which is what power over entails. There is still the suspicion, however, that power is the means to the “honor” of being deemed “the first and greatest man in the world.” And there is certainly no doubt in Plutarch’s mind, as there was no doubt in Hegel’s, that killing others is a requirement for gaining such power. Only a man who “makes war upon mankind” can ascend to that kind of preeminence.

For Nietzsche, of course, the desire for power is primary. But even in his case, it’s not clear if power is an end or merely a means. What is insufferable to Nietzsche is submission. Life is a struggle among beings who each strive to make others submit to them. It would seem that “autonomy” is the ultimate good in Nietzsche, the ability to be complete master over one’s own fate. That’s what power means: having utter control over one’s self. Except . . . everything is always contradictory in Nietzsche. At times he doesn’t even believe there is a self to gain mastery over. And there is his insistence that one must submit completely to powers external to the self; amor fati is the difficult attitude one should strive to cultivate. We are, he seems to say, ultimately powerless in the face of larger, nonhuman forces that dwarf us. In short, I don’t think Nietzsche is very helpful in thinking about power. His descriptions of it and of the things that threaten it are just too contradictory.

Machiavelli is, I think, a better guide. His work returns us to the issue of security. When I teach Machiavelli, I always have some students who say he is absolutely right: it’s a dog-eat-dog world. Arm yourself against the inevitable aggression of the other or you will be easily and ignominiously defeated. I think this is a very prevalent belief system out there in the world—usually attached to a certain brand of right wing politics. To ventriloquize this position: It is naïve to expect cooperation or good will from others, especially from others not part of your tribe. They are out to get you—and you must arm yourself for self-protection (if nothing else). Your good intentions or behavior is worth nothing because there are bad actors out there. It is inevitable that you will have to fight to defend what is yours against these predators.

This right wing attitude often goes hand-in-hand with a deeply felt acknowledgement that war is hell, the most horrible thing known. But it’s sentimental and weak to think that war can be avoided. It is necessary—and the clear-eyed, manly thing is to face that necessity squarely. Trying to sidestep that necessity, to come to accommodations that avoid it (appeasement!), is just liberal self-delusion, the liberal inability to believe in the existence of evil. Power in this case is the only surety in an insecure world—and even the powerful will still get involved in the tragedy of war, where the costs will be borne by one’s own side as well as by the evil persons one is trying to subdue. Power cannot fully insulate you from harm. (I think John McCain embodied this view—along with the notions of warrior honor that often accompany it.)

It is a testament to the human desire (need? compulsion?) to structure our economies, our competitions, that there are also “rules” of war.  On the extreme right wing, there is utter contempt for that effort.  There are no rules for a knife fight, as we learn in Butch Cassidy.  It’s silly to attempt to establish rules of war—and crazy to abide by them since it only hands an advantage to your adversary. And certainly it is odd, on the face of things, to try to establish what counts as legitimate killing as contrasted to illegitimate killing when the enterprise is to kill so many people that your adversary can no longer fight against you, no longer having the human resources required to continue the fight. 

I don’t know what to think about this. Except to say that the specter of completely unstructured competitions scares humans enough that they will attempt to establish rules of engagement even as they are involved in a struggle to the death. But I guess this fact also makes clear how indispensable, how built in as a fundamental psychological/social fact, morality has become for humans. I am on very tricky and speculative grounds here. But it seems to me that any effort to distinguish between murder and non-murder means that some kind of system of morality is in play. Murder will be punished, whereas non-murder will be deemed acceptable. The most basic case, of course, is that soldiers are not deemed guilty of murder. The killing they do falls into a different category. What I am saying is that once you take the same basic action—killing someone—and begin to sort it into different categories, you have a moral system. The rules of war offer one instance of the proliferation of such categories as moral systems get refined; differentiations between degrees of murder, manslaughter, self-defense and the like offer another example of such refinements. My suspicion (although I don’t have all the evidence that would be required to justify the universal claim I am about to make) is that every society makes some distinction between murder (unsanctioned and punished) and non-murder (cases where killing is seen as justified and, then, non-punishable). At its most rudimentary, I suspect that distinction follows in-group and out-of-group lines. That is, killing outsiders, especially in states of war, is not murder, whereas killing insiders often is. The idea of a distinction between combatants and non-combatants comes along much later.

Similarly, worrying about “just” versus “unjust” wars also comes much later.  Morality is no slouch when it comes to generating endless complications.

I may seem to have wandered far from the issue of an economy in which the good that is competed for is power. But not really. War is the inevitable end game of struggles for power if Hegel is right to say that life is the ultimate stake in the effort to gain mastery over others. If the economy of power is utterly anarchic, is not structured by any rules, then conquest is its only possible conclusion. It is the ultimate zero-sum game. The introduction of rules is an attempt to avoid that harsh zero-sum logic. Putin out to conquer Ukraine and Netanyahu out to destroy Hamas are zero-sum logics in action. As is the ancient Greek practice of killing all the male inhabitants of a conquered city while taking the women off into slavery. The rules—like negotiated peace deals—try to leave both parties to the conflict some life, to avoid its being a fight to the total destruction of one party.

The alternative (dare I say “liberal”) model is the attempt to distribute power (understood as the capacity to do things that one has chosen for oneself as worth doing) widely. This is not just an ideology of individual liberty, of each person’s equal worth and right to self-determination free from the domination of others. It is also about checks and balances, on the theory that power is only checked by other powers—and that all outsized accumulations of power lead to various abuses. Various mechanisms (not the least of which is a constitution, but also some version of a “separation of powers”) are put in place to prevent power being gathered into one or into a small number of hands. The problem, of course, in current-day America is that there are no parallel mechanisms to prevent the accumulation of wealth into a few hands—and there are no safeguards against using that wealth to gain power in other domains, including the political one. That’s why we live in a plutocracy. Our safeguards against accumulations of power are not capable of effectively counteracting the kinds of accumulation that are taking place in real time.

Recently, on the Crooked Timber blog, Kevin Munger offered this nugget (it appears to be a quote from somewhere rather than Munger’s own formulation, but he does not give a source for it):

“There is a great gap between the overthrow of authority and the creation of a substitute. That gap is called liberalism: a period of drift and doubt. We are in it today.”

On this pessimistic reading, power, like nature, abhors a vacuum. Any situation in which authority/power is dispersed (as it is in the ideal liberal polity) will be experienced as unstable, unsettling, and chaotic. The desire for order will triumph over the liberties and capacities for self-determination that the “overthrow of authority” enables. Authoritarianism, the concentration (centralization) of power into a few hands, will rise again. Liberalism is always only a temporary stop-gap between authoritarian regimes. Humans, in this pessimistic scenario, simply prefer the certainties of domination to the fluidity (“drift and doubt”) generated by less hierarchical social orders. Just keep your head down and let those insane for power fight it out among themselves, hoping they will mostly leave you alone and let you focus on the struggles of your not-very-capaciously resourced life.

Unfair as a characterization of a certain form of political quietism that skews rightward?  I don’t know.  But many people are content to not strive terribly hard for riches, power, or fame—and think their moderation of desire is the only sensible way to live.  They just want to be left in peace to make of life what they can with the extremely modest resources available to them.  Here we see yet another great divide in current-day American politics.  (It is hardly the only divide and not, I think, among even the three most important divides between left and right in our time.  But it still exists.) Namely, the idea that it is authoritarian government that will give them the peace they desire, get government off their backs, and curb the chaos of social mores that they feel threatens their children.  Liberal permissiveness, along with the liberal coddling of the unworthy, is the real danger to the country and to their “values”—and a healthy dose of authority is just the remedy we need.

Disparate Economies (3): Fame (Honor, Meritocracy, Status)

This post will be even more tentative than most.  I am not very certain of how to distinguish fame from status.  Probably I should just accept that they overlap in various ways, even as I resist the idea that they are synonymous.

In any case, let me try to articulate my basic intuitions about what differentiates them while also attending to what they share.  I do want to stick to the notion that status is restricted to circumscribed groups while fame aims for the regard of more heterogeneous and less well-formed multitudes. 

The quest for status is a quest a) for inclusion into a certain group and b) subsequently, for respect within that group.  First one needs to have been granted entry, to become a member.  Then one can strive for a “high place” in the group’s own hierarchy.  With status, social power comes in two forms: a) the power to confer membership on those desiring it, and b) the power that comes with being deemed a particularly esteemed member of the group.

It’s easiest to see how this works with professional cadres. All professions control their own credentialing procedures, the rites and hurdles that must be endured to be granted membership. Once within the profession, relative standing is supposed to accrue to the most accomplished, the most competent. But other factors—such as networked relations, where one received one’s credentials, sheer economic resources, etc.—can also influence standing. Such non-accomplishment-based “boosts” to status will often be decried as illegitimate, even corrupt, once the notion of meritocracy has taken hold.

Of course, tying status to merit is a new idea historically, dating back to 1750 at the earliest. Austen’s partial, but fairly fervent, endorsement of meritocracy is evident in her novels, which in many ways trace the transition from other sources of status to claims based on merit. Once “careers are open to talent,” the army and the church will no longer only be open to “gentlemen,” to those who can buy their commissions or convince the local squire to appoint one as the pastor in a parish within his (the squire’s) “gift.” The transition to meritocracy was long and arduous—and a pure meritocracy was never achieved.

The opponents of meritocracy have two major complaints against it. The first is the Tory Radical distaste for the competition of all against all that “careers open to talent” initiates. It’s a mad scramble for economic success and status once determinants like family origins, whom one knows, and other non-economic and non-accomplishment markers of distinction are cast aside. This is the traditionalist, conservative case against capitalism—and against democracy. We get a good taste of it in Tocqueville’s reaction to America. He is aghast at the way that economic success becomes just about the only measure of a person’s worth once traditional ranks are abolished in favor of a general “equality.” And he is equally aghast at the chaos of the generalized competition that ensues once everyone is told they can aspire to anything. No bars to advancement beyond what the individual can procure for himself (or herself). (I must note that Tocqueville struggles hard to master his antipathy to democratic equality because he is convinced it is the future. Thus we must learn some way to live with it.) A suspicion of what one has earned solely for oneself is Lady Catherine’s objection to Elizabeth. How dare she presume herself fully worthy of Darcy simply on the basis of her beauty and wit. Austen allows Elizabeth’s personal qualifications to carry all before them. The legitimacy of any social impediment to her match with Darcy is fiercely denied. We might call this a rejection of the relevance of any “preexisting conditions.”

The second objection to meritocracy is that it threatens to obliterate “honor” or “character.” This, of course, is where Austen (to some extent) plays both sides of the fence. Elizabeth deserves her reward precisely because she is honorable. She rejects the enormous economic prize that would follow from marrying Darcy because her integrity demands that she marry someone she loves and respects. Her refusal of his first proposal proves her character. What appalled Tocqueville, contrastingly, was the shamelessness, the ignorance of and contempt for any form of “honor,” that characterized the (to him, mad) scramble for economic success. Where such success is the only goal and the only marker of social standing, people will stoop to anything. All notions of personal integrity as absolutely essential to one’s own self-regard as well as to the regard of others will disappear. And when we consider the shamelessness of many people of wealth and of many politicians—and of those less successful who ape them—Tocqueville would seem to have a point. Where success is the only goal, moral considerations are merely an annoyance to be brushed aside whenever and wherever one can get away with it. To bring up such peccadilloes is to be a killjoy, or a sententious bore (like sister Mary in Pride and Prejudice). There is nothing quite so old-fashioned as hammering on about integrity or character or how someone should be ashamed of themselves. Such objections will be easily dismissed as envy, as stemming from the scold’s inability to play the game.

It seems to me a fair historical generalization (again in relation to England, France, and the US, the only societies I am in any position to opine about) that “fame” was the prevalent concept used to discuss such matters before 1750, while “status” (although the term itself is not often invoked until the sociologists emerge in the very late nineteenth century) more accurately describes the situation after 1750. In other words, once some ideology of “equality” (however imperfect and non-inclusive, since women and non-whites were decidedly not “equal”) emerges, the general competition for eminence that Tocqueville observed is on. That’s what generates the “social” novels of the 19th century (of which Proust’s novel is the great culmination), in which “social climbing” is the master passion of so many characters. Rastignac in Balzac’s Père Goriot is not seeking fame; he is seeking status, which means acceptance into a Paris that is closed to him as a young man newly arrived from the provinces. Goriot’s master passion is parental love, which motivates him to sacrifice contact with his daughters because they believe they must disavow him to maintain their newly won—and very precarious—toehold in the social circles they wish to be members of. Swann, by way of contrast, already has membership in the most exclusive circles—and his presence there (as a Jew and a commoner) attests both to the fact that such circles can be penetrated by outsiders and to the qualities of his character, since only its agreeableness (what Austen calls “amiability”—literally, lovable) could assure him the access and esteem he enjoys.

Swann shows that, in the informal world of “the social,” as contrasted to the more structured world of “the economic” (where money provides an “objective” marker of success), character can still count. In other words, the social in the 19th century tries to hold out against the complete triumph of the economic. Tocqueville is saying, among other things, that the social is much weaker in America than in France. There is not the prejudice against trade, against the vulgarities of “conspicuous consumption,” against the sharp tricks of commerce in America that there is in France. Yes, as Edith Wharton shows, there are some pockets of resistance in “old New York” and perhaps in Brahmin Boston, but both are reckoned anomalies and doomed to extinction. Of course, Proust is also an elegist; he knows that the world he describes is soon to vanish.

Meritocracy, as measured by economic success, will sweep all before it. That is actually too monolithic a view. Sub-groups will continue to form, with different criteria of entry. And as I noted in the previous posts, those sub-groups in contemporary America are generally distinguished along lines of taste. Thus, a huge divide between the rich who go in for the competition over who has the biggest, most luxurious yachts and the rich who scorn such displays. Or, much further down the economic scale, between those devoted to football and NASCAR and those who go to the opera and theater. To the bemusement of leftists everywhere, contemporary politics in the US and Britain certainly, a bit less so in France, follows the lines of these taste divides, not class divides (where class is a technical term designating whether one earns one’s living primarily from the ownership of capital or primarily as income for labor performed). That members of taste cultures deemed less prestigious are “looked down upon” by their presumptuous betters motivates voters more than any economic hurt they receive from those who possess economic power (as employers or as the providers of necessities). In America, this divide is particularly aggravated by the feeling among less prestigious sub-groups that they are constantly being accused of being “racist,” a charge they vehemently deny and deeply, deeply resent. The cultural elite are thus perceived as sanctimonious scolds who moralize as just another way (along with their scorn for NASCAR and Burger King) to assert their (unjustified) belief in their superiority. When taste is moralized—or, to say it another way, when meritocracy extends to taste (i.e., some tastes are more meritorious than others, and tastes themselves become forms of merit)—social and political toxicity/animosity appears to reign almost free of any check.

All of which is to say that the desire to be esteemed by one’s peers is pretty basic. And that desire encompasses the complexity of determining who one’s peers are, even as scorn is often directed at those who are not deemed my peers—and who, even worse, might be trying to pretend to be my peers or to thrust themselves upon me as such. That’s what snobbery is: I scorn the temerity with which you claim the right to associate with me as an equal. You are beneath my notice. And snobbery, in contemporary America, rankles with an intensity hard to overestimate.

Fame is a bit different, it seems to me. Crucially, fame, like status, can only be conferred on someone by others. All discussions of fame are troubled by this fact: that it is fickle and that it rests on nothing other than being noticed and known by others. There seems nothing substantial about it—and moralists from the pre-Socratic philosophers to the present day warn us that it is a cheat. To pursue it is madness—and to believe one’s press notices (as the saying goes) true insanity. Still, the moralists have no effect on the minority who crave fame. (How big is that minority? Who knows? But a desire for fame is among the prime motivators of human agents.)

Commentators prior to 1750 had more good to say of fame than ones after that date. Despite warnings about its possible deceptions, the desire for fame has a nobility about it which various writers commend. It is the spur to ambition; it raises the level of one’s game. (Is it fair to say that competition is now seen to play that role: as the pathway to upping one’s game? Of course, competition for fame can be one form competition takes. In any case, encomiums to the benefits of competition are rare before 1750.) Just as Austen delineates the positives that come from a “justified” pride in Pride and Prejudice even while denouncing the ill effects of pride and the bad behavior of the prideful, so writers like Shakespeare, Milton, and Edmund Burke offer qualified praise for the desire for fame.

To anchor that praise, of course, one has to try to give fame some substance. It can’t just be gaining the attention of the fickle crowd. It must be based on real accomplishment. And here, I think, is where the difference between fame and status resides. It’s a matter of scale. Status is confined to one’s contemporaries and to circumscribed groups identified as one’s peers. Hence the idea of a “succès d’estime,” or of a “poet’s poet.” Esteem is not fame. Fame is more general; it is being known beyond the circle of those devoted to your kind of accomplishment. The sports blogger Joe Posnanski is currently trying to rank the 50 most famous baseball players of the last fifty years. His criterion is that these players must be known through their accomplishments on the field, but (crucially) are known to even the most casual fan and even to those who do not follow baseball at all. Thus he is clear that he is not identifying the 50 “best” players of the stated time frame, but the 50 “most famous.” To take an earlier example (i.e., prior to 1973), Mickey Mantle was certainly more famous in the 1950s than Henry Aaron or Stan Musial, but it is debatable whether he was a better player. So it was possible that Mantle was less esteemed by his fellow players than Musial even though he was more famous.

Similar effects are often seen in cultural matters.  Mailer is more famous than Roth, but I think it fair to say Roth is more esteemed among the cultural cognoscenti. Of course, such distinctions provide fodder for snobbery.  Those “in the know” can scorn someone who declares Mailer a better writer than Roth. 

Still, especially for the pre-1750 writers, fame’s larger scale recommends it.  The seeker of fame is daring to play in the largest game.  Milton aspires to be one of the immortals, remembered for all time as Homer, Dante, and Shakespeare are.  Hence the nobility of the quest for fame: it is to risk all, it is to aspire to be among the greats, and, thus, to hold oneself to the standard set by the greats.  Such ambition is certainly presumptuous, but only such presumption yields the highest results.

A poet friend of mine once remarked that he wrote for posterity; his great desire was that his poetry would “last,” that it would still be read after his death.  The remark made me consider my own ambitions—which upon reflection I had to realize were of a much different cast.  I wanted to be read in my own time, to garner responses from my peers (readers of similar interests) and, in my wildest dreams, of a wider readership.  I would have loved to write a best-seller.  If I wanted to be “known,” it was by my contemporaries.  I have no interest in, no desire for, readers after my death.

Partly that lack of desire stems from the fact that I won’t be around to enjoy the attention of others once I am dead.  I can’t have a desire for something I will not experience.  But the lack of desire is also diffidence, an inability to take my chances in a larger game.  My poet friend is playing for higher stakes than I am—and that very fact surely shapes how he goes about his work in contrast to how I go about mine.  His ambitions do appear more noble than mine—even if negative words like “grandiose” and “presumptuous” could also be used to describe his aims.

To end by returning to economy.  The competition for fame differs from the competition for status, then, mostly as a matter of scale.  The seeker of fame wants, to be very extreme about it, to be “known” by everyone, even by people who will be alive after he is dead.  The downside of fame (as all the moralists point out) is that it can be empty.  To be famous is not necessarily to be esteemed, while it is certainly true that the more people “know” who you are, the fewer of those among that number are actually in a position to “esteem” you, to judge with any degree of accuracy the quality of the achievements that made you known.  And fame is notoriously self-referential.  Your achievements, after all, may have various effects in the world; they accomplish something.  But fame accomplishes nothing.  It is just a garnish on top of your actual deeds.  And this garnish is something you can strive to bring about, but which you cannot command.  It is offered entirely on the whim of others, manufactured by the various engines of publicity that a given society possesses.

To be concrete: I can work at and make myself an adept at hitting a baseball.  I can make that happen as a consequence of my actions.  Success in that endeavor is much more under my control than making myself famous, which is more a by-product of my accomplishments than a product.  Of course, I can do various things in the way of self-publicity to achieve fame, but such work is often deemed vulgar.  Which returns us to the issue of shamelessness.  Shameless self-promotion is often scorned—but can still be fairly successful for all that.  As Tocqueville saw with a shiver, shamelessness pays.  Which is why the economy of fame is always viewed with some suspicion.

Flanagan and Darwin

Flanagan takes seriously “Charles Darwin’s proposal in The Expression of Emotions in Man and Animals (1872) that there are universal emotional expressions that have been naturally selected for because of their contributions to fitness, possibly in ancestral species” (120).  Thus, psychology, at least to some extent, is working from a basis of “human nature,” in the sense of emotional and cognitive capabilities and habits that are built in by way of evolution.

“Ever since Darwin, attention to the evolutionary sources of morality has brought a plausible theoretical grounding to claims about ultimate sources of some moral foundations and sensibilities in natural history” (12).  Presumably, identifying that bedrock will alert us to constraints beyond which it will be practically impossible to go.  We cannot ask of humans (“ought implies can”) what they are incapable of doing.

Flanagan then devotes Chapters 3-5 of his book to considering possible candidates for the basic equipment, starting with the “seeds” proposed by the Confucian philosopher Mengzi (or Mencius in the Jesuits’ translations of his work) in the fourth century BCE and moving on to a consideration of the “modules” proposed by Jonathan Haidt.  He offers (page 59) a strong set of evidential conditions that would have to be met if “seed” or “module” theory is to be convincing.  These conditions are:

  1. The seed or module would have to be associated with an automatic affective reaction.
  2. The seed or module should ground common sense judgments.
  3. These judgments and affective responses should be widespread, perhaps even universal, among human communities.
  4. The judgments and affective responses should be directly tied to corresponding actions.
  5. There should be a plausible evolutionary explanation for the selection of these judgments and affective responses (generated by the “seeds” or “modules”).

Because they are so specific [Haidt’s five modules are care/harm; fairness/cheating; loyalty/betrayal; authority/subversion; and sanctity (purity)/degradation], the modules strain credulity as actual products of evolution.  (Haidt has recently attracted the ire of leftists by claiming that liberals are deficient in the “loyalty” module and, hence, that their moral views are not as comprehensive as those of conservatives.  Flanagan nobly—and correctly—tells us that Haidt’s views on the political valence of his modules are logically separable from any assessment of the modules themselves.)

It is not just that children seem to have no innate sense of sanctity (purity) or that modern Western societies have fairly relaxed attitudes toward authority.  It is also that the emotional responses to harm (from horror to delight—think of the crowds at executions and lynchings) and to purity (from disgust to the joys of the carnivalesque) run the whole gamut.  The modules do seem useful as ways to analytically designate the different dimensions of morality, but to posit them as in-built products of evolution seems an effort to ground morality through a just-so story.  Plus there are other dimensions of morality we could consider.  For example, the commitment to doing a job correctly; the pleasure and pride taken in competence, in a job well done, and the disapproval of shoddy work.  Do we want to suggest a module for that—and tell an evolutionary story about our dissatisfaction with “good enough” work?

Flanagan—he is a philosopher after all—cares about whether the modules are “real” in the sense of being in-built equipment.  But he is, finally, agnostic on the question, admitting that a pragmatic (these are just useful conceptual tools) rather than a realistic (the modules actually exist) take might be most plausible.

He then falls back on a less specific alternative in an effort to save some remnant of realism.

“A ‘basic equipment’ model says that what you start with is whatever—the kitchen sink, as it were—there is in first nature, and that whatever you end up with in second nature is the emergent product of whatever all the dispositional resources of first nature can yield when mixed with the forces of the environment, history, and culture” (110).  The key words here are “can yield.”  So the quest is still for the constraints, the limits, that “first nature” imposes.

I have two basic beefs with even this less specific way of giving Darwin his due.

The first is that the basic equipment is not necessarily a product of selection.  Flanagan is very careful to avoid Darwinian reductionism.  Lots of things—his favorite example is literacy—are just by-products of abilities that were selected for.

“There was no selection for literacy.  In order to read we utilize brain areas originally selected (not even in our lineage but in ancestors) to track animals.  One way to put the matter is that literacy didn’t initially matter one iota for fitness.  It couldn’t have.  We were not literate for almost the entire history of our species” (25).

My problem here is: what criteria are we to use for deciding which human capabilities are the product of evolution and which are not?  It seems the only test is whether we can tell a plausible story about a trait’s being very, very old and being connected to the passing on of one’s genes.  We all know too well what kinds of ingenious stories get told to pull something into the evolutionary camp.

Let’s take three problematic issues.  1. Myopia.  Surely it’s ancient and, presumably, we have to say that evolution is indifferent to it—and then tell a story to explain that indifference, because it is hard to explain how myopia contributes to fitness.  2. War.  Every human society has an experience of war.  Yet war is most dangerous for precisely the society members—young men—who are in a vital position for transmitting their genes.  From an evolutionary perspective, war seems particularly perverse.  So, since the simple fitness tale fails in this case, all kinds of mental gymnastics are called upon to explain the phenomenon, to save the appearances.  3. Homosexuality.  Another puzzler when it comes to any straightforward fitness explanation.

My point is simply that some apparently widespread (maybe even universal) human traits lend themselves to Darwinian explanation and others do not.  Do we really want to claim that only the Darwinian ones are really human nature and the others are not?  And what would be the basis of such a claim?

The second issue is central to Flanagan’s work.  Namely, one way to judge a trait is in relation to fitness; another way to judge a trait is in relation to “flourishing.”  “The distinction between a trait that is an adaptation in the fitness-enhancing sense(s) and one that is adaptive, functional, conducive to happiness, flourishing, and what is different still, good or right; or in a thicker idiom still, what is compassionate, just, fair, loving, faithful, patient, kind, and generous” (83).

Flanagan’s basic point is that we don’t have to settle for what evolution dishes out to us.  Rather, morality entails our judging our basic equipment—and working to change it where it violates our sense of “flourishing” or our sense of “right and wrong” or “good and evil.”  And Flanagan stresses throughout the plasticity of humans, the many varieties of feeling and behavior of which we have proven capable through the evidence of individual and cultural differences.

Allow for moral judgment of what nature provides and for plasticity and I really don’t see what’s left of Darwinism.  What does it matter if a trait is evolutionarily produced or a by-product of in-built capacities or a cultural product?  I have already suggested that I find it very difficult to sort traits out into those different bins.  Now I am saying, why would it matter? All the traits—no matter their origin—are to be subjected to our judgments about their morality and their desirability.  And we will work to reform, alter, revise, and adapt any trait in response to our judgments.  The origin of the trait will make no difference to how we set about to work upon it.

But, comes the objection, the chances for successful revision will be different depending on the trait’s origin.  That’s a species of what I consider “false necessity.”  Why think we know ahead of time, theoretically as it were, which traits are revisable and which are not?  The proof is in the pudding.  Only practice will teach us our limits.  It is a bad mistake to let someone tell you ahead of time what you are capable of and what you are not capable of.  The tyranny of low expectations.  Morality, after all, is always aspirational.  It always paints a picture of a better us—more loving, more generous, more caring than we often manage to be.  Why take an a priori pessimistic stance about our capabilities?

Not surprisingly, I guess, since this question is at the heart of any moral philosophy, the issue is one about determinism versus free will.  I resist Darwinism (especially in many of its crudely fundamentalist forms) precisely because it is deterministic, trying to legislate that certain things just can’t be done, or to apologize for certain kinds of behavior (male sexual aggression, for one) as inevitable and thus to be shrugged off.  Morality would hold us to a higher standard—and refuse to capitulate to the notion that those standards of flourishing or of the right and good are “unrealistic.”

Owen Flanagan’s The Geography of Morals

I am a big Owen Flanagan fan and have just finished reading his most recent book, The Geography of Morals (Oxford UP, 2017).  Because I am an academic, with all the pathologies of my tribe, I will have a bone to pick with Flanagan in my next post.  But praise should always precede criticism–and there is so much to praise in this book.

Flanagan has worked for years to broaden the scope and interests of moral philosophy beyond the sterile deontologist/consequentialist debates into which so much moral philosophy has cornered itself.  Of course, he has plenty of company in that quest, with Alasdair MacIntyre, Bernard Williams, Martha Nussbaum, Richard Rorty, and Charles Taylor the most notable writers to walk away from “technical” philosophical ethics.

I think it is fair to say that the five philosophers just mentioned all, to some extent, pay attention to the emotional bases of ethical judgments in order to downplay the idea of a rational deliberating self, whose actions follow from a weighing up of reasons (whether those reasons be Kantian or Millian).  Flanagan’s persistent interest (over the past 25 years at least) has been in “moral psychology.”  He wants to identify the psychological processes–both emotional and rational–that generate moral conviction.

He has two persistent reasons for wanting to pursue questions of moral psychology.

1.) He wants a realistic, empirically based sense of the constraints underlying human behavior.  His version of “ought implies can” is to identify deep-seated tendencies in human reasoning and human responses to environment that will, at least, suggest (I use this weaker word advisedly) limits to human capabilities and things that will prove difficult for humans to accomplish.  His favorite example here is in-group prejudice.  He takes it as universally true that humans care more for their family members and for a limited range of others.  Thus, it is difficult, although not impossible, to extend the set of others to whom humans will offer sympathy and care.

2.) Flanagan insists that any individual’s morality is developed within a “form of life.”  He adapts the word “ecology” to describe the environment in which moral intuitions, convictions, reasons, and emotions emerge in individual humans.  We are born into “a preexisting but ever-changing cultural ecology.  The ecology is the normative force field in which we grow and develop, and it is authorized, regulated, and maintained outside the head, in the common, but possibly fractious, social ecology” (93).  Following Wittgenstein’s thoughts on private language, Flanagan’s position is that worries about subjectivist moralities are entirely misplaced.  No one invents–or could possibly live by–a private morality.

Several important consequences follow from Flanagan’s approach.  The first is that morality is about persons-in-relation (to other persons, to the environment, to animals, to the traditions and cultures into which they are “thrown”—to use Heidegger’s term).  Morality is social, inter-subjective, inter-species, inter-relational through and through.  Flanagan mentions Williams’ famous distinction between morality as applying to norms of social interaction as contrasted to ethics as pertaining to the individualistic question “What is the good life for me?”  But Flanagan, correctly in my view, finds that distinction only moderately useful (in certain contexts) because it is almost impossible to conceive of a good life that doesn’t have establishing good relations with others, the environment, animals, etc. at its core.

A second consequence of focusing on relations is to knock morality off a pedestal—either one that imagines us all doing some kind of Kantian deduction to reach the categorical imperative or worrying about which switch to pull on runaway trolleys.  Morality is mundane, implicated in the minute-by-minute monitoring and adjustment of our relations to all in which we are immersed.  “The moral problems of life vary with age and circumstance, but they are mostly . . . matters of tender mercies, love, attention, honesty, conscientiousness, guarding against projection, taming reactive emotions, deflating ego, and self-cultivation” (10).  Morality is ordinary.

A third consequence is the breakdown of barriers between philosophy and the human sciences.  Flanagan quotes Dewey (from Human Nature and Human Conduct) approvingly: “Moral science is not something with a separate province.  It is physical, biological, and historic knowledge placed in a humane context where it will illuminate and guide the activities of men” (44 in Flanagan).  Whatever the human sciences can tell us about human beings is relevant to thinking about how humans construct and structure “forms of life” that include “normative orders.”  Here’s Flanagan’s description of the latter:  “The normative order uses both the capacities of individuals to acquire reliable dispositions inside themselves—typically conceived as virtues—to do what is judged to be good, right, and expected, as well as public institutions and structures, such as law and tax codes, to accomplish, regulate, and enforce regiments of order and justice that individuals might not find easy to abide from reliable inner resources” (25-26).

As the appeal to “dispositions” indicates, Flanagan is firmly in the neo-Aristotelian camp.  Forming the right dispositions, cultivating virtues, is the primary moral work in his view—and his appeals to psychology are in service of that cultivation.  What are effective methods of creating dispositions—and what are the limits on what those methods can achieve?

But you will have noticed that the definition of “normative order” is content-light in terms of designating what the good or the right is.  That’s because Flanagan accepts that there is more than one “form of life” on the planet.  The good and the right is not a constant, nor is it the same in all contexts.  As the title of his book indicates, he wants to explore the possible variations in normative orders that history and geography offer us.  What might Western moral philosophy, in particular, learn from an encounter with Eastern sources, especially Confucianism and Buddhism?

Much of the book is devoted to this comparative work.  Specifically, three chapters are devoted to thinking about anger.  To what extent is anger an “inevitable” human emotion; to what extent is anger (in fact) part and parcel of morality insofar as moral indignation can seem to be the baseline moment of moral judgment; and, if there are alternatives to anger’s role in moral judgment, would we be better off adopting those alternatives?  Those chapters justify both Flanagan’s focus on moral psychology and his exploration of moral traditions that take fairly different approaches to similar problems.

Flanagan is, however, not simply an Aristotelian.  He is also a Darwinian.  And it is that aspect of his thought that I will examine in my next post.