A First Reaction to Atul Gawande’s Being Mortal

Behind the curve as usual, I have just finished reading Atul Gawande’s Being Mortal (NY: Henry Holt & Co., 2014).  I have many things to say about the book, but for now just two preliminary comments.

At the broadest level, Gawande’s book advocates for a shift in focus from disease to “well-being,” especially in medicine’s dealings with the elderly.  Doctors and other caregivers should be enabling well-being, not focused on defeating disease.  His position resonates with the interest in well-being that is currently evident not only in the “medical humanities,” but in policy circles (such as the World Health Organization) as well.  In general, I would see this shift as part of the “capabilities” discourse initiated by Amartya Sen and carried on by Martha Nussbaum.  The goal is to think about, and to enable, “flourishing.”  What counts as a full life, instead of “bare life” (to use Agamben’s term), should be at the center of our efforts.

Right now, however, I want to focus on a different point.  On assisted suicide, Gawande writes: “In the Netherlands, the system [for allowing and enacting assisted suicides] has existed for decades, faced no serious opposition, and significantly grown in use.  But the fact that, by 2012, one in thirty-five Dutch people sought assisted suicide at their death is not a measure of success.  It is a measure of failure.  Our ultimate goal, after all, is not a good death but a good life to the very end.  The Dutch have been slower than others to develop palliative care programs that might provide for it.  One reason, perhaps, is that their system of assisted death may have reinforced beliefs that reducing suffering and improving lives through other means is not feasible when one becomes debilitated or seriously ill” (244-45, my emphasis).

One in thirty-five seems to me a very low proportion.  But, more to the point, this passage comes fairly late in a book that has described, in excruciating (to this layperson) detail, massive surgical interventions on the bodies of eighty- and ninety-year-old patients—while also documenting how such interventions rarely prolong life significantly.  A major theme of the book is how surgeons and others rarely manage to convey an accurate sense of the time-frames involved when such interventions are undertaken: patients and their families usually think they are buying five to ten years when twelve to eighteen months is much closer to the mark.

Yes, the book does consistently argue that those extra twelve to eighteen months can be worth living, especially if doctor, patient, and family have all explicitly identified concrete and realistic goals for that time period.  But the book also shows how difficult it is to say that this next intervention, even if it buys some extra time, will actually buy anything approaching a life worth having.  And everything in the book and in my own personal experience demonstrates just how difficult (close to impossible, in fact) it is for patients and families to choose death, even where that is the most sensible choice.  Meanwhile, doctors are all but forbidden, professionally and ethically, to recommend death.

So, not to belabor the point, it would (it seems to me) require a sea-change in sensibility for people to face up to the ending—and not to grasp at medical straws.  That the Netherlands has made some progress (one in thirty-five?!) toward effecting that sea-change seems to me a noteworthy accomplishment.  And the claim that their policy of assisted suicide has made the Dutch backward in palliative care is tenuous at best and tendentious at worst.  Gawande’s examples in the book of what it means to live “to the very end” did not convince me that those last twelve months or so were actually worth living.  They more often seemed like medical horror shows to me.


John Barth

On about page 475 of an edition of Giles Goat-Boy that runs to 650 pages, I have thrown in the towel.  I read the full novel back in graduate school (circa 1975), at a time when I took John Barth and his work very seriously indeed.  In 1975, I would have listed Barth along with Norman Mailer, Saul Bellow, and E. L. Doctorow as just about the only American novelists whose publication of a new work was an “event,” one that gave me something that I needed to read.  At that same time, Derrida, Foucault, and Habermas were the three thinkers whose work had a similar status for me (although it is also true that I read just about everything Ricoeur published because he was my real guide to the thickets of “theory”).

When did Barth fall off this pedestal, become just another writer who had written too much and whose work could be safely ignored?  In the early 1980s in my case, when I tried to make my way through Sabbatical and felt I knew everything that was coming twenty pages before its arrival.  By 1995, the same was true of Derrida and Habermas.  They repeated themselves ad nauseam, and all their tricks, all their insights, all the ways that their unique takes on the world had proved illuminating, were now all too familiar territory.  There was no need to keep reading them.

We readers are very cruel.  We crave novelty and praise extravagantly the writers who deliver it.  Then we grow jaded and move on.  A writer’s stock falls precipitously.  Just look at the Wikipedia page for John Barth and you will see how precipitously.  He has barely half a page, and the novels after Lost in the Funhouse (1968) are not even discussed, although they are listed.  He’s yesterday’s news.

But having just tried and failed to get all of Giles down the hatch, I have to say that Barth has not aged well.  True, I do still teach The End of the Road from time to time–and students still enjoy its hip knowingness.  But what Giles makes clear is how sophomoric Barth’s humor is, and how thin his Camus-inspired insistence that whether or not to commit suicide is the overwhelming question of one’s existence.  It is hard to tangle with the details of life when it is seen from the height of such an abstraction.

Of course, Mailer and Bellow haven’t aged well either.  I recently re-read Herzog (Bellow’s 1964 novel and National Book Award winner) and it’s simply a bad novel.  It’s not just Bellow’s antediluvian attitude toward women, or the unreflective whining of his hero; it’s also that the philosophical musings are jejune.  All this might be somewhat OK if there were a sense that Moses Herzog was a self-pitying sap whose moanings told us something about contemporary culture.  But there’s no indication of a distance between Moses’s self-image and Bellow’s assessment of his response to his misfortunes or his belief in the perfidy of those around him.  At least the similar paranoias of Philip Roth’s characters are entertaining in their wild exuberance, not just the mid-level pissing and moaning of Bellow’s hero.  Here I was just amazed that people ever found Herzog good or important.

At any rate, I do think we readers are cruel to our writers.  But there is also the question of which writers manage to fulfill Ezra Pound’s famous injunction that literature be “news that stays news.”  How does a writer avoid aging out, if not as quickly as today’s newspaper, then just as inexorably?

And how do we readers keep from playing the game of ranking?  Of saying these are the writers who still count for something?  And how do writers age graciously, keep reinventing themselves instead of repeating themselves?

Anyway, I am not saying that Derrida and Habermas are worthless–only that a little of them goes a long way.  You hardly need to keep up with everything that poured from their pens.  Foucault and Barthes seem to me different cases.  Perhaps it’s because Foucault was always grounded in historical particulars instead of airy theorizing.  But it’s also because he was always striving to find a way to explain those particulars without ever really quite nailing an overarching explanation down.  In any case, I feel a need to read all of Foucault.  And Barthes just kept reinventing himself.  He is always surprising, so I have never gotten tired of him.

How about the novelists?  Whom do I feel a need to keep reading?  Roth probably comes closest–as many people have noted.  He certainly has had the best old age of any American novelist of the past fifty years.  For a while (longer than for most authors), I felt moved to read everything new from Rushdie.  But now he has fallen off that list.  Pynchon and DeLillo I find hopeless.  Doctorow is spotty; The March is a terrific novel and I think Ragtime has aged well.  His seems to me an unusual case; he kept trying new things all the way to the end, and sometimes he pulled it off and other times he did not.

Right now, I look forward to anything new from Julia Glass.  But I can’t think of anyone else.

I do want to praise more than disparage.  So future posts will concentrate on novels I have found deeply satisfying.  There are plenty of them, even if there are many more mediocre novels–the ones that I start to read but then put aside.  And usually in those cases I do not get anywhere near as far as page 475.  So Barth can’t be all that terrible.

Rust Belt Blues

Jane and I, about a week ago, were having a disagreement about the year our house in Rochester’s 19th Ward had been built.  We bought the house–our first–for $48,000 in 1984 and sold it for around $52,000 in 1989.  So we hopped onto Zillow and discovered the house was built in 1912.  More incredibly, we discovered it would currently sell for $55,000.  Thirty years with almost no appreciation in value.

When we got to Rochester in 1984, Kodak employed almost 200,000 people.  When we left eight years later to move to North Carolina, Kodak employed slightly over 50,000 people in the city.  This, mind you, is before the digital camera revolution that Kodak allegedly missed and which is usually cited as the cause of its demise.

But the real story of the destruction of Kodak, I am convinced, has to do with the hostile takeover efforts against the company in 1986 and 1987.  I have just finished reading Ron Chernow’s The House of Morgan, a history of 150 years of the Morgan financial companies that ends around 1990.  Chernow is especially good on the rise of finance capital in the 1970s and 80s.  It’s a tragic tale with multiple villains to blame.  Certainly, the oil crisis and the inevitable (?) end of post-World War II American economic dominance are key factors.  But so is the failure of regulation to keep up with shifting banking practices.  By 1980, the banks had learned how to skirt the guardrails put in place during the Depression.  Or maybe it would be more accurate to say that the banks and investment firms devised strategies and financial instruments that would allow them to create unregulated markets.  The novelties created would not be regulated because the regulations had been crafted to deal with a differently structured financial landscape.

Chief among the “innovations” was the leveraged buyout.  Those who succeeded in a hostile takeover came into their new ownership saddled with debt.  Or the targeted firm, in fighting off the hostile bid, took on a huge new debt load.  In either case, corporations were, nine times out of ten, in worse economic condition after these attempted or successful takeovers than beforehand.  The only winners were the banks that collected huge fees for crafting these “deals” and, in some cases, shareholders (if they had the sense to dump those shares almost immediately after the deal was done).  But the companies themselves were stripped of assets to ward off the takeover or to pay the costs of it, with the result of harming their ability to actually do business–or, of course, to actually employ people and make a product to bring to market.

Kodak was on life support by 1992.  Again, this is well before the digital camera revolution.

M&A (mergers and acquisitions) remains the hottest game on Wall Street some thirty years later.  Admittedly, the housing bubble (with its repackaged mortgage securities) of the 2000s was not driven by stock raids.  But it was a similar case of inflating value for short-term gains while inviting investors (especially institutional investors like mutual funds and pension funds) to put their money into stocks or bonds very unlikely to retain any value over the medium to long term.

Could America’s blue collar jobs have been saved by reining in the excesses of financial capital?  I am afraid that I don’t think so.  Automation and global competition, along with bad strategic thinking, are more to blame. (Germany provides an example of better strategic thinking; but we must remember that Germany secured the continued health of its manufacturing sector by forcing workers to take pay cuts in the early 2000s, and by exploiting the EU to secure a market for its goods.  The resulting imbalances in the EU have not proved sustainable–or, at least to date, fixable.)

America actually produces almost as many manufactured goods in 2016 as it did in 1980.  It just requires far fewer workers to produce those goods.  But that still doesn’t mean that America’s workers–and the companies that employed them–should have been stripped of all their assets as they became obsolete.  America’s rust belt was robbed blind, and nobody much noticed and, certainly, no one did anything to halt the thieves.

Now, of course, we are told that the victims of this crime are the ones who have put Trump in the White House.  But Wall Street’s response to his election has been very clear: they fully expect him to license exactly the kind of robbery that has been at the core of financial capital for the past 40 years.  We will have scandal and grift galore for the next five years, with a crash inevitably following in the next four to seven years.

Republicans always run the economy into the ground–or precipitate a Wall Street crisis.  We had the savings and loan disaster under Reagan and the first Bush, and the housing bubble under the second Bush, while the Republicans presided over the madness of the 1920s.  Only Eisenhower is an exception to this rule.

1491

I recently finished reading Charles C. Mann’s 2006 book 1491: New Revelations of the Americas Before Columbus.  Mann makes no pretense of presenting discoveries of his own.  Rather, he is summarizing the work of archaeologists and anthropologists over the past thirty years.  But he tells his story well.  And the result is a book that basically says that everything you believed about the peoples of the Americas prior to 1492 is false.

The story starts with–and really all hinges on–the assertion that the pre-Columbian Americans were much more numerous than was previously supposed.  Once that fact is accepted, pretty much everything else follows.  These very numerous Indians, in Mann’s account, were decimated by the illnesses brought by the first Europeans.  By 1700, the population decline was in the 90% range.

The biggest consequence of this shift in our understanding of the demographics is the recognition that the pre-Columbians were not hunters and gatherers.  They were agrarians who lived in settled communities.  Only agricultural societies could support the numbers of people that the archaeological sites indicate.

Once that claim is accepted, the next consequence is that the “untouched wilderness” the Europeans thought they were encountering was of very recent vintage.  The land reverted to a seemingly wild state only as a result of the rapid depopulation.  It did not look like that before 1500.  There is almost no “old growth” forest in either North or South America if by that term we mean forests where trees have never been felled by humans.

Finally, and most significantly by my lights, Mann argues that there is an ecological lesson to be learned here.  There is no hope for–and perhaps not much rational reason to desire–purely natural environments.  We should recognize that pre-Columbians actively managed their environments.  And we should realize that humans always have the task of tending to their environments.  It is not a choice between “letting nature be” or interfering.  We are always interfering.  The choice is between better and worse ways of making our inevitable impact on non-human species and on non-human nature.  Passivity is a non-starter–and a delusion.