Gene Regulation—and Darwinian Fundamentalism

Turns out that regulation is biological and genetic as well as political.  One major function of genes is to regulate, mostly in the form of on and off signals.  It is genes that have the job of activating immune system responses to external microbes—and then of shutting off those responses once the threat is defanged.  Hence, over-reactive immune systems result from the off switch not being flipped, whereas immune deficiencies occur because the on switch never gets flipped.  The body, too, can overdo things, can push past limits best left intact.

Immune system responses are obvious cases where genes are activated by environmental triggers.  There are other fairly obvious environmental factors—like adequate nutrition.  More mysterious is “incomplete penetrance”—i.e., the fact that two organisms can possess the same gene, but it only expresses itself in one of them.  Such is the case, for example, for women who carry a gene highly associated with breast cancer.  But only highly associated: the gene does not result in breast cancer for 100% of the women who possess it.  More like 60%.  (Incomplete penetrance, then, is one way harmful genes avoid elimination through natural selection.)
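That selection-shielding effect can be sketched with a toy model (a deliberate simplification, not population genetics proper; the starting frequency, fitness cost, and rarity threshold are made-up illustrative numbers): if carriers pay the fitness cost only with probability equal to the penetrance, the allele’s decline slows accordingly.

```python
def generations_to_rarity(penetrance, s=1.0, q0=0.01, threshold=0.001):
    """Toy model: a harmful allele starts at frequency q0.  Each generation
    its frequency shrinks by the factor (1 - s * penetrance): carriers pay
    the fitness cost s only when the gene actually expresses itself.
    Returns how many generations pass before the allele becomes rare."""
    q, generations = q0, 0
    while q > threshold:
        q *= 1 - s * penetrance
        generations += 1
    return generations

print(generations_to_rarity(penetrance=1.0))  # fully penetrant: gone at once
print(generations_to_rarity(penetrance=0.6))  # 60% penetrant: lingers longer
```

Lower the penetrance further and elimination takes longer still, which is the point: selection only “sees” the gene in the organisms where it expresses.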

Mukherjee sums up what genes do with the three “R”s: regulation, replication, and recombination.  To which he then adds a fourth: repair.  It’s the replication part that gets most of the press: cloning and all that.  But it’s regulation and recombination that make straightforward genetic engineering and simple-minded Darwinian fundamentalism equally pipe dreams.  There’s just way too much slippage in the process, too large an element of chance, along with environmental factors that would be difficult to control even if we understood them completely.

Hemophilia offers a good example of why a simple fitness story doesn’t work.  The gene that causes it can be carried by the mother, but only expresses itself in male progeny.  Because the gene is recessive in women, natural selection doesn’t get the opportunity to eliminate it.  Even though hemophilia confers no conceivable fitness advantage, it persists.
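The inheritance logic can be made concrete with a few lines of Mendelian bookkeeping (an illustrative sketch; “X+” and “Xh” are made-up labels for the normal and hemophilia-bearing X chromosome):

```python
from itertools import product

mother = ["X+", "Xh"]   # carrier mother: one normal X, one hemophilia X
father = ["X+", "Y"]    # unaffected father

# Enumerate the four equally likely egg/sperm combinations.
offspring = []
for egg, sperm in product(mother, father):
    if sperm == "Y":                      # a son expresses whichever X he got
        offspring.append("affected son" if egg == "Xh" else "unaffected son")
    else:                                 # a daughter needs two Xh to express
        offspring.append("carrier daughter" if egg == "Xh"
                         else "unaffected daughter")

print(offspring)
```

Half the hemophilia-allele copies the mother passes on land in symptomless carrier daughters, where natural selection cannot touch them.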

Furthermore, while hemophilia is one of those diseases that can be linked to a specific gene, there are lots of conditions and abilities—smell is one good example—that are the product of multiple genes, often in the dozens, sometimes over one hundred.  Natural selection can’t zero in on a defective gene in such cases, plus mucking around in such complex systems often has bad side effects.  The complexity of human biology, in other words, works against any simplistic understanding of natural selection.  Features of the human organism are just too inter-related to make targeted elimination of specific features possible in many, many (although not all) instances.

In addition, there are the benefits of diversity itself.  The trouble with Darwinian fundamentalism is that it takes the ranking approach to human behavior and personality and capabilities—as if it were the AP ranking of college basketball teams.  The “fittest”—absolute superlative—is what emerges from evolution.  But that’s nonsense.  There are multiple possible ways of surviving to the point of reproduction.  Evolution, at best, eliminates those that can’t survive until reproduction.  It does not optimize; it does not only select the best.  The “good enough” survive as well.

And the survival of that diverse range is crucial because a diverse gene pool is the vast reserve army by which a species arms itself against environmental change.  An over-specialized species, ill-equipped to adapt to change, will not survive in the long run.  Far from needing the fittest, evolution needs the diverse, a whole range of types and variants.

To shift the focus to how evolution selects a wide range instead of “the fittest” is to understand why evolution is neither destiny nor even an indicator of set limits.  New environments will call forth new adaptive behaviors—and a rich gene pool will ensure that such adaptations are possible.  Not for everyone perhaps, but for some.  Not to mention that the complexity of the system lays itself open to new interactive patterns, along with innovative recombinations and the development of regulative responses to new triggers.  Of course, translating from a specific genetic make-up to behavioral patterns and capabilities is very difficult.  Such a holistic vision of human functioning, arising out of the base of genes, is still far beyond our analytic abilities.

Natural selection, in short, is not the conservative’s dream of a Hobbesian competition of all against all.  It is not just that humans, quite obviously given their extra-long immaturity, cannot survive except in mutual aid societies; it is also because the elimination of all diverse others would quickly lead to the extinction of the species.  To put it in the bluntest terms: if I only worked to ensure the survival of my genes—and not those of anyone else—I would be murdering the species. A purely selfish gene will not see too many generations.

Self-Regulation

I am reading Siddhartha Mukherjee’s The Gene: An Intimate History (Scribner, 2016).  Lots of interest here—and lots of scientific information that is simply new to me and sometimes beyond my ability to comprehend.  More on that, perhaps, later.

For the moment, I want to focus on a more political point.  Mukherjee devotes a few pages to a 1975 conference at Asilomar (near Monterey in California) in which genetic scientists hammered out an agreement to not pursue certain possible laboratory experiments and procedures because of the potential danger of loosing pathogens into the world.

Quoting from Mukherjee’s account:

“Extraordinary technologies demand extraordinary caution, and political forces could hardly be trusted to assess the perils or the promise of gene cloning (nor, for that matter, had political forces been particularly wise about handling genetic technologies in the past [a reference to forced sterilizations in the US and Nazi eugenics]).  In 1973, less than two years before Asilomar, President Nixon, fed up with his scientific advisors, had vengefully scrapped the Office of Science and Technology, sending spasms of anxiety through the scientific community.  Impulsive, authoritarian, and suspicious of science even at the best of times, the president might impose arbitrary control on scientists’ autonomy at any time.

A crucial choice was at stake: scientists could relinquish the control of gene cloning to unpredictable regulators and find their work arbitrarily constrained—or they could become science regulators themselves.  How were biologists to confront the risks and uncertainties of recombinant DNA?  By using the methods they knew best: gathering data, sifting evidence, evaluating risks, making decisions under uncertainty—and quarreling relentlessly.  ‘The most important lesson of Asilomar,’ [Paul] Berg [Stanford professor and key figure at the conference] said, ‘was to demonstrate that scientists were capable of self-governance.’ Those accustomed to the ‘unfettered pursuit of research’ would have to learn to fetter themselves” (232-233).

Except, of course, that they don’t—fetter themselves, that is.  Oddly enough, Mukherjee doesn’t seem to see this.  He hails Asilomar as “a graduation ceremony for the new genetics” (235).  Less than ten pages later, as Mukherjee retails the story of the creation of synthetic insulin, we learn that the success comes from a private company, Genentech, which beats the Harvard team working on the same problem because it is unconstrained by university regulations and caution.  Later, Mukherjee treats Craig Venter, who creates a private company to compete with the government-funded Human Genome Project, much more kindly than many commentators do, while gingerly avoiding the issue of what corners Venter allowed himself to cut by stepping outside of a regulatory regime.

At issue, however, is not Mukherjee’s failure to develop a coherent stance on regulation.  Rather, I am interested in the whole notion of self-regulation—and in the paradoxes of regulation itself.

For starters, regulation is a tough one for people because it is not full-bore permission and it is not full-bore prohibition.  If I give my teen-age son a curfew, I am regulating his behavior, but not forbidding him to go out at night, and not granting permission for him to stay out all night.  Seems simple enough in principle—but it proves very difficult in practice.  The regulation sets a clearly visible limit which (as we know from the Garden of Eden) creates an immediate and powerful temptation.

With self-regulation, then, the limit setter and the tempted transgressor have to be one and the same.  Again, it is trivially true that learning how to regulate oneself, to set and abide by limits not externally imposed, is a crucial step toward maturation.  I am hardly saying that humans are incapable of avoiding “over-doing” something.

But the case is very different when strong social incentives are in place to reward going past a limit.  That situation appears particularly relevant in any competitive environment.  So, in sports, using performance-enhancing drugs or even just over-training (to the point of self-harm) are such strong temptations because the rewards for success are so massive.  Similarly in science, where getting there first is just about everything (to echo Vince Lombardi).  And the same is true, of course, in economic competition, where various forms of unregulated, or expressly forbidden, behavior can reap a market advantage.

George Bernard Shaw said “all professions are a conspiracy against the layman.”  By that, he meant that professions claim to have expertise and knowledge that the ordinary person does not possess.  One of the first consequences of that claim is that professions want to be self-governed, to get out from under any external oversight.  The outsiders, the reasoning goes, cannot possibly understand the full complexity of our professional tasks—and hence can only muck things up by interfering.

I was in a room full of hedge fund managers and Wall Street financial guys (none of the finance people was a woman) shortly after the 2008 election of Obama in the aftermath of that fall’s financial meltdown.  To a man, this group lamented how the Democrats were going to cripple financial markets and the absolutely essential flow of capital by coming in and ignorantly regulating things.  There was not a single iota of self-doubt expressed by this group.  They were too focused on the image of themselves as victims of ignorant politicians.

In short, it is hard to believe that any profession can ever successfully regulate itself. The reward structures internal to the profession are tied too closely to surpassing limits.  After all, regulation is about trimming back, about not letting everything that is possible be undertaken.  And the logic of the profession is to push relentlessly forward.

But, as Mukherjee’s anecdote about Nixon (making him sound remarkably like Trump) reminds us, are the politicians really in a better position to do the regulating?  When we watch the spectacle of our politicians denying climate change, endorsing nut-case theories about vaccinations and autism, and calling for a balanced federal budget and a return to the gold standard, aren’t we forced to agree that their ignorance should not be allowed to cripple the experts’ knowledge?

How, in other words, are we to establish true accountability?  Some, of course, say we should rely on markets for that.  But the market’s decision is always (even when it does come—and it does not always come) after the fact.  The harm has been done.  Regulations are often also created after the fact, to prevent a disaster happening a second time. But regulations are also anticipatory.  My curfew for my teen-age son was not motivated by any particular incident.  It is just a rule that seems to fit the circumstances—and some possible issues.  So regulation is not just in order to hold people accountable; it is also about prevention.  Don’t do this because it will have bad consequences.

That still leaves the question of who is the best judge of possible bad consequences.  I don’t think the profession itself is.  Professionals have their minds fixed on other things—on success as their profession defines it, on pushing the limits, on following a line of thought or action out to all its logical and possible conclusions.  But no one else seems to be in a very good position to set the boundaries.  We reach here a fundamental dilemma in democratic governance.  The professions need to be governed by a demos that actually lacks good credentials for doing the governing.  We are stuck, I would say, with trial and error, with repeated attempts to regulate that will be resisted by the professions and yet still must be enforced, with (hopefully) continual revision as some regulations prove salutary and others harmful or useless.

Regulation will also have to be dynamic—no once and for all fix will ever be achieved—because the attempt to evade regulations will be endless, as will be the emergence of new possibilities and innovations. (I scorn the oft-heard conservative argument that regulations are counter-productive because they generate evasion.  No one uses that argument against the prohibition of murder or the regulation of prescription drugs.) Some of those innovations will have arisen precisely as mechanisms to evade regulation.  But others arise just because human ingenuity knows no bounds and things undreamed of in the current regulatory scheme become possible.  Trying to tailor old regulations (for radio and TV) to handle new media (the internet), to take just one example, is a fool’s errand.  But in an atmosphere of knee-jerk hostility to regulation, devising a whole new regulatory framework is almost impossible.  The result is the current patent mess, which cries out for a reform that seems beyond our political capability to enact.

So let me conclude by considering that widespread hostility to regulation.  Every one of us has experienced it: some bureaucratic barrier placed between us and just getting the job done.  “Enough to make me a Republican,” was my exasperated way of responding to HR hurdles in the days when I was trying to hire staff for the Institute that I directed.  It was fairly easy (in almost all cases, if one could look at the thing impartially) to see why a certain regulation was in place, what possible abuse it was trying to guard against, but that didn’t lessen the hassle of having to abide by it.

But it is also worth thinking about just what regulations disallow—or enable.  Our heroic individualists always claim regulations stifle ingenuity, creative thinking, going beyond the current sense of what is thinkable or doable.  Nonsense.  Just as those who talk most loudly about risk are actually risk-averse (businesses make bets when the odds are stacked in their favor), what really irks most people about regulations is that they assault their habitual ways of doing things.  Many of those HR regulations were about ensuring a diverse applicant pool and avoiding the nepotism and unconscious biases that lead to all-white offices.  Similarly, requiring that professors deposit their syllabi with a central office prior to the semester’s start means they must actually plan their classes and inform their students about the course’s content and expectations.  Regulations are ways of intervening in shoddy professional practices, of trying not to let habit rule the roost.

And regulations are also reminders that you, perhaps, are not the best judge of your own performance.  In my corner of the professional world, college professors, there is deep resentment against the introduction of notions like “learning outcomes” and attempts to measure whether those outcomes have been attained by students.  Finding the right metrics is, no doubt, very difficult, but there is absolutely no denying at this late date the well-documented findings that lectures and reading are a poor way to transmit information to today’s students.  But deny those findings my colleagues will.  It was good enough for them—and they are also damned sure their students are learning lots.  How do they know this latter fact?  They can just tell.

External demands that any profession actually demonstrate, actually prove, its worth can only be to the good, in my opinion.  I sure as hell don’t want an unregulated Wall Street.  So how can I, in good faith, then argue for unregulated professors? The give-and-take, the endless jostling and disputes, between the professions and those external to them that try to regulate them is never going to be resolved.  But that process is far preferable to the delusion that the profession will self-regulate.  Just recall that every time a new environmental or economic policy is bruited in our fair land, some industry group will step forward and say: “We will voluntarily adopt this standard.  Just leave it up to us.”  How many times should we fall for that ploy?

As Michael Bérubé puts it in Life as Jamie Knows It, “bioethics is too important to be entrusted to the bioethicists.”  The same goes for every profession.  It has to be kept on its toes by knowing not just that outsiders are watching, but also by knowing that outsiders wield regulatory power to intervene in its practices.  And when such interventions come, let the fight begin.

Michael Bérubé’s Life as Jamie Knows It

I had one of those day-long plane trips (lay-overs, delays, the whole nine yards) earlier this week and used the occasion to read Michael Bérubé’s Life as Jamie Knows It: An Exceptional Child Grows Up (Beacon, 2016).  Jamie, born in 1991 with Down syndrome, was the subject of Bérubé’s earlier account, Life as We Know It (Pantheon, 1996), which told the story of Jamie’s first three and a half years.  Readers of Bérubé’s now-discontinued (alas!) blog had been kept abreast of Jamie’s development since then, but the new book embeds various blog posts within discussions of some of the more vexing issues facing children—and adults—with “intellectual and/or developmental disabilities.”

The topics covered include: the burden on siblings without disabilities; the successes that have followed from the Individuals with Disabilities Education Act (IDEA), which mandated the inclusion of children with intellectual and developmental disabilities in regular classrooms wherever possible; the wonderful accomplishments of the Special Olympics; the lack of similar progress in finding employment for those same children once they become adults and age out of school; the heartlessness of the American “health system,” in which roadblocks to abortion are coupled with a refusal to adequately fund health care, as if disabilities and different kinds of health problems were the individual’s fault and sole responsibility; the important conceptual difference between a “disability” and a “disease” (with a corresponding difference between thinking about amelioration of/accommodations for a disability’s effects as contrasted with looking for a “cure” for a disease); and, finally, the tiresome and depressing need of humans to find ways of discriminating against other humans.  Bérubé’s exasperation breaks through in a sentence that I cherish because it so echoes my own current mood, as I watch the spectacle of the Republicans aiming to take health insurance away from millions: “But I am getting old and crotchety, and increasingly impatient with people who spend their lives justifying inequality and oppression, no matter where on the globe they might happen to live” (196).

I am giving entirely the wrong impression, however, if I am making the book sound like a grim sermon about human failings or a long complaint about the ways Jamie (or others with disabilities) are mistreated.  In many ways, just the opposite.  While those thousands of miles away from Jamie (the politicians and the pundits) spill their distempered bile, Jamie (with some exceptions, of course) meets with kindness, and useful, competent, and loving care from numerous teachers, “paid companions,” classmates, his older brother and his older brother’s friends, and health care professionals.  Human decency breaks out everywhere in this book, as unexceptionally and yet as gratifyingly as Jamie’s own unpredictable—and sometimes astounding—breakthroughs to self-awareness and intellectual mastery. Face-to-face humanity comes off pretty good in this book.

Three further thoughts.  The first is that we don’t know nothin’.  As Bérubé puts it, early on: “[O]ur process of learning that our expectations for Jamie, and for people with Down syndrome, are subject to constant revision—is very possibly the most important, the most consequential thing we can tell about our own journey” (16-17).  We simply do not know what any person with Down syndrome might be capable of doing, for a whole host of reasons: 1. the variety within that catch-all category; 2. the refusal in the past (in our culture, not all cultures) to educate, mainstream, or challenge those with Down syndrome (here’s one place where the Special Olympics has been so salutary); 3. continued medical advances that push the limits; 4. continued (let’s hope) acknowledgment of and attention to the various forms intelligence takes.  (This last point is particularly relevant to me because I have a child who never tests above grade average on any standardized test—making school a perpetual scene of agony—but who has a variety of skills and competencies that play well in non-school settings.)  I am a William James pragmatist to the core, so that returns me to a point made in my previous post about Owen Flanagan and Darwin: we don’t know what’s possible until we’ve tried something in the real, material world.  Don’t let anyone tell you ahead of time that this or that will never happen.  Yes, there are limits; but let’s discover them in and through practice, not have them imposed by theory.

Second, I am very taken, in the various anecdotes about Jamie, with his will to understand.  The world is a puzzle to him—and he wants to know how it works.  And sometimes he is a puzzle to himself—and wants to figure that out as well.  Since this book pointed out to me my lamentable and culpable ignorance about Down syndrome (and, more broadly, other disabilities), I found Jamie’s intellectual curiosity, his need to know, inspiring.  Ignorance is so often a deliberate choice—and Jamie’s hatred of ignorance is a shining virtue.

Finally, to return to this intelligence thing.  This book will make you cry in the good way, with its many tales of humans at their best.  But it also reminded me of how perverse we human creatures are.  At some level, we do know that intelligence, or making money, or besting others in this or that competition, or disdaining others for being less capable than oneself in the fifty-yard dash or the SAT or the beauty contest, is not the road to a fulfilling life or a flourishing society.  But it seems like we can’t help ourselves.  We cling to the wrong priorities willfully and stubbornly.  Bérubé makes a compelling case that, in fact, the world is better off for having people like Jamie in it.  We could even make a plausible Darwinian argument for including people with Down syndrome among the varieties that survive in a Darwinian world.  Why?  Because Jamie reminds us of the real bases for happiness and meaning—and, presumably, happier humans are much more likely to want to keep this whole human species thing toddling along.

Flanagan and Darwin

Flanagan takes seriously “Charles Darwin’s proposal in The Expression of Emotions in Man and Animals (1872) that there are universal emotional expressions that have been naturally selected for because of their contributions to fitness, possibly in ancestral species” (120).  Thus, psychology, at least to some extent, is working from a basis of “human nature,” in the sense of emotional and cognitive capabilities and habits that are built in by way of evolution.

“Ever since Darwin, attention to the evolutionary sources of morality has brought a plausible theoretical grounding to claims about ultimate sources of some moral foundations and sensibilities in natural history” (12).  Presumably, identifying that bedrock will alert us to constraints beyond which it will be practically impossible to go.  We cannot ask of humans (“ought implies can”) what they are incapable of doing.

Flanagan then devotes Chapters 3-5 of his book to considering possible candidates for the basic equipment, starting with the “seeds” proposed by the Confucian philosopher Mengzi (or Mencius, in the Jesuits’ translations of his work) in the fourth century BCE and moving on to a consideration of the “modules” proposed by Jonathan Haidt.  He offers (page 59) a strong set of evidential conditions that would have to be met if “seed” or “module” theory is to be convincing.  These conditions are:

  1. The seed or module would have to be associated with an automatic affective reaction.
  2. The seed or module should ground common sense judgments.
  3. These judgments and affective responses should be widespread, perhaps even universal, among human communities.
  4. The judgments and affective responses should be directly tied to corresponding actions.
  5. There should be a plausible evolutionary explanation for the selection of these judgments and affective responses (generated by the “seeds” or “modules”).

Because they are so specific [Haidt’s five modules are care/harm; fairness/cheating; loyalty/betrayal; authority/subversion; and sanctity (purity)/degradation], the modules strain credulity as actual products of evolution.  (Haidt has recently attracted the ire of leftists by claiming that liberals are deficient in the “loyalty” module and, hence, that their moral views are not as comprehensive as those of conservatives.  Flanagan nobly—and correctly—tells us that Haidt’s views on the political valence of his modules are logically separable from any assessment of the modules themselves.)

It is not just that children seem to have no innate sense of sanctity (purity) or that modern Western societies have fairly relaxed attitudes toward authority.  It is also that the emotional responses to harm (from horror to delight—think of the crowds at executions and lynchings) and to purity (from disgust to the joys of the carnivalesque) run the whole gamut.  The modules do seem useful as ways to analytically designate the different dimensions of morality, but to posit them as in-built products of evolution seems an effort to ground morality through a just-so story.  Plus there are other dimensions of morality we could consider.  For example, the commitment to doing a job correctly; the pleasure and pride taken in competence, in a job well done, and the disapproval of shoddy work.  Do we want to suggest a module for that—and tell an evolutionary story about our dissatisfaction with “good enough” work?

Flanagan—he is a philosopher after all—cares about whether the modules are “real” in the sense of being in-built equipment.  But he is, finally, agnostic on the question, admitting that a pragmatic (these are just useful conceptual tools) rather than a realistic (the modules actually exist) take might be most plausible.

He then falls back on a less specific alternative in an effort to save some remnant of realism.

“A ‘basic equipment’ model says that what you start with is whatever—the kitchen sink, as it were—there is in first nature, and that whatever you end up with in second nature is the emergent product of whatever all the dispositional resources of first nature can yield when mixed with the forces of the environment, history, and culture” (110).  The key words here are “can yield.”  So the quest is still for the constraints, the limits, that “first nature” imposes.

I have two basic beefs with even this less specific way of giving Darwin his due.

The first is that the basic equipment is not necessarily a product of selection.  Flanagan is very careful to avoid Darwinian reductionism.  Lots of things—his favorite example is literacy—are just by-products of abilities that were selected for.

“There was no selection for literacy.  In order to read we utilize brain areas originally selected (not even in our lineage but in ancestors) to track animals.  One way to put the matter is that literacy didn’t initially matter one iota for fitness.  It couldn’t have.  We were not literate for almost the entire history of our species” (25).

My problem here: what criteria are we to use for deciding which human capabilities are the product of evolution and which are not?  It seems the only test is whether we can tell a plausible story about a trait’s being very, very old and being connected to the passing on of one’s genes.  We all know too well what kinds of ingenious stories get told to pull something into the evolutionary camp.

Let’s take three problematic issues.  1. Myopia.  Surely it’s ancient, and, presumably, we have to say that evolution is indifferent to it—and then tell a story to explain that indifference, because it is hard to explain how myopia contributes to fitness.  2. War.  Every human society has an experience of war.  Yet war is most dangerous for precisely the society members—young men—who are in a vital position for transmitting their genes.  From an evolutionary perspective, war seems particularly perverse.  So, since the simple fitness tale fails in this case, all kinds of mental gymnastics are called upon to explain the phenomenon, to save the appearances.  3. Homosexuality.  Another puzzler when it comes to any straightforward fitness explanation.

My point is simply that some apparently widespread (maybe even universal) human traits lend themselves to Darwinian explanation and others do not.  Do we really want to claim that only the Darwinian ones are really human nature and the others are not?  And what would be the basis of such a claim?

The second issue is central to Flanagan’s work.  Namely, one way to judge a trait is in relation to fitness; another way is in relation to “flourishing.”  Flanagan marks “[t]he distinction between a trait that is an adaptation in the fitness-enhancing sense(s) and one that is adaptive, functional, conducive to happiness, flourishing, and what is different still, good or right; or in a thicker idiom still, what is compassionate, just, fair, loving, faithful, patient, kind, and generous” (83).

Flanagan’s basic point is that we don’t have to settle for what evolution dishes out to us.  Rather, morality entails our judging our basic equipment—and working to change it where it violates our sense of “flourishing” or our sense of “right and wrong” or “good and evil.”  And Flanagan stresses throughout the plasticity of humans, the many varieties of feeling and behavior of which we have proven capable, as evidenced by individual and cultural differences.

Allow for moral judgment of what nature provides and for plasticity and I really don’t see what’s left of Darwinism.  What does it matter if a trait is evolutionarily produced or a by-product of in-built capacities or a cultural product?  I have already suggested that I find it very difficult to sort traits out into those different bins.  Now I am saying, why would it matter? All the traits—no matter their origin—are to be subjected to our judgments about their morality and their desirability.  And we will work to reform, alter, revise, and adapt any trait in response to our judgments.  The origin of the trait will make no difference to how we set about to work upon it.

But, comes the objection, the chances for successful revision will be different depending on the trait’s origin.  That’s a species of what I consider “false necessity.”  Why think we know ahead of time, theoretically as it were, which traits are revisable and which are not?  The proof is in the pudding.  Only practice will teach us our limits.  It is a bad mistake to let someone tell you ahead of time what you are capable of and what you are not capable of.  The tyranny of low expectations.  Morality, after all, is always aspirational.  It always paints a picture of a better us—more loving, more generous, more caring than we often manage to be.  Why take an a priori pessimistic stance about our capabilities?

Not surprisingly, I guess, since this question is at the heart of any moral philosophy, the issue is one of determinism versus free will.  I resist Darwinism (especially in many of its crudely fundamentalist forms) precisely because it is deterministic, trying to legislate that certain things just can’t be done, or apologizing for certain kinds of behavior (male sexual aggression, for one) as inevitable and thus to be shrugged off.  Morality would hold us to a higher standard—and refuse to capitulate to the notion that those standards of flourishing or the right and good are “unrealistic.”