Henry Farrell remains my go-to guy in trying to wrap my head around AI. His latest post on that topic can be found here:
https://mail.google.com/mail/u/0/#search/henry+farrell/FMfcgzQdzcrgnzCDNhCcmXHCZTzKqBrP
Two caveats. The first is that Farrell has zeroed in on large language models in his various posts about AI, so what he has to say may not be relevant if there are other modes of AI functioning. The second caveat follows from the first. I think that I understand LLMs. But not only may I be deluded on that score, I may also miss the reality of AI entirely by assimilating it to LLMs.
That said, the issue that I find myself most fixated on comes down to the word “generative.” Hannah Arendt appropriated the term “natality” from Augustine. She used the term to refer 1) to the way each birth of a human being brought something new into the world (thus increasing the world’s plurality). We can certainly say that Arendt was too humanist; there are other births besides human ones and they, too, add to the world’s plurality. (Recall that “plurality” is a fundamental concept—and value—for Arendt.)
However, 2) “natality” also indexes the way in which action is creative. Action initiates and serves as a base cause of the arrival of the “new.” Novelty and action go hand-in-hand for Arendt; it is the way in which action is unpredictable that is precious to her—and cements the connection between action and freedom in her work. Action is not totally unconstrained, but its constant ability to surprise us, and the ways in which we value creative and innovative responses to given situations, marks a special (and it seems for Arendt unique) human talent.
I have written before about the collapsing distinction between instinctual and deliberate (consciously chosen) behavior. The line between human and animal behavior gets fuzzier and fuzzier with everything we learn about animals and about consciousness. And there’s more evidence all the time that trees are much more active and conscious than was previously thought. In short, humanism as a theory of an unbridgeable, qualitative difference between humans and other living beings has become less and less tenable.
Of course, “humanism” is a term with many meanings. I am using it here to designate the belief that humans are unique among the furniture of the world. That belief often goes hand-in-hand with the additional beliefs that humans are superior to everything else that exists and that humans are entitled to “dominion” over everything else that exists. (The notion of “dominion” has one vastly influential articulation in Genesis. I don’t think the humanist claim to uniqueness necessarily entails assertions of superiority and/or dominion.)
In our current moment, the desire to distinguish between the human and the non-human has focused more intensely on machines, not animals. If I am reading Farrell correctly, he has homed in on what might seem a notable lacuna in Arendt’s theory of action: desire. What motivates action? What does the agent strive to accomplish? In Farrell’s post, this question brings him to the concept of “intentionality.” Agents—whether human, animal, or plant—act in order to accomplish something. In the strictest Darwinian terms, they act to accommodate themselves to their environment (which itself is in constant flux) or to alter the environment to better suit their needs. (That environment includes other beings as well as less intentional forces such as the weather.) I am connecting that concern to the question of what being “generative” means.
Can a machine want anything? Can it initiate something out of its own needs/desires? Just how “generative” is AI going to prove to be? Think of a rock at the top of a hill. It sits there until some external force pushes it. Once pushed, it will, on its own momentum, roll downhill and (perhaps) do some surprising, unpredictable things. But it needs the initial push. Yes, it generates consequences, but only after something external to it begins (natality) the process.
Isn’t AI the same? Doesn’t it just sit there until it is given the starting prompt? I read somewhere the claim from a tech guy that “I haven’t met a program or computer yet that wanted to tell me something.” The machine doesn’t have anything it wants or needs to communicate. It will, of course, have lots to say if prompted to do so. But it will remain silent in the absence of that prompt.
And when it does speak, it will not be trying to accomplish any particular thing. It is indifferent to what it produces—and will alter its product in relation to further prompts and to the desires of the prompter. It is that indifference to (or, put more drastically, its ignorance of) the possible consequences of what it generates that underlies (it seems to me) the most prevalent fears expressed about AI. The threat is not that AI will develop its own desires and act upon them. It is that AI will mindlessly follow a program out to its logical (?) conclusions without any sense of how destructive it will be to go down that path. Mindlessness vs. mindfulness. The machine doesn’t intend anything; it just processes its data into new combinations in response to a prompt, following the algorithms that do the processing work.
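To make that contrast concrete, here is a deliberately crude sketch in Python, entirely my own toy example and not a description of any real system: a “model” reduced to a hand-written lookup table of what tends to follow what. Everything in it is invented for illustration, but it makes the point visible. The function has no goal and no reason to run; like the rock, it does nothing until something outside it supplies the push, in this case a call with a prompt.

```python
import random

# A toy stand-in for a language model: a table of "what tends to follow
# what." The words here are invented for illustration; a real LLM has
# billions of learned parameters, not a hand-written dictionary.
NEXT_WORDS = {
    "the": ["machine", "prompt", "bard"],
    "machine": ["answers", "waits"],
    "prompt": ["arrives"],
}

def continue_text(prompt: str, max_words: int = 5) -> str:
    """Extend the prompt by repeatedly picking a plausible next word.

    Note what is absent: no goal, no preference among outcomes, no reason
    to begin. Nothing happens until someone calls this function.
    """
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORDS.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The push has to come from outside, like the rock at the top of the hill.
print(continue_text("the"))
```

Scaled up by many orders of magnitude, that is roughly the posture I am attributing to the machine: responsive, combinatorial, and inert until prompted.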
I may very well have the wrong end of the stick here. It does seem to me that those who believe the distinction between man and machine is fated to go the way of the now collapsed distinction between humans and animals argue in Cartesian fashion. Descartes said that animals are machines—and he made humans an exception to that rule. The neo-Cartesians deny the exception. They make humans just another of the animals that are best understood as machines. Thus, in thinking through the categories of human and machine, they do not try to claim that the machine will develop desires and intentionality. Instead, they argue that humans are already (and always) machines, and that our folk psychology of desires, intentions, and consciousness is just a mistake. The human mind is simply a data-processing entity, following its own algorithms. And as a data-processing machine, the human mind is vastly inferior to what our computers can do. Match human intelligence against AI—and AI will win most times right now, and every time in the near future.
The machine will achieve “super-intelligence,” something humans are incapable of.
Perhaps, then, talk of desire and intentions, of wanting to communicate something, is only the last refuge of a desperate humanism, trying to hold on to a dubious distinction between humans and other beings in the world. We can allow for differences (how humans organize their relations to one another is different from how swans do), but not for some hierarchy of beings, nor for some qualitative distinction between human cognitive functions and those functions in other beings. I have been convinced by the arguments (and the new empirical discoveries on which they are based) that collapse any such distinction between humans and animals. Humans are not superior to the other animals and are certainly not radically different from them as cognitive processors. Humans are as rational—and as irrational—as all the other animals. It is simply not true that the animals are instinctual beings and humans are conscious, reasoning ones. Both humans and animals (in my view, but this is not universally agreed on) rely on both instinctual and more conscious bases for action.
Since I believe consciousness is not epiphenomenal, but actually exists as a function that enables deliberate choice and strategic action aiming toward the satisfaction of desire, the question does (it seems to me) become how to think about non-conscious intelligence. Despite cinematic representations of computers that anthropomorphize them, I take it that no one is claiming the machines are conscious. As I have already said, the arguments (as far as I can tell) go in the opposite direction: that is, humans don’t have consciousness, not that machines do have it.
In sum, it seems that consciousness is where humanism is making its stand. Maybe its last stand. Which returns me to what I have gleaned from all the work on consciousness that I have read in the past two years. The function of consciousness is primarily one of evaluation. What consciousness provides is an ability to assess a situation and 1) to consider options in how to respond to and proceed within that situation and 2) to do an internal evaluation of one’s various desires, to see which one (or ones) to prioritize in this moment. I think machines follow an utterly different, noncomparable path toward what they produce. The distinction between human and machine seems firm to me. Which is not to say that humans are superior in every way to machines. Obviously that is not the case. There are many things machines can do that humans cannot. But those things are things humans want done—and devise their machines to accomplish. I don’t think the machines want anything at all.
One final complication. The Farrell post I have cited does ponder a case where human and LLM processing seem not just comparable but fairly similar. Farrell looks to the famous work of Milman Parry and Albert Lord on the bards who perform long epic poems in what appear to be mind-boggling feats of improvisation. Farrell sees this bardic practice as shuffling through large, pre-existing bits of language to produce in the moment a coherent, comprehensible utterance. The analogy to LLMs seems clear. What, of course, still remains mysterious (but may become less so in the future) is the algorithm (if that is even the right term) the bards deploy. Like the chess master, the bard has a storage bank of remembered moves/phrases and is able to pick out one element from that bank very quickly. How the feat is accomplished remains unexplained right now, but it could be more similar than not to how an LLM performs its comparable feat. But Farrell does not think this particular breakdown in the distinction between human and machine undermines the objection that machines do not have intentions and (my addition) do not have autonomous desires. Does the machine want to learn? Does the machine want to correct its mistakes? Only if humans tell it to.
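For what it is worth, here is an equally crude sketch of the Parry/Lord idea as I understand it, in the same toy Python style. The phrase bank and the “slot” names are my own inventions, not Farrell’s examples and certainly not a claim about how bards (or chess masters, or LLMs) actually manage the feat. It only shows how recombining a pre-existing stock of formulas can look generative while wanting nothing.

```python
import random

# A toy illustration of oral-formulaic composition: a memorized stock of
# formulas, recombined on demand to fill the slots of a line. The bank and
# the slot names are invented; they are not drawn from Parry, Lord, or Farrell.
FORMULA_BANK = {
    "dawn": ["when rosy-fingered Dawn appeared"],
    "hero_epithet": ["swift-footed Achilles", "resourceful Odysseus"],
    "speech_opening": ["and he spoke winged words"],
}

def improvise_line(slots: list[str]) -> str:
    """Fill each slot with a remembered formula, picked in the moment.

    The apparent creativity is recombination of pre-existing material,
    produced only because someone asked for a line.
    """
    return ", ".join(random.choice(FORMULA_BANK[slot]) for slot in slots)

print(improvise_line(["dawn", "hero_epithet", "speech_opening"]))
```

Whether the bard’s actual procedure resembles this at all is exactly the open question Farrell flags; the sketch marks the analogy, not the mechanism.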