In the titanic dispute between humans and robots—a confrontation that never ceases to astonish us and that grows daily with new and unprecedented moral dilemmas—what stands out is the need to assimilate robots to human beings, both in appearance and in substance. Thus, the fact that they possess features that, more or less awkwardly, recall the forms of the human body seems to serve the purpose of legitimizing the idea that, deep down, something very similar to us is taking shape there.
Even the use of the term intelligence—followed by artificial—does not denote an analogical use, as though the word were being improperly extended, or applied by mere resemblance, from its typical human domain to something else. On the contrary, it is as if “intelligence” in its artificial form were revealing potentials that remain as yet unexpressed in humans. This does not seem to happen, however, when the same term is paired with plant. In that case, the use of intelligence would be accompanied—if we could use our hands—by air quotes, as though signaling that intelligence is being invoked only “so to speak,” and this time indeed in an improper sense.
This discrepancy not only shows a certain difficulty in seeing plants for what they are (plant blindness) (Wandersee & Schussler 1999), but, above all, reveals an unspoken tendency to locate what is typically human more in the artificial than in the biological. And this is not necessarily meant to invoke any sort of “Promethean shame.” Rather, it reveals the human need to recognize a shared matrix between ourselves and what we have created, rather than between ourselves and that to which we actually belong: the living.
The difficulty in recognizing the animated body as an essential dimension of the human seems evident, despite the many attempts made in recent years to elevate emotions and feelings to the core of the human (Nussbaum 2001). But it seems to me that this intense, daily hand-to-hand encounter between humans and AI, humans and robots, reveals an attempt to anesthetize the corporeal, to overlook or subordinate the biological dimension that is so characteristic of human intelligence.
On the one hand, feelings are celebrated; on the other, the idea of an analytical, computational, aseptic, and disembodied form of intelligence is continually reinforced. It is as if one were first attempting to de-corporealize intelligence so that artificial intelligence, too, may be regarded as fully human; and then, conversely, attempting to make the human increasingly less biological and ever more aligned with artificial parameters, so that the initial assumption may ultimately be confirmed. In other words, one first provides a version of intelligence that is almost superimposable on artificial intelligence (thus reducing the gap between human and machine), and then justifies this overlap by factually “configuring” the human ever more closely to artificial parameters.
This provisional conclusion seems confirmed by the recent case involving YouTube. Music creators and YouTubers Rick Beato and Rhett Schull (as reported in Repubblica, 24 November 2025) noticed, upon reviewing their posted videos, that the features of their faces appeared smoothed, artificial, flattened. In fact, they looked like genuine artifacts. In particular, their ears, and generally anything that could disrupt regularity and youthfulness—and thus a visually pleasing viewing experience—had been subjected to “quality enhancement.” Only after they complained in a video of their own did YouTube admit it was running tests to remove visual and audio “impurities” from Shorts, in order to render them cleaner. As if faces could be treated as imperfections to be brought back into alignment with a system. As Samuel Woolley, professor at the University of Pittsburgh, stated: “They are training the audience to perceive AI as the norm.” Ultimately, as anticipated, artificial intelligence is assimilated to human intelligence—while omitting its corporeal grounding—even as, if that were not enough, humans are being trained to perceive as real the faces and bodies reconfigured by AI.
But why should humans aspire to such a reconfiguration? And here I am not referring to CEOs or Big Tech companies, who obviously have plenty of reasons to do so. I mean humans themselves, who for the first time find themselves facing an “other” that is neither animal nor human, but an internal artifact—a product of their own intelligence. Why, when confronted with such artifacts of our own making, do we tend to “fall under their spell,” engaging in a mimetic process usually activated toward another subject: a subject who serves as a model for one’s own desire (Girard 1961)? Why do we try to be like them, or at least conform to an image of ourselves more compatible with that of the “other”?
For the first time, humans feel displaced from a position of dominance not by an external discovery such as the cosmos, nature, or the unconscious, but by their own creation. This fourth narcissistic wound, in the Freudian sense (Malabou 2017), manifests not as a loss of cosmic, biological, or psychological centrality, but as a cognitive loss. Faced with this new and profound wound, humans can react in two opposing ways. On the one hand, by governing, limiting, and regulating AI—thus asserting human primacy—and, in doing so, somehow reproducing the very logic of control: technology as a territory to be managed in order to maintain privilege. On the other hand, paradoxically, humans auto-colonize by modeling themselves according to digital parameters: filtered faces, optimized voices, mental processes organized like algorithms, cognitive performances measured by computational criteria (Berry 2025). Humans seek to resemble machines precisely while attempting to dominate them, as if proximity to the artificial could mitigate the threat it poses. From this perspective, the ongoing symbolic taming process can be understood: making humans more “artificial” in order to keep artificiality central as an extension of the self.
Ultimately, it is as if humans constantly distrust themselves, doubting the power of their own origins: of life as the source of artifice. As if we were perpetually victims, so to speak, of the “chiasmus of Horace,” feeling constantly at the mercy of, and threatened by, the conquered—in this case, our very own creation (Horace, Epistles II, 1). Adapting Horace, we might say: Qui fecit, ab eo victus est—the maker is conquered by his own creation. Hence the need for counterfeit, artificial faces that make less evident the disturbing power of life, of the lush and uncontrollable matter from which we come.
The post The Faces of AI appeared first on The Philosophical Salon.