If you want to hide a problem, there is no better place than a generalisation. And perhaps nowhere do generalisations find such an easy home as in the field of so-called “artificial intelligence” — starting with the name itself. AI does not refer to a single technology, but to a heterogeneous set of tools and applications: from large language models to autonomous weapons systems, from deep-learning-based retinal image analysis for early diagnosis of disease to self-driving cars. Each of these systems raises profoundly different political, social and ethical questions.

This internal diversity within what we call “AI” is well understood by experts in the field. As Michael Wooldridge, Professor of Foundations of Artificial Intelligence at the University of Oxford, has observed, most subfields of AI have their own specialist research communities — to the extent that, in some cases, “the machine learning community simply don’t define themselves as being part of ‘AI’ at all”.

The issue I want to address here concerns two recurring generalisations encouraged by the uncritical adoption of the term “AI”: first, extending a brain-based model of intelligence to the whole of human experience; second, applying the underlying logic of machine learning to social practices as a whole.

There is little doubt that deep learning — a form of statistical computation inspired by the neural networks of the human brain — represents the most significant breakthrough of the current generation of “AI”. Beyond the neurological metaphor, however, this innovation rests primarily on a mathematical capacity: the ability to process vast amounts of data to optimise statistical learning. The underlying logic of machine learning, and of its deep-learning variant, therefore remains the same: statistical optimisation with respect to externally defined goals — for example, in a Google search, identifying the most “relevant” result in the shortest possible time.
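To make this concrete, here is a deliberately minimal sketch in Python, using invented toy numbers rather than any real system, of what statistical optimisation with respect to an externally defined goal amounts to: a single weight is adjusted, step by step, purely to shrink a numerical error measure that was chosen in advance.

```python
# Minimal sketch of statistical optimisation (hypothetical toy data, not a real system).
# A single weight w is tuned so that predictions w * x track observed values y.
# The "goal" is nothing more than the externally chosen error measure below.

xs = [1.0, 2.0, 3.0, 4.0]   # toy inputs
ys = [2.1, 3.9, 6.2, 7.8]   # toy observed outputs

w = 0.0                     # the parameter to be learned
learning_rate = 0.01

for step in range(1000):
    # Mean squared error is the externally defined objective;
    # its gradient indicates how to nudge w to reduce that error.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(round(w, 3))          # converges near 2.0: statistically optimal, indifferent to meaning
```

Deep learning scales this same loop up to billions of parameters and far richer data, but the underlying move, reducing a pre-specified error, is unchanged.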

This computational power is extremely effective in domains governed by an essentially statistical logic, such as medical diagnostics, where the rapid identification of disease indicators enables more timely intervention. The problem arises when this logic is extended indiscriminately across the whole of social life — and especially into education. Here, the failure to distinguish between different types and applications of what we call “AI” risks producing an inappropriate statistical reduction of practices that cannot be reduced to simple optimisation functions.

That the human brain is not a “cognitive box” through which purely computational processes run is well established in psychology and neuroscience. The notion of mind that informs many theories in psychology and evolutionary paleoanthropology is far broader than that of the brain alone. It presupposes a continuous relationship between objects, practices and cognitive development. As Andy Clark’s extended mind thesis puts it, “who we are is in large part a function of the webs of surrounding structure”.

The French paleoanthropologist André Leroi-Gourhan was among the first to develop a theory of human evolution grounded not only in cognition, but in culture. In Gesture and Speech, he showed how human evolution emerges from relational processes linking individual actions, collective practices and symbolic tools such as language. Stone knapping, for example, far from being a simple repetitive activity, helped free the face and vocal apparatus for the development of speech. Over time, the socialisation of such practices enabled the creation of shared memories — the externalisation of gestures — which language then transmitted across generations.

The central insight of Leroi-Gourhan’s theory is that human development is co-determined by the tools we use and the practices they make possible. And this brings us to the second issue.

Today, thanks to so-called generative AI, students can complete university assignments — and even entire courses of study — simply by formulating a few prompts. For the first time in history, it has become possible to be a student without practising study itself. To use a sporting metaphor, this would be like winning Grand Slam tournaments without ever stepping onto the court: a paradox that would strip the tennis player of the very practice that defines them.

We often hear that students must be prepared for the labour market, and therefore for the “critical” use of these tools. What is frequently overlooked, however, is that the real power of “genAI” lies precisely in its ability to understand natural language — the language we use every day. This marks a crucial difference from software such as Excel or MATLAB, which require users to learn and practise specific coding languages. It is no coincidence that labour markets increasingly speak of a future “shortage” of soft skills such as critical thinking and creativity — another widely invoked but rarely defined cliché.

If we follow the arguments of scholars such as Clark, Leroi-Gourhan and, more recently, Lambros Malafouris, we find that creativity and critical thinking are not innate or abstract traits, but reflective practices. In educational settings, these forms of thinking grow out of sustained immersive reading, engagement with the otherness of arguments, debate and intellectual friction. This has been the case for centuries.

Are we really prepared, today, to normalise the logic of statistical optimisation and efficiency — maximum output with minimum effort — in a human domain where effort and sacrifice for understanding are the only true springboards to new and creative ideas?

Those responsible for public policy and university governance should reflect far more carefully on how to avoid the two generalisations discussed here — the brainisation of intelligence and the statistical reduction of knowledge — if they do not want critical thinking to become so scarce that it ultimately turns into something we can no longer even recognise.
