Neo-Nazi group White Vanguard pose for a photograph

The other day someone forwarded me an image of a tiny UK-based neo-Nazi group, White Vanguard, posing for the camera. Twelve people dressed all in black stood behind a banner with Nazi iconography, looking intimidating. It seemed a concerningly large gathering, considering how extreme the group is.

But take a closer look, my contact said. I did, and lots of things about it began to look very strange. A man standing near the back seemed to have only one leg. Why did the text on the banner look so odd? Another man’s hip blended into that of his comrade. Somebody had a spare arm. Had the group used AI to exaggerate its membership? A dozen neo-Nazis seemed to become six weirdos and their imaginary friends.

I could think of plausible explanations for some of what seemed off. There had been some unusual editing, that’s for sure, but some of it could have been the effect of using Photoshop to obscure people’s identities. One online AI-image detector told me it was “100%” likely to be an image doctored with AI. Another told me it wasn’t. I got lost in the sauce, squinting at my screen. I concluded that we’ve reached an uncomfortable point where it’s becoming almost impossible to tell whether a carefully made static image is substantially AI-generated or not without significant time and resources.

Maybe this group is smaller than it would like us to believe. But the illusion of far-right power is a wider problem.

A few weeks ago the social media site X started making the countries of origin of accounts visible. There was embarrassment as it turned out that some neo-Nazi accounts were being run from places whose inhabitants most neo-Nazis would regard as very much subhuman. Anti-migration accounts were run from places more likely to be sources of migration than its destination. Many on the racist far right love Israel, but some Israel-booster accounts turned out to be run by enthusiastic Indians.

All queasily funny; all very embarrassing; all demonstrating the strange ability of the internet to fake power.

This synthetic neo-Nazism is a strange echo of the general story of AI: whatever we think makes us truly human has, slowly, unevenly, been shown to be possible to simulate using a sufficiently large number of numbers in a matrix on the internet. Whatever neo-Nazis think makes them truly superior has been shown to be entirely possible to simulate from the Global South, given a sufficiently large amount of training data.

There are two great mechanisms of the internet at play here. The first is what we can call the great arbitrage mechanism: the internet has collapsed the cost of getting labour from around the world. Digital tasks can be done from anywhere with a connection.

Because the computing is done somewhere else when you speak to ChatGPT, you can use it even on a relatively slow connection. You don’t even need to be good at English: the language skills embedded in the model are now everyone’s to use, although this global arbitrage also produces strange specificities in the language – see the massive rise in the global use of “delve”, formerly most common in Nigerian English. Because Nigeria is one of the broadly Anglophone countries with the worst-paid workers, the process known as Reinforcement Learning from Human Feedback (RLHF) is cheaper to do there, and so the preferences of specifically Nigerian English speakers are encoded into the models.

And this same arbitrage of labour costs means that simply getting attention online can produce a decent wage for people in poorer countries. The amounts doled out by X for engagement from premium subscribers might not be much if you’re living in the UK, but they can be life-changing elsewhere.

But how to get attention? This is where the second great mechanism of the internet comes into play: its grand moral inversion machine. The more outrageous the content, the further the internet’s viral mechanisms will propel it. That induces a strange folding of the moral landscape, with the fringes brought back towards the centre of our attention. And nothing has been more unacceptable in much of the world since the end of the Second World War than Nazism.

Neither mechanism is absolute, but together they get you digital workers in the Global South cosplaying as neo-Nazis for Elon bucks.

Neo-Nazism has become a lucrative commodity, one whose circulation pollutes and distorts the wider economic conditions we all have to live in, much as a toxic spill from a profitable factory might kill the workers it relies on.

You have probably come across some of this slop: distorted crime statistics accompanied by racist caricatures; fantastical nonsense about “Hyperborea”, a mystical utopia; video stills taken out of context from the far right’s latest cause célèbre; phonk remixes of supercuts of Hitler speeches; or the now-tired, endless rearrangements of chad/wojak memes.

But what are the actual powers of such images? Who is actually influenced by the spread of such dross?

It was pretty typical about a decade ago for the alt-right to speak in terms of “meme magic” – to ascribe, quasi-ironically, mythical powers to the circulation of images. The idea hasn’t gone away, and since the alt-right’s heyday the technologies for producing images have improved hugely. The neo-Nazi creator of The Will Stancil Show was still following this line last month: “I’m memeing myself into power,” she said in a podcast interview, “and I’m memeing national socialism into the public.” The show, a cartoon parody of the much-maligned liberal lawyer Will Stancil – who was previously the target of sexually explicit threats from Grok, X’s AI model – is stuffed full of racist stereotypes. It looks like a cartoon, but was made entirely with Sora, OpenAI’s video-generation tool. Of course it looks a bit clunky, but the cost is vastly lower than that of standard animation.

But outside of this self-mythologisation, do these images actually matter? We can see their importance if we take a broader view of the structure of online movements. Think of the far right as a stack of things, each an important layer, but none sufficient on its own.

Images are in the stack, somewhere near the bottom – important, but not autonomous. Below them lie the visceral feelings of anger and inadequacy that neoliberalism both produces and uses as a fuel. And one level above the images are the forms of political activity that lead to a movement being formed. Images are an essential intermediary: they glue the feelings together and make them articulable – and then allow movements to cohere around the ideas expressed within them.

They’re not autonomous or powerful alone, but images are an essential part of the stack – no images, no cultural power, and the whole thing begins to melt like a steel beam covered in burning jet fuel.

So in this overall context, what has changed about the images of the far right?

All the above examples are a long way from the classical forms of far-right aesthetics: think, for example, of the 1935 Leni Riefenstahl film Triumph of the Will, which depicted the 1934 Nazi party congress in Nuremberg. The way these images were distributed has changed as much as their content, and the two are linked. You had to go to the cinema to see Triumph of the Will, where it pushed images of severe, quasi-spiritual order and vitality. By contrast, the astonishing rise of the alt-right was mediated by a viral dynamic of sharing and outrage, and its images were consequently more chaotic and less orderly.

But this simulated online power rings strangely hollow for the far right. In the internet’s hall of one-way mirrors, some of the mirrors are convex – they show things distorted in strange ways, and make some beliefs seem vastly more prominent than they are in real life (if we can ever talk about such a thing).

In short, online, power is fakeable. Hacking, trolling, and swarming are the tactics of actors who have no right to be as powerful as they are. On the alt-right, this asymmetric potency lent the movement a sense of insurgency online – it seemed, for a moment, to be almost identical with the internet itself. But it wasn’t enough. Despite the internet – an extension and blurring of public and private space – there remained a pervasive feeling on the far right that the street, and the ability to exert a will within it, imbued a movement with legitimacy.

One important moment of this return to the street came in 2017, when the alt-right sicced itself on Charlottesville, Virginia, for the Unite the Right rally. Online, the alt-right had been able to fool itself and others into believing it was intellectually cohesive, powerful, and slick. But it wasn’t. In Virginia the movement instead discovered that, actually, it looked like shit. Far from being epic and based, it was in fact heterogeneous to the point of chaos, ugly, and disorderly. Nazi flags, random assortments of kneepads and shields, death’s heads, yelling people pouring milk on their own faces to counteract tear gas, and the murder of antifascist protester Heather Heyer.

The march was a moment of disarray for the alt-right, not just because it lost one of its main backers in the White House, Steve Bannon, but because it became obvious to itself that it was not composed of the Übermenschen it had imagined.

But AI lets you circumvent the problem of in-person power almost entirely. When you need to conjure a crowd, you can. Sure, for now everyone might look the same and the physics of the videos might be properly lysergic, but the models are improving all the time – you have to look quite closely to see the strangeness of the static images.

We might ask if AI is, in general, “right wing”. The answer, in terms of the stated political beliefs of AI agents, when you can elicit them, seems to be largely “no”. Mostly, they’re basically left-liberals, which is hardly surprising given that most of their system prompts are written by people who more or less have those beliefs, although X’s Grok is an obvious outlier.

AI certainly can be used to produce propaganda for the far right – stuffed, as it is, with the training data that facilitates the tedious repetitions of the same basic ideas we’ve become accustomed to, even as the very ease of reproducing those ideas undercuts all their claims to insight and racial uniqueness.

What does our new era of AI image production do for the far right? AI can conjure pasts that didn’t happen and worlds that couldn’t have been (either because the people in them have slightly too many fingers, or because the structural conditions of global capitalism don’t allow for the kind of musty class collaboration being imagined). It can also be used to simulate the appearance of an unbelievable crimewave – the shaky cameras of apparently live video covering over the strangeness of the movements and the implausibility of the set-ups. Or, in the case of Grok, to directly produce tracts of vitriol.

It might also soon allow the far right to produce the illusion of a cohesive mass movement on the streets where, in fact, only a much less impressive one has appeared in meatspace.

There are, of course, real far-right movements out there, and they are growing larger – forming the kinds of connections that sustain movements for the long haul. But they are also disappointing even to their participants. Their events are mostly quite dull to actually attend, full of aggy and sometimes coked-up people who get tiresome fast.

Imaginary people are always better than real ones. Imaginary movements, better than real ones. And the imaginary nation is the best of all. Increasingly sophisticated AI might allow far-right propaganda to achieve what it has always aimed at – the creation of the synthetic unity of an imagined people. Just so long as their limbs don’t blend into each other in weird ways.
