Normalising dystopia

by Yiorgos Savvinidis

Source: in-cyprus.philenews.com

I read Palantir’s much-discussed 22-point manifesto on X, and let’s just say the uproar it caused strikes me as entirely predictable, given that our minds are still struggling to digest the fact that we are living through the age of absolute cynicism and brazen audacity.

In this document, Palantir saw fit to “summarise” the much-talked-about book The Technological Republic, written by its own CEO, Alex Karp, together with the company’s head of corporate affairs, Nicolas Zamiska.

We didn’t need the techno-feudalists to confirm the Foucauldian intuition that knowledge and power are not separate concepts. But there is something almost… disarming about this bout of candour from Palantir, candour among rogues, admittedly. In an era when power prefers to hide behind words like “innovation,” “platform,” and “tools,” this controversial data-analytics colossus, which holds contracts with governments, armed forces, and corporations across the world that do not exactly cover themselves in glory, has chosen to step out of the shadows and tell us plainly what it believes. And what it believes is that we have entered an era in which technology is officially a theatre of war.

It feels like being inside a science fiction film where the lighting shifts almost imperceptibly, and you suddenly realise you are adrift in the heart of a dystopia. We have watched so many of these films: 1984, Blade Runner, Brazil, RoboCop, The Terminator, Gattaca, Minority Report, The Matrix, and more recently Snowpiercer. Well, reality has begun to make all of them look quaint. The ones who live among us, as in Carpenter's They Live, no longer need to hide, and we can see them now without the special glasses.

Palantir states with disarming frankness that its goal is to cement the conviction that the era of “soft power” is over, and that in the third decade of the twenty-first century, democracy, if it is to survive, must arm itself not only with high-tech killing machines but with software. In other words, artificial intelligence is the nuclear bomb of our time.

This is not a neutral technological intervention. It is an intensely political new narrative about democracy, one in which Silicon Valley must abandon its role as a "provider of tools" and be formally conscripted as an active arm of state power. Today's Cold War is no longer defined by nuclear deterrence; the doctrine of MAD, Mutually Assured Destruction, now finds its equivalent logic in weapons of artificial intelligence.

I might be tempted to think it was the AI itself that had already seized the controls and was trying to warn us, or present us with a fait accompli, were I not convinced that only a human being is capable of such bestial cynicism. Palantir’s narrative rests on the assumption that the world has entered a phase of inevitable, irreversible competition, in which the development of “dark” AI applications is the only road available. The only question that remains, then, is who gets there first, which is simply the management of a predetermined condition. As if History were something that flows on its own, independently of us.

The Palantir Doctrine amounts to a coherent political theory, one that shifts the centre of gravity away from democratic pluralism towards the imperatives of cohesion, discipline, and power. What is striking is an openly expressed suspicion of pluralism itself: the conviction that a multiplicity of views and values, in a competitive world, is not a source of strength but of weakness.

The deeper problem is that the concentration of AI “weapons” has been preceded by a hyper-concentration of data and a capacity to transform an incomprehensible volume of scattered information into a coherent system of knowledge. That is where real power is now produced. A tool for classifying, connecting, and exploiting data is centralising and formidably powerful. Whoever controls it can predict, intervene, and direct.

The infrastructure that harnesses AI emerges from a dense web of specific power relationships, in which states and corporations, presenting themselves as “solution providers,” are almost inseparably intertwined, producing a contemporary form of techno-cronyism that makes its predecessors look like minor misdemeanours.

Let me remind you: you are not reading a work of science fiction or a fringe conspiracy theory. This technology has already been deployed, in military systems built on data analysis and algorithms used to identify and select targets. That changes the very nature of violence. It makes it faster, more systematic, more efficient, and less dependent on direct human judgement. Unlike nuclear deterrence, which rested on the premise that actually using the weapons was almost unthinkable, these new tools can be used continuously, at smaller scales, without the same political cost. That does not lead to stability. It leads to a more diffuse and more frequent use of automated violence.

There is another dimension worth noting: beyond striking targets, technology is being used relentlessly, through algorithms, platforms, deepfakes, and bots, to produce, manage, and circulate information, to manufacture consent, and to shape consciousness and perception, along with our sense of what we consider real and how it is interpreted. What is being sold to us here, in effect, is a reorganisation of democracy around the concept of permanent threat.

The real stakes, then, are not simply who will possess the most advanced AI systems and to what end, as point five of Palantir’s manifesto would have it. Equally important is who frames the context within which those systems acquire meaning and “purpose.”

The concentration of power flowing through private infrastructure and opaque mechanisms cannot be a precondition of democracy. In practice, it undermines democracy itself. The contradiction is so glaring it leaves no room for doubt. Democracy will not be saved by adopting the logic of a siege. Or rather: it won’t be worth saving if it does.