
A few years ago, AI still felt like sci-fi. Now it’s just… Tuesday.
Models can pass exams, write code, solve your kid’s maths homework, and spit out film-quality images and video from a two-line prompt. The glossy AI assistants we used to see in movies? They’re basically here.
And the honest answer to “Can I have my own?” is: yes. If you’ve got the right hardware, anyone can. That’s because of a flood of open-weight and open-source models you can download, run locally, and fine-tune on your own data for very specific jobs.
That part is already happening at scale.
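To make “download and run locally” concrete, here is a minimal sketch using the Hugging Face transformers library. The model name is a placeholder for whichever open-weight model you pick, you’ll need hardware to match its size, and fine-tuning on your own data is a separate step (typically done with an adapter library such as PEFT).

```python
# Minimal sketch: load an open-weight model from the Hugging Face Hub and
# generate text locally. The model name is a placeholder; pick one your
# hardware can handle. Requires the transformers and accelerate packages.
from transformers import pipeline

MODEL_NAME = "your-chosen/open-weight-model"  # placeholder, not a recommendation

generator = pipeline(
    "text-generation",
    model=MODEL_NAME,
    device_map="auto",  # spread the weights across whatever hardware is available
)

result = generator(
    "Explain the difference between open-weight and open-source models.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```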
But before we get into the fallout, it’s worth being precise about what we’re talking about.
People tend to mash these terms together, but they’re not the same thing. An open-weight model gives you the trained weights to download and run, but not necessarily the training data, code, or a licence that lets you do whatever you want with them; a genuinely open-source model releases all of that.
Right now, the energy – and the risk – sits with open weights, not true open source. There are no genuinely frontier-class fully open-source models in the wild. But there are plenty of powerful open-weight ones you can grab today.
Because they’re so capable and so widely available, they’re creating very real, very current problems.
Let’s be fair: this isn’t all doom.
The open-weight boom has enabled a legitimate ecosystem of companies that specialise in slices of the AI stack instead of burning billions training their own frontier model.
In other words: open weights have democratised access to serious AI. You no longer need to be OpenAI-sized to build something impressive.
But the exact same openness that powers all this innovation also enables something much darker.
So what happens when the same models are pointed, deliberately, at harm?
Picture an election year.
Your feed is full of angry, emotional posts. Hundreds of thousands of accounts hammering the same talking points, sharing “leaks,” pushing clips that feel designed to make you furious. It looks like a huge grassroots movement.
It doesn’t have to be.
It could be one well-funded organisation running a customised open-weight model as a digital propaganda engine. That system spins up and manages an army of fake accounts, each with its own personality, posting schedule, and language style.
The AI understands context. It replies to comments with tailored arguments. It cites “sources” that look credible unless you dig several layers deep. It’s tuned to ride right up to the edge of platform rules – just toxic enough to shift opinion, not quite bad enough to trigger bans.
And then it goes further.
These systems can generate deepfake video and audio in local accents, mirroring the slang, humour, and cultural cues of the exact group they’re trying to influence. They can scrape your public social media and run hyper-personalised psychological operations against you and people like you.
At that point, this isn’t just spam. It’s cognitive warfare.
Traditional propaganda tries to change what you think, whereas cognitive warfare aims to change how you think.
It exploits bugs in the human operating system: our biases, our fear of missing out, our tendency to trust familiar faces, our inability to fact-check a firehose of information in real time. The goal isn’t just to sell you a story – it’s to erode your ability to trust anything.
And open-weight AI is the missing piece that makes this scalable.
For years, this kind of operation was constrained by human effort. You needed legions of trolls, content farms, and call centres. Now, one well-engineered system can impersonate thousands of “real people” at once, 24/7.
We’re not theory-crafting. We’re already seeing early versions of this.
United States: the “phantom” candidate
In the run-up to the 2024 New Hampshire primary, voters received robocalls in which President Biden apparently told them not to vote. It sounded like him. It wasn’t. It was a cheap AI voice clone that still took federal action to shut down.
At the same time, a Russian “Doppelganger” campaign went beyond fake articles. It used AI to recreate the entire look and feel of major news sites – think cloned versions of The Washington Post or Fox News – and filled them with anti-Ukraine stories that looked indistinguishable from the real thing at a glance.
Russia–Ukraine: the first AI war
Early in the invasion, hacked Ukrainian TV stations briefly ran a video of President Zelenskyy at a lectern, instructing his troops to surrender.
It never happened. It was a deepfake.
By today’s standards it was clunky, but it proved a chilling point: you can hijack the face and voice of a head of state and use it to try to break a country’s will.
Israel–Palestine: the “liar’s dividend” in action
During the Israel–Gaza conflict, reality and fabrication began to blur completely.
The “All Eyes on Rafah” image — a pristine, AI-generated camp scene — went mega-viral, shared tens of millions of times, shaping emotion and opinion around an event that never looked like that.
At the same time, genuine images of horrific violence were dismissed by many as “AI fakes.” That’s the liar’s dividend: once the public knows deepfakes exist, anyone can claim that inconvenient real footage is “just AI.”
The weapon is no longer just the fake. It’s the collapse of trust in anything that looks like evidence.
Major powers have noticed.
Open-weight models are being folded into state-sponsored influence operations to build what some analysts call “synthetic consensus”: flood the information space with bots until fringe views feel like the majority.
The Doppelganger network is one early example, and platform and threat-intelligence teams keep turning up more. This isn’t sci-fi. It’s already part of the day-to-day information environment.
If disinformation is the visible side, cybersecurity is the quiet, arguably more dangerous flank.
Open-weight models are force multipliers for attackers. A small Advanced Persistent Threat (APT) group no longer needs a floor of elite hackers: with the right model and training data, a handful of people can churn out convincing phishing and social-engineering content, stand up fraudulent sites and infrastructure, and automate large chunks of the attack chain.
What used to take months of R&D and significant money can now be packaged into “Crime-as-a-Service” offerings on the dark web.
We’re already seeing products like WormGPT and FraudGPT – chatbots reportedly built on open-weight models, stripped of their guardrails, tuned on criminal material, and sold on subscription. They help criminals write better scam emails, build more convincing fraud sites, and automate parts of their attacks.
Open weights have effectively put advanced offensive capability on the shelf.
Different regions are approaching this tension between “open” and “safe” in very different ways.
European Union & United States
Both are trying to square openness with oversight: the EU through the AI Act’s tiered obligations on the most capable general-purpose models, the US through a patchwork of voluntary commitments, reporting requirements for the largest training runs, and export controls on the chips needed to train them. Neither has tried to ban open weights outright.
China
China has chosen a very different path: tight domestic control, aggressive global release.
Open weights have become a geopolitical instrument, not just a technical choice.
Once you accept that powerful open-weight models are out in the wild, you’re forced into a new kind of realism.
You can’t regulate them out of existence without cutting yourself off from the global AI economy. They are essential building blocks for local innovation – the only way many countries and companies can realistically build AI tailored to their own languages, laws, and industries.
But that open door lets a cold wind in.
The same tools that power local startups and research labs also give small, hostile teams the ability to run operations that once required nation-state level capability. The buffer between “breakthrough” and “weapon” has basically disappeared.
So where does that leave policymakers and builders?
The focus can’t just be “who’s allowed to download a model” anymore. That ship has sailed.
The priority has to shift from controlling access to building resilience: better detection and provenance for synthetic media, platforms that are harder for coordinated fake accounts to game, and a public that understands what these tools can do.
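What does resilience look like at the most basic level? Not magic deepfake detectors, but lots of small, boring verification steps. As a hedged illustration – file names hypothetical, using the open-source Pillow and ImageHash libraries – the sketch below checks whether a “new” viral image is really a recycled or lightly edited copy of a known original, the kind of check that provenance and fact-checking pipelines are built from.

```python
# Minimal sketch: compare a viral image against a known original using a
# perceptual hash. A small Hamming distance suggests the "new" image is a copy
# or light edit of the original. File names are hypothetical; requires the
# Pillow and ImageHash packages.
from PIL import Image
import imagehash

def looks_like_copy(original_path: str, suspect_path: str, threshold: int = 8) -> bool:
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    distance = original - suspect  # Hamming distance between the two hashes
    print(f"Perceptual-hash distance: {distance}")
    return distance <= threshold

if __name__ == "__main__":
    # e.g. a wire-agency photo vs. an image going viral under a new caption
    print(looks_like_copy("known_original.jpg", "viral_repost.jpg"))
```

It won’t catch a purpose-built deepfake, but it raises the cost of recycling old imagery under new captions.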
We’re not going back to a world where only a handful of companies can train or run powerful models.
The question now is whether we grow the immune system to match the power of the tools we’ve just handed to everyone.