Troll Farms, AI, and the Engineering of Public Opinion
From Coordinated Manipulation to Algorithmic Amplification

A few months ago, I wrote about the “Manufactured Web,” focusing mainly on bought reviews. Fake five-star ratings. Reputation padding. Synthetic trust signals.
But a recent MSSP podcast episode pushed me toward a much more serious dimension of the same phenomenon: troll farms.
Wikipedia defines a troll farm as “an institutionalized group of internet trolls that seeks to interfere in political opinions and decision-making.”
The most infamous example is Russia’s Internet Research Agency, widely reported to have conducted coordinated influence campaigns targeting U.S. audiences. The organization was linked to Yevgeny Prigozhin, who also founded the Wagner Group, a Russian private military company, before his death in 2023.
U.S. intelligence assessments concluded that Russian actors interfered in the 2016 U.S. election and again sought to influence the 2020 contest. Public indictments and sanctions describe operations that included impersonating Americans online, amplifying divisive narratives, and attempting to undermine confidence in the electoral process. The IRA operated under a broader initiative often referred to as “Project Lakhta,” and the U.S. Department of Justice charged individuals connected to these efforts in 2018 and again in 2020.
The details matter, but the deeper issue matters more. Troll farms are not just about elections. They are about perception engineering. Social media, on this view, is MKUltra 2.0: influence operations run through the feed rather than the lab.
They exploit the same mechanisms that power modern marketing:
Engagement algorithms
Emotional amplification
Virality as credibility
Volume mistaken for consensus
And this is where the Manufactured Web evolves from annoying to existential.
Because now we have a knowledge ecosystem that includes:
Fake reviews
Coordinated influence operations
Engagement-optimized outrage
AI systems trained on all of it
For decades, mainstream media shaped narratives through centralized control. Today, influence is decentralized, gamified, and algorithmically accelerated. The gatekeepers have changed, but manipulation has not disappeared. It has scaled.
And here’s the uncomfortable question:
What happens when large language models are trained on content seeded by coordinated influence campaigns?
What happens when AI systems ingest years of emotionally amplified distortion and then summarize it back to us as neutral synthesis?
What happens when synthetic narratives become part of the training data that trains the next layer of synthetic narratives?
We move from misinformation to recursive misinformation.
From propaganda to programmable perception.
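The compounding effect can be sketched with a toy simulation. Everything here is an illustrative assumption, not a measurement: each model “generation” trains on a mix of real data and the previous generation’s output, with a small seeded bias injected each round by coordinated campaigns.

```python
# Toy sketch of recursive misinformation: each model "generation" is
# trained on a blend of ground truth and the previous generation's own
# output, plus a small seeded bias from coordinated campaigns.
# All numbers are illustrative assumptions, not empirical estimates.

def next_generation(prev_estimate: float,
                    ground_truth: float,
                    synthetic_share: float,
                    seeded_bias: float) -> float:
    """One training round: mix real data with the prior model's output."""
    training_signal = ((1 - synthetic_share) * ground_truth
                       + synthetic_share * prev_estimate)
    return training_signal + seeded_bias

def run(generations: int = 5) -> list[float]:
    estimate = 0.0  # ground truth is 0.0 in this toy setup
    history = []
    for _ in range(generations):
        estimate = next_generation(estimate,
                                   ground_truth=0.0,
                                   synthetic_share=0.5,
                                   seeded_bias=0.1)
        history.append(estimate)
    return history

# The distortion compounds: each generation drifts further from the
# truth than a single injection of bias would, approaching
# seeded_bias / (1 - synthetic_share) -- double the seeded bias here.
print(run())
```

The mechanism, not the numbers, is the point: once model output re-enters the training supply, a one-time distortion stops washing out and starts accumulating.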
The danger is not just that troll farms exist. The danger is that engagement-driven systems reward them. Algorithms optimize for attention, not accuracy. Outrage travels faster than nuance. Divisive content outperforms balanced analysis. That incentive structure is the real vulnerability.
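That incentive structure can be made concrete with a minimal sketch. The posts, weights, and scoring functions below are hypothetical, invented for illustration; the structural point is that an attention-optimized objective simply has no accuracy term.

```python
# Hypothetical feed-ranking sketch: two made-up posts scored two ways.
# Weights are illustrative assumptions, not any real platform's values.

posts = [
    {"title": "Balanced analysis", "outrage": 0.2, "novelty": 0.5, "accuracy": 0.9},
    {"title": "Divisive hot take", "outrage": 0.9, "novelty": 0.8, "accuracy": 0.3},
]

def engagement_score(post: dict) -> float:
    """Attention-optimized objective: accuracy is not an input at all."""
    return 0.6 * post["outrage"] + 0.4 * post["novelty"]

def accuracy_aware_score(post: dict) -> float:
    """Hypothetical alternative that also weighs accuracy."""
    return 0.3 * post["outrage"] + 0.2 * post["novelty"] + 0.5 * post["accuracy"]

by_engagement = max(posts, key=engagement_score)["title"]
by_accuracy = max(posts, key=accuracy_aware_score)["title"]
print(by_engagement)  # "Divisive hot take"
print(by_accuracy)    # "Balanced analysis"
```

Same content, different objective, opposite winner. Troll farms do not need to beat the algorithm; they only need to feed it what it already rewards.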
Which brings me to something I’ve heard repeatedly in 2025: “Higher education is a waste of money.”
Is that belief emerging organically? Is it a rational economic critique? Is it amplified by algorithmic outrage? Is it seeded by actors who benefit from institutional distrust?
The point is not to defend or condemn higher education.
The point is this: when the web becomes our primary epistemic authority, we must interrogate the supply chain of belief.
Fake reviews manufacture trust. Troll farms manufacture consensus.
And if AI models are trained on manufactured consensus, the distortion compounds.
The Manufactured Web is no longer just about reputation management or brand manipulation. It is about the stability of shared reality.
You may think higher education is a waste of money. But ask yourself where that belief came from. Critical thinking is no longer optional. It is infrastructure in the age of disinformation.
Because in an AI-mediated world, the web does not just reflect reality. It increasingly trains it.





