AI Won’t Control Narratives. It Will Inherit Them
Why AI-generated content may narrow narratives long before it manipulates beliefs.
I noticed it first in a small, uncomfortable way.
I would draft something - a note, a post, a memo - and feel that familiar friction. The thought wasn’t fully formed yet. The language was uneven. It needed time.
Then I’d run it through ChatGPT. The result was cleaner. Calmer. More confident. And somehow, less mine.
But I still posted it. No one complained. In fact, it performed better.
That’s when it became clear: the trade wasn’t intelligence for laziness. It was rough truth for smooth sense-making.
And once that trade becomes normal, the rest follows quietly.
The popular fear about AI and narratives is dramatic - that machines will invent propaganda. I feel that’s likely wrong.
If AI ever takes control of the narrative, it won’t be because it overpowered humans. It will be because humans voluntarily outsourced storytelling to it - enthusiastically and at scale.
Not through conspiracy. But through convenience.
Today, AI models are trained mostly on human-generated data. But it is hard to deny that they will eventually be trained on AI-assisted human expression - if not on outright AI-generated content.
That distinction matters.
Every post you see that was so much as:
polished with an AI
summarized by an AI
optimized for reach by an AI
is no longer a clean human signal. It’s a feedback loop artifact.
We aren’t just publishing content anymore. We’re publishing content that already passed through a machine’s priors.
And most content distribution platforms don't filter that out. They reward it, which means the training data of the future is increasingly self-referential.
Why might this happen so easily (and quietly)?
This doesn’t require a breakthrough in AI capability. It only requires three very ordinary incentives to align:
Efficiency beats originality
Original thought is slow, risky, and cognitively expensive. AI-assisted creation is fast, legible, and socially safe.
Over time, the recommendation algorithms on LinkedIn, Instagram, and similar platforms naturally select for:
clean structure
familiar metaphors
predictable moral arcs
confident but moderate tone
AI excels at exactly this shape of communication. Not because it’s “manipulative” - but because it optimizes for acceptability.
Platforms amplify the median, not the edge
Most social media platforms don’t promote what is true or novel. They promote what creates the least friction.
AI-generated (or AI-assisted) content converges toward:
consensus language
polite disagreement
balanced takes
non-threatening insights
As humans learn what “works,” they unconsciously mimic that style - even when writing without AI. And the result is narrative compression.
Training data doesn’t care about authorship
A model’s training pipeline doesn’t ask whether a thought was earned. It only asks whether a pattern repeats.
When AI-shaped content floods the corpus, the model doesn’t see it as derivative. It sees it as ground truth frequency. At scale, repetition becomes authority.
AI doesn’t need to control narratives. It only needs to average them. And averages feel neutral - even when they’re not.
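A toy sketch of what “averaging” can do - purely illustrative, not a model of any real training pipeline: assume each generation of content is pulled partway toward the average of the previous generation, with only a little fresh variation added back. The spread of viewpoints collapses within a few rounds, even though nothing is ever suppressed.

```python
# Toy illustration (not a real training pipeline): if each "generation" of
# content is a smoothed average of the previous one plus a little noise,
# the spread of viewpoints shrinks even though nothing is censored.
import random
import statistics

random.seed(42)

# Generation 0: raw human takes, spread widely along some opinion axis.
opinions = [random.gauss(0, 1.0) for _ in range(10_000)]

for generation in range(1, 6):
    mean = statistics.fmean(opinions)
    # Each new piece is pulled 70% toward the prevailing average,
    # with only a small amount of individual variation added back.
    opinions = [0.3 * o + 0.7 * mean + random.gauss(0, 0.1) for o in opinions]
    print(f"generation {generation}: spread = {statistics.stdev(opinions):.3f}")
```

Run it and the spread drops from roughly 1.0 to about 0.1 within a few generations - narrowing without any censorship, just repeated smoothing.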
The danger isn’t a biased AI shouting ideology. It’s an AI calmly reinforcing what already survives visibility filters.
That’s far harder to notice. And far harder to resist.
What worldview does AI inherit?
Not a radical one. A comfortable one.
I would find it concerning if, on these grounds, AI inherits a worldview that is:
• Technocratically optimistic
• Economically centrist
• Morally polite
• Risk-averse
• Language-heavy, action-light
• Confident in systems, vague about power
This is not left or right. It’s platform liberalism - shaped by engagement incentives, moderation rules, and professional class norms.
Not revolutionary. Not reactionary. Just smooth. And smooth narratives travel far.
The biggest shift won’t be belief manipulation. It will be narrative narrowing. And humans will increasingly rely on these explanations - because they feel reasonable.
Over time:
raw experience gets filtered
strong claims get softened
moral ambiguity gets resolved too quickly
Not because AI is evil. But because clarity scales better than truth.
As this becomes a bigger portion of the machine’s priors, it will become more and more difficult for people to prompt the models out of those priors and get them to think otherwise.
We worry about AI hallucinating. But the more dangerous outcome is AI remembering us incorrectly.
Not as we were - conflicted, unfinished, inconsistent - but as we presented ourselves to the algorithms: polished, balanced, optimized, and harmless.
If AI inherits the narrative of our time, it won’t be the story of human struggle. It will be the story of what survived posting.
An interesting paradox
The worldview of an AI model depends on the distribution of its data corpus. But decision-makers control what the models learn, so one can argue the outcome still depends on the worldview of whoever chooses what goes into that distribution. How do you even begin to judge the morality of that bias?
So curation may never be a truly objective function. You might be able to filter out purely AI-generated content, but AI-assisted content is far harder to distinguish.
And an interesting thought: would it make more sense to optimize training so that every individual can sustainably train their own co-pilot, one deliberately walled off from the ever-growing digital corpus of the internet?
If you still write without assistance, you are already an outlier.
If you write something sharp, unfinished, or uncomfortable - you’re providing rare data. Not for engagement, but for the future memory of how humans actually thought.
Because the next generation of intelligence won’t just learn from what we believed.
It will learn from what we were willing to publish.