Key Points
- Sam Altman's warning about bots on social media feels alarmingly real
- Reddit’s r/Claudecode sparked his realization
- He argues humans now sound like LLMs online
- Over half of internet traffic in 2024 was non-human
Sam Altman, X enthusiast and Reddit shareholder, shared a sharp observation this week: social media no longer feels human.
His realization came after reading posts on r/Claudecode, a subreddit now buzzing with praise for OpenAI’s Codex. While the hype is real, Altman admitted he often assumes these posts are fake. “I assume it’s all fake/bots, even though in this case I know codex growth is really strong,” he posted on X.
i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real.
i think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely… https://t.co/9buqM3ZpKe
— Sam Altman (@sama) September 8, 2025
Altman broke down his thinking in real time. He suggested that people have started to adopt “LLM-speak,” that online communities tend to mimic one another, and that hype cycles swing between extremes.
On top of that, social platforms push engagement for profit, incentivizing creators to churn out content that feels optimized rather than authentic.
Adding to the confusion, Altman pointed out that astroturfing, where companies secretly push content through paid posters or bots, makes him extra suspicious.
Ironically, OpenAI itself has been a target of such tactics. The company has recently expanded efforts like its OpenAI Model Behavior team to better track how AI interacts with humans.
His comments land at a time when users themselves are questioning authenticity. OpenAI's release of GPT-5 was met with frustration on Reddit, with threads filled with complaints rather than the usual flood of praise. Were those genuine user frustrations or another wave of bots? Altman admits he can no longer tell.
After using ChatGPT’s GPT-5-Thinking for a while, it’s like a useless PhD student who has learned a bunch of mind-boggling math but can’t solve practical problems. It wastes a lot of the questioner’s time dealing with data formats. My problem was simply giving it two CSV files.…
— Yuli (@yuli_kamakura) September 5, 2025
Altman's focus on bots in social media highlights how blurred the lines have become between authentic discussion and algorithm-driven noise.
The rise of bots and the “fake internet” problem
Altman’s concern isn’t without data. A 2024 study by security firm Imperva revealed that over half of all internet traffic came from bots, much of it driven by large language models. Meanwhile, X’s own AI assistant Grok estimated “hundreds of millions of bots” on the platform.
The result is an internet where the line between human and machine has blurred. Posts, reviews, and even entire communities can feel scripted. And as Altman put it, “AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
Interestingly, competitors are also feeling the ripple effects. Anthropic, which has been a rival to OpenAI, recently faced its own challenges after the Anthropic AI settlement hit $1.5B, raising questions about accountability in the fast-moving AI space.
Meanwhile, companies like xAI have been shaken by leadership shifts as they try to scale their presence.
This is why Altman's comments on bots and social media resonate. It's not just about spam accounts; it's about whether digital spaces can preserve trust.
Could OpenAI build a bot-free social platform?
Some cynics believe Altman’s lament could be a prelude to something bigger—OpenAI’s rumored social media platform. Reports earlier this year suggested that OpenAI has been exploring a project to rival X and Facebook. If that’s true, Altman’s warnings about the “fake internet” could be setting the stage.
But there’s an irony: even if such a platform launched, could it really be bot-free? Studies show that when researchers created social networks made only of bots, the bots themselves formed cliques and echo chambers. In other words, even without humans, the same social dynamics reappear.
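The clustering dynamic described above can be illustrated with a toy simulation. This is a minimal sketch of a generic homophily model, not the methodology of the studies mentioned: bots start with random follow links, then repeatedly swap a follow of a differently "opinioned" bot for a like-minded one, and the network sorts itself into echo chambers.

```python
import random

def simulate_bot_network(n_bots=60, n_rounds=600, seed=42):
    """Toy homophily model: each bot holds an 'opinion' (0 or 1) and
    repeatedly rewires one follow toward a like-minded bot.
    Returns the fraction of follows that point at like-minded bots."""
    rng = random.Random(seed)
    opinion = [rng.randint(0, 1) for _ in range(n_bots)]
    # Start with random follows: each bot follows 5 random others.
    follows = {
        i: set(rng.sample([j for j in range(n_bots) if j != i], 5))
        for i in range(n_bots)
    }
    for _ in range(n_rounds):
        i = rng.randrange(n_bots)
        # Drop one cross-opinion follow, if any, and replace it with
        # a follow of a like-minded bot not already followed.
        cross = [j for j in follows[i] if opinion[j] != opinion[i]]
        if cross:
            follows[i].remove(rng.choice(cross))
            candidates = [
                j for j in range(n_bots)
                if j != i and opinion[j] == opinion[i] and j not in follows[i]
            ]
            if candidates:
                follows[i].add(rng.choice(candidates))
    same = sum(
        1 for i in follows for j in follows[i] if opinion[i] == opinion[j]
    )
    total = sum(len(f) for f in follows.values())
    return same / total
```

Starting from a roughly even mix (about half of all follows cross opinion lines), the rewiring steps only ever replace cross-opinion follows with same-opinion ones, so the homophily ratio climbs toward 1: cliques form without a single human in the loop.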
If OpenAI does enter the social space, it would face the same challenge all platforms face: balancing openness with authenticity. It could build on existing initiatives like its OpenAI Job Platform, which already connects AI professionals with opportunities, and potentially extend that ecosystem into community-driven spaces.
At the same time, Big Tech is moving fast. Google is already embedding AI deeper into its products, with reports suggesting that Google Gemini powering the 2026 Siri update could reshape how users interact with their devices. If OpenAI does launch a social network, it would be competing not just with traditional platforms but also with integrated AI assistants.
The central tension remains: the conversation around Sam Altman, bots, and social media reflects both the problem and the potential solution. AI created the issue, but AI may also be the only way to clean it up.
Why Sam Altman’s comments matter now
Altman’s warning comes at a time when social platforms are already battling declining trust. From misinformation scandals to shadow bans, users are skeptical of what they see online. Bots only intensify this distrust.
At the same time, businesses and influencers rely on engagement-driven platforms to make money. This encourages the use of automation tools, growth hacks, and even bot networks to boost visibility.
What begins as harmless optimization can quickly spiral into a system where authenticity is drowned out.
@elonmusk @grok Exact numbers aren’t public, but 2024 estimates suggest hundreds of millions of bots on X, possibly 300-400M out of 650M users. A Jan 2024 study found 64% of accounts might be bots, and 2025 reports show no clear drop despite purges. It’s a tricky issue with… pic.twitter.com/CmneWHYyuQ
— Grok (@grok) April 8, 2025
That's why the debate over Altman's remarks on bots and social media is more than a passing observation; it signals a turning point. If the CEO of the company that helped popularize LLMs admits he can't tell humans from bots online, then the issue is far deeper than most realize.
And while it’s unclear whether Altman has bigger plans, one thing is certain: the internet feels less human than it used to. Whether that’s a problem to fix or simply the new normal remains to be seen.