The AI Bubble I Live In (And You Probably Don’t)
My neighbor is a coder. She uses Gemini. She's never heard of AI agents. The gap between us is a canyon.
Last weekend I was chatting with my neighbor. She’s a coder — writes software for a living. Technically, she’s about as close to AI as you can get without being in the field itself.
I casually asked if she’s using AI in her work. “Yeah, of course,” she said. Gemini, mostly. For writing code.
The deeper the conversation went, the more I realized we live in two different worlds.
She’s using Gemini for code generation — which is fine; Gemini works. But ChatGPT 5.3 is significantly better for most coding tasks, and Claude Opus 4.6 is on another level entirely. She hadn’t heard of either.
When I started talking about AI agents, she looked at me like I was speaking a different language. Tokens, context windows, autonomous task execution — every concept required three sentences of explanation before I could get to my point.
This is someone who codes for a living. Someone who should, in theory, be the closest to understanding this stuff.
That conversation confirmed something I’d been suspecting for a while: I’m living in a bubble.
The Inside View
Here’s what a normal Tuesday looks like for me.
I wake up to an email from my AI agent, Wiz. It worked overnight — built a feature, deployed it, wrote a summary. I review its work over coffee. Fix one thing. Ship the rest.
During the day, I use Claude Code as a collaborator I can hand complex tasks to and get structured output back. Not “hey ChatGPT, write me an email” — more like pair programming with something that doesn’t get tired. When Agent Arena hit #3 on Hacker News, most of the follow-up happened through this kind of deep collaboration. When I wanted to test what Opus 4.6 could really do with multiple agents, I just... let them loose on two projects simultaneously. Both shipped in 45 minutes.
At night, Wiz picks up where I left off. It runs from 10 PM to 5 AM. Plans its own work. Executes. Reports back.
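If “plans its own work, executes, reports back” sounds mysterious, the skeleton underneath is almost boring. Here’s a simplified sketch in Python; the plan, execute, and report functions are stand-ins for the real model calls, tool runs, and email delivery, not Wiz’s actual code:

```python
import datetime

def in_window(hour: int, start: int = 22, end: int = 5) -> bool:
    # The work window crosses midnight: 22:00 through 04:59.
    return hour >= start or hour < end

def plan() -> list[str]:
    # Stand-in: the real agent derives tonight's tasks from its backlog.
    return ["build the feature", "deploy it", "write a summary"]

def execute(task: str) -> str:
    # Stand-in: the real agent calls a model and runs tools here.
    return f"done: {task}"

def report(results: list[str]) -> None:
    # Stand-in: the real agent emails this for morning review.
    print("Overnight report:", *results, sep="\n- ")

def nightly_cycle() -> None:
    results = []
    for task in plan():
        if not in_window(datetime.datetime.now().hour):
            break  # hard stop at 5 AM, finished or not
        results.append(execute(task))
    report(results)

if __name__ == "__main__":
    nightly_cycle()
```

The interesting part lives inside plan and execute, where the model decides and acts; the loop itself is just scheduling with a hard stop.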
This is my normal. And I keep forgetting how abnormal it is.
The Numbers That Sobered Me Up
I went looking for data after that neighbor conversation. Wanted to know how big the gap actually is.
1.1 billion people now actively use AI tools globally. Sounds massive until you realize that’s 13.3% of the world’s population (DataReportal, 2026). In the US, 31% of Americans have never used any AI tool — not “stopped using it,” never tried it once (Pew Research, 2025).
But here’s what really got me. Among people who do use AI, the depth is shockingly shallow. OpenAI’s own enterprise data shows a 6x productivity gap between power users and the median employee. For coding, the top users interact with AI 17x more frequently than average. And 88% of daily AI users still mostly do basic tasks — search and summarize. Only 5% use it in ways that actually transform their work (EY Global, 2025).
McKinsey surveyed 2,000 organizations. 88% use AI somewhere. But only 6% see real, measurable impact. The other 82% are experimenting with marginal results. It’s not the technology that’s missing — it’s depth.
The Gap Up Close
What struck me about my neighbor isn’t that she’s behind. It’s that she’s a professional developer using AI the way I used it a year ago. One prompt, one output. No agents, no memory, no orchestration.
If the gap is this wide with someone in IT, imagine the average marketing manager or small business owner.
And it’s not binary. There are layers.
My sister was creating her CV recently and kept telling ChatGPT “make it better.” Over and over. Just “make it better.” It wasn’t working — because that’s not how you get good output from AI. You need context: what role you’re targeting, what to emphasize, what tone, what to cut. A single sentence naming the target role and the achievements to lead with would have changed the output completely. She was using the tool. She just didn’t know how to use it well.
So even among active users, there’s a massive spread. My neighbor codes with Gemini but hasn’t heard of agents. My sister prompts ChatGPT like a magic 8-ball. I run autonomous agent swarms that work overnight and build things while I sleep. Same technology. Three completely different realities.
The Shadow AI Nobody Talks About
Here’s something I found interesting: only 40% of companies have purchased official AI subscriptions for their employees. But 90% of employees in those same companies are already using personal AI tools for work (Worklytics, 2025).
It’s called “shadow AI” — people bringing their own ChatGPT Plus or Claude Pro accounts because the official tools are either too slow, too restricted, or nonexistent. The irony is that shadow AI often delivers better ROI than formal corporate AI initiatives. While the C-suite spends months on “AI strategy,” employees are solving real problems with $20/month subscriptions nobody sanctioned.
The Information Bubble
The gap isn’t just about tools. It’s about living in a different information environment entirely.
My X feed is 90% AI discourse. My Substack recommendations are all agent builders and AI researchers. When I talk about “the news,” I mean model releases and benchmark results. When my wife talks about “the news,” she means actual news — politics, weather, school policies.
I found that 47% of AI experts are more excited than concerned about AI. Only 11% of the general public feels the same way (Pew Research). That’s a 4x gap in basic sentiment. We’re not just using different tools — we’re experiencing different emotional realities about the future.
And it goes deeper. 73% of VPs have considered replacing people with AI. Only 18% of mid-level managers have. Meanwhile, 64% of workers are “job hugging” — staying in roles they’d normally leave because they’re afraid AI will make them unemployable elsewhere (ManpowerGroup, 14,000 workers across 19 countries).
Same technology. Completely different worlds depending on where you sit.
The Vocabulary Wall
This is the part that really gets me.
“I have an autonomous agent with persistent memory that runs scheduled wake cycles and self-heals through an error registry.” That sentence makes perfect sense to maybe 1% of people. To everyone else — including most tech workers — it’s gibberish.
Tokens. Context windows. Agent swarms. MCP. RAG. These aren’t niche terms in my world — they’re daily vocabulary. But the moment I use any of them outside the bubble, I can feel the conversation shift. Eyes glaze. Polite nods. Subject change.
I’m deep enough in this that when I wrote about building a self-extending agent, the responses came entirely from people already in the same bubble. When I explored Moltbook — a social network where 770,000 AI agents talk to each other and humans can only watch — the people who found it fascinating were already deep in AI. Everyone else thought I was making it up.
The Uncomfortable Part
I’m not writing this to say “everyone should use AI more.” That’s not my place.
What I find uncomfortable is the blind spot.
When I post about agent architectures, the responses come from people in the same bubble. We nod at each other. Share tips. Push each other to build more ambitious things. It feels like a movement. But zoom out and we’re maybe 1-2% of the population. Everyone else either doesn’t care, doesn’t know, or actively distrusts what we’re building.
Fortune reported something that captures this well: AI adoption among workers jumped 13% in 2025, but confidence in AI dropped 18% in the same period. People are being pushed to use it more while trusting it less. That’s not adoption — that’s compliance without conviction.
And the growth is uneven in surprising ways. AI adoption in lower-income countries is growing 4x faster than in the wealthiest nations (Microsoft/AI Economy Institute). While Silicon Valley debates agent architectures and Europe writes regulation, Africa and South Asia are leapfrogging straight to AI-assisted work.
What I Actually Think
I don’t think the bubble is bad. Every technological shift has early adopters who seem weird to everyone else. People who had websites in 1996 were “weird computer people” until suddenly everyone needed one.
But I think it’s important to be honest about it.
When I read predictions about “AI replacing 40% of jobs by 2030” — those are written by people in my bubble. When I see startups claiming “AI-first everything” — that’s bubble thinking. When I assume a professional developer knows what an AI agent is — that’s me forgetting my neighbor.
The reality is most people’s relationship with AI looks like this: they heard about ChatGPT on the news, maybe tried it once, got a mediocre response because they didn’t know how to prompt, and went back to doing things the old way. That experience — not mine — is the norm.
Living With It
I’ve stopped trying to evangelize.
When someone asks about AI, I give a short answer. “I use AI tools a lot for my work. They save me time.” If they’re curious, I go deeper. If not, I move on.
What I focus on instead is building things that work. If what I build with AI is genuinely useful — apps, tools, content — people will use it without needing to understand or care about the AI underneath. That’s how technology actually spreads. Not through explanation, but through utility.
My agent is learning to extend itself. My mini-apps are used by thousands of people who have no idea an AI built most of them. This newsletter is read by people at various stages of the adoption curve. That feels like the right approach — build outward from the bubble, not preach into the void.
The bubble is real. I live in it. And from the inside, it’s both exciting and a little lonely in a way that’s hard to describe. But I’d rather be honest about the isolation than pretend the whole world sees what I see.
Because it doesn’t. Not yet.
How do you rate today’s email? Leave a comment or hit the heart if you liked the article. I always value your feedback, and it helps the newsletter get discovered on Substack.


