Regulation, Big Tech, and Big AI
Connecting a WSJ Story, an Old Comedy Clip, and the Concentration Risk Shaping AI
Observe
I saw a WSJ article this morning about government efforts to regulate major technology platforms. The pattern was familiar: lawmakers trying to rein in powerful companies, companies publicly resisting, and regulations that often end up reinforcing the incumbents’ advantage anyway. It reminded me of something I’d seen before, literally.
Wonder
Why did this feel so familiar?
Then I remembered a humorous John Oliver video from years ago about Internet regulation. It joked about Big Tech complaining to Congress while quietly benefiting from the very rules they claimed would harm them. My question became: Is that video actually relevant to AI today? Or am I projecting?
Tech Monopolies: Last Week Tonight with John Oliver (HBO)
AI Exploration
When I asked the AI to help me track down that old John Oliver video, it not only surfaced the clip but also helped me examine how different AI is from the Internet era that video originally critiqued. One of the biggest differences is the massive barrier to entry created by the cost of training a frontier-level large language model.
In the early Internet days, a small startup could still compete. Today, training a modern foundation model requires compute budgets that only a handful of companies or governments can afford.
Here are the current industry estimates:
GPT-3 (2020): roughly $4.6–$12 million to train (source: independent researchers analyzing OpenAI scaling laws)
GPT-4 (2023–2024): likely $78+ million in compute (source: Visual Capitalist estimate)
OpenAI leadership comments: model training costs are “much more than $100 million” (source: Sam Altman in interviews)
Future projections: by 2027, training a frontier model could exceed $1 billion per run (source: Epoch AI research on compute scaling)
These cost curves are not linear; they are accelerating. That means the companies shaping AI policy are often the only ones who can even afford to play the game of building foundation models, which is a very different dynamic from the regulatory debates of the social-media era.
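To make “accelerating” concrete, here is a quick back-of-the-envelope sketch in Python. It uses rounded midpoints of the estimates above (my own simplifying assumptions, since the exact figures are disputed) and computes the implied year-over-year growth factor in training cost between each pair of data points.

```python
# Back-of-the-envelope: implied annual growth in frontier-model training cost.
# The figures below are rounded midpoints of the public estimates cited above;
# they are illustrative assumptions, not authoritative numbers.
estimates = [
    (2020, 8e6),   # GPT-3: midpoint of the $4.6M-$12M range
    (2023, 78e6),  # GPT-4: Visual Capitalist compute estimate
    (2027, 1e9),   # projected frontier training run (Epoch AI-style projection)
]

for (y0, c0), (y1, c1) in zip(estimates, estimates[1:]):
    years = y1 - y0
    factor = (c1 / c0) ** (1 / years)  # compound annual growth factor
    print(f"{y0} -> {y1}: ~{c1 / c0:.0f}x total, ~{factor:.1f}x per year")
```

On these assumed numbers, training cost roughly doubles every year. Even if that per-year factor eventually cools off, the absolute dollar gap between incumbents and would-be entrants keeps widening, which is the concentration dynamic the rest of this piece is about.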
This becomes even more critical when you realize that nearly all AI products—the chat interface you’re using right now, image and video generators, and AI features woven into everyday apps like Copilot—depend on one of the four major frontier-model providers: OpenAI, Anthropic, xAI, or Meta.
It’s similar to how companies rely on the major cloud providers—Amazon Web Services, Microsoft Azure, and Google Cloud Platform—but with one key difference: cloud infrastructure can be insourced or replicated with enough investment and expertise, while building a foundational AI model practically cannot. The capital requirements, specialized hardware, research talent, and training data needed to create a competitive foundation model make it dramatically harder than standing up a data center or even building a private cloud.
A rare exception is the recent DeepSeek model, which reportedly achieved competitive performance at dramatically lower training cost: tens of millions of dollars rather than hundreds of millions. But DeepSeek is the exception that proves the rule: competing at the true frontier still requires enormous capital, specialized infrastructure, and tightly optimized hardware supply chains.
And DeepSeek introduces another layer of complexity, because it was developed in China, raising its own geopolitical and regulatory challenges around data security, export controls, and global AI competition. Far from undermining the point, DeepSeek reinforces how difficult, and how politically fraught, it is for most organizations to develop a foundation model outside the small group of dominant players.
This makes the regulatory conversation fundamentally different from the one in that old video. Back then, rules could inconvenience small players who at least had a shot at competing. Today, the cost of entry alone, before a single regulation is written, already tilts the field toward the giants.
Understanding
This loop — a real-world article, a memory from years earlier, a modern-day comparison, and an unexpected insight — is exactly why The Doodle Principle exists.
AI isn’t intelligent; it’s incomplete.
But paired with curiosity and some exploration, it helps us connect eras, patterns, and incentives in ways that deepen our understanding.
By comparing AI to past tech transformations, I’m not just trying to be clever. I’m trying to help people — consumers, parents, professionals, and yes, even local PTO members — build a clearer understanding of what’s happening right now so they can influence the decisions being made around them.
When the public understands the incentives, tradeoffs, and historical echoes, they can shape better conversations.
When lawmakers are pressured by informed citizens instead of confused ones, policy improves.
When people can articulate the why behind their concerns, not just the what, good things happen.
And that is the heart of the Doodle Principle:
You supply the curiosity and judgment.
AI supplies the scaffolding.
Together, you get clarity you wouldn’t reach alone.

