I work in AI. I sell it to enterprises. I believe in what it can do. And I'm genuinely worried that we are building something that a large portion of the country is going to reject — not because it doesn't work, but because the benefits aren't reaching them.
Here's what I see: the United States is not in a political crisis. It is in a class war that has been building for decades and is now running out of patience. American households carry $18.8 trillion in total debt — a record, $4.6 trillion more than before the pandemic. The average household owes over $105,000. The top 1% of Americans holds $55 trillion in assets — roughly equal to the combined wealth of the entire bottom 90%. The bottom 50% of this country owns 2.5% of its wealth. The California wealth tax, the first of its kind, is not a wonky fiscal experiment. It is a declaration. The vocabulary of class conflict is back, and it isn't going away.
Into this, we are deploying AI at extraordinary speed. And the distribution of its gains is not broad. It is concentrated — in the companies building it, in the workers skilled enough to use it, in the investors who got in early. That concentration is going to become a political story. Actually, I think it already is. We just haven't named it yet.
The evidence is piling up fast. In 2025, companies directly cited AI in 55,000 job cuts — more than twelve times the figure from just two years earlier. By April 2026, over 92,000 tech workers had already been laid off this year, with Meta and Microsoft together announcing more than 20,000 cuts in a single week. Coinbase, Cloudflare, PayPal, Accenture, Amazon — the list reads like a who's who of the companies that spent the last decade telling us technology creates more jobs than it destroys. McKinsey laid off 200 internal support staff after automating their roles. Chegg eliminated 45% of its workforce. These aren't anomalies. They're the early innings.
The biggest threat to AI isn't energy constraints or the timeline to AGI. It's whether this technology actually makes life better for most people — and whether people believe it does.
Anthropic CEO Dario Amodei has said AI could eliminate roughly 50% of entry-level white-collar jobs and push unemployment to 10–20% within the next few years. The IMF puts 60% of jobs in high-income countries at meaningful exposure. These aren't fringe predictions. They're becoming the consensus view, and a growing number of serious people — including BlackRock CEO Larry Fink, who recently warned of the "real risk" that AI widens wealth inequality with huge rewards only for insiders — are now saying so publicly. What's interesting is that actual job destruction, so far, lags public perception of it. But here's the thing: perception is the story. An NBC News poll found that 57% of registered voters already believe AI's risks outweigh its benefits. You don't need the displacement to be total for the political backlash to be.
And wait until robotics catches up and the disruption reaches physical labor. Blue-collar workers who already saw their wages fall 50–70% over the last four decades of automation and offshoring will face this next wave with less savings, less time, and far less faith that the system works for them. When that happens — and I think it will within the next presidential cycle — the political conversation won't be about immigration or abortion. It will be about AI.
But here's the part that doesn't get enough attention, because it's subtler than the "robots take your job" headline: even among people who keep their jobs, AI is creating a new hierarchy. Anthropic's own economic research, published earlier this year, found something that should alarm anyone paying attention. The divide isn't between people who use AI and people who don't. It's between sophisticated users and everyone else — and that gap is hardening in real time. Only about 5% of workers are genuinely AI-fluent. That minority earns 4.5 times higher wages and receives four times as many promotions. Roles requiring AI skills command a 15–30% salary premium over identical roles without it. Senior employees, who have more access to paid tools, dedicated training time, and the autonomy to experiment, are pulling ahead. Entry-level workers and frontline employees are largely left to figure it out on their own — or blocked from using the tools entirely.
What we are creating, in real time, is a two-tier labor market where AI literacy is the new class boundary. Not wealth. Not education, exactly. Fluency. The ability to use a tool that most people have never been taught. That's a skills gap hardening into a class gap.
I keep coming back to one thing. Human skill has always had value. Not just economic value — social value, dignity, identity. You were a carpenter, a driver, a nurse, a salesperson, and that skill gave you standing in the world. In certain domains, AI is beginning to decouple productivity from individual human skill entirely. And when junior analysts, paralegals, and entry-level coders are replaced before they ever build the experience to move up, we don't just eliminate jobs — we eliminate the ladder. The apprenticeship model of professional growth, where you learn by doing the work, is being severed at its base. That's not a technology question. That's a civilizational one.
The wealth concentration story is writing itself. The employees of OpenAI, Anthropic, Google DeepMind — people I know, people I respect — are going to become extraordinarily wealthy. Their equity will vest. They'll buy houses. Their faces will circulate on social media. And the top 10% of income earners, already accounting for nearly half of all US consumer spending, will pull further ahead. For the truck driver whose route is being automated, or the call center worker who just got a restructuring notice, that story will be legible. It will confirm what they already suspect: that the gains of this revolution belong to someone else. Senators Mark Warner and Josh Hawley have already introduced the AI-Related Job Impacts Clarity Act, which would require companies to report layoffs directly attributed to AI. That's not a policy footnote. That's the beginning of a legislative reckoning.
OpenAI's founding charter says it exists for the benefit of all humanity. If you surveyed the country today — not the tech industry, the country — I don't think you'd get that answer. You'd get something closer to: this benefits the people who already had advantages. That perception is dangerous. And it's going to get louder.
I don't have a clean solution. I'm not sure anyone does. But I think the companies building this technology are going to have to make a real choice — not a press release choice — about whether they are extracting value from society or returning it. Some economists are already drafting what they call "circuit breaker" plans: automatic stabilizers that kick in if AI-driven job displacement spikes past a threshold, triggering wage insurance and expanded income support. That's a start. But it's a policy conversation. What I think is actually needed is something harder: the companies and individuals winning from AI need to show up in the places that aren't winning. Materially. Not with a foundation. With presence, investment, and accountability.
The camel has been carrying this weight for a long time. AI isn't the cause of the fracture forming underneath us. But I think it might be what tips it. And if the people building it spend the next decade focused on infrastructure bottlenecks and capability benchmarks instead of the faces of the people being left behind — that will be a choice too. One we'll have to answer for.