Column: Public trust is becoming AI’s real bottleneck
The Ghosts of Satsop: Why AI’s Future May Depend on Public Trust
If you drive west from Olympia, Washington, two skeletal concrete towers pierce the evergreen canopy like forgotten monuments to a future that never arrived. They weren’t meant to be monuments at all. They were meant to be engines—twin reactors of the Satsop Nuclear Power Plant, symbols of clean energy, technological prestige, and economic promise for the region.
Instead, they stand as a cautionary tale.
The project promised jobs, innovation, and a new era of American energy independence. But what materialized instead were cost overruns in the billions, construction delays stretching years, and a public confidence that evaporated faster than the steam those towers were designed to produce. Only one of five planned units was ever completed. The rest became eerie relics, their hollow cores now echoing with the ghosts of ambition unmoored from social license.
Nuclear engineering wasn’t the problem. The technology was sound. The failure was political and social—a breakdown of trust between an industry and the communities it claimed to serve.
And now, artificial intelligence finds itself walking a path strikingly similar to that of nuclear power in the 1970s.
The Erosion of Permission
Industries rarely stall because they hit a hard technical ceiling. They slow when political and social permission erodes. Today, AI is facing a legitimacy crisis in real time.
Public trust in major institutions is already fragile. Trust in large technology companies? Even lower. Concerns about job displacement, wealth concentration, and infrastructure strain are no longer fringe anxieties—they are mainstream political energy. Across multiple states, lawmakers have introduced proposals to pause or restrict data center expansion. This momentum didn’t emerge overnight. It’s been building in the background, fed by years of opaque decision-making, boomtown economics that leave locals behind, and a perception that tech operates in a parallel universe insulated from the consequences of its growth.
Tech executives and investors are no longer background actors. Their statements travel faster than their products. As taxes, oversight, and regulation come under debate, tech’s most visible voices often frame them as hostility toward innovation. It may feel like a necessary defense, but it can reinforce the perception that the industry is unwilling to adapt to broader political realities.
Washington State: A Microcosm of the Crisis
Nowhere is this tension more visible than in Washington state, home to Seattle’s booming tech economy and the epicenter of America’s AI infrastructure race.
The debate around new capital gains and high-income tax proposals has laid bare the growing rift. Some startup leaders have framed tax proposals as existential threats to Seattle’s innovation economy, warning that Washington risks becoming “the next Cleveland.” To an average voter worried about job displacement or rising costs, this rhetoric can feel disconnected from broader economic anxieties. That contrast hardens the sense that tech operates in a separate lane from everyone else. And perceptions like that carry consequences.
When distrust hardens into political momentum, policy seldom arrives as a narrow correction. It tends to be broad and reactive. Telecommunications once represented the frontier of American innovation. As power consolidated and public suspicion grew, the response included structural control and heavy supervision. Innovation didn’t end, but it moved under tighter constraints and at a slower pace. The center of gravity shifted from experimentation to permission.
The Hidden Costs of Lost Legitimacy
What makes legitimacy risk particularly dangerous is that it rarely begins with statute. It begins with friction.
Hiring becomes harder in communities that feel antagonistic toward the industry. Government partnerships face louder opposition. Enterprise buyers extend diligence cycles. Distribution slows in subtle ways that don’t show up in quarterly dashboards but compound over time. These costs are difficult to measure, but they are real.
As a founder building risk and regulatory infrastructure for financial institutions, I think about these dynamics constantly. I expect guardrails. Thoughtful regulation is not the enemy. In many cases, it creates highly functional markets.
What concerns me is overcorrection. Sweeping licensing regimes, expansive liability standards for model outputs, escalating compliance overhead, infrastructure caps written in frustration rather than precision. Those burdens fall hardest on young companies without large compliance teams.
We are careful about pricing market and technical risk. We are far less disciplined about legitimacy risk—the moment an industry loses its social license to operate.
The Oxygen of Trust
Over the next decade, legitimacy may be the binding constraint. Durability matters more than short-term velocity, and durability is built on public trust.
Seattle became a technology hub because it was broadly trusted to build. That trust gave companies room to experiment and scale. It was a form of oxygen. You rarely notice it until it thins. By then, the towers are already standing.
The unfinished nuclear plant near Aberdeen isn’t just a relic of the past. It’s a mirror. And for AI, the question isn’t whether the technology will work. It’s whether society will let it.