Pursuit of Happiness
Published on Feb 17, 2026
Kevin Frazier
The AI Future: Between Certain Doom and Endless Prosperity

Summary
If we resist the pull of extremes and commit to disciplined, rights-respecting, iterative governance, the AI age will not be defined by doom or delirium.

The generative AI tools that dominate headlines today were introduced more than three years ago. Technologists would tell you that the models have advanced significantly over that period. In fact, they’d likely say that AI has progressed beyond their wildest expectations. Experts at the leading AI labs may discuss how their models can complete hours-long tasks on behalf of users. Startup founders may brag about AI tools that make doctors drastically more accurate and allow them to spend far more time with patients. Researchers at so-called “neolabs” may talk your ear off about new models with capabilities and characteristics even more impressive than those on the market today. The consensus will be that AI has become more complex and sophisticated. The same cannot be said of the popular discourse around AI or of our public policy solutions.

Since early 2023, the AI discourse in the popular press and in legislative chambers has been defined by extremes. Then-Majority Leader Chuck Schumer invited AI experts to the Senate and heard extensively about the existential risks posed by AI. He wasn’t the only one to associate AI with the potential end of humanity. Then-FTC Chair Lina Khan shared that she had a p(doom) of around 15 percent, her odds that AI would cause a cataclysmic event. A sense of inevitable demise continues to pervade some AI conversations. Dario Amodei, CEO of Anthropic, recently forecast that AI would displace most entry-level white collar jobs in the span of just a few years. Others have envisioned AI contributing to authoritarianism and geopolitical disorder. Yet not everyone has settled on this picture of the AI Age.

Today, plenty of folks are convinced of exactly the opposite. Tech luminaries such as Elon Musk envision a bright future in which humanity is surrounded by abundance. Conversations around the end of work, universal basic income, and similar outcomes (utopian, at least to some) pass for normal chatter these days. Perhaps paradoxically, some AI experts simultaneously suspect that dire and dreamy futures could lie ahead. Amodei, for one, has touted the possibility of AI curing most cancers.

Where does this leave most Americans? What does this mean for AI regulation? 

It means we’re dodging the much harder, more boring, and more detailed conversations we ought to be having about how to adjust to this new technology. The questions asked in polls, the headlines in the press, and the guests filling our podcast feeds have made it all seem like an all-or-nothing proposition. In turn, popular attention and legislative resources have been spent on edge cases. Legislators have been captivated by reports that AI will become effectively ungovernable by 2027. Communities have zeroed in on troubling and tragic stories of unrepresentative uses of AI tools. And the AI celebrities who garner the most public attention seem keen to fuel these flames.

Critically, this status quo is disempowering. Those who bought into the utopian vision may rest on others’ promises and simply await a bright future. Those sold on the end of the world presumably feel powerless to stop the AI madmen’s march. We’re consequently missing the often determinative sway of the silent majority. Those who have yet to join one camp or the other simply want society to adjust as necessary to drive and spread human flourishing.

Preparing for the AI future means settling in for a decade (if not decades) of adjustment and transition. That’s part of the reason an essay by Matt Schumer went viral: he made clear that the tidy futures painted by some are unlikely and that instead we’re going to experience a mix of wonderful progress and difficult setbacks. People will lose jobs. People will experience mental disquiet when exposed to new ideas. Communities will change. Culture will shift. And politics will have to improve and not be subject to knee-jerk reactions if we’re going to navigate all these alterations.

Success amid technological transition requires the discipline to avoid distractions from doomers and excessive dreamers. Add to that the need for persistence: methodically testing, measuring, and revising new strategies to revive and spread prosperity. A few principles should inform this effort: 

  1. Long-term adjustments are necessary. Thinking over short-term time horizons — such as which policies are most likely to appease voters in November — will result in false starts. Policies like robot taxes have a certain appeal when tech is framed as the only source of our woes. We have to resist the temptation to buy into policies that seem too good to be true (because they are).

  2. Typical regulatory regimes are inadequate. Government policies are not designed to move at the speed of AI. New tools are being deployed, tested, and revised faster than any committee can track. Moreover, labs themselves are adjusting their internal policies in rapid response to user and public feedback. Rather than view this as a problem, it’s best to view it as an opportunity to update how we write, enforce, and measure laws.

  3. Fundamental rights must be safeguarded. Compelling solutions will emerge from strange places and cobble together broad support. The haste to “do something” to ease the unease of not knowing what the future holds will be tough to overcome, yet there must always be a backstop. Our freedom to think, to work, to raise our families, to practice our faith, to receive information, and to convey our ideas must be shielded from unnecessary and unjustified government intervention, even if supposedly motivated by popular support.

If we get this wrong, it will not be because we failed to predict the precise year when models surpass human performance on a benchmark, or because we miscalculated the probability of catastrophe. It will be because we chose spectacle over stewardship.

The AI transition is not a movie trailer. It is infrastructure. It is procurement reform. It is the licensing boards that decide whether to accept AI-assisted credentials. It is state workforce agencies rethinking training pipelines. It is judges grappling with evidentiary standards. It is school districts that determine how to teach writing in an era of copilots. None of that fits neatly into a p(doom) estimate or a utopian keynote. All of it determines whether this technology expands opportunity or narrows it.

The task before us is neither to freeze AI in place nor to surrender to it. It is to govern a moving target without pretending it will stand still. That requires humility about forecasts, seriousness about tradeoffs, and a willingness to iterate. We will need rigorously evaluated pilot programs. We will need regulatory frameworks that include sunset provisions and mandatory review. We will need agencies that measure outcomes rather than merely promulgate rules. And we will need political leaders who can say, without embarrassment, “We tried this. It did not work. We are adjusting.”

Most importantly, we need to re-center the conversation on agency: not the agency of AI systems, but the agency of citizens. Americans are not passive recipients of technological change. They are workers who retrain, entrepreneurs who experiment, parents who adapt, and voters who demand accountability. A serious AI policy agenda should equip them to navigate transition, not treat them as subjects to be managed.

There will be disruption. There will be overcorrections. There will be bad actors. But there will also be new firms, new forms of work, new medical breakthroughs, and new ways of learning. The question is not whether AI will change society. It already has. The question is whether we will do the slow, unspectacular work required to shape that change in ways consistent with a free and flourishing republic.

The loudest voices will continue to sell certainty. They will promise salvation or warn of extinction. The harder path is less emotionally satisfying. It asks for patience, institutional reform, and sustained civic attention. Yet history suggests that democratic societies succeed not by perfectly forecasting the future, but by building systems capable of adapting to it.

If we resist the pull of extremes and commit to disciplined, rights-respecting, iterative governance, the AI age will not be defined by doom or delirium. It will be defined by whether we have the maturity to match technological acceleration with institutional evolution. That work does not lend itself to viral clips. It does, however, determine whether the next decade of AI becomes a story of concentrated power and public anxiety or of broad participation and renewed confidence in our capacity to govern ourselves.

Kevin Frazier directs the AI Innovation and Law Program at the University of Texas School of Law. He is also a Senior Fellow at the Abundance Institute and an Adjunct Research Fellow at the Cato Institute.
