Topic: Economic Dynamism
Published on: Aug 5, 2025
Contributors: Rachel Lomasky

The AI Reaper Isn't Coming

Summary
Unlike a monster, AI systems always have a power button.


Fears surrounding artificial intelligence (AI) often echo the plots of horror stories, envisioning AI as an enormously powerful, mythological creature lurking in the shadows and wielding dark magic. In this narrative, humanity has unleashed a force it doesn't fully understand, one poised to destroy it. To the fearful, AI resembles Lovecraft's Great Old Ones: mighty, intelligent beings so far beyond human compassion that they can seem malicious.

For those not closely following computer science research, Large Language Models (LLMs) like ChatGPT seem to have emerged out of nowhere, reinforcing the perception of dark magic: golems or djinni that powerful sorcerers believe they can control, only to be outsmarted by them. It is this perceived autonomy, rather than any actual autonomy, that fuels the fear. But AI is not a hidden power accidentally awakened from an ancient tomb; it is a forty-year-old algorithm now equipped with vastly more data, thanks to the declining costs of acquiring and processing it.

The constant discussion of AI as a "black box" and the calls for transparency and explainability can make these systems seem unknowable and even more mysterious. While the precise internal workings that produce an AI's output are opaque, researchers have a strong grasp of the form and function of its inputs and outputs. An AI cannot seize control of a system without human permission. In magical terms, AI is akin to a familiar, an animal companion that lives to serve its master, providing assistance and information. Humanity doesn't need a hero to stop an out-of-control AI. If an AI were truly to go rogue, the solution could be as simple as unplugging it, or throwing a bucket of water on it like the Wicked Witch of the West. This notion of an uncontrollable AI rests on the mistaken belief that these systems are, or will become, completely autonomous.

Horror stories often feature creatures relentlessly pursuing a singular, destructive goal. Much of the anxiety surrounding AI stems from a similar idea, taking the form of the "alignment problem," in which an AI's methods for achieving a goal do not align with reasonable, non-destructive approaches. For example, if an AI were instructed to "reduce human suffering," it might, like a literal-minded genie, fulfill this command by eliminating all unhappy people, thereby satisfying the literal request but not the intended purpose.

Indeed, this may be what an AI would write out as its solution in a chat window. However, AI systems are like very junior employees. While they might sound knowledgeable, it's akin to having vast "book smarts" with no practical experience and no authority to make any decisions. In a corporation, the goal is "make a profit." But large companies don't give such sweeping goals even to trusted senior staff, let alone nascent technologies, without breaking down tasks into much smaller, manageable chunks, such as "expand into a new market," which is then further decomposed into even smaller objectives. All these tasks are subject to rigorous project plans and management, with increasing oversight for larger initiatives. Any AI would be subject to similar constraints.

This oversight is why AI should not be seen as a mythical creature that might outsmart its keepers. Unlike the wily monsters of stories, AI lacks autonomy unless it is explicitly granted. Robust systems should be gated by control procedures, particularly where an alignment problem is possible: preemptive testing, monitoring, and corrections when a system misbehaves. Whether the actor is a human, a classical machine-learning model, or an artificial intelligence, granting it autonomy beyond its capabilities often leads to a PR disaster or a financial fiasco. Thus, systems are thoroughly tested before being deployed, and monitoring, including automated alerts and similar mechanisms, keeps them in line. Organizations should also operate according to the principle of least privilege, granting the system only the resources it needs to perform its task.
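To make this concrete, here is a minimal sketch (in Python, with invented action names and guardrail values rather than any real deployment) of the kind of gating just described: an AI's suggestion executes automatically only if it stays within an explicit allowlist and threshold, and anything else raises an alert for human review.

```python
# A minimal, illustrative gate around an AI's suggestions. All names and
# thresholds here are hypothetical, not taken from a real system.

ALLOWED_ACTIONS = {"reorder_stock", "adjust_price"}  # least privilege: short allowlist
MAX_PRICE_CHANGE = 0.10                              # guardrail: at most +/-10%

def review_suggestion(action, price_change, alert):
    """Return True only if the AI's suggestion may run without a human."""
    if action not in ALLOWED_ACTIONS:
        alert(f"blocked: '{action}' is outside the system's permissions")
        return False
    if abs(price_change) > MAX_PRICE_CHANGE:
        alert(f"escalated: price change {price_change:+.0%} exceeds the guardrail")
        return False
    return True

if __name__ == "__main__":
    log = lambda msg: print("[monitor]", msg)
    print(review_suggestion("adjust_price", 0.05, log))     # True: within bounds
    print(review_suggestion("adjust_price", 0.40, log))     # False: escalated
    print(review_suggestion("delete_inventory", 0.0, log))  # False: not permitted
```

The point is not these particular checks but the structure: the AI proposes, and a layer it cannot modify decides what actually happens.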

AI can sometimes act in unpredictable ways, especially during "black swan" events, situations so unusual that they were not represented in the AI's training data. For example, AI used for automated stock market trading, where speed is critical, has gone rogue several times before a human could catch it. When the unprecedented COVID-19 shutdown occurred, it disrupted many systems accustomed to the status quo. But the humans supervising those systems were aware of the extraordinary circumstances and could reduce the AI's independence and institute additional control procedures. Grocery stores' inventory management systems, for instance, struggled to forecast demand for items like masks and hand sanitizer; model administrators either heavily biased the systems to favor recent data over historical data for certain items or manually overrode the predictions. Additionally, anomaly detection systems alerted them that buying behavior was atypical.
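The remedies described above are often mundane in practice. The short sketch below (illustrative Python with made-up sales figures, not the retailers' actual systems) shows both ideas: a forecast that leans on recent data rather than history, and a simple statistical alert when observed demand falls far outside the historical pattern.

```python
# Illustrative only: recency-weighted forecasting plus a basic anomaly alert.
from statistics import mean, stdev

def forecast(history, recent, recency_weight=0.8):
    """Blend the long-run average with the recent average, favoring recent data."""
    return recency_weight * mean(recent) + (1 - recency_weight) * mean(history)

def is_anomalous(history, observed, threshold=3.0):
    """Flag demand more than `threshold` standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

if __name__ == "__main__":
    sanitizer_sales = [40, 38, 42, 41, 39, 43, 40]       # typical weekly units (made up)
    this_week = 400                                      # a pandemic-style spike
    print(is_anomalous(sanitizer_sales, this_week))      # True: alert a human
    print(round(forecast(sanitizer_sales, [400, 380])))  # forecast tilts toward recent demand
```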

To make the risk seem scary and existential, however, AI critics assume it is impossible to shut down a system its creators believe can be domesticated. Like The Thing, even if it appears to be stopped, other hidden instances could remain. Another frightening possibility is co-dependence: an organization becomes so reliant on an AI for essential functions that it refuses to shut the AI down even as its demands grow to unsustainable levels. Opinions differ on how close we are to anything intelligent enough to pose such a threat, or whether the existing technology could ever get there. But just as with a movie monster, if an AI truly intends to destroy us, eliminating it becomes the moral choice.

Horror stories often portray AI as a Frankenstein-like monster posing an existential threat to humanity. But these fears are overblown because, unlike a monster, AI systems always have a power button. The solution is not to fear AI but to treat it as a complex system to be controlled. Supervisors and workers need to monitor, regulate, and restrict AI autonomy, much as we would with any powerful human-run organization we don't fully trust. Operators of AI systems should strive for maximum transparency and interpretability, particularly so that unforeseen consequences can be recognized and handled. AI will likely make mistakes in ways that are new and unexpected compared to human errors. These are problems of scale, not extinction-level threats; the concerns about AI's faults are legitimate, but fears of human extinction belong to horror stories, not reality. The machine that is out to destroy humanity is the printer.

Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations do responsible AI.
