Pursuit of Happiness
Dec 22, 2025

Hayek’s Rules for AI

Rachel Lomasky

Summary
Hayek's Complexity Theory can provide essential frameworks for understanding the emergent systems we are building with LLMs, enabling us to solve novel, challenging problems that neither humans nor computers could address alone.

The rise of Large Language Models (LLMs) is profoundly blurring the boundaries between humans and computers. Traditionally, computer programs were defined by explicit rules codified in software. Although potentially complex, their code made their behavior amenable to documentation, control, and precise prediction with sufficient effort. In contrast, modern AI has become less a product of human design and more a product of human action because of the vast, human-created datasets on which it is trained. Consequently, the behavior of LLMs is often subjective, unpredictable, and nuanced, necessitating a social science perspective, traditionally used to study human behavior, to fully comprehend and govern these new digital entities. There's significant public and academic focus on the societal impact of AI, e.g., bias and privacy concerns, the transformation of education, and human-computer interaction. However, the application of social science to the study of LLMs remains relatively unexplored.

The top-down determinism of traditional software does not apply to LLMs. Instead, LLMs learn by processing massive, human-created datasets, including books, web texts, social media posts, and transcripts, which heavily reflect real-world social interactions. Consequently, the training data functions as a mirror of society, encoding collective human behavior, cultural norms, language habits, and biases. To truly understand an LLM's behavior, one must understand its training data, and to understand the data, one must first understand the social processes by which humans generate that data.

Similarly, our interactions with LLMs are inherently human-influenced. The dynamic has fundamentally shifted; we are no longer required to speak the computer's language, because the computer now speaks ours. Asking an LLM a question is not a database query, where the answer is determined by the formalism of the computer code used to request it. Rather, it is natural language communication into which we bring our conscious and unconscious biases. Just as we intuitively know how to phrase a question to a human to achieve a desired outcome, we employ techniques such as prompt engineering with LLMs to shape the response we receive. This makes our interactions with LLMs less like queries and more like social exchanges.
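
For instance, the same request can be phrased in ways that predictably shape the answer, much as we tailor a question to a particular listener. The sketch below is purely illustrative; ask_llm is a hypothetical stand-in for any chat-style model interface, not a real library call.

```python
# A sketch of prompt engineering: the same request, phrased two ways.
# `ask_llm` is a hypothetical stand-in for any chat-style LLM API;
# nothing here depends on a particular vendor.
def ask_llm(prompt: str) -> str:
    ...  # placeholder: wire up a real LLM API here

# Blunt phrasing: the model must guess the audience, tone, and length.
blunt = "Explain inflation."

# Engineered phrasing: social framing steers the reply, much as
# rephrasing a question steers a human conversation.
engineered = (
    "You are a patient economics tutor. In three short paragraphs, "
    "explain inflation to a high-school student, ending with one "
    "concrete, everyday example."
)
```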

Additionally, LLMs, much like human societies, are dynamic systems. With every retraining cycle, they receive an updated feed from the ever-changing internet, incorporating ever-larger portions of the evolving digital world. Many models also incorporate explicit human feedback, e.g., Reinforcement Learning from Human Feedback (RLHF), to shape their behavior. Even small changes to the input or the training process can lead to unpredictable, large-scale effects on the model's overall output and functionality.
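
As a loose illustration of that feedback loop (a toy sketch, not any lab's actual training code), repeated human preference judgments can be accumulated into scores that gradually reshape which behaviors a model favors:

```python
# A toy illustration of RLHF-style preference feedback; not a real
# training loop. Two candidate response styles start level; repeated
# human comparisons gradually tilt the model toward the preferred one.
scores = {"concise and sourced": 0.0, "rambling": 0.0}

def human_prefers(option_a: str, option_b: str) -> str:
    # Stand-in for a human annotator's comparison label.
    return "concise and sourced"

for _ in range(1000):
    a, b = "concise and sourced", "rambling"
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    scores[winner] += 0.01  # reward the preferred behavior
    scores[loser] -= 0.01   # penalize the alternative

print(scores)  # the preferred style now dominates
```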

The dynamic, complex nature of LLMs finds a compelling parallel in Friedrich Hayek's 1974 Nobel Lecture, "The Pretence of Knowledge," in which he explored economic and political systems characterized by billions of variables and interconnections. This structure aligns with the organization of LLMs. Hayek concluded that while we can understand the general principles that govern complex systems and predict the abstract consequences of interventions, we cannot predict the specific outcomes or the precise states of all their elements. In his broader theory of complexity, Hayek defines two types of order:

  • Taxis: A conscious, central design or constructed order. Traditional software falls into this category.
  • Kosmos: A complex, emergent, spontaneous, and functional order that arises from decentralized interactions and vast, unstructured data. LLMs function as kosmos.

The power of LLMs lies in their emergent behavior, which is far more complex than any human could consciously design. Treating LLMs as a form of kosmos represents the key conceptual breakthrough that has accelerated AI development.

In Hayek's concept of kosmos, order emerges through decentralized incentives, where agents are rewarded for accurate predictions and penalized for incorrect choices. The LLM-equivalent of these incentives is the fundamental training task: predicting the next word in a sequence, mirroring how humans constantly predict the future state of their world. Through this mechanism, agents (the model's parameters) learn the statistical relationships and pathways that underlie these successful predictions, thereby constructing an internal order. LLMs derive their vast capabilities from the complex statistical relationships observed across their entire training data. The interplay of words, context, typical usage, and linguistic subtleties enables them to effectively capture the functional, practical knowledge embedded in human communication, which is inherently tricky to formalize or manually code.
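
To make the incentive concrete, here is the crudest possible next-word predictor: a bigram counter over a toy corpus. This is a minimal sketch; real LLMs replace the count table with billions of learned parameters, but the reward structure, being scored on predicting what comes next, is the same in spirit.

```python
# A minimal next-word predictor: count which word follows which in a
# toy corpus, then predict the most frequent successor. Real LLMs learn
# billions of parameters instead of a count table, but the incentive,
# predicting the next token well, is the same.
from collections import Counter, defaultdict

corpus = (
    "the market coordinates knowledge . no single mind holds the "
    "knowledge the market coordinates ."
).split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1  # observed continuations accumulate "reward"

def predict(word: str) -> str:
    # Choose the statistically most likely continuation seen in training.
    return successors[word].most_common(1)[0][0]

print(predict("the"))     # 'market' (seen twice, vs. 'knowledge' once)
print(predict("market"))  # 'coordinates'
```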

The complexity of LLMs mirrors that of the human brain. For a given stimulus, we can observe which brain region "lights up," but we cannot discern the exact thought. This opacity is the inherent cost of kosmos: it allows us to know the inputs and outputs but leaves the internal computational process opaque. As a result, controlling and auditing these systems becomes profoundly difficult, and precise, deterministic, "point change" adjustments are impossible. Consequently, LLMs are not perfectly controlled puppets; they are inherently buggy, much as humans are.

Freed from the scalability constraints inherent in taxis, the emergent kosmos of LLMs unlocks massive, powerful abilities, enabling them to tackle many of the most challenging technical problems. Some of these applications are garnering major headlines, such as the ability to predict the properties of new alloys, which previously required slow, intensive simulations or massive experimental cycles. Likewise, immense progress has been made by applying LLMs to accelerate drug discovery. Another extremely important development leverages LLMs' fluency in both machine and human languages to detect vulnerabilities in compiled software, a critical task for security and code robustness.

There has been considerable publicity about "vibe coding," in which LLMs generate code in high-level programming languages such as C and Java; the models ingest code and find statistical regularities, just as they do with natural language. However, once that code is compiled into a program (a binary), analysis becomes vastly harder. The high-level code is translated into assembly language, a low-level representation that computers can execute efficiently. Unlike natural and high-level programming languages, assembly is complicated for humans to understand and reason about. While reversers, people who specialize in this skill, can analyze assembly, the process remains tedious even for the most highly trained experts. Because the work is so complex and labor-intensive, vulnerabilities can be inserted, either maliciously or accidentally, and are correspondingly hard to discover and remediate. These flaws can be exploited by malicious actors, including nation-states, resulting in severe consequences such as data theft and system compromise.
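
The gap is visible even without leaving Python: the standard library's dis module exposes the low-level instructions behind a one-line function. This is only a gentle analogue (Python bytecode, not native assembly), and a real compiled binary is far denser still, stripped of names, types, and structure.

```python
# A glimpse of the source-to-low-level gap using Python's standard
# dis module. Bytecode is a gentle analogue of the native assembly a
# reverser confronts; real binaries are denser and lose names entirely.
import dis

def is_admin(user_level: int) -> bool:
    return user_level >= 10

dis.dis(is_admin)
# Output varies by Python version, but looks roughly like:
#   LOAD_FAST    user_level
#   LOAD_CONST   10
#   COMPARE_OP   >=
#   RETURN_VALUE
```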

Traditional software security methods are unable to keep up with the complexity of compiled code, and vibe coding compounds the problem in two ways: it raises the frequency of these vulnerabilities, as the coding models are less security-aware, and its productivity gains mean far more software must be checked for security issues. Previously, software vulnerabilities were detected using static analysis tools that rely on a set of complex, top-down rules. Like all taxis systems, this approach suffered from poor generalization and adaptability, requiring an intractable number of formal semantic rules to be defined and implemented. However, novel breakthroughs are now being made with the help of LLMs, capitalizing on their fluency in both human and computer languages. These new systems leverage the emergent kosmos of LLMs, allowing us to build security solutions that would have been impossible just a few years ago.
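
To see why the taxis approach strains, consider a caricature of a rule-based scanner (a hedged sketch, not any real tool's implementation): it is a hand-written list of patterns, and it catches exactly what its authors anticipated, nothing more.

```python
# A caricature of rule-based (taxis) vulnerability scanning: hand-
# written patterns over source text. It flags exactly what its authors
# anticipated and silently misses everything else, which is why such
# rule sets grow without bound yet still fail to generalize.
import re

RULES = {
    r"\bstrcpy\s*\(": "unbounded copy; prefer strncpy/strlcpy",
    r"\bgets\s*\(":   "gets() is unsafe; prefer fgets",
    r"\bsystem\s*\(": "possible command injection",
}

def scan(source: str) -> list[str]:
    findings = []
    for pattern, message in RULES.items():
        for match in re.finditer(pattern, source):
            line = source[: match.start()].count("\n") + 1
            findings.append(f"line {line}: {message}")
    return findings

code = 'void f(char *s) { char buf[8]; strcpy(buf, s); }'
print(scan(code))  # catches strcpy; a memcpy with a bad length sails through
```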

Given the dynamic, emergent nature of LLMs, computer scientists must expand their studies beyond the "hard science" of deterministic rules. The discipline must increasingly incorporate the study of human behavior and of how complexity emerges and is represented within these systems. AI is now at least partially a social science, a product of human complexity and profoundly shaped by human behavior and interaction. Social science, particularly Hayek's Complexity Theory, can provide essential frameworks for understanding the emergent systems we are building. More importantly, applying these insights can strengthen LLMs, enabling us to solve novel, challenging problems that neither humans nor computers could address alone.

Rachel Lomasky is Head of AI at Delphos Labs, a company that analyzes compiled code to deliver advanced malware analysis, third-party risk evaluation, and supply chain integrity.
