Topic
Pursuit of Happiness
Published on
Dec 22, 2025
Contributors
Rachel Lomasky
Friedrich Hayek

Hayek’s Rules for AI

Summary
Hayek's Complexity Theory can provide essential frameworks for understanding the emergent systems we are building with LLMs, enabling us to solve novel, challenging problems that neither humans nor computers could address alone.

The rise of Large Language Models (LLMs) is profoundly blurring the boundaries between humans and computers. Traditionally, computer programs were defined by explicit rules codified in software. Although potentially complex, their code made their behavior amenable to documentation, control, and precise prediction with sufficient effort. In contrast, modern AI has become less a product of human design and more a product of human action because of the vast, human-created datasets on which it is trained. Consequently, the behavior of LLMs is often subjective, unpredictable, and nuanced, necessitating a social science perspective, traditionally used to study human behavior, to fully comprehend and govern these new digital entities. There's significant public and academic focus on the societal impact of AI, e.g., bias and privacy concerns, the transformation of education, and human-computer interaction. However, the application of social science to the study of LLMs remains relatively unexplored.

The top-down determinism of traditional software does not apply to Large Language Models (LLMs). Instead, LLMs learn by processing massive, human-created datasets, including books, web texts, social media posts, and transcripts, which heavily reflect real-world social interactions. Consequently, the training data functions as a mirror of society, encoding collective human behavior, cultural norms, language habits, and biases. To truly understand an LLM's behavior, one must understand its training data, and to understand the data, one must first understand the social processes by which humans generate that data.

Similarly, our interactions with LLMs are inherently human-influenced. The dynamic has fundamentally shifted; we are no longer required to speak the computer's language, but the computer is speaking ours. Asking an LLM a question is not a database query, where the answer is determined by the formalism of the computer code used to request it. Rather, it is natural language communication in which we bring our conscious and unconscious biases. Just as we intuitively know how to phrase a question to a human to achieve a desired outcome, we employ techniques such as prompt engineering with LLMs to shape the response we receive. This makes our interactions with LLMs closer to social exchanges than to mechanical queries.

Additionally, LLMs, much like human societies, are dynamic systems. With every retraining cycle, they receive an updated feed from the ever-changing internet, incorporating larger and larger subsections of the evolving digital world. Many models also incorporate explicit human feedback, e.g., Reinforcement Learning from Human Feedback, to shape their behavior. Even small changes to the input or the training process can lead to unpredictable, large-scale effects on the model's overall output and functionality.

The dynamic, complex nature of LLMs finds a compelling parallel in Friedrich Hayek's 1974 Nobel Lecture, "The Pretence of Knowledge," in which he explored economic and political systems characterized by billions of variables and interconnections. This structure aligns with the organization of LLMs. Hayek concluded that while we can understand the general principles that govern complex systems and predict the abstract consequences of interventions, we cannot predict the specific outcomes or the precise states of all their elements. In his broader theory of complexity, Hayek defines two types of order:

  • Taxis: A conscious, central design or constructed order. Traditional software falls into this category.
  • Kosmos: A complex, emergent, spontaneous, and functional order that arises from decentralized interactions and vast, unstructured data. LLMs function as kosmos.

The power of LLMs lies in their emergent behavior, which is far more complex than any human could consciously design. Treating LLMs as a form of kosmos represents the key conceptual breakthrough that has accelerated AI development.

In Hayek's concept of kosmos, order emerges through decentralized incentives, where agents are rewarded for accurate predictions and penalized for incorrect choices. The LLM-equivalent of these incentives is the fundamental training task: predicting the next word in a sequence, mirroring how humans constantly predict the future state of their world. Through this mechanism, agents (the model's parameters) learn the statistical relationships and pathways that underlie these successful predictions, thereby constructing an internal order. LLMs derive their vast capabilities from the complex statistical relationships observed across their entire training data. The interplay of words, context, typical usage, and linguistic subtleties enables them to effectively capture the functional, practical knowledge embedded in human communication, which is inherently tricky to formalize or manually code.
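The prediction-as-incentive idea can be made concrete with a deliberately tiny sketch. This is not how an LLM works internally; it is the simplest statistical analogue of the same training signal, counting which word tends to follow which in a toy corpus:

```python
from collections import Counter, defaultdict

# A minimal analogue of the "predict the next word" objective.
# Real LLMs adjust billions of parameters; here the "learned order"
# is just a table of bigram frequencies from a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally which word follows each word, and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" twice; "mat" and "fish" once each)
```

Even in this trivial form, the "knowledge" lives in frequencies accumulated from data rather than in rules anyone wrote down, which is the essence of the kosmos framing above.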

The complexity of LLMs mirrors that of the human brain. For a given stimulus, we can observe which brain region "lights up," but we cannot discern the exact thought. This opacity is the inherent cost of kosmos: it allows us to know the inputs and outputs but leaves the internal computational process opaque. As a result, controlling and auditing these systems becomes profoundly difficult, and precise, deterministic, "point change" adjustments are impossible. Consequently, LLMs are not perfectly controlled puppets, but are inherently buggy like humans.

Freed from the scalability constraints inherent in taxis, the emergent kosmos of LLMs unlocks massive, powerful abilities, enabling them to tackle many of the most challenging technical problems. Some of these applications are garnering major headlines, such as the ability to predict the properties of new alloys, which previously required slow, intensive simulations or massive experimental cycles. Likewise, immense progress has been made by applying LLMs to accelerate drug discovery. Another extremely important development leverages LLMs' fluency in both machine and human languages to detect vulnerabilities in compiled software, a critical task for security and code robustness.

There has been considerable publicity about "vibe coding," in which LLMs generate code in high-level programming languages, such as C and Java; the models ingest code and find statistical regularities in it, just as they do with natural language. However, once that code is compiled into a program (a binary), analysis becomes far harder. The high-level code is compiled into assembly language, a low-level language that computers execute efficiently. Unlike natural and high-level programming languages, assembly is difficult for humans to understand and reason about. While reversers, people who specialize in this skill, can analyze assembly, the process remains tedious even for the most highly trained experts. Because the work is so complex and labor-intensive, vulnerabilities can be inserted, either maliciously or accidentally, and such flaws are correspondingly difficult to discover and remediate. These flaws can be exploited by malicious actors, including nation-states, resulting in severe consequences, such as data theft and system compromise.
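The gap between readable source and opaque low-level instructions can be felt even without touching machine assembly. As a rough analogy, Python's standard `dis` module disassembles a one-line function into its stack-machine bytecode; reconstructing intent from such an instruction stream is a miniature version of the reverse engineering described above:

```python
import dis

def is_admin(user):
    # One readable line of source...
    return user.get("role") == "admin"

# ...becomes a stream of low-level instructions (LOAD_FAST,
# COMPARE_OP, RETURN_VALUE, ...). Recovering "check whether the
# user's role is admin" from this listing alone is the kind of
# tedious inference reversers perform, magnified enormously at
# the level of real machine code.
dis.dis(is_admin)
```

Python bytecode is far friendlier than native assembly, so this understates the difficulty; the point is only that meaning visible in source code evaporates quickly one layer down.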

Traditional software security methods cannot keep up with the complexity of compiled code, and vibe coding is increasing both the relative frequency of these vulnerabilities, since the coding models are less security-aware, and the sheer volume of software that must be checked for security issues. Previously, software vulnerabilities were detected using static analysis tools that rely on a set of complex, top-down rules. Like all taxis systems, this approach suffered from poor generalization and adaptability, requiring an intractable number of formal semantic rules to be defined and implemented. However, novel breakthroughs are now being made with the help of LLMs, capitalizing on their fluency in both human and computer languages. These new systems leverage the emergent kosmos of LLMs, allowing us to build security solutions that would have been impossible just a few years ago.

Given the dynamic, emergent nature of LLMs, computer scientists must expand their studies beyond the "hard science" of deterministic rules. The field must increasingly incorporate the study of human behavior and of how complexity emerges and is represented within these systems. AI is now at least partially a social science, a product of human complexity and profoundly shaped by human behavior and interaction. Social science, particularly Hayek's Complexity Theory, can provide essential frameworks for understanding the emergent systems we are building. More importantly, applying these insights can strengthen LLMs, enabling us to solve novel, challenging problems that neither humans nor computers could address alone.

Rachel Lomasky is Head of AI at Delphos Labs, a company that analyzes compiled code to deliver advanced malware analysis, third-party risk evaluation, and supply chain integrity.
