
AI Needs Consumer Choice, Not Bureaucratic Control
Much of the current AI regulatory push rests on an implicit distrust of the public.
A growing number of state legislatures are rushing to regulate artificial intelligence (AI) as if Americans were passive subjects in need of constant supervision rather than capable consumers with preferences, judgment, and agency. The premise behind many of these proposals is not merely that AI can cause harm — that much is true of any powerful technology — but that ordinary people cannot be trusted to navigate tradeoffs for themselves or their families. That premise misreads the public and, in doing so, undermines innovation, narrows consumer choice, and weakens the very protections legislators claim to strengthen.
Nowhere is this mindset more apparent than in recent efforts to regulate so-called AI companions. Bills proposed in states such as Utah and Washington would impose vague, open-ended standards on developers, ranging from requirements to exercise “reasonable care” when training AI tools to mandates that AI companions exhibit “appropriate” behavior. On paper, these standards may sound like good policy. In practice, they may prove technically infeasible and socially undesirable, inviting governments or tech companies to make sensitive decisions that belong to consumers: whether an AI tool aligns with their diverse values, beliefs, and needs.
In short, the result of the regulatory rush at the state level is not better protection but fewer options. Developers facing unclear liability boundaries do what rational actors always do: overcorrect. They narrow functionality, eliminate edge cases, and avoid serving users whose preferences might be misunderstood or politically sensitive. What emerges is a flattened marketplace that offers Americans fewer tools, fewer configurations, and fewer opportunities to decide for themselves how AI should fit into their lives.
This approach treats consumer AI as a problem to be solved rather than as another service best provided by a competitive, dynamic market that gives consumers autonomy and choice.
Legislators unsure how to govern AI should look to another market where consumers make consequential decisions with minimal government intervention: buying a new car. Imagine a world in which lawmakers decided that, because driving can be dangerous, every vehicle sold in America must be a four-door sedan engineered to maximize crash-test performance above all else. Safety would improve, but choice would collapse, and manufacturers that deviated from this uniform model would be forced out of the market. Pickup trucks, sports cars, compact vehicles, and electric runabouts would all disappear, not because consumers rejected them, but because regulators decided uniformity was safer than pluralism.
That is not how we regulate cars today, and for good reason. Instead, the government plays a limited but essential role. It sets baseline safety requirements. It mandates disclosures. It standardizes testing so consumers can make meaningful comparisons. But it does not dictate outcomes or force consumers to buy the cars the government believes align with its goals. Neither state nor federal actors decide which vehicle is “appropriate” for a family in rural Montana versus downtown Miami. The government trusts consumers to weigh tradeoffs for themselves, something, believe it or not, consumers are quite capable of doing. There’s a reason a father of four does not come home from the lot with a convertible.
In this market ecosystem, Americans rarely end up duped when buying a car. They do not routinely purchase vehicles that cannot meet their needs. Critically, that outcome is not the result of legal diktats but the product of a consumer environment that cultivates judgment rather than replacing it.
People grow up riding in cars. They learn, gradually and informally, how vehicles differ through observation and experience. By the time they are in the market themselves, they encounter a rich ecosystem of information: horsepower ratings, fuel economy metrics, crash-test scores, drivetrain options, and long-term reliability data. When they finally decide to buy a new car, they seek out more meaningful information. They test drive. They read reviews. They talk to friends. They consult experts. Buying a car is rarely impulsive because it is expensive, consequential, and familiar enough to invite careful thought.
Empirical data underscores this point. Studies consistently show that Americans spend significant time researching major purchases. Estimates suggest consumers devote roughly 15 hours, on average, to deciding which car to buy. They compare models, read reviews, and revisit decisions over weeks or even months. Importantly, this is not wasted time, but time spent aligning a product with one’s preferences, constraints, and long-term needs.
The same pattern appears in other high-stakes consumer decisions. Homebuyers, for example, spend well over 100 hours researching and evaluating options before committing. Even for less permanent purchases, Americans increasingly rely on reviews, side-by-side comparisons, and peer input. We have built a culture and a market infrastructure that assumes consumers can learn when the decision warrants it.
AI, by contrast, is often treated as if it demands the opposite approach. Even though AI tools may shape how people work, learn, communicate, and even form relationships for years to come, regulatory debates frequently default to paternalism. Lawmakers assume that users will not or cannot invest the time needed to understand these systems, and so the state must decide in advance which tools are acceptable.
This mismatch should concern us. If Americans are willing to spend 15 hours deciding which car to buy because the market supplies rich, comparable information, they could devote comparable effort to evaluating an AI system likely to become a core part of their professional and personal lives. The problem is not that consumers are incapable of that engagement. Rather, our regulatory frameworks have not been designed to encourage or support it.
Instead of fostering savvy AI consumers, many state proposals aim to eliminate the need for consumer judgment altogether. They substitute vague standards for informed choice and uniformity for transparency. In doing so, they short-circuit the very learning process that makes markets safer over time.
A better approach would mirror what works in car markets. Focus on disclosure rather than prescription. Ensure users understand what a system does, what it does not do, and what tradeoffs it entails. Standardize how tools are compared so consumers can make meaningful distinctions. Encourage competition so that products improve not because regulators mandate it, but because users reward quality, reliability, and alignment with their values.
Crucially, this approach recognizes that not all consumers will behave the same way. That is a feature, not a bug. Some people will approach AI casually, gravitating toward a familiar brand or a tool recommended by a friend. Others will be meticulous, reading technical documentation, scrutinizing safeguards, and comparing systems side by side. Both approaches are legitimate. A healthy market accommodates both.
Heavy-handed regulation does not. When compliance standards are unclear, developers build to the most risk-averse interpretation. That often means excluding legitimate use cases, marginal users, or innovative features that do not fit neatly within a regulator’s comfort zone. The irony is that this leaves consumers worse off: fewer choices, slower improvement, and fewer opportunities to discover which tools actually work best in practice.
There is also a deeper normative issue at stake. Much of the current AI regulatory push rests on an implicit distrust of the public. It assumes that Americans, uniquely among consumers, cannot be trusted to make informed decisions about digital tools. This is difficult to reconcile with how we treat people in other consequential markets, from buying cars to buying homes. What separates these domains is not Americans’ ability to decide but lawmakers’ willingness to trust them to do so. Too often, that trust is not extended when AI enters the picture.
That distrust carries consequences. When regulation replaces individual judgment with bureaucratic judgment, it narrows the space for pluralism. AI tools, like cars, are not one-size-fits-all. Different families will have different comfort levels, goals, and values. A regulatory framework that refuses to accommodate that diversity is not consumer protection; it is consumer control.
If states want to promote responsible AI adoption, they should focus on cultivating savvy AI consumers. That means clear, standardized disclosures of capabilities and limitations. It means plain-language explanations of risks. It also means comparison tools, review ecosystems, and mechanisms for redress when things go wrong. These measures respect agency while addressing legitimate concerns.
The alternative treats consumers as problems to be managed, leaving Americans with fewer choices and less control over technologies that increasingly shape their lives. We already know how to do better. The lesson from car markets is not to ignore differences among AI tools, but to recognize that trust, transparency, and choice can coexist with meaningful protection. The task now is to apply those lessons to AI rather than abandon them out of fear.
Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.
