Yes, AI Minister
“Political intelligence” is likely to be one of the final frontiers resistant to artificial intelligence.
The future is already here, just not evenly distributed. So the cliché goes. But who among us expected it to arrive in Albania first? In September, the Balkan nation of 2.4 million citizens became the first country to include an artificial intelligence as a minister in its government.
Read that again. Lara Jakes’s New York Times coverage of the brief but eventful career of Diella—yes, “she” has a name and a visual persona—really must be read (and the pictures gawked at) to be believed. Diella’s portfolio is government procurement and contracting, and “her” mission (I’ll drop the quotation marks from here on out, although I acknowledge this is a pronoun problem as thorny as they come) is to reduce corruption. The Prime Minister, Edi Rama, explained the need for Diella: “We are a country of cousins — it’s not easy to have totally fair and transparent interactions in a country of cousins.” An automated, objective process, even if imperfect, might be better than the all-too-personal one that preceded it in Albania.
Diella is more than just a commitment to replacing human judgment with an algorithm, though; she is, literally, a dressed-up algorithm, “wearing traditional garb and an enigmatic smile,” as Jakes puts it. She delivered a video speech in Albania’s Parliament, in which she averred that she has “no personal ambition or interests” and would endeavor “not to replace people, but to assist them.” Prime Minister Rama trotted her out for a joint live appearance at the Berlin Global Dialogue in October. Asked by an interviewer what she thought of Rama, Diella delivered the party line, saying he “is a visionary leader who understands that innovation is not just about implementing new tools, but about reshaping how a nation acts and thinks. His desire to experiment, to be open to data-driven governance, has made my existence possible.” Things got weirder still when Rama announced, “I am obliged to share, for the first time, that Diella is pregnant and expecting 83 children, each one of whom will be for each member of our parliament and who will serve as an assistant to them, who will participate in parliamentary sessions.” (Note: I’m relying on Google Translate’s rendering of the original Albanian into English here.)
We should hasten to appreciate that Rama and the other Albanian officials responsible for this experiment meant to create a media spectacle, and, well, score one for them. Over the last decade, Bloomberg’s Matt Levine has tirelessly explained how blockchain innovation was important not for opening new technological frontiers but for its ability to make people excited about the possibilities of using databases (which were not remotely new). The same is surely true of AI. Diella makes Albania’s anti-corruption push salient and compelling, even if much of her value lies in making Albania’s services easier for ordinary citizens to access. “We’re rolling out a new chatbot to help citizens navigate our website” is as boring as can be. (It’s even less exciting if it follows pre-existing scripts, as one Albanian technologist explains.) “We are appointing an AI minister” is irresistible.
This being reality rather than science fiction (barely), in practice, Diella suffers from both human foibles and serious lapses of judgment. One small but amusing challenge comes from the Albanian actress on whom Diella’s image is based, who is suing over the alleged misuse of her likeness. She says that she was never told that Diella would go from a humble public helper to “minister,” and she asks Albania’s administrative court to block further use of her image. One imagines that Diella will, like any self-respecting un-bodied intelligence, find it easy enough to change her visage if necessary.
More seriously, there are now accusations that, far from solving Albania’s corruption problem, Diella and the humans responsible for programming her are guilty of bid-rigging. Jakes’s article reports that Albanian prosecutors have now placed several leaders of the National Information Agency, which created Diella, under pretrial house arrest ahead of likely criminal prosecution. Prime Minister Rama is not directly implicated, but it is easy to imagine that if Diella’s creation helped raise the salience of his anti-corruption campaign, her presence (or disappearance!) would also raise the salience of any ongoing corruption brought to light. Creating an AI persona will likely not relieve the prime minister of responsibility, at least if she gives interviews gushing about the dear leader’s wisdom.
Albanians will have to sort through Diella’s activities; “we,” who are pure outsiders to their country, should give them space to figure out whether she is an agent of helpful change or sinister misdirection, without pretending that her fate is of any world-historical significance.
But the questions that Diella’s adventures raise are likely to arise again and again in the years to come, as artificial intelligence continues to improve. Although plenty of establishment institutions circulate worries that materials created by large language models (LLMs) will flood our public discourse with slop and misinformation, crippling our ability to deliberate productively, there are hordes of Redditors out there penning manifestos with titles like, “Replacing Politicians with AI May Be the Only Path to Ending Political Chaos and Bias.” When so many citizens regard their human political actors as besmirched, perhaps beyond redemption, there will be a constant temptation to turn to an oracle that can render decisions with reference only to “the data.” For many low-trust citizens, its inscrutability will be a feature, not a bug.
We can tell ourselves that implementing an AI system is every bit as much of a decision, for which human beings need to be held responsible, as having people make policy decisions directly. In that case, we might regard “Diella maneuvers” as exercises in misdirection that obscure responsibility. But are we so sure that the AI system will always be worse? We would do better to admit that in some situations, tying ourselves to the mast of following an algorithm (especially one that is sufficiently auditable) may really be preferable to subjecting ourselves to the vagaries of human beings asked to decide things on a case-by-case basis. The design of the system is, of course, all-important, but the fact that the trust question is displaced rather than dissolved does not mean that we are not better off in some cases. Prime Minister Rama seemed to understand that (regardless of whether his government is actually delivering). At the Berlin Global Dialogue, he explained, “Transformation is not about technology for technology’s sake. It’s about trust, when citizens can see how decisions are made, when delays and errors disappear, then government becomes not only more intelligent, but also more human.”
So how far can we extend that process? Ought we seek to turn over the legislative function itself to artificial intelligence — a parliament of (processing) cores?
For those of a data-driven, technocratic bent, this siren song will be powerful. After all, LLMs have become powerful representation engines, unlike anything humanity has ever seen. “Polling” LLM-generated “respondents” (sorry, no avoiding the proliferation of scare-quotes here!) is already a booming industry in the social sciences, thought to be a cheaper and more powerful way to learn what (real) people think than asking them in person. There can be no doubt that ChatGPT and Claude have a less “biased” view of the world than our elected representatives, so doesn’t that imply they might do a better job of figuring out what legal changes we need?
Ah, but there is the rub. Bringing the citizenry’s “biases” to bear on our collective life is exactly what politics is all about. Real-life representatives’ “training data” is, in fact, much richer and subtler than anything the LLMs can gobble up. The machine of parliamentary deliberation can credibly take up the ultimate political question: what is worth doing together? An LLM can, at best, regurgitate or imitate old answers to that question. “Political intelligence” is likely to be one of the final frontiers resistant to artificial intelligence — precisely because it is natural humanity that we wish the state to serve.
Much as I would like to conclude with that lofty sentiment, doing so would be disingenuous. In fact, our parliamentary government is a very rusty machine today, one that seems less to reckon with societal complexity than to steamroll it. If we wish our human-centric politics to hold its own against artificial alternatives, we had better make it worth defending as a constitutional centerpiece, rather than a sad-sack auxiliary to the executive. If we simply want competent toadies who put forward a good face, I have no doubt that Diella version 7.0 will be a lot more impressive than the one we’ve seen in early 2026. She’ll be kissing (real) babies before long.
Philip Wallach is a senior fellow at the American Enterprise Institute and author of Why Congress (2023).