Picture of Diella, the Albanian Minister of State for Artificial Intelligence

Diella: The AI Government Minister

In September 2025, Albania made global headlines when Prime Minister Edi Rama introduced a new cabinet member: Diella, an AI-generated “minister” tasked with overseeing public procurement and fighting corruption.

Rama’s argument was straightforward. Human ministers can be biased, overworked, even corrupt. Algorithms, by contrast, do not sleep, take bribes, or play politics. In theory, Diella could bring consistency, transparency, and incorruptibility to one of government’s most abuse-prone areas – public contracting.

She has already addressed Albania’s parliament on screen, speaking through a synthetic face and voice. She represents, quite literally, a new form of political being: one that does not age, tire, or chase popularity, but instead calculates probabilities, delivers on policy, and frankly, gets the job done.
AI-generated minister addresses Albanian parliament for first time – ABC News

From Digital Assistant to Digital Minister

Diella was originally launched as a virtual assistant in January 2025, helping users access government services on Albania’s e-governance platform, e-Albania. By the time of her promotion to minister, she had reportedly processed 36,600 digital documents and provided nearly 1,000 services through the platform.
Albanian Government Council of Ministers – Diella

Giving the AI a human face, voice, and official position was clearly a bold move, and the story quickly went viral. The opposition called it a publicity stunt and asked “Who will control Diella?”, perhaps missing the irony that somebody controlling human ministers and civil servants was a significant part of why corruption was a problem in the first place.

Whatever the political take on Diella, governments, like large businesses, are adopting AI to automate their services, and that isn’t going to stop any time soon. It is a subject worth thinking about more deeply.

When Government Becomes Too Complex for the Human Brain

To understand why governments are looking to AI to automate services, we need to admit an uncomfortable truth: the complexity of modern government, public sector services, the legal system, and international affairs has become too much for any one human mind to process.

Scale and Complexity

Anyone who has worked in IT or in a large corporation will understand that the more complex a system is, the more rules it has, the more approvals and sign-offs it needs, and the more people it involves, the harder it becomes to change anything. If you think organising a team meal is difficult, imagine that on the scale of an entire country.

Ministers are expected to grasp energy policy, cybersecurity, healthcare funding, global supply chains, and environmental modelling, all while keeping up with daily events, giving media interviews, and responding to social media storms before lunch, not to mention dealing with local issues raised by their constituents.

Budgets run into billions, policies have effects across continents, a misjudged regulation can destabilise ecosystems, and a misspoken word in an interview can send financial markets plunging. Even the best-intentioned leaders are, to some degree, flying blind: without a full understanding of the details, they react instead of leading, dependent on the people around them to keep track of what is going on and hoping those people are not just pretending in order to keep their jobs.

The Problem of Delivery

Politicians are, by nature, people of ideas and speech. They talk in terms of hopes and promises, campaigning on visions of a better tomorrow, or at least condemnation of a fictional worse one. But those promises fade on contact with the reality of delivery. The bureaucracy absorbs their energy: working out who is actually responsible for the policy area they want to change, and what legislation is required to change it, comes before they even get to considering how to change it and what the unintended consequences might be.

When election promises are not delivered, or do not produce the promised outcomes, voters feel deceived, elect someone else, and the cycle begins again with new politicians similarly unable to cope with the workload.

The problem then is how to deliver change in a system that is so complex that no single person actually understands how it works. Enter AI…

Democracy in the Age of AI Automation

Political history can be told as a story of who decides.

  • Autocracy gives power to one
  • Aristocracy to the best (not my definition, that is literally what it means in Greek)
  • Plutocracy to the rich
  • Theocracy to the divine
  • Technocracy to the experts (though someone still has to decide which experts)
  • And democracy, to everyone (it’s more complicated than that, obviously)

But in an ever more complex and fast-moving world, the bigger question is: who delivers what the leaders decide?

Imagine a system where citizens vote for a party and its manifesto commitments. That shouldn’t be difficult; it’s pretty much what we have in the UK (setting aside arguments about whether people actually vote for a local MP, a party, a prime minister, or what is written in the manifesto).

In the current world, an army of civil servants, committees, and legal experts takes months or even years to work out what a proposed change means, how to enact it, and how to amend all the laws and regulations to allow it to happen. They must then persuade public bodies and private citizens to actually follow the new rules, and the legal and regulatory systems to enforce them, all while keeping the people and the media on side.

Manifestos as AI Instructions

Suppose the manifesto becomes the input to the AI’s processing instructions. The algorithms have access to all available legislation and case law, all operating procedures, and real-time data on every government operation. The AI would (overseen by humans, obviously) figure out how to allocate funds, streamline projects, amend legislation, identify inefficiencies, monitor outcomes, track progress, and report back to the public.
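To make that a little more concrete, here is a minimal sketch, assuming manifesto pledges can be boiled down to measurable targets. Everything in it is hypothetical: the Commitment record, the progress_report function, and the sample figures are invented for illustration, and a real system would also have to draw on legislation, case law, and live operational data.

```python
# Purely illustrative sketch of the "manifesto as input" idea.
# All names and figures are hypothetical, not a description of any real system.
from dataclasses import dataclass


@dataclass
class Commitment:
    policy_area: str   # e.g. "housing", "health"
    pledge: str        # the manifesto wording
    target: float      # the promised, measurable level
    achieved: float    # latest value from operational data


def progress_report(commitments: list[Commitment]) -> str:
    """Summarise delivery against each pledge for public reporting."""
    lines = []
    for c in commitments:
        pct = 100 * c.achieved / c.target if c.target else 0.0
        status = "on track" if pct >= 90 else "behind"
        lines.append(f"{c.policy_area}: {c.pledge} - {pct:.0f}% of target ({status})")
    return "\n".join(lines)


if __name__ == "__main__":
    manifesto = [
        Commitment("housing", "Build 300,000 new homes", 300_000, 214_000),
        Commitment("health", "Cut waiting lists by 20%", 20, 19),
    ]
    print(progress_report(manifesto))
```

The hard part, of course, is everything the sketch leaves out: translating vague political pledges into measurable targets in the first place, and wiring those targets to reliable real-time data.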

The politicians’ role would evolve. They would still debate values, set priorities, and make moral choices (one would hope). But the administrative side, the exhausting machinery of delivery, would be automated, at least to some degree. All work would be checked and double-checked, and processing would continue 24 hours a day, 365 days a year, with no need for sleep, no hangovers, and no annual leave or sick days.

Arguments about humans losing their jobs would of course be made, but one way or another, humans will need to adapt to changing roles alongside this technology. It has to be said that, in the private sector to date, the scale of job losses due to the implementation of AI has not come close to matching the hype.

Governments around the world are looking at AI technology to gain a competitive edge. It is totally plausible that countries that adopt AI will gain productivity and economic advantages over those that don’t, and on a global scale, that could have wide-reaching implications.

Politics as Vision, Delivery as Engineering

This would not be rule by AI. The politicians would remain in charge while delivery is run by AI. Politics would be the vision; delivery would become engineering. The ballot box would still choose the direction, but algorithms would assess it, deliver it, and report on it. If policies did not produce the benefits the politicians claimed, they would be the ones held answerable.

The Double-Edged Sword of Efficient Political Delivery

A significant dilemma stems from the realisation that the same system that could deliver high-quality, efficient governance could also deliver high-quality, efficient tyranny.

When Algorithms Serve the Public Good

A benevolent leader might use AI to allocate resources fairly, eliminate waste, and ensure that citizens receive timely, data-driven support. Public services could become astonishingly efficient with every transaction logged, every outcome measured, every mistake instantly corrected.

A digital minister would not sleep, lose interest, run for re-election, or play factional games. They would calmly, quietly, and efficiently get the job done with all the necessary information at their virtual fingertips.

When Algorithms Serve Public Control

But imagine that same infrastructure in malevolent hands. Databases that optimise welfare could just as easily map dissent. The algorithms that ensure fairness could be instructed to discriminate against political opponents and funnel money to allies. A surveillance network designed as a safety net could become a tool of control.

Once an AI system is given authority, its obedience is total; it will do precisely what it is told, without hesitation, conscience, or rebellion. It will allow the elected government to actually govern, whatever that may mean.

AI as an Amplifier of Power

That is the moral paradox: AI doesn’t choose good or evil; it amplifies the choices made by humans. The question is not whether AI will make governments more powerful (it absolutely will), but whether societies can remain vigilant enough to ensure that this power serves the public rather than controls it.

There are, of course, benevolent and malevolent leaders and governments in the world today. It is perhaps a sobering thought that the only reason they have not achieved their ultimate goals during their term in office, good or bad, may be down to the inefficient workings of outdated government administrative machinery.

It is no surprise that the first act of ambitious leaders today is to try to tear down the legislative and bureaucratic barriers that would prevent them from delivering on their policies. The concern is that we may find out too late whether a leader and those around them are truly benevolent or malevolent. But do we really need to act as if every politician is a potential dictator?

The Importance of Safeguards, Checks, and Balances

While safeguards can be built into any system to an extent, the most important safeguards are those outside the system. The culture, checks and balances, transparent oversight, openness, trustworthy media, and public understanding must evolve alongside any changes. Voters of the future should not just tick a box to cast a vote; they should have all the tools necessary to understand the implications of that decision.

Democracy depends on the people being given accurate information and choosing wisely. If that fails, then perhaps the inefficiency of the government machinery, and the slow pace of change it forces on those in power, is the only protection we have left. That is not ideal, especially when governments in other countries are embracing rapid change, for better or for worse.

Where Does This Lead?

The likeliest future is not one where AI replaces government; that remains science fiction. Instead, AI will run alongside and inside government. Humans will still argue, vote, and decide values, while AI manages delivery, audits outcomes, and learns from results.

In one future, that partnership will lead to citizens seeing promises kept, resources allocated transparently, and progress measured objectively to hold the government to account. Other futures may be bleaker.

The point is that whatever happens, it will be the humans in power who lead the way, not the AI. And in most countries, the electorate decides which humans have that power.

If we want AI in government to be used sensibly, every one of us must take more seriously our decisions on which politicians we put in charge of it.
