Malta has spent recent years laying the groundwork for artificial intelligence, but the real test lies ahead. According to Vanessa Camilleri, Associate Professor of AI at the University of Malta, success will be measured by how responsibly and effectively AI is embedded across society.
“What would success look like for Malta in AI five years from now?” MaltaCEOs asked Professor Camilleri. Her answer, she explains, is not simply more AI systems, but better ones, with people front and centre, implemented with transparency and public trust at their core.
“Success would mean that Malta is not only adopting AI, but doing so in a way that is responsible, transparent, and trusted by society,” she says. “The country already has important building blocks in place, including a national AI strategy and ethical frameworks. The next phase is about consistent implementation across sectors such as public administration, education, health, finance, and justice.”
A central pillar of this transition is investment in people. Initiatives such as AI for All (AI Għal Kulħadd), developed between the Malta Digital Innovation Authority and the University of Malta’s Department of Artificial Intelligence, are expanding AI literacy beyond technical specialists and into broader society.
Building on this, she identifies three defining markers of success for Malta over the next five years.
First, citizens must experience tangible improvements in services and quality of life. Second, institutions need to develop internal capacity to understand and govern AI systems, rather than relying entirely on external providers. Third, Malta should position itself as a small state capable of combining agility with credible, responsible innovation.
While the long-term vision is clear, Professor Camilleri stresses that immediate policy action is equally critical. If Malta had to prioritise one measure within the next 12 months, she argues it should be the introduction of a mandatory AI governance and risk assessment framework for public-sector use.
“This does not need to be heavy or restrictive,” she explains. “A practical approach would be a tiered or ‘traffic light’ system, where lower-risk uses can proceed quickly, while higher-risk applications require stronger safeguards, documentation, and oversight.”
Such a framework would include human oversight, data protection, bias and fairness checks, transparency obligations, and clear accountability structures. Importantly, it would align Malta with broader European developments such as the EU AI Act, which emphasises a risk-based approach to governance.
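The tiered “traffic light” triage Professor Camilleri describes could be sketched in code purely as an illustration. The function name, tier labels, and the two criteria below are hypothetical assumptions, not part of any official Maltese or EU framework:

```python
# Illustrative sketch of a "traffic light" AI risk triage for public-sector
# use cases. Criteria and tier names are hypothetical examples only.

def classify_ai_use(affects_rights: bool, automated_decision: bool) -> str:
    """Assign a governance tier to a proposed public-sector AI use."""
    if affects_rights and automated_decision:
        # Higher-risk: stronger safeguards, documentation, and oversight
        return "red"
    if affects_rights or automated_decision:
        # Medium-risk: human review and transparency obligations
        return "amber"
    # Lower-risk uses can proceed quickly
    return "green"

# Example: a fully automated system affecting citizens' rights
print(classify_ai_use(affects_rights=True, automated_decision=True))  # red
```

In practice, a real framework would weigh many more factors (data sensitivity, scale of deployment, reversibility of decisions), but the principle is the same: route each use case into a tier that determines how much scrutiny it receives before deployment.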
Beyond regulation, one of the most pressing strategic questions is how Malta avoids becoming overly dependent on private AI vendors, particularly in critical public systems.
“Avoiding over-reliance does not mean excluding private sector expertise,” Professor Camilleri notes. “It means ensuring that control, accountability and understanding remain within public institutions.”
This requires stronger procurement practices, including demands for transparency, auditability, and robust data governance. Contracts should also include interoperability requirements and clear exit strategies to prevent long-term vendor lock-in.
At the same time, public-sector capability must be strengthened. “Institutions need to be able to critically evaluate systems, challenge vendor claims, and make informed decisions about deployment,” she says. “Private vendors can and should remain partners, but responsibility for public outcomes, rights and trust must remain firmly with the state.”
The risks of AI are perhaps most visible during election cycles, when misinformation and deepfakes can have an immediate societal impact, as experts Keith Cutajar and Gege Gatt have previously highlighted. While Malta benefits from being part of a broader European regulatory ecosystem, Professor Camilleri cautions that preparedness must extend beyond technical solutions.
“Election periods introduce heightened risk. Preparedness is not only a technical issue, but also an organisational and societal one,” she explains. Coordination between institutions such as the Electoral Commission, media organisations, digital platforms, civil society, and technical experts is essential.
Rather than relying on restrictive measures, she advocates for rapid verification systems, clear labelling of synthetic content, and sustained public awareness. Just as importantly, there must be a shared commitment among political and institutional actors to avoid the deceptive use of AI-generated media.
“In this space, trust is critical,” she concludes. “The effectiveness of any technical solution ultimately depends on whether citizens have confidence in the systems and institutions that support it.”