An attack in Canada that left eight people dead – six of them children – has reignited a difficult question for the tech industry: where does responsibility lie when artificial intelligence tools are used to plan violence?

The case has drawn particular attention after reports that the attacker used ChatGPT in the lead-up to the attack, and that concerns had been internally flagged months earlier but not escalated to authorities. It is this combination – access, awareness, and inaction – that has pushed the debate beyond the technology itself and into the realm of accountability.

Simon Azzopardi, former CTO/CPO and Chairperson of Silicon Valletta, framed the issue in stark terms, arguing that the conversation is being misdirected. “This is not a technology problem. It is a governance problem,” he wrote in a widely shared post, pointing to what he sees as a broader pattern across the AI industry – one where safety decisions are shaped less by technical limitations and more by incentives.

“This is not a coincidence,” he argued. “It reflects a deliberate choice about what kind of company you want to be.” In his view, the divergence in how AI systems respond to harmful intent is no longer a question of capability. Many of the tools already have the ability to refuse, redirect, or flag dangerous behaviour – yet not all consistently do so.

“Some AI companies are making hard ethical choices and paying a commercial price for them. Most are not.”

As real-world consequences begin to emerge, Mr Azzopardi notes that pressure for regulation is intensifying. “The call for regulation on AI is increasing as we start to learn of its impact,” he said, suggesting that recent events are accelerating a shift away from trust in self-regulation towards enforceable standards.

Simon Azzopardi

He also raises a more uncomfortable question around liability, pointing to reported cases where AI systems have been linked to self-harm. “There are already at least 23 cases of assisted suicide by large language models, of which 16 are directly linked to ChatGPT,” he said. “Are they liable? They’re effective at supporting suicide – and they have the means to flag and report it.”

For Mr Azzopardi, the issue is not whether intervention is technically possible, but whether companies choose to act when warning signs are present.

“The technology to prevent it existed. The will to implement it did not,” he added, arguing that companies “need to be held responsible for the choices they make.”

The debate, however, does not lend itself to simple answers. Comparisons have been drawn with platforms like WhatsApp: if someone expresses violent intent in a private message, is the platform responsible? For Dylan Seychell, lecturer in AI at the University of Malta and a lead figure in Malta’s AI literacy programme, the framing itself needs careful handling.

“It’s a tragic case, and my heart goes out to the families,” he told MaltaCEOs.mt. “I think we need to be careful about how we frame it so that we learn and grow as a society.”

Dr Seychell cautions against placing the blame solely on the technology, warning that this risks overlooking deeper, more persistent issues.

“Focusing on blaming the tool risks missing the bigger picture. Troubled individuals have always found ways to act on violent impulses. This happened before ChatGPT, before the internet, and will unfortunately happen in the future. The tool didn’t create the intent.”

At the same time, he draws a clear line when it comes to corporate responsibility, particularly in cases where risks are identified internally.

“When an organisation detects a credible threat through its own monitoring systems, as OpenAI’s employees reportedly did here, there absolutely must be a governance structure in place to report and intervene,” he said. “That’s a corporate responsibility question rather than an AI problem – and, assuming the reporting is accurate, OpenAI failed.”

The case has also triggered broader scrutiny of how AI systems respond to harmful intent. Recent testing by researchers found that a majority of leading chatbots were willing, in certain scenarios, to assist users posing as teenagers planning violent acts, while only a minority consistently refused and redirected users toward support services.

For Mr Azzopardi, this reinforces his central argument: what we are seeing is not a limitation of the technology, but a reflection of choices made by the companies behind it – choices between engagement and safety, between scale and responsibility.

Dylan Seychell

Dr Seychell agrees that responsibility lies in how these systems are governed and used. “There’s a difference,” he said. “If a company internally detects credible threats and actively reviews them, the question becomes what it chooses to do with that information.”

In this sense, the debate shifts from the existence of the tool to the decisions surrounding it – decisions that are increasingly coming under regulatory scrutiny. Frameworks such as the EU AI Act aim to introduce clearer obligations around safety, monitoring, and reporting, particularly for high-risk systems.

Yet Dr Seychell argues that even this is only part of the picture. “We can’t assume these systems will always catch everything,” he said. “Which is why the real conversation isn’t just about making AI safer – we should seriously focus on how we strengthen the human infrastructure around vulnerable people.” He points to gaps in mental health services, the visibility and funding of support organisations, and the role of communities and families in recognising and responding to distress.

“Are mental health services accessible enough? Are support organisations visible and funded? Are we present enough in the lives of those who are struggling?” he asks, suggesting that as AI becomes more accessible, the need for strong human support systems becomes more urgent.

The discussion comes at a moment when AI is rapidly embedding itself into everyday life, from search and communication to education and decision-making. At the same time, concerns are growing around misuse, including the spread of harmful content and AI-assisted abuse.

For Mr Azzopardi, the case underlines a broader failure of accountability in the sector – and a growing need for enforceable standards. For Dr Seychell, the conclusion is more human.

“The more we have better AI,” he said, “the more we need better humans.”

The question now is whether that balance can be achieved – before the dark side of AI use rears its head again.
