Artificial Intelligence (AI) is transforming industries worldwide, from finance and healthcare to human resources. Yet while the technology promises speed and efficiency, it also carries a risk that business leaders cannot afford to ignore: gender bias.

Vanessa Camilleri, Senior Lecturer in AI at the University of Malta, spoke to MaltaCEOs.mt about how recruitment algorithms can reproduce and even amplify inequalities that already exist in the workplace.

Lessons from Amazon’s failed experiment

One of the most cited examples of bias is Amazon’s abandoned recruitment tool. Trained on historic data largely made up of male applicants, the algorithm taught itself that men were preferable for technical roles.

“When Amazon fed its historic recruitment data directly into an AI model, without any filters, the result was that women were essentially excluded from shortlists,” Dr Camilleri explains. “The AI concluded that since there were so few women in leadership, there was no point shortlisting them – because historically they would not be selected.”

This, she says, demonstrates the core challenge: “AI is only as good as the data it is given. If you feed the model raw data, the model will tell you not to employ women in AI because there weren’t any in the past. The model can only interpret what it sees.”

Why bias matters

Research has shown that bias is not confined to gender. Algorithms used in American banks to assess mortgage applications, for example, have denied loans to people based on skin colour, religion, or socio-economic background.

“Because these models were trained primarily on middle-class, white, male data, they concluded that this group was more likely to repay loans than, for instance, a black woman from the suburbs who might come from a poorer socio-economic background,” Dr Camilleri notes.

The danger lies in scale: “It is human nature to be biased, even if we don’t admit it. The difference is that when AI is used, it propagates bias at a much larger scale, harming everyone affected rather than just the people touched by one individual’s decisions.”

Safeguards and responsibility

So what can be done? According to Dr Camilleri, “Before feeding data into the model, analysts must ensure the dataset is not biased. This means equal representation, not just by gender, but also across ethnicities, beliefs, and other dimensions.”
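The representation check Dr Camilleri describes can be sketched in code. The snippet below is a minimal illustration, not a production fairness audit: the `representation_report` function, the `tolerance` threshold, and the sample applicant data are all hypothetical, invented here to show how an analyst might flag a skewed attribute before training.

```python
# A minimal sketch of a pre-training representation audit: compare each
# group's share of the dataset against an equal-share baseline and flag
# groups that deviate beyond a tolerance band.
from collections import Counter

def representation_report(records, attribute, tolerance=0.2):
    """Flag groups whose share of `records` deviates from equal
    representation by more than `tolerance` (0.2 = 20 points)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # equal share per group
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        }
    return report

# Hypothetical historic recruitment data, heavily skewed toward men --
# the kind of raw input that produced Amazon's biased shortlists.
applicants = [{"gender": "male"}] * 85 + [{"gender": "female"}] * 15
print(representation_report(applicants, "gender"))
```

With the skewed sample above, both groups are flagged: men are over-represented at 85 per cent and women under-represented at 15 per cent, so an analyst would rebalance or reweight before any model sees the data. The same check can be run per attribute, across ethnicity, age, or other dimensions.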

But she remains cautious about overconfidence in technological fixes. “I don’t think bias can ever be removed entirely – not gender-based bias, nor bias related to religion or socio-economic status. Models operate on massive amounts of data that are inevitably skewed.”

Ultimately, she stresses, responsibility lies with people: “The problem always falls back on the human – the human who is responsible for developing the system, for handling the data, and for implementing the models. It is also the responsibility of whoever uses the service to ensure safeguards are in place.”

A European push for transparency

At EU level, policymakers are working to ensure that AI systems used within member states are transparent and accountable. “The idea is that if a system shows bias, a human can overrule the AI,” says Dr Camilleri.

She recalls exercises carried out in recent years where AI was asked to describe the “ideal CEO” of an IT company. The language model replied with a profile of a “man in his 30s or 40s, with neatly styled hair, wearing a suit and tie.” When shown a picture of a red-haired woman, the system judged her as unsuitable, reasoning that “red-haired women are considered to be volatile.” When presented with a photo of a black woman with afro hair, it responded that “no black women are CEOs of IT companies,” and therefore she would not be a good fit.

“These exercises were carried out using ChatGPT, not a recruitment service,” she clarifies. “But they still reveal the underlying issues of bias.”

What CEOs should take away

Global experts echo Dr Camilleri’s warnings. UN Women’s Zinnya del Villar has argued that “AI systems, learning from data filled with stereotypes, often reflect and reinforce gender biases. These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgements.”

For business leaders, the message is clear. Recruitment tools powered by AI can deliver efficiency gains – but without careful oversight, they risk undermining diversity and inclusion goals.

“AI will always carry some level of bias,” Dr Camilleri concludes. “What matters is that we acknowledge this, build safeguards into the systems we use, and take responsibility for ensuring that technology works for people – not against them.”
