
Artificial Intelligence (AI) is beginning to have a prominent effect on our work lives. By simulating intelligent behaviour, AI not only makes our lives easier but also has the potential to minimise unconscious workplace bias. The concept of a diverse workplace gained the spotlight in the 1960s. However, the trend is now moving from diversity to inclusion.

Ginni Rometty, CEO and Chairman of IBM, describes this phenomenon: “Today when I think about diversity, I actually think about the word ‘inclusion.’ And I think this is a time of great inclusion. It’s not men, it’s not women alone. Whether it’s geographic, it’s approach, it’s your style, it’s your way of learning, the way you want to contribute, it’s your age – it is really broad.”

Using AI to mitigate bias

While we are susceptible to biased opinions, which in turn inhibit diversity and inclusion, AI isn’t inherently limited by bias. If deployed correctly, AI can support HR managers in two key areas. First, it can help ensure all candidates have equal access to opportunities. Second, it can aid hiring managers in making well-informed and fair decisions.

Equal access to opportunities

By ensuring equal access to job opportunities, hiring managers can make their positions available to a larger number of candidates rather than limiting their accessible talent pool. AI can aid HR by helping make job descriptions more inviting and by improving job seekers’ awareness of open positions.

Improving job opportunity awareness

The theory of information asymmetry suggests that there is an information gap between potential employers and job seekers in the labour market. AI technology, especially interactive chatbots, can be adapted to remove this information asymmetry.

Rather than relying on keyword-based search tools, interactive AI can better understand a candidate’s interests and skills and match them with opportunities tailored to their profile. In this way, AI can help organisations build diversity before they even begin to interact with candidates personally.
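As a rough illustration of what such matching can look like in its simplest form, the sketch below scores a candidate’s stated skills against open roles using set overlap. The role names, skill lists, and scoring rule are illustrative assumptions, not a description of any particular product; real systems would rely on richer profiles and learned representations.

```python
# Illustrative sketch: match a candidate's stated skills to open roles using
# Jaccard similarity over skill sets. All data here is made up for the example.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two skill sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

open_roles = {
    "Data Analyst": {"sql", "python", "reporting"},
    "HR Coordinator": {"scheduling", "communication", "onboarding"},
    "Frontend Developer": {"javascript", "css", "accessibility"},
}

candidate_skills = {"python", "sql", "communication"}

ranked = sorted(
    ((role, jaccard(candidate_skills, skills)) for role, skills in open_roles.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for role, score in ranked:
    print(f"{role}: match score {score:.2f}")
```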

Making job descriptions inclusive

A job description is where interested candidates will learn most about your company, and first impressions do last. Admittedly, preparing the perfect job posting can be challenging, especially one that attracts diverse talent and gives an accurate description of the role and the company.

Here’s where AI plays a pivotal role. Effective algorithms can be deployed to screen job postings for discriminatory language, suggest changes, and help managers reach a more diverse audience.
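A minimal sketch of such a review step is shown below. The list of gender-coded terms and the suggested replacements are illustrative assumptions only; a production tool would draw on a vetted, research-backed lexicon and more sophisticated language analysis.

```python
# Minimal sketch of a job-description review step: flag gender-coded terms
# and suggest neutral alternatives. The word list below is illustrative only.
import re

GENDER_CODED_TERMS = {
    "ninja": "expert",
    "rockstar": "skilled professional",
    "aggressive": "proactive",
    "chairman": "chairperson",
}

def review_job_posting(text: str) -> list[dict]:
    """Return flagged terms with their positions and suggested replacements."""
    findings = []
    for term, suggestion in GENDER_CODED_TERMS.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            findings.append({
                "term": match.group(0),
                "position": match.start(),
                "suggestion": suggestion,
            })
    return findings

posting = "We need an aggressive sales ninja to report to the chairman."
for f in review_job_posting(posting):
    print(f"Flagged '{f['term']}' at {f['position']}: consider '{f['suggestion']}'")
```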

Fair decision-making

As a manager, removing bias to attract diverse talent is just the beginning. Companies make decisions throughout the employment life-cycle that can have long-term effects on the livelihood and wellbeing of their employees. They need to think bigger and ensure that those decisions are made fairly.

Identifying and reducing bias across the employment lifecycle

Hiring managers often know applicants only through the cues on their CVs or cover letters. Oftentimes, personal attributes like age or ethnicity trigger unconscious biases which lead to impaired decision-making. For example, hiring managers might underestimate the potential of female applicants for authoritative and managerial roles.

Here AI can help recruiters and applicants on two fronts. First, it can help applicants by identifying terms in their applications that may potentially trigger bias; a well-written algorithm can even suggest neutral alternatives. Second, as a manager you can save yourself considerable time and effort simply by deploying AI in the screening process. AI can be programmed to provide an unbiased, blind ranking of candidates by matching their skills to the role, reducing both your workload and your bias.
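The sketch below shows one simple way a blind screening step could work: identity attributes are stripped from each profile before candidates are ranked purely on how well their skills cover the role’s requirements. The field names, required skills, and scoring rule are assumptions made for illustration.

```python
# Sketch of a blind screening step: drop identity attributes before ranking,
# then score candidates purely on overlap with the role's required skills.
ROLE_REQUIREMENTS = {"python", "sql", "stakeholder management"}
IDENTITY_FIELDS = {"name", "age", "gender", "ethnicity", "photo"}

applicants = [
    {"id": 101, "name": "A. Smith", "age": 52, "gender": "F",
     "skills": {"python", "sql"}},
    {"id": 102, "name": "B. Jones", "age": 29, "gender": "M",
     "skills": {"sql"}},
]

def redact(profile: dict) -> dict:
    """Remove job-irrelevant identity attributes, keeping an anonymous id."""
    return {k: v for k, v in profile.items() if k not in IDENTITY_FIELDS}

def score(profile: dict) -> float:
    """Fraction of the role's required skills the redacted profile covers."""
    return len(profile["skills"] & ROLE_REQUIREMENTS) / len(ROLE_REQUIREMENTS)

blind_pool = [redact(p) for p in applicants]
for profile in sorted(blind_pool, key=score, reverse=True):
    print(f"Applicant {profile['id']}: score {score(profile):.2f}")
```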

Concealing job-irrelevant attributes in applicant profiles

For decisions regarding remuneration and reward and punishment systems, adverse impact analysis tools can be of key importance. Managers can make their HR practices fairer by deploying these tools to identify potential discrimination against a particular group in a given scenario.
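One common heuristic in adverse impact analysis is the “four-fifths rule”: if a group’s selection rate falls below 80 per cent of the highest group’s rate, the outcome is flagged for review. The sketch below applies that check to made-up decision records; the data and group labels are purely illustrative.

```python
# Sketch of an adverse impact check using the four-fifths rule: a group whose
# selection rate is below 80% of the highest group's rate is flagged.
decisions = [
    {"group": "A", "selected": True},  {"group": "A", "selected": False},
    {"group": "A", "selected": True},  {"group": "B", "selected": False},
    {"group": "B", "selected": True},  {"group": "B", "selected": False},
]

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Share of positive decisions per group."""
    totals, selected = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        selected[r["group"]] = selected.get(r["group"], 0) + int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"Group {group}: rate={rate:.2f}, ratio={ratio:.2f} ({flag})")
```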

Similarly, for promotions and rewards, AI is adept at analysing data, recognising patterns, and processing natural language to gain insights into employee performance. Managers can then use this information to explore internal talent, saving recruiting time and training budgets by suggesting in-house candidates to fill open positions. AI can further help HR managers evaluate employees’ training needs and support their continuous learning and skill development.

Successfully deploying AI to mitigate bias

Integrating AI to improve diversity and inclusion is a promising venture. But, however promising it might be, it’s only going to be as good as its programmers. In fact, a 2018 survey revealed that 23 per cent of hiring managers think that AI could well perpetuate the problem of bias in hiring.

The problem here is that the data and the algorithms that enable machine learning are both provided by humans, and unconscious bias is an inherent human trait. To overcome this, firms must keep testing their datasets and models to detect bias and improve their algorithms accordingly.
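In practice, such testing can be as simple as a recurring check that compares a screening model’s outcomes across groups and fails if the gap exceeds a tolerance. The sketch below is one minimal way to frame that check; the placeholder model, group data, and threshold are assumptions for illustration, not a recommended standard.

```python
# Sketch of a recurring bias test: compare a screening model's positive-
# prediction rate across groups and fail the check if the gap is too large.
from typing import Callable

def positive_rate(model: Callable[[dict], bool], profiles: list[dict]) -> float:
    """Share of profiles the model would shortlist."""
    return sum(model(p) for p in profiles) / len(profiles)

def bias_check(model, groups: dict[str, list[dict]], tolerance: float = 0.1) -> bool:
    rates = {name: positive_rate(model, profiles) for name, profiles in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    print(f"Rates by group: {rates}, gap={gap:.2f}")
    return gap <= tolerance

# Placeholder model: shortlists anyone with at least three listed skills.
model = lambda profile: len(profile["skills"]) >= 3

groups = {
    "group_a": [{"skills": ["python", "sql", "excel"]}, {"skills": ["sql"]}],
    "group_b": [{"skills": ["python"]}, {"skills": ["sql", "excel"]}],
}

print("Check passed" if bias_check(model, groups) else "Retrain or rebalance data")
```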

Organisations can prevent the aggravation of bias in AI by enlisting experts in industry-specific fields, data scientists, and behavioural psychologists. While data scientists bring their expertise in building efficient algorithms, psychologists have the skill set to provide insight into human behaviour and the resulting bias. Furthermore, an organisation built on a framework of fairness that uses standardised hiring and competency models can make the most of the opportunity AI presents in the field of D&I.

And finally, the area that requires the most attention is data collection and algorithm design. Granted, AI itself isn’t biased; its performance, however, depends entirely on humans for both programming and information selection. Before deploying algorithms, their outputs should be thoroughly checked against standardised models so bias can be effectively mitigated.

The author would like to thank Hafsa Shakil, GMD intern, for support with the research and writing of this article.
