As generative AI (GenAI) tools like ChatGPT, Microsoft Copilot, and Google Gemini become increasingly integrated into workplace processes, concerns over data security are mounting.

Employees frequently input sensitive information – ranging from customer data and financial records to employee benefits and source code – into these platforms, posing significant risks to enterprise security.

A recent study by Harmonic has highlighted the extent of this issue, validating concerns that many organisations have about the unrestricted use of AI tools. Depending on the provider's data-retention settings, information entered into a GenAI platform may be stored and used to train future models. If adequate security measures are not in place, that data could later be exposed, whether through carefully crafted prompts, security vulnerabilities, or cyberattacks.

The most commonly leaked data

According to Harmonic’s research, 8.5 per cent of all prompts analysed contained sensitive data. The most frequently compromised categories include:

  • Customer data (45.77 per cent): Employees often use AI to process tasks more efficiently, such as summarising insurance claims or managing billing details. However, this exposes confidential information like payment transactions, customer profiles, and authentication details.
  • Employee data (27 per cent): GenAI tools are being used internally for tasks such as performance reviews, recruitment, and payroll calculations. This can lead to the exposure of personally identifiable information (PII) and employment records.
  • Legal and financial information (14.88 per cent): Despite being used for relatively simple tasks such as spell checks or summarising legal documents, this category presents substantial risks. Leaked data could include sales forecasts, merger and acquisition plans, and other corporate financial details.
  • Security-related data (6.88 per cent): Inputting penetration test results, network configurations, and backup strategies into AI platforms could create a roadmap for cybercriminals looking to exploit vulnerabilities.
  • Sensitive code (5.64 per cent): Software developers using AI tools to generate or refine code may inadvertently expose proprietary codebases, making their organisations vulnerable to security breaches and competitive disadvantages.

Weighing the risks against the rewards

Given the security risks associated with GenAI, should businesses limit or abandon its use? Some experts argue that avoiding AI altogether could be detrimental to competitiveness.

“Organisations that fail to implement GenAI could fall behind in terms of efficiency, productivity, and innovation,” said Stephen Kowski, Field CTO at SlashNext Email Security+. “Without AI, companies face higher operational costs and slower decision-making, while competitors harness AI to automate tasks, gain customer insights, and accelerate product development.”

However, not everyone agrees that GenAI is a necessity. Kris Bondi, CEO and co-founder of Mimoto, argues that blindly adopting AI without a strategic purpose is unsustainable. “If AI doesn’t serve a clear business need, it will eventually lose support when budgets are reallocated,” Bondi remarked.

Kowski acknowledges that while GenAI can provide advantages, companies can still succeed without it, particularly in industries like engineering, healthcare, and local services, where traditional methods often remain more effective.

Mitigating AI-related risks

For businesses looking to leverage GenAI while minimising potential risks, Harmonic’s researchers suggest adopting a structured approach to AI governance:

  • Move beyond simple access restrictions and implement comprehensive monitoring systems to track AI usage in real time.
  • Ensure that employees use corporate-approved AI tools with security safeguards, rather than free versions that collect user data for training purposes.
  • Establish clear policies for classifying and handling sensitive data within AI platforms.
  • Enforce structured workflows to prevent unauthorised input of confidential information.
  • Educate employees on the risks and best practices associated with GenAI to promote responsible usage.
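The structured-workflow recommendation above can be made concrete with a pre-submission filter that screens prompts before they ever reach an external AI service. The sketch below is a minimal, hypothetical illustration in Python: it uses simple regular expressions to flag and redact two example categories (email addresses and payment card numbers). A production deployment would rely on a proper data-loss-prevention classifier rather than hand-written patterns, but the principle is the same.

```python
import re

# Hypothetical pattern set for illustration only -- a real DLP tool would
# use far more robust classification than these two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found.

    Returns the sanitised prompt plus a list of the categories detected,
    which can be logged for the real-time monitoring the researchers suggest.
    """
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings
```

A workflow like this sits between the employee and the AI tool: the sanitised prompt is forwarded, while the findings list feeds the organisation's usage monitoring.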
