As generative AI (GenAI) tools like ChatGPT, Microsoft Copilot, and Google Gemini become increasingly integrated into workplace processes, concerns over data security are mounting.
Employees frequently input sensitive information – ranging from customer data and financial records to employee benefits and source code – into these platforms, posing significant risks to enterprise security.
A recent study by Harmonic has highlighted the extent of this issue, validating concerns that many organisations have about the unrestricted use of AI tools. Depending on a provider's retention and training policies, data entered into a GenAI platform may be stored or used to improve future models. If adequate security measures are not in place, there is a risk that this data could later be exposed, whether through carefully crafted prompts, security vulnerabilities, or cyberattacks.
The most commonly leaked data
According to Harmonic’s research, 8.5 per cent of all prompts analysed contained sensitive data, with customer data, financial records, employee information, and source code among the most frequently exposed categories.
Weighing the risks against the rewards
Given the security risks associated with GenAI, should businesses limit or abandon its use? Some experts argue that avoiding AI altogether could be detrimental to competitiveness.
“Organisations that fail to implement GenAI could fall behind in terms of efficiency, productivity, and innovation,” said Stephen Kowski, Field CTO at SlashNext Email Security+. “Without AI, companies face higher operational costs and slower decision-making, while competitors harness AI to automate tasks, gain customer insights, and accelerate product development.”
However, not everyone agrees that GenAI is a necessity. Kris Bondi, CEO and co-founder of Mimoto, argues that blindly adopting AI without a strategic purpose is unsustainable. “If AI doesn’t serve a clear business need, it will eventually lose support when budgets are reallocated,” Bondi remarked.
Kowski acknowledges that while GenAI can provide advantages, companies can still succeed without it, particularly in industries like engineering, healthcare, and local services, where traditional methods often remain more effective.
Mitigating AI-related risks
For businesses looking to leverage GenAI while minimising potential risks, Harmonic’s researchers suggest adopting a structured approach to AI governance, with controls applied before sensitive data ever reaches an external AI service.
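One such control can be illustrated with a minimal sketch of a client-side redaction filter that scrubs obviously sensitive values before a prompt leaves the organisation. This example is not from Harmonic’s report: the patterns, placeholder format, and `redact_prompt` function are hypothetical, and a production data-loss-prevention tool would use far more robust detection than these simple regular expressions.

```python
import re

# Illustrative patterns only; real DLP tooling covers many more data types
# and uses validation (e.g. checksums) rather than bare regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace detected sensitive values with placeholders before the
    prompt is forwarded to an external GenAI service."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

# The email address is stripped before the prompt is sent onward,
# and the finding can be logged for the governance team.
clean, flags = redact_prompt("Summarise the complaint from jane.doe@example.com")
```

In practice a filter like this would sit in a proxy or gateway between employees and the GenAI provider, so that redaction and audit logging happen regardless of which tool staff choose to use.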