Generative AI: Productivity Dream or Security Nightmare

The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fuelled by increased computational power and data availability, this AI boom brings with it opportunities and challenges.

AI tools fuel innovation and growth by enabling businesses to analyse data, improve customer experiences, automate processes, and innovate products — at speed. Yet, as AI becomes more commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up.

For many, though, the benefits outweigh any risks. So, how can companies empower employees to harness the power of AI without risking data security?

Ready or not, AI is here to stay

Generative AI (GenAI) tools have surged in popularity in recent years because this groundbreaking technology can produce content that appears strikingly human-made.

GenAI is transforming how we work and create. It is revolutionising content creation, personalised recommendations, and innovative problem-solving. These features are reshaping our interaction with technology, unlocking new avenues for efficiency, creativity, and user engagement.

This wave of innovation is reshaping industries, cementing GenAI's status as a valuable asset for businesses and individuals alike. Given the rapid pace of technological advancement, it is anticipated that many more compelling use cases and applications for GenAI are on the horizon.

Yet, not without risks

As GenAI advances, it’s crucial to balance excitement with an awareness of the associated risks, particularly in the areas of data privacy and technology misuse.

Many organisations are finding that the number of employees accessing AI apps is growing exponentially. According to a study by Netskope Threat Labs, during May and June 2023, the percentage of enterprise users using at least one AI app daily increased by 2.4% each week.

Additionally, a recent Deloitte study revealed that 61% of employees are currently using or planning to use GenAI. Of those using it, 26% have not informed their managers, and 24% use it despite company bans.

The growing adoption of GenAI raises the risk of unintended data exposure. Security teams often have limited visibility into the data shared on these platforms, making it harder for businesses to strike a balance between innovation and minimising security risks.

Data privacy and leakage concerns

One of the most pressing issues associated with GenAI is the risk of unauthorised data access and leakage. This risk arises from two main factors. First, AI models need large amounts of data to learn from and to generate content. That data can include sensitive personal information protected by privacy laws, as well as copyrighted material used without permission.

Second, the various stages of AI training and deployment open multiple vectors for potential leaks or breaches, and cyber-attacks that explicitly target these AI systems are growing increasingly sophisticated.

For instance, a chatbot like ChatGPT requires users to provide prompts to generate responses. During this interaction, employees might accidentally or intentionally share sensitive data. Once submitted, this data could be used to train AI models. And because the information is transmitted to and stored on external servers, it cannot be recalled once submitted.

Employees may upload sensitive data such as personally identifiable information (PII), intellectual property (IP), or financial records. This can lead to external exposure and leakage, which in turn can damage the company's reputation. In one reported example from last year, Samsung workers unwittingly leaked confidential data whilst using ChatGPT to help them fix problems with their source code.

Misuse of technology

The very attributes that make GenAI a powerhouse, such as its ability to generate credible and sophisticated content, also make it vulnerable to misuse. The technology can produce misleading and hard-to-detect false media, such as deepfakes, that can be used maliciously. Its capabilities can be weaponised to deceive, defame, or defraud individuals and organisations, strengthening impersonation and fraud attempts such as phishing emails and fake news.

Ethical considerations must form the core of GenAI deployment strategies. There is an imperative for organisations to develop guidelines and policies that govern the responsible use of AI.

Inaccurate or dangerous responses and hallucinations

While most people are aware that GenAI can get images wrong, such as giving people the wrong number of fingers, examples are now emerging of GenAI text responses that are inaccurate or downright dangerous.

For example, in May 2024, Google's AI Overviews briefly suggested, in response to a query about cheese not sticking to pizza, that you should mix non-toxic glue into the sauce. Further, a Purdue University study published in December suggested that 52% of ChatGPT's answers to coding questions contained incorrect information.

Going forward: Gain real-time visibility to promote secure AI use

With visibility into how employees use AI tools, organisations can provide the real-time coaching necessary for safe and effective use. Monitoring for the oversharing of sensitive data is crucial: knowing when a risk occurs, and by whom, allows for effective mitigation and management.
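
To make this concrete, here is a minimal sketch of the kind of check such monitoring might perform: scanning an outbound prompt against a few simple PII patterns before it reaches an external AI service. The patterns, function names, and coaching message are illustrative assumptions rather than a description of any specific product; production data loss prevention tooling is considerably more sophisticated.

```python
import re

# Hypothetical, illustrative patterns; real DLP tooling uses far richer
# detection (checksums, ML classifiers, contextual analysis).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in an outbound GenAI prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def check_before_submit(prompt: str) -> bool:
    """Coach the user in real time; return True only if safe to send."""
    findings = scan_prompt(prompt)
    if findings:
        # In a real deployment this would trigger a nudge or a block,
        # and log the event for the security team to review.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}. "
              "Please remove sensitive data before submitting.")
        return False
    return True

# Example: an employee pastes customer details into a chatbot prompt.
check_before_submit("Summarise this complaint from jane.doe@example.com")
```

The value of a check like this lies less in blocking and more in the real-time nudge: the employee learns at the moment of risk, and the security team gains visibility into when and where sensitive data nearly left the organisation.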

To protect data privacy and curtail misuse, a determined effort that includes stringent security protocols, ethical guidelines, and continuous education is essential. Only with a comprehensive approach can GenAI continue to be an asset rather than a liability.

Organisations should empower employees to responsibly utilise applications like ChatGPT. These tools serve specific business needs, so instead of banning them or reprimanding users, promote secure use and educate employees about potential risks. With advanced technology and strong privacy policies, organisations can maximise AI’s potential while maintaining user trust.


CultureAI

CultureAI's innovative Human Risk Management Platform empowers security teams to instantly identify employee security risks, educate employees in real time, and nudge them to make immediate fixes.

We recognise employees make mistakes, bypass security protocols, and are targeted by social engineering attacks – all of which can result in devastating data breaches. CultureAI’s comprehensive platform encompasses multiple solutions that reduce risks across SaaS, phishing, generative AI, instant messaging, data handling, compliance, and more. This enables organisations to identify and improve risky employee behaviours and mitigate resultant risks. Trusted by leading organisations globally, CultureAI is a UK-based company with offices in Manchester and London.

Frederick Coulton
Frederick Coulton is Head of Product at CultureAI, a Human Risk Management Platform that empowers organisations to measure real-time employee security behaviours in order to surface and mitigate their most prominent risks. He joined CultureAI as the third employee, assuming the role of UX Designer/Developer. Frederick was attracted by the opportunity to build something from the ground up in a company with a clear vision. As the organisation multiplied in size, he quickly rose through the ranks and is now responsible for the Product and Design teams, working with customers and the wider business to define and deliver the product roadmap. His background stems from Design and UX, having worked in digital marketing, fintech, data, and creative agencies in the Netherlands and the UK before transitioning into cyber security in 2018. Frederick is passionate about storytelling through a product lens. He has a fascination for schematic maps and loves to use products that promote a clear user journey. He believes that data-led products should live purely to surface, interpret, and enhance data without adding noise or complexity.
