How to Prevent “Shadow AI” Use in the Workplace

Quick summary
- Employees already copy your organisation’s data into public chatbots every day, often without thinking.
- If that data leaks, the company, not the individual, answers to regulators.
- Simple bans push the activity underground.
- A balanced approach – approved tools, clear rules and monitoring – works better than either extreme.
“I’ll just ask ChatGPT”
Picture a Tuesday afternoon in finance. Elliott needs board-level wording for a tricky clause in a contract, and he’s under pressure – his meeting is in 30 minutes – so he copies the draft contract into ChatGPT and asks for a rewrite.
Elliott looks impressive in the meeting, and he’s happy he could cut a corner and get the result he needed. Elliott isn’t a bad employee; he is intelligent, diligent, and trying to hit a deadline.
What he may not realise is that the contract text now sits on infrastructure run by OpenAI. No compliance check, no legal review, no log inside the corporate environment, and no real idea if that info is now at risk of being exposed.
Multiply Elliott by every well-meaning colleague who has a wealth of free AI tools at their disposal, and you have “shadow AI”: the same unofficial pattern of use we once called shadow IT, only wrapped in new branding.
The invisible journey your data takes
When staff type or paste content into a public large language model (LLM), three things happen that rarely get logged or reported:
- Copy & paste: The text leaves your controlled network and is stored in the provider’s cloud, at least temporarily.
- Process: The service may retain logs for safety or product improvement, unless you pay for an enterprise tier that switches this off.
- Replay: Future prompts, integrations, or internal processes could reproduce that content, either directly or as part of a different output.
In its March 2023 blog post, the National Cyber Security Centre warned that user “queries stored online may be hacked, leaked, or more likely accidentally made publicly accessible”.
The issue is rarely malicious; it is savvy staff who are rewarded for speed – good people who lack solid AI education.
Liability does not transfer with the click
The kind of activity we’re discussing doesn’t only create internal risks to your data; it can also get you in hot water under UK privacy laws.
Under UK GDPR, your organisation remains the “data controller”. If a customer’s name or other personally identifiable information is moved to an LLM provider, the Information Commissioner will address the enforcement notice to your company, not to OpenAI and not to your employee, Elliott.
That simple fact should frame every policy discussion.
Why bans do not work in practice
The easy solution looks like a company-wide ban. Blanket restrictions feel decisive, yet most employees simply swap devices, switch to personal email accounts or use home Wi-Fi to regain access. This puts your data at increased risk rather than securing it.
While most organisations will already have an AI policy of some kind, this only provides guidance, rarely covers all bases, and in an ever-changing AI landscape, quickly becomes outdated. If your policy is deemed to be too restrictive by employees, they may choose to secretly ignore it in favour of getting results fast.
As the Information Commissioner’s Office noted in 2025, bans are ineffective:
“With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar. Instead, companies need to offer their staff AI tools that meet their organisational policies and data protection obligations.”
- ICO statement
Our own customer engagements mirror the ICO’s assessment. Users respect clear boundaries when they understand why they exist and can achieve the same end through an approved route.
Risky business
Let’s break down the risks from rogue AI use into something we can understand and start to manage:
Silent data leaks, without a hack or breach
Your private data quietly moves to a third-party cloud – a physical server run by a company that has no contract or agreement with you.
Auditing blind spots
If activity isn’t logged in your SIEM (security information and event management) platform, it can’t be secured or managed. When compliance teams ask for an audit trail, you can’t provide one.
No contract, no recourse
Your employees are using a third-party service and have most likely ticked the “I agree to the terms of service” box without ever reading the T&Cs. That leaves you in no-man’s-land when it comes to controlling your data or getting it back.
You don’t really know what happens to your data
Marketing copy that says “we do not train on your data” can still allow logging or retention for other reasons. If data entered into an LLM is not deleted when the chat ends, it remains at risk.
Each risk is manageable once you acknowledge it, map accountability and add reasonable safeguards.
Start a conversation about sensible governance
Every organisation is different, so rather than offer a one-size-fits-all solution, we suggest the following ideas.
First, we recommend framing your controls as an enablement story, not a clampdown. Start by listening to your staff and understanding how AI is already enabling them to work more effectively.
Educate staff and adjust your policy
Write an acceptable-use statement in plain English. Show real examples of what can and cannot be transmitted to AI. Include practical prompts such as “Remove personal identifiers before submission”. Get feedback from employees and adjust policy to enable AI usage safely, not block time-saving technology.
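To make the “remove personal identifiers before submission” prompt concrete, a simple pre-submission filter can catch the most obvious identifiers before text leaves the building. This is a minimal sketch in Python – the patterns and the `redact_identifiers` helper are hypothetical examples, not a substitute for a proper data-loss-prevention tool:

```python
import re

# Illustrative patterns only: real deployments need far broader coverage
# (names, addresses, account numbers) and proper DLP tooling.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}

def redact_identifiers(text: str) -> str:
    """Replace common personal identifiers with placeholder tags
    before text is pasted into an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A filter like this works best as a nudge, not a gate: show staff what was redacted so they learn which details should never reach a public model.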
Give staff a safe harbour
Provide an enterprise licence or private model that disables data retention by default and lets you hold the encryption keys. Microsoft and Anthropic both sell UK-hosted tiers that honour this requirement.
Log everything
Proxy traffic through a secure gateway that records prompts and responses. Feed those records into your existing monitoring platform so investigators trace incidents in minutes rather than days.
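One lightweight way to make those gateway records easy to feed into a monitoring platform is to emit one JSON line per prompt/response pair. The sketch below is illustrative – the field names and the `audit_record` helper are assumptions, not any specific SIEM’s schema:

```python
import datetime
import hashlib
import json

def audit_record(user: str, model: str, prompt: str, response: str) -> str:
    """Build one JSON line for the monitoring platform. The prompt is
    stored alongside its SHA-256 hash so investigators can both search
    the text and verify it has not been altered."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }
    return json.dumps(record)
```

Append these lines to a file or stream your SIEM already ingests, and an investigation becomes a search query rather than a forensic exercise.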
Configure controls
Link data-loss-prevention tooling to your proxy. If a block of text matches the “restricted” label, the upload is halted and the user sees a helpful message rather than a silent failure.
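A minimal sketch of that behaviour, assuming a regex-based “restricted” label and a hypothetical `check_upload` hook (a real DLP engine applies far richer classification):

```python
import re

# Illustrative markers: in practice these would come from your DLP
# tool's "restricted" classification, not a hand-written list.
RESTRICTED_MARKERS = [
    re.compile(r"\bCONFIDENTIAL\b", re.I),
    re.compile(r"\bcustomer\s+account\s+number\b", re.I),
]

def check_upload(text: str) -> tuple[bool, str]:
    """Return (allowed, message). Blocked uploads get a helpful
    explanation rather than a silent failure."""
    for marker in RESTRICTED_MARKERS:
        if marker.search(text):
            return False, (
                "Blocked: this text matches the 'restricted' label. "
                "Remove confidential content or use the approved internal AI tool."
            )
    return True, "Allowed."
```

The point of the message is cultural as much as technical: it tells the user what to do next instead of leaving them to find a workaround.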
Coach, don’t punish
When someone crosses the line, educate them and explain how to achieve the same desired result safely while keeping data secure. Conversation beats discipline if your aim is culture change.
How Trustco can help
Trustco helps with governance and guardrails, rather than fear. We work with technical and legal teams to:
- Consult and plan on policies and guardrails
- Ensure data sovereignty within enterprise AI models
- Deploy private large language models (LLMs)
- Employ retrieval-augmented generation (RAG) to hone, adjust and improve the accuracy of LLMs
- Discover and map shadow AI usage, with governance and risk implications analysed for you
- Reduce and block unauthorised AI use with bespoke cyber security tools
- Collaborate on custom AI plans that meet your organisation’s goals
What’s best about Trustco? Our independence means we can recommend multiple vendors or solutions to build an AI stack that works for you.
In summary
AI helpers are already woven into daily routines. You cannot uninvent the habit, and the AI genie is not going back inside the lamp now. Your task is to keep visibility, accountability and data safety in step with the technology.
Define the rules, provide a trusted route for employees to use, watch for and manage data leakage, and re-educate staff when they slip up. Your staff stay productive and your organisation stays in control – exactly the balance modern governance is meant to strike.
Would you like to speak to a trusted advisor about shadow AI? Contact Trustco today.
