
Shadow AI Is Already in Your Legal Team

2026-02-28 · 4 min read

A guide to identifying undisclosed AI use inside your organisation — and why chasing it down is the wrong first move.

Shadow IT became a compliance problem the moment cloud storage made it trivially easy to move files outside the corporate network. Shadow AI is the same pattern, an order of magnitude faster.

The question for General Counsel is not whether undisclosed AI use exists in their legal team. It almost certainly does. The question is what to do about it.

Why People Use AI Without Telling You

The instinct is to frame this as a policy violation. That framing is usually wrong.

Most shadow AI use in legal teams is not malicious. It is the response of people who have a volume problem — too many documents, too many emails, too many contracts to review — and who have found a faster way to work. They are not trying to circumvent governance. They are trying to do their jobs.

Understanding this changes the response.

A zero-tolerance policy enforced without addressing the underlying volume problem doesn't stop shadow AI. It drives it underground while creating a compliance culture where people are less likely to disclose problems when they occur.

How to Identify It

Identifying shadow AI use requires looking in the right places.

Output quality anomalies. AI-generated text has characteristic patterns — not always, and not obviously, but often enough to notice at scale. Unusual consistency of structure, generic phrasing in documents that should be contextually specific, response times that don't match the complexity of the task.

Tool subscriptions. Consumer versions of AI tools (ChatGPT, Claude, Copilot outside the corporate tenant) are often purchased personally and expensed, or used on free tiers. A careful review of expense reports and personal device use policies will surface a significant proportion of shadow AI use.

Direct conversation. This is the most effective method and the least used. Ask. Not "are you using unauthorised tools?" but "what would help you work faster?" People will tell you what they're doing if they believe the response will be to solve the problem rather than discipline them.

Why Chasing It Down Is the Wrong First Move

The forensic hunt for shadow AI use is a resource-intensive exercise that, even if successful, solves the symptom rather than the cause.

The more productive response runs on parallel tracks: acknowledge that AI tools exist and that some team members are already using them; assess which tools are creating the most value; create a pathway to formal adoption for the tools that pass governance review; and close the remaining gap with a realistic acceptable use policy that people will actually follow.

The organisations that are managing AI risk in legal most effectively are not the ones that locked everything down. They are the ones that moved fast enough — with appropriate governance — to get ahead of the informal adoption curve.


The Shadow AI Risk Assessment is a self-assessment tool for organisations that want to understand the current state of AI use inside their legal function before building a governance response.

Susie Kalen

Legal Operations & AI Strategy Consultant. Working with enterprise teams at Lego, Amazon, ABB, and Unilever.
