Agentic AI & Legal

What Agentic AI Actually Means for GCs

2026-03-10 · 5 min read

Beyond the buzzword: what agentic systems can and cannot do in legal practice today, and where the liability sits.

The word "agentic" has entered legal technology marketing in the same way "blockchain" did six years ago — deployed broadly, defined rarely, and attached to things it doesn't accurately describe.

This matters more for legal than for most professional contexts, because misunderstanding what an AI system can and cannot do autonomously isn't just a procurement error. It's a liability question.

What "Agentic" Actually Means

An agentic AI system is one that can take sequential actions toward a goal without human intervention at each step. It can use tools, call APIs, make decisions, and act on the outputs of those decisions — all within a defined scope, but without a human reviewing each move.

This is meaningfully different from a generative AI system that drafts text for a human to review. The key distinction is the loop: in an agentic system, the AI acts, observes the result, and acts again. The human may not be in that loop at all.
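If it helps to see the shape of that loop, here is a minimal sketch in Python. Every name in it (`call_model`, `run_tool`, the dictionary shape) is an illustrative stub invented for this post, not any vendor's API; the point is only where the human sits relative to the loop.

```python
# Minimal sketch of the two patterns. All names are illustrative stubs,
# not any vendor's API.

def call_model(prompt: str) -> dict:
    """Stand-in for a model call: returns either a tool request or a final output."""
    return {"action": "done", "args": {}, "output": "draft text"}

def run_tool(action: str, args: dict) -> str:
    """Stand-in for a tool: document retrieval, a search, an API call."""
    return f"result of {action}"

def generative_draft(prompt: str) -> str:
    # Generative pattern: one call, then the output goes to a human
    # before anything leaves the building.
    return call_model(prompt)["output"]

def agentic_run(goal: str, max_steps: int = 10) -> str:
    # Agentic pattern: act, observe the result, act again.
    # No human sits inside this loop.
    observation = goal
    for _ in range(max_steps):
        step = call_model(observation)
        if step["action"] == "done":
            return step["output"]
        observation = run_tool(step["action"], step["args"])
    return "stopped: step budget exhausted"
```

Everything this piece says about governance is, in effect, a question about what happens inside that loop, between the first model call and the last.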

For legal practice, this distinction matters enormously.

What Agentic AI Can Do in Legal Today

The current state is more limited — and more specific — than the marketing suggests. Agentic systems are genuinely useful in legal where:

  • The task is highly structured. Contract data extraction, clause comparison against a playbook, compliance flag generation. Tasks with defined inputs and outputs (a sketch of this pattern follows the list).
  • The error cost is recoverable. First-pass review that a human will see before anything is filed or sent. The agent makes a mistake; the human catches it.
  • The workflow is well-documented. You cannot automate what you cannot describe. Teams that have mapped their processes find agentic tools far more deployable than teams that haven't.
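To make the first two bullets concrete, here is an illustrative Python sketch: a first-pass clause check against a playbook, with every output, pass or flag, landing in a queue a human reviews before anything is sent. The playbook schema and clause names are assumptions invented for the example, not any real product's format.

```python
# Illustrative sketch: clause comparison against a playbook, with every
# output routed to a human queue. The playbook schema and clause names
# are assumptions for this example, not a real product's format.

PLAYBOOK = {
    "limitation_of_liability": {"max_cap_multiple": 1.0},
    "governing_law": {"accepted": ["England and Wales", "Delaware"]},
}

def review_clause(clause_type: str, value) -> dict:
    """First-pass check: flag anything outside the playbook for a human."""
    rule = PLAYBOOK.get(clause_type)
    if rule is None:
        return {"clause": clause_type, "status": "flag", "reason": "no playbook rule"}
    if clause_type == "governing_law" and value not in rule["accepted"]:
        return {"clause": clause_type, "status": "flag", "reason": f"{value} not on accepted list"}
    if clause_type == "limitation_of_liability" and value > rule["max_cap_multiple"]:
        return {"clause": clause_type, "status": "flag", "reason": f"cap {value}x exceeds {rule['max_cap_multiple']}x"}
    return {"clause": clause_type, "status": "pass", "reason": "within playbook"}

# Everything, pass or flag, lands in a queue a human reads before
# anything is sent: the error cost stays recoverable.
human_review_queue = [
    review_clause("governing_law", "New York"),
    review_clause("limitation_of_liability", 0.5),
]
```

Nothing in this pattern sends a document or accepts a clause. The agent's worst case is a wrong entry in a queue a human was going to read anyway.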

What Agentic AI Cannot Do in Legal Today

This is the more important list.

  • It cannot make judgment calls with accountability attached. An AI system can flag that a clause is non-standard. It cannot decide whether to accept that clause on behalf of the organisation. That decision carries legal weight and belongs to a named individual.
  • It cannot operate without governance architecture. Deploying an agentic system without an acceptable use policy, audit trail architecture, and human escalation triggers is not bold — it's negligent. The sketch after this list shows the last two controls in miniature.
  • It cannot replace institutional knowledge. What looks like judgment from outside is often accumulated organisational context. Agentic systems optimise within their training and the context they are given. They do not know what your organisation's board decided three years ago about a specific counterparty.
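To make the governance point concrete, here is a minimal sketch of two of those controls: an append-only audit trail and a human escalation trigger. The thresholds, action names, and log format are assumptions for the example; a real deployment would derive them from the acceptable use policy and attach each escalation to a named owner.

```python
# Minimal sketch of an audit trail plus escalation trigger. Thresholds,
# action names, and the log format are assumptions for this example.

import json
import time

AUDIT_LOG = "agent_audit.jsonl"
CONFIDENCE_FLOOR = 0.85                            # below this, a human decides
IRREVERSIBLE = {"send_document", "accept_clause"}  # never autonomous

def audit(action: str, detail: dict) -> None:
    """Append every action, allowed or escalated, to a durable log."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "action": action, **detail}) + "\n")

def gate(action: str, confidence: float, owner: str) -> bool:
    """Return True if the agent may proceed; otherwise escalate to a named human."""
    if action in IRREVERSIBLE or confidence < CONFIDENCE_FLOOR:
        audit(action, {"status": "escalated", "to": owner, "confidence": confidence})
        return False
    audit(action, {"status": "allowed", "confidence": confidence})
    return True

# Flagging a non-standard clause can be autonomous; accepting one never is.
gate("flag_clause", confidence=0.93, owner="gc@example.com")
gate("accept_clause", confidence=0.99, owner="gc@example.com")
```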

Where the Liability Sits

This is the question most GCs are not yet asking systematically.

When an agentic AI system takes an action that causes harm — sends a document, flags a clause incorrectly, misses a regulatory trigger — who is liable? The vendor? The organisation? The individual who deployed it?

The answer, in almost every current legal framework, is: the organisation, and within the organisation, the person accountable for the decision the AI was empowered to make.

That means governance is not optional infrastructure. It is the condition under which agentic AI is deployable at all.

The GCs who are getting this right are not moving slower than their peers. They are building the accountability architecture first, so they can move faster with confidence once it's in place.


The AI Governance Checklist covers the 12 questions every GC should be able to answer before approving any AI tool — including agentic systems.

Susie Kalen

Legal Operations & AI Strategy Consultant. Working with enterprise teams at Lego, Amazon, ABB, and Unilever.
