3 min read · Apr 22, 2026

For AI Agents to Work at Scale, Access Context Can't Be Optional

Simon Thornell

Sensitive data doesn't stop being sensitive because an AI agent is the one accessing it. That's obvious when you say it out loud. But in practice, many enterprise AI projects are built as if it isn't true.

The typical setup: a team stands up an AI agent, connects it to a data platform, and grants it service account credentials with broad access. The agent works. The pilot succeeds. Then someone asks: what data can this agent actually reach? Who approved that? How would we know if something went wrong? Those questions rarely have clean answers.

AI agents are data super-users. They don't pause to question whether they should access a table. They don't notice when a dataset contains PHI that falls under a data residency restriction. They act on whatever access they've been given, at machine speed, across every asset in scope. A human analyst might pause when something looks off. An agent won't.

The access control problem isn't new. What's new is the speed, scale, and autonomy with which it now plays out.

The gap that makes this worse

Most enterprises are managing data access risk with disconnected systems. Security platforms identify risks that data teams never see. Data catalogs carry certification and trust signals that security teams can't easily act on. A governance team might mark a data product as trusted, unaware that a high-severity access risk exists on that exact asset. A security team might flag a policy violation with no way to raise it where data teams will actually see it.

When AI agents enter that environment, they inherit the gap. They operate inside the data platform, largely invisible to the catalog, and outside the visibility of security controls that weren't designed with non-human identities in mind.

The result: access risk accumulates faster than any team can manually track it, and the context needed to act on it is split across systems that don't talk to each other.

Context has to include security, not just meaning    

The context layer and security context are two sides of the same problem. A context layer tells an agent what a dataset means: the business meaning, relationships, and operational rules needed to reliably interpret data. A security layer determines who can access it, why, and under what conditions, evaluated in real time against identity, purpose, and policy. That's the shift from static RBAC to true least-privilege in the age of AI agents.
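To make the shift concrete, here is a minimal sketch of what per-request evaluation against identity, purpose, and policy looks like, as opposed to a static role grant. Everything here is hypothetical: the dataset name, the policy table, and the `AccessRequest` fields are illustrative stand-ins, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str   # human user or AI agent service account
    purpose: str    # declared reason for this access
    dataset: str
    region: str     # where the caller is running

# Hypothetical policy table: every condition must hold for access.
POLICIES = {
    "patients": {
        "allowed_identities": {"analyst_team", "care_agent"},
        "allowed_purposes": {"care_coordination"},
        "allowed_regions": {"eu"},  # data residency restriction
    },
}

def evaluate(req: AccessRequest) -> bool:
    """Evaluate identity, purpose, and policy per request, not via a standing role."""
    policy = POLICIES.get(req.dataset)
    if policy is None:
        return False  # default-deny: datasets without a policy are unreachable
    return (
        req.identity in policy["allowed_identities"]
        and req.purpose in policy["allowed_purposes"]
        and req.region in policy["allowed_regions"]
    )

# The same agent identity is allowed or denied depending on purpose:
print(evaluate(AccessRequest("care_agent", "care_coordination", "patients", "eu")))  # True
print(evaluate(AccessRequest("care_agent", "model_training", "patients", "eu")))     # False
```

The point of the sketch is the last two lines: under static RBAC, an agent granted a role keeps it for every query; under per-request evaluation, the same credentials are denied the moment the declared purpose or region falls outside policy.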

When both layers are present and connected, agents operate with the full picture. When only one is, there's a gap: either the agent understands the data but isn't properly constrained, or it's constrained but operating blind to business meaning. The enterprises building AI they can actually trust are treating both as infrastructure from day one.

Atlan, the Enterprise Context Layer for AI, is bringing data and AI leaders together at Atlan Activate on April 29 to show exactly what this infrastructure looks like in production: context and security, connected from day one.

Closing the gap between context layer and security

Security context belongs where data decisions get made. Not in a separate platform that data teams rarely open, not in a ticket queue, not in a spreadsheet someone exports once a quarter. When security and catalog operate as one, risk is visible where it's actionable, remediation happens faster, and AI agents can be governed with the same rigor as human users.

That's why TrustLogix and Atlan are building together. Our upcoming TrustLogix app for Atlan brings TrustDSPM and TrustAccess directly into Atlan’s Enterprise Context Layer, so governance and security teams finally share the same view of the data estate and make access risk visible where data decisions get made.

Enterprises that get AI right won't be the ones who locked everything down. They'll be the ones who built context into the infrastructure from the start.
