5 min read · Jan 13, 2025

Now in Public Preview: Securing Agentic AI with TrustLogix TrustAI and MCP

Simon Thornell

The enterprise landscape is rapidly shifting from passive chatbots to autonomous AI agents. However, as these agents gain the ability to interact with sensitive corporate data and execute tools, a critical challenge emerges: how do we govern what an agent can see, do, and share without creating a security "maintenance nightmare"?

We are thrilled to announce that TrustLogix TrustAI’s SaaS-hosted Model Context Protocol (MCP) server is now in Public Preview. TrustAI acts as the specialized security control plane for your AI ecosystem, bridging the gap between LLM capabilities and the strict governance required by the modern enterprise. Leveraging MCP, TrustAI replaces fragmented, hardcoded agent connectors with a standardized security control plane that centralizes authentication, externalizes policy enforcement, and delivers unified auditability across all AI-to-data interactions.

The "Old Way" vs. The "MCP Way"

Before MCP, connecting an AI agent to data required custom, hardcoded connectors for every tool, leading to a fragmented security posture.

TrustAI + MCP Comparison
Feature | The "Old Way" (Hardcoded) | The TrustAI + MCP Way
Connectivity | Custom API/OAuth per tool | Standardized Protocol (MCP)
Security | Hardcoded logic in agent code | Externalized Policy Engine
Auth Management | Individual keys & secret sprawl | Centralized Onboarding & Auth
Auditability | Scattered across microservices | Centralized Security Control Plane
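
To make the contrast concrete, here is a minimal Python sketch of the two approaches. The connector class, control-plane client, and policy-check call below are illustrative assumptions only; they do not represent the actual TrustLogix, Snowflake, or MCP SDK APIs.

```python
# Hypothetical sketch: fragmented per-tool connectors vs. one standardized client.
# None of these classes reflect real TrustLogix or MCP SDK APIs.

# --- The "Old Way": one hand-rolled connector (and secret) per tool ---
class SnowflakeConnector:
    def __init__(self, oauth_token: str):       # per-tool OAuth handling
        self.token = oauth_token

    def query(self, sql: str) -> list:
        if "restricted" in sql:                  # security logic hardcoded in agent code
            raise PermissionError("blocked by inline rule")
        return [("row", 1)]                      # placeholder result


# --- The "MCP Way": one client, policy and auth handled by the control plane ---
class ControlPlaneClient:
    """Stand-in for a standardized MCP-style client; authorization is externalized."""

    def authorize(self, resource: str, action: str) -> bool:
        # Decision comes from a central policy engine, not from agent code.
        return True                              # placeholder decision

    def act(self, resource: str, action: str) -> str:
        if not self.authorize(resource, action):
            raise PermissionError(f"{action} on {resource} denied")
        return f"{action} on {resource} permitted; agent fetches data directly"


if __name__ == "__main__":
    print(SnowflakeConnector("token-123").query("SELECT 1"))
    print(ControlPlaneClient().act("sales_data_product", "READ"))
```

The design difference is where the decision lives: in the first pattern every agent carries its own auth and policy code, while in the second the agent only asks a central control plane for a decision.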

Bridging the Governance Gap: A Strategic Evolution for the CDO & CISO

While TrustLogix TrustDSPM excels at discovering data risks and TrustLogix TrustAccess manages entitlements and access policies, TrustLogix TrustAI is a purpose-built solution that’s focused specifically on agentic data access. This Public Preview marks a turning point: moving security from a reactive check to a real-time authorization service built for the speed of autonomous systems.

For the CDO and CISO, TrustAI provides a unified solution to the three biggest barriers to AI production:

Eliminating the "AI Stall" (CDO & CISO Alignment)

Most AI projects fail to move past the pilot phase because manual security reviews for every new agentic "tool" or data source create a massive bottleneck.

  • The CDO Benefit: Innovation moves at the speed of AI, not the speed of paperwork.
  • The CISO Benefit: Security is "baked in" via a federated, automated governance model that enforces least-privilege by default, preventing Shadow AI.

Mitigating Agentic Risk (CISO Mandate)

Agents introduce non-human risk vectors like autonomy drift (acting beyond intent) and chained vulnerabilities (where one agent's flaw cascades to another).

The Solution: TrustAI understands the unique risk profile of agents, such as arbitrary code execution and the potential for "shadow" data retrieval. It ensures that agents operate only within the scope of the users they represent, not the broad service accounts they run on.

Ensuring Explainability & ROI (Business & Security Value)

When a regulator asks, "Why did the agent make this decision?", you need an answer immediately.

  • Transparency: TrustAI provides an audit trail of exactly what data an agent was authorized to see during a specific decision.
  • Efficiency: By decoupling security logic from agent code, developers spend 80% less time on custom auth-wrappers, allowing the CISO to reduce the "Security Debt" that typically accumulates in fast-moving AI projects.

TrustAI at a Glance: High-Level Capabilities

The TrustAI Public Preview introduces a centralized authorization service designed specifically for the era of agentic workflows. Before diving into the technical architecture, here is the high-level value this preview delivers:

  • Universal Agent Connectivity: Seamlessly connect any MCP-compliant agent (from OpenAI to Claude 3.5) to your enterprise data using a single, standardized protocol.
  • Centralized Security Control Plane: Externalize authorization logic from your agent's code. Manage all permissions, resources, and attributes in one dashboard rather than hardcoding them into individual scripts.
  • Context-Aware Access Policies: Beyond simple "allow/deny" rules, TrustAI evaluates the full context of a request (identity, task intent, and environmental signals) to determine entitlements.
  • Comprehensive Auditability: Automatically log every intent, authorization request, and data access event. Turn the "black box" of agentic behavior into a transparent, compliant record for auditors.

Proxyless Architecture: Security at the Control Plane

TrustLogix differentiates itself from the "Gateway" solutions in the market by operating entirely on the Control Plane. Traditional AI security gateways act as a middleman, inspecting every prompt, which introduces latency and forces sensitive data to leave your secure perimeter.

TrustLogix’s proxyless approach ensures that the actual data payload never touches our SaaS. Instead, TrustAI handles the authorization handshake and entitlement checks out-of-band. The agent receives a "Permit" or "Deny" decision and, if authorized, fetches the data directly from the source. This ensures zero data bottlenecks and maintains complete data sovereignty, a critical requirement for CISO-led compliance.
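
Here is a minimal sketch of what that out-of-band flow could look like from the agent's side, assuming a hypothetical check_entitlement call to the control plane and a direct fetch from the data source; the function names and decision logic are illustrative assumptions, not the actual TrustAI API.

```python
# Hypothetical sketch of a proxyless authorization check.
# The control-plane call and data-source fetch below are illustrative stand-ins,
# not the real TrustAI or data-platform APIs.

def check_entitlement(user: str, resource: str, action: str) -> bool:
    """Out-of-band call to the control plane; only authorization metadata leaves the perimeter."""
    request = {"user": user, "resource": resource, "action": action}
    # In a real deployment this would be a call to the SaaS control plane.
    return request["action"] in {"READ"}         # placeholder decision

def fetch_rows(resource: str) -> list:
    """Direct fetch from the data source; the payload never transits the SaaS."""
    return [{"resource": resource, "row": i} for i in range(3)]

def agent_read(user: str, resource: str) -> list:
    if not check_entitlement(user, resource, "READ"):
        raise PermissionError(f"READ on {resource} denied for {user}")
    return fetch_rows(resource)                  # data stays inside your perimeter

if __name__ == "__main__":
    print(agent_read("analyst@example.com", "sales_data_product"))
```

The key point is that only the Permit/Deny decision crosses the boundary; the rows themselves never leave your environment.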

Asset Management: Defining Your Data Landscape

With the "Control Plane" architecture in place, the next step is defining the boundaries of your AI ecosystem. TrustAI allows you to map your enterprise architecture into the MCP framework by onboarding your existing data sources and defining granular resources.

Onboarding Your Data Ecosystem

You can seamlessly plug in your primary enterprise data platforms to serve as the foundation for your AI agents:

  • Cloud Data Warehouses/Lakehouses: Snowflake, Databricks, Amazon Redshift, etc.
  • Object Storage: Amazon S3 buckets, etc.

Defining Resources and Permitted Actions

Once your data sources are connected, you define the specific "Assets" your agents can interact with. Beyond standard types, you can create custom resources to fit unique business logic or proprietary data silos. For example:

Resource Type Permissions
Resource Type | Default Permitted Actions
Columnar Database | READ, WRITE, PUBLISH
Data Product | READ, WRITE, PUBLISH
Folder | READ, WRITE, DELETE, LIST
File | READ, WRITE, DELETE, DOWNLOAD
API Endpoint | READ, WRITE, EXECUTE
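
As a rough sketch, such a catalog could be modeled as a mapping from resource types to permitted actions; the structure below simply mirrors the table, and the register_custom_resource helper is a hypothetical illustration, not the TrustAI configuration format.

```python
# Hypothetical resource catalog mirroring the table above.
# This is an illustration, not the actual TrustAI configuration schema.

DEFAULT_ACTIONS = {
    "columnar_database": {"READ", "WRITE", "PUBLISH"},
    "data_product":      {"READ", "WRITE", "PUBLISH"},
    "folder":            {"READ", "WRITE", "DELETE", "LIST"},
    "file":              {"READ", "WRITE", "DELETE", "DOWNLOAD"},
    "api_endpoint":      {"READ", "WRITE", "EXECUTE"},
}

def register_custom_resource(catalog: dict, resource_type: str, actions: set) -> dict:
    """Add a custom resource type (e.g., a proprietary data silo) with its permitted actions."""
    catalog[resource_type] = set(actions)
    return catalog

if __name__ == "__main__":
    catalog = dict(DEFAULT_ACTIONS)
    register_custom_resource(catalog, "pricing_model", {"READ", "EXECUTE"})
    print(sorted(catalog["pricing_model"]))
```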

Deep Dive: Dynamic Attribute-Based Access Control (ABAC)

Once resources are defined, TrustAI allows you to build policies using ABAC to create dynamic, context-aware boundaries. Unlike traditional role-based access, ABAC evaluates the "Who, What, Where, and Why".

Advanced Policy Example: The "High-Clearance Analyst" Agent

With TrustAI, you can define a policy that grants access only when multiple signals align. For example, the request is PERMITTED ONLY IF all of the following conditions hold (see the sketch after this list):

  • MFA is Enabled: The user's current session is verified via multi-factor authentication.
  • Access Window Match: The system timestamp falls within the user’s specific Access_Period attribute.
  • No Unusual Login: The Unusual_Login attribute for the user is set to False.
  • Geographic Match: The Country_IP_Address matches the user's registered Country attribute.
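
Here is a minimal sketch of how this policy could be evaluated, assuming the attributes above arrive as a simple request context; the field names follow the bullet list and the evaluation logic is illustrative, not the TrustAI policy syntax.

```python
from datetime import datetime, time

# Hypothetical sketch of the "High-Clearance Analyst" ABAC check.
# Attribute names follow the bullets above; this is not the TrustAI policy language.

def high_clearance_analyst_permit(ctx: dict) -> bool:
    """Permit only when every contextual signal aligns."""
    mfa_ok    = ctx["mfa_enabled"] is True
    window_ok = ctx["access_period"][0] <= ctx["request_time"] <= ctx["access_period"][1]
    login_ok  = ctx["unusual_login"] is False
    geo_ok    = ctx["country_ip_address"] == ctx["registered_country"]
    return mfa_ok and window_ok and login_ok and geo_ok

if __name__ == "__main__":
    context = {
        "mfa_enabled": True,
        "access_period": (time(8, 0), time(18, 0)),   # user's Access_Period attribute
        "request_time": datetime.now().time(),
        "unusual_login": False,
        "country_ip_address": "US",
        "registered_country": "US",
    }
    print("PERMIT" if high_clearance_analyst_permit(context) else "DENY")
```

Because every condition is evaluated at request time, a change in any single attribute (an expired access window, a login from an unexpected country) flips the decision to DENY without touching agent code.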

The TrustLogix TrustAI Authorization Lifecycle

When an agent interacts with a data source like Snowflake through TrustLogix, it follows this process:

  1. Connection & Initialization: The MCP Client (Agent) establishes a secure session with the TrustLogix MCP Server.
  2. Receiving User Input: The user asks the agent to perform a specific data-driven task.
  3. Understanding & Analysis: The LLM identifies the specific resource (e.g., a Data Product) it needs to access.
  4. Determining Need for External Info: The agent determines it needs a specific action (e.g., READ) to complete the task.
  5. MCP Invocation: The agent calls the TrustLogix MCP Server to request authorization for that resource and action.
  6. Information Retrieval (Policy Check): TrustAI evaluates the complex ABAC policy (MFA, IP, time, etc.) and checks for explicit User Consent.
  7. Generating the Response: The LLM receives the authorized, policy-compliant context and creates the final answer.
  8. Responding to the User: The agent delivers the result to the user, with every step fully audited within TrustAI.
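
To ground the lifecycle, here is a condensed, hypothetical sketch of steps 5 and 6 from the agent's perspective; the message shape, function name, and decision logic are assumptions for illustration and do not reflect the real MCP wire format or TrustAI endpoints.

```python
# Hypothetical sketch of steps 5-6: the agent asks the control plane for a decision
# on a resource/action pair before touching the data source. The request and response
# shapes are illustrative, not the real MCP messages or TrustAI API.

def mcp_authorize(session: dict, resource: str, action: str) -> dict:
    """Stand-in for the MCP invocation that requests authorization from TrustAI."""
    policy_ok  = action in {"READ"}                  # placeholder for the ABAC evaluation
    consent_ok = session.get("user_consent", False)  # explicit user-consent check
    decision = "PERMIT" if (policy_ok and consent_ok) else "DENY"
    return {"decision": decision, "resource": resource, "action": action,
            "audit_id": "evt-0001"}                  # every decision is auditable

if __name__ == "__main__":
    session = {"user": "analyst@example.com", "user_consent": True}
    response = mcp_authorize(session, "quarterly_revenue_data_product", "READ")
    print(response["decision"], "->", response["audit_id"])
```

Only after a PERMIT does the agent proceed to steps 7 and 8, fetching the data directly from the source and answering the user, with the audit identifier tying the answer back to the authorization decision.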

Conclusion: Future-Proofing AI Governance

The TrustAI Public Preview is the foundational layer for the agentic future. By decoupling Permission Logic from Agent Logic, and keeping both separate from the Data Path, you can scale AI initiatives with the confidence that your data remains under your control and fully auditable.
