    Data Security Considerations For Building Enterprise AI Agents

    By wildgreenquest@gmail.com | May 11, 2026 | 5 Mins Read


    Tony Dang is co-founder & CTO at Infisical, an identity and security infrastructure platform.

    AI agents and custom AI-powered applications are rapidly becoming commonplace in production. But to implement them, engineering teams are connecting large language models (LLMs) to internal databases, customer records, proprietary codebases and operational systems.

    This, of course, expands the data security surface. Each time an enterprise sends a query to an LLM provider, it starts a data pipeline that flows sensitive information outside organizational boundaries. And every time an agent acts on untrusted input, it creates an opportunity for that pipeline to be exploited.

    As CTO of Infisical, I spend a lot of time thinking about security infrastructure. In this article, I explore the data security risks that enterprises should be thinking about when building custom AI applications and agents, and the practical controls that can reduce exposure today.

    Building Classification And Redaction Into The Data Pipeline

    When an enterprise builds a custom AI application, whether it’s a support agent, a code review tool or an internal knowledge assistant, it typically connects to an LLM provider via API. Any data you want the model to reason over must be sent to the provider.

    This means that if your agent summarizes customer support tickets, the contents of those tickets leave your infrastructure. If it searches internal documentation to answer employee questions, the relevant documents are included in the request payload.

    Organizations need to internalize a simple principle: Calling an LLM API is a data transfer. You’re trusting the provider with every piece of information included in that context window. The data leaves your perimeter, transits their infrastructure and is processed on their systems.
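    The "an LLM API call is a data transfer" point can be made concrete. The sketch below builds a request payload (the payload shape and model name are illustrative, modeled on typical chat-completion APIs; no request is actually sent) and shows that the serialized body carries the raw ticket data verbatim:

```python
import json

# Everything the model "sees" must be serialized into the request body.
# Here an internal support ticket becomes part of an outbound payload.
ticket = {
    "customer": "Jane Doe",
    "email": "jane@example.com",
    "issue": "Refund request for order #4821",
}

payload = {
    "model": "some-llm",  # illustrative model name
    "messages": [
        {"role": "system", "content": "Summarize support tickets."},
        {"role": "user", "content": json.dumps(ticket)},
    ],
}

# The serialized payload is exactly what leaves your perimeter.
body = json.dumps(payload)
print("jane@example.com" in body)  # → True: the PII travels with the request
```

    Anything you can find by string-searching that body, the provider receives in full.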

    Some providers also retain API inputs by default and may use them for model training unless you explicitly opt out or use specific API tiers with different data handling terms. Enterprises should review these policies carefully before production data flows through the pipeline.

    This doesn’t mean enterprises should avoid using LLM providers. It means they should be deliberate about what they send. Sensitive fields like personally identifiable information, financial records, credentials, health data and internal access tokens should be stripped or redacted before data reaches the provider. This is a necessary control any time proprietary or regulated data is involved.

    I’ve seen teams move quickly to prototype an AI feature and only later realize that production data flowing through the application includes information that should never have left the organization. The earlier data classification and redaction are built into the pipeline, the less painful the remediation.

    Data And Credential Exfiltration Through Prompt Injection

    Beyond the inherent data exposure of LLM API calls, there’s a more adversarial risk: prompt injection leading to data exfiltration.

    LLMs are probabilistic systems. They don’t follow instructions the way traditional software executes code. When an agent processes external content such as emails, documents, web pages or user-submitted text, that content can contain adversarial instructions designed to manipulate the model’s behavior.

    Through prompt injection, an attacker can attempt to cause an agent to leak sensitive data it has access to. This could mean extracting customer records, internal documents or API responses that the agent retrieves as part of its workflow. In the worst case, attackers can target credentials themselves, attempting to exfiltrate secrets, API keys or tokens that the agent uses to authenticate with internal systems. If successful, this gives the attacker direct access to the organization’s infrastructure without ever needing to breach a network boundary.

    Security researchers have demonstrated prompt injection attacks that cause agents to embed sensitive data in outbound requests, encode credentials into URLs or exfiltrate information through tool calls that the agent is authorized to make. These attacks exploit the agent’s own permissions and capabilities, so from the system’s perspective, the agent appears to be behaving normally even though its intent has been subverted.

    Practical Controls For Reducing Exposure

    There’s no single solution to these risks: the right approach depends on the application, the data involved and the threat model. But several principles apply broadly.

    1. Classify and redact before anything leaves your perimeter.

    Build data classification into the pipeline early. Identify which fields contain sensitive information and ensure they’re stripped, masked or replaced with placeholders before being included in any LLM request.
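    As a sketch of what "strip before sending" can look like, here is a minimal regex-based redactor. The patterns and placeholder format are illustrative assumptions; production systems typically layer dedicated PII classifiers on top of pattern matching:

```python
import re

# Illustrative patterns only; a production classifier would be far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # hypothetical key format
}

def redact(text: str) -> str:
    """Replace sensitive matches with typed placeholders before the text
    is included in any LLM request."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ticket = "Contact jane.doe@example.com, SSN 123-45-6789, key sk-abcdefghijklmnopqrstuv"
print(redact(ticket))
# prints: Contact [EMAIL], SSN [SSN], key [API_KEY]
```

    Typed placeholders ([EMAIL], [SSN]) preserve enough structure for the model to reason over the text without ever seeing the underlying values.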

    2. Centralize secrets management and prefer short-lived credentials.

    If an agent requires credentials to interact with internal systems, those credentials shouldn’t be hardcoded, embedded in environment variables or statically provisioned. They should be managed through centralized secrets management infrastructure that supports audit logging, access policies and automated rotation. Where possible, use dynamic or short-lived credentials that are issued on demand and expire after use. If an attacker manages to exfiltrate a credential through prompt injection, a dynamically issued secret with a short TTL dramatically limits the window of exploitation compared to a static API key that’s been active for months.
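    A rough illustration of the short-TTL idea, using a hypothetical in-process stand-in for a secrets manager. Real platforms expose similar dynamic-secret leases, but the actual names, APIs and issuance mechanics differ:

```python
import time
from dataclasses import dataclass

# Hypothetical lease object; real secrets managers track leases server-side
# and can revoke them centrally.
@dataclass
class LeasedCredential:
    value: str
    issued_at: float
    ttl_seconds: int

    def is_valid(self) -> bool:
        """A credential is only usable within its lease window."""
        return time.time() - self.issued_at < self.ttl_seconds

def issue_db_credential(ttl_seconds: int = 300) -> LeasedCredential:
    """Stand-in for a secrets manager issuing a short-lived credential
    on demand, instead of handing out a long-lived static key."""
    return LeasedCredential(
        value="db-user-temp:s3cr3t",  # placeholder value
        issued_at=time.time(),
        ttl_seconds=ttl_seconds,
    )

cred = issue_db_credential(ttl_seconds=300)
# Even if this value is exfiltrated via prompt injection, it is useless
# once the five-minute lease expires.
```

    The design point is the issuance pattern, not the object itself: the agent asks for a credential per task, and the blast radius of any single leak is bounded by the TTL.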

    3. Enforce constraints outside the model.

    Don’t rely on prompt engineering alone to prevent misuse. Security controls for tool invocation, allowlists for sensitive operations, output filtering and verification steps for high-impact actions should exist in the application layer, not inside the prompt. The model shouldn’t be the last line of defense for preventing unauthorized data access or credential exposure.

    Moving Forward
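    One way to sketch application-layer enforcement is a tool-dispatch gate that sits between the model's requested action and the real implementation. The tool names and approval flag here are hypothetical:

```python
# Illustrative application-layer gate: the model may *request* any tool,
# but the host process decides what actually runs.
ALLOWED_TOOLS = {"search_docs", "summarize_ticket"}
HIGH_IMPACT_TOOLS = {"delete_record", "send_email"}  # require explicit approval

def execute_tool_call(name: str, args: dict, approved: bool = False) -> str:
    """Enforce allowlisting and approval outside the model. A prompt-injected
    request for an unlisted or unapproved tool fails here, regardless of
    what the model was manipulated into asking for."""
    if name not in ALLOWED_TOOLS | HIGH_IMPACT_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    if name in HIGH_IMPACT_TOOLS and not approved:
        raise PermissionError(f"tool {name!r} requires explicit approval")
    # ... dispatch to the real tool implementation here ...
    return f"ran {name}"

print(execute_tool_call("search_docs", {}))  # permitted
# execute_tool_call("send_email", {})        # raises PermissionError without approval
```

    Because the check lives in the host process, no amount of adversarial prompting can talk the gate out of enforcing it.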

    The industry hasn’t yet converged on a standardized framework for securing AI agent data flows, and the threat landscape around prompt injection continues to evolve. But that isn’t a reason to delay action. Data redaction, centralized secrets management with dynamic credentials and application-layer security constraints are available today and can materially reduce risk.

    Organizations that treat data security as a foundational concern will be far better positioned as agents become more capable, more autonomous and more deeply integrated into critical systems.


    Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?



