AI Payment Security: How AI Agents Handle Card Data Without Breaking PCI

By Shuttle Team, February 20, 2026

The Security Question Nobody's Asking Loudly Enough

AI agents are processing real payments. Voice agents take card details over the phone. Chat agents send payment links mid-conversation. Agentic systems initiate transactions on behalf of customers.

And the security conversation hasn't caught up.

Most of the "agentic payments" content in 2026 focuses on protocols, commerce frameworks, and revenue opportunities. Almost none of it addresses the fundamental question: how do you let an AI agent process a payment without exposing card data to the AI model?

This isn't theoretical. If an AI voice agent hears a customer read their card number, that audio is being processed somewhere. If an AI chat agent receives a card number in a message, that text is being handled by an LLM. If the architecture doesn't explicitly prevent it, card data will flow through systems that aren't PCI-certified — and probably into training data.

The payment industry spent decades building PCI DSS to protect card data from humans. Now it needs to protect card data from AI.


The Core Principle: AI Agents Must Never See Card Data

This is non-negotiable. The AI agent — the LLM, the conversation engine, the decision-making model — must never:

  • Hear card numbers (in audio form)

  • Read card numbers (in text form)

  • Store card data (in any form)

  • Process card data (through any model)

  • Log card data (in conversation transcripts)

The AI agent decides when to initiate a payment and what to say about it. The payment infrastructure handles the how — card capture, tokenisation, PSP routing, and transaction processing.

These two systems must be architecturally separated with no card data crossing the boundary.


Three Architectures for AI Agent Payments

1. DTMF Capture (Voice Agents)

How it works: The AI voice agent's audio processing is paused during card capture. The customer enters card digits via their phone's keypad (DTMF tones). These tones are captured directly by the PCI-certified payment layer — the audio never reaches the AI model.

The agent says: "Please enter your card number using your keypad." The voice channel switches to DTMF mode. Card digits are captured by the PCI-certified payment layer (NOT the AI agent). Payment is processed. The agent continues: "Thank you, your payment has been confirmed."
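The handoff above can be sketched in code. Everything here is illustrative: `PaymentLayer`, `Call`, and their methods are hypothetical stand-ins, not a real Shuttle or telephony SDK.

```python
class PaymentLayer:
    """Stand-in for the PCI-certified payment service."""
    def capture_dtmf(self, call_id: str, amount_minor: int) -> dict:
        # In production, keypad tones are captured on the telephony leg;
        # the digits never enter the agent's process. Stubbed for the sketch.
        return {"status": "approved", "reference": "pay_demo_1"}

class Call:
    """Stand-in for a live voice call controlled by the agent."""
    id = "call_demo_1"
    def say(self, text: str) -> None: ...
    def mute_agent_audio(self) -> None: ...   # speech-to-text stops receiving audio
    def unmute_agent_audio(self) -> None: ...

def take_payment(call: Call, payments: PaymentLayer, amount_minor: int) -> str:
    call.say("Please enter your card number using your keypad.")
    call.mute_agent_audio()                   # the AI model is now deaf to the caller
    result = payments.capture_dtmf(call.id, amount_minor)  # PCI scope lives here
    call.unmute_agent_audio()
    if result["status"] == "approved":
        return "Thank you, your payment has been confirmed."
    return "That payment didn't go through. Would you like to try again?"
```

The important property is structural: between `mute_agent_audio` and `unmute_agent_audio`, no code path delivers caller audio to the model.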

Why it works: DTMF capture is an established PCI-compliant pattern used in IVR systems for decades. The innovation is integrating it seamlessly into an AI voice agent conversation so the customer experience feels natural, not like a transition to a clunky IVR.

PCI scope: The AI voice agent is out of PCI scope. The payment layer (which handles DTMF capture) is in PCI scope and holds PCI DSS Level 1 certification.

2. Payment Links (Chat Agents)

How it works: The AI chat agent determines payment is needed. It generates a payment link via the payment layer API (passing amount, merchant ID, and reference — no card data). The link is sent to the customer within the chat. The customer clicks the link and pays on a PCI-certified hosted checkout page. A payment confirmation webhook notifies the agent. The agent says: "Payment received. Your booking is confirmed."
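A minimal sketch of that flow, assuming a hypothetical `create_payment_link` wrapper and webhook payload shape (neither is a documented Shuttle API):

```python
import uuid

def create_payment_link(amount_minor: int, currency: str,
                        merchant_id: str, reference: str) -> dict:
    """The agent passes only order metadata; card entry happens on the
    hosted checkout page behind this URL, never in the chat."""
    return {
        "url": f"https://pay.example.com/link/{uuid.uuid4().hex}",
        "amount_minor": amount_minor,
        "currency": currency,
        "reference": reference,
    }

def handle_payment_webhook(event: dict) -> str:
    """All the agent ever learns: paid or not, amount, reference."""
    if event.get("status") == "paid":
        return f"Payment received. Your booking {event['reference']} is confirmed."
    return "We haven't received your payment yet."
```

Note that no card field appears anywhere in either function's inputs or outputs; that is the whole point of the pattern.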

Why it works: Payment links are the cleanest separation possible. The chat platform, the LLM, and the conversation logs never contain card data. The only data the agent sees is a payment confirmation (paid/not paid, amount, reference).

PCI scope: The AI chat agent and the entire chat platform are out of PCI scope. The hosted checkout page (payment layer) is in PCI scope.

3. Tokenised Card-on-File (Automated Agents)

How it works: The customer has previously stored a card via a PCI-certified flow. The AI agent sends a payment request to the payment layer with a customer token (NOT card data). The payment layer uses the stored token to process with the PSP. Confirmation is returned to the agent.
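As a sketch, a token-based charge looks like the following; `charge_token` is a hypothetical wrapper around a payment-layer endpoint, not a real SDK call:

```python
def charge_token(customer_token: str, amount_minor: int, currency: str) -> dict:
    # The agent holds an opaque token (e.g. "cus_tok_8f3a"), never a PAN.
    # Only the PCI-scoped vault can map the token back to card data.
    if customer_token.replace(" ", "").isdigit():
        raise ValueError("raw card numbers must never reach the agent")
    # Forward the token to the payment layer, which charges via the PSP.
    return {"status": "approved", "amount_minor": amount_minor,
            "currency": currency, "token": customer_token}
```

The guard clause is a belt-and-braces check; the real protection is that the agent's environment simply has no way to obtain a PAN in the first place.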

Why it works: Tokenisation is a foundational PCI concept. The AI agent has the same access as any other system that processes token-based payments — it can initiate a charge but cannot access the underlying card data.

PCI scope: The AI agent is out of PCI scope (it only handles tokens). The tokenisation vault and payment processing layer are in PCI scope.


What Could Go Wrong (and How to Prevent It)

Risk 1: Card Data in Conversation Transcripts

The problem: A customer reads their card number aloud to a voice agent, or types it into a chat. If the conversation is transcribed and stored, card data ends up in logs that aren't PCI-certified.

Prevention:

  • Voice agents: DTMF mode suppresses audio capture during card entry. The AI's speech-to-text engine never receives the card number audio.

  • Chat agents: Payment links eliminate this risk entirely — customers never type card data into the chat.

  • If a customer volunteers card data in conversation (unprompted), the system must detect and redact it from transcripts. Pattern matching for 13-19 digit sequences, ideally combined with a Luhn check to cut false positives, is the minimum.
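A minimal redaction pass along those lines might look like this. The regex and the `[REDACTED]` marker are illustrative choices; the Luhn checksum itself is the standard card-number check:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: true for real card numbers, false for most random runs."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:        # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

# 13-19 digits, optionally separated by spaces or hyphens
PAN_RE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact_pans(transcript: str) -> str:
    def _sub(m: re.Match) -> str:
        digits = re.sub(r"[ -]", "", m.group())
        return "[REDACTED]" if luhn_valid(digits) else m.group()
    return PAN_RE.sub(_sub, transcript)
```

In practice this should run before transcripts are persisted anywhere, not as a later clean-up job.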

Risk 2: Card Data in LLM Training Data

The problem: If card data flows through an LLM's inference pipeline, it could theoretically appear in model outputs or influence training (for fine-tuned models).

Prevention:

  • Architectural separation: card data never reaches the LLM. DTMF capture and payment links keep card data in the payment layer.

  • If using third-party LLMs (OpenAI, Anthropic, Google), ensure card data never appears in prompts or conversation context. This is enforced by the architecture, not by policy.

Risk 3: The AI Agent Asks for Card Data

The problem: An improperly configured AI agent might ask the customer to read their card number aloud (voice) or type it into the chat — creating a card data exposure even with proper payment infrastructure.

Prevention:

  • Agent conversation design must explicitly prohibit the agent from soliciting card data in any channel whose content the agent processes, whether voice or text.

  • DTMF capture requires the agent to redirect to keypad entry — the agent should never say "please read me your card number."

  • Chat agents should always send a payment link — never ask for card details in the chat.

Risk 4: Spoofing and Social Engineering

The problem: A malicious actor could attempt to trick an AI agent into revealing payment information, processing unauthorised transactions, or bypassing payment controls.

Prevention:

  • AI agents should have strict guardrails on payment actions: maximum transaction amounts, customer authentication requirements, and refund limits.

  • Payment actions should require customer verification (OTP, biometric, or SCA challenge) — initiated by the payment layer, not the AI agent.

  • Transaction anomaly detection at the payment layer level catches patterns the AI agent might not recognise.
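The guardrails above can be enforced as a pre-flight check the agent must pass before it is allowed to call the payment layer at all. The limits and field names here are assumptions for the sketch, not recommended values:

```python
MAX_AMOUNT_MINOR = 50_000            # e.g. cap AI-initiated charges at 500.00
DAILY_REFUND_LIMIT_MINOR = 100_000   # e.g. cap total refunds per day

def payment_allowed(amount_minor: int, customer_verified: bool,
                    is_refund: bool = False,
                    refunds_today_minor: int = 0) -> bool:
    """Gate every AI-initiated payment action behind hard limits."""
    if not customer_verified:        # verification (OTP/SCA) done by payment layer
        return False
    if amount_minor > MAX_AMOUNT_MINOR:
        return False
    if is_refund and refunds_today_minor + amount_minor > DAILY_REFUND_LIMIT_MINOR:
        return False
    return True
```

Crucially, this check should live in the payment layer, where a prompt-injected agent cannot rewrite it.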


PCI DSS and AI Agents: Where We Are

What PCI DSS 4.0 Says

PCI DSS 4.0 (whose final, future-dated requirements became mandatory in March 2025) doesn't specifically address AI agents. The standard focuses on protecting cardholder data wherever it's stored, processed, or transmitted — regardless of the technology.

The principles apply directly:

  • Requirement 3: Protect stored account data. AI agents must not store card data.

  • Requirement 4: Protect cardholder data with strong cryptography during transmission. Card data flowing to the payment layer must be encrypted.

  • Requirement 7: Restrict access to system components and cardholder data by business need to know. AI agents have no business need to access card data.

  • Requirement 10: Log and monitor all access to system components and cardholder data. Payment transactions initiated by AI agents must be auditable.

What's Coming

The PCI Security Standards Council hasn't published AI-specific guidance yet. Industry expectations:

  • Guidance on AI agent payment architectures (likely validating the separation model described here)

  • Requirements for AI conversation transcript handling (redaction, retention)

  • Standards for AI-initiated transaction authentication

  • Clarification on PCI scope boundaries between AI platforms and payment infrastructure

Platforms that architect the separation now will be ahead when formal guidance arrives.


Audit Trail Requirements

AI-initiated payments need stronger audit trails than human-initiated ones because:

  • There's no human judgment to fall back on if something goes wrong

  • Dispute resolution needs to prove the customer consented

  • Regulators will scrutinise AI-processed transactions more closely

Minimum audit trail for AI agent payments:

  • Conversation transcript (with card data redacted)

  • Customer consent record (verbal for voice, click-through for links)

  • Payment request timestamp and parameters

  • PSP response (approval/decline code, reference)

  • Agent decision logic (why the payment was initiated)

  • SCA/authentication result (if applicable)
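One way to make that minimum concrete is a single record written per AI-initiated payment. The field names below are assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentPaymentAudit:
    transcript_redacted: str     # conversation with card data already removed
    consent: str                 # "verbal" (voice) or "click-through" (link)
    amount_minor: int
    currency: str
    psp_response_code: str       # approval/decline code from the PSP
    psp_reference: str
    agent_decision: str          # why the agent initiated the payment
    sca_result: Optional[str] = None
    requested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Storing `agent_decision` as free text alongside the structured fields keeps the record useful in disputes, where "why did the agent charge me?" is the first question asked.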


FAQ

Can we use our own LLM and still be PCI compliant?

Yes — as long as card data never reaches the LLM. The LLM handles conversation logic. The payment layer handles card data. If these are architecturally separated (DTMF for voice, payment links for chat, tokens for stored cards), the LLM and its hosting environment are out of PCI scope.

What about voice biometrics for payment authentication?

Voice biometrics can be used as one factor in customer authentication — but it's not sufficient alone for payment authorisation under SCA requirements. Combine with a second factor (OTP, device confirmation) where SCA applies.

Do we need a separate PCI certification for AI agent payments?

If your AI agent never touches card data (and the architecture enforces this), the agent itself doesn't need PCI certification. The payment layer that handles card data does. This is the same model as any application that uses a PCI-certified payment provider — the application is out of scope if card data doesn't flow through it.

What if the customer insists on reading their card number to the voice agent?

The agent should redirect: "For your security, I'll ask you to enter your card details using your phone's keypad." If the customer speaks card details despite this, the system must: (1) not process the spoken data as payment input, (2) redact the card number from any transcript, and (3) still route to DTMF capture for the actual payment.


Building AI agents that handle payments?

Shuttle's payment layer is PCI DSS Level 1 certified and purpose-built for AI agent payment flows — DTMF voice capture, payment links, and tokenised payments. Your AI agent stays out of PCI scope. Every transaction is secure and auditable.

See the Architecture | Talk to Our Team
