Agentic Commerce and Regulation: What Brands Need to Know
A compliance guide for brands navigating agentic commerce regulation, covering consumer protection, GDPR, the EU AI Act, and practical compliance frameworks.
The Regulatory Gap
Consumer protection law was written for human interactions. When a customer walked into a store, picked up a product, and approached the checkout, the legal responsibility chain was clear. The merchant disclosed the price, the customer made a decision, and the transaction occurred with full transparency.
Agentic commerce disrupts this model. Now an AI agent acts as a proxy for the consumer, making purchasing decisions, negotiating terms, and processing payments on their behalf. The Center for Data Innovation warned that "regulation meant for humans will slow agentic commerce down," yet that is precisely the regulation we operate under today.
The regulatory pressure is real and accelerating. On March 9, 2026, the UK Competition and Markets Authority (CMA) published formal guidance on "Complying with Consumer Law When Using AI Agents," establishing that businesses are responsible for how they engage with consumers regardless of whether they use human agents or AI systems. The EU AI Act will enforce strict rules beginning August 2026, with potential fines reaching 7 percent of global revenue for non-compliance. In the United States, NIST hosted a public-private conversation on AI agent standards in April 2026, signaling that federal-level frameworks are coming.
Industry responses are already emerging. American Express launched a developer kit with purchase protection for agentic commerce on April 14, 2026, the first card network to offer explicit buyer protections for AI-initiated transactions. Major law firms have published guidance: Cooley LLP's "AI Agents and Consumer Law" (March 2026), Fenwick LLP's "Is 2026 the Year of Agentic Payments?" and Torys LLP's "Five Questions In-House Counsel Should Ask About Agentic Commerce" (February 2026). The UK Information Commissioner's Office published "AI'll get that!" in January 2026, addressing privacy implications of agentic shopping.
This gap creates uncertainty for brands. Without clear regulatory guidance, companies deploying AI shopping agents face a patchwork of conflicting requirements across jurisdictions. Some regulators treat AI agents as consumers (triggering consumer protection rules). Others treat them as merchants (triggering merchant liability). Still others haven't decided yet.
The solution is not to wait for clarity. Brands that build compliance into their architecture now will have a competitive advantage as regulation solidifies. Those that ignore the landscape will face costly retrofits later.
Consumer Protection When AI Agents Buy
When an AI agent makes a purchase on behalf of a human consumer, several fundamental questions emerge.
Liability and Authority
Who is responsible for the transaction? If an AI agent negotiates a price, commits the consumer to a subscription, or recommends an unsuitable product, who is liable?
In most jurisdictions, the answer is the human consumer. The AI agent is a tool, and the consumer is responsible for the decisions their tools make. This is similar to liability for other intermediaries. However, this assumption breaks down if the agent acts outside the consumer's instructions or if the consumer cannot reasonably understand what the agent is doing.
Disclosure Requirements
Must consumers know they are interacting with an AI? The answer depends on the jurisdiction. The EU's AI Act, for example, requires transparency when AI makes decisions that significantly affect consumers. Many regulators believe that shopping through an AI agent is significant enough to warrant disclosure.
Best practice: disclose AI involvement clearly at the start of the conversation. Let the consumer know they are talking to an Agentic Client Advisor, not a human. This transparency builds trust and reduces liability.
Right of Withdrawal
The EU's Consumer Rights Directive grants consumers a 14-day cooling-off period for distance contracts. Does this apply to purchases made by AI agents? The directive's language refers to the consumer, but the intent is to protect humans from pressure and misunderstanding. An AI agent purchasing on the consumer's explicit instructions might fall outside the cooling-off period, while a purchase the agent made without clear authorization would likely remain covered.
Brands should clarify their terms of service on this point and honor the spirit of the rule: if the consumer did not fully understand what their agent was doing, they should have recourse to cancel.
Price Transparency and Negotiation
Can an AI agent negotiate prices on the consumer's behalf? Yes, but with safeguards. The final price must be transparent, and the consumer must approve any binding commitments. Many jurisdictions require that the final price be clearly displayed before the consumer confirms the purchase.
If your AI agent negotiates dynamic pricing with a merchant's AI agent, ensure that the final price is shown to the human consumer before payment is authorized. Do not allow agent-to-agent negotiations to finalize transactions without human review.
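One way to enforce this in practice is a hard gate between negotiation and payment. The sketch below is illustrative only (the `NegotiatedOffer` type and `authorize_payment` function are hypothetical names, not part of any real platform): agents may converge on a price, but payment cannot be released until a human has seen and approved it.

```python
from dataclasses import dataclass

@dataclass
class NegotiatedOffer:
    """The final terms that agent-to-agent negotiation converged on."""
    merchant: str
    item: str
    agreed_price_cents: int
    currency: str = "EUR"

def authorize_payment(offer: NegotiatedOffer, human_confirmed: bool) -> bool:
    """Release payment only after the human has seen and approved the final price.

    Negotiation may be delegated to agents, but the binding commitment
    requires explicit human confirmation.
    """
    if not human_confirmed:
        raise PermissionError(
            f"Final price {offer.agreed_price_cents / 100:.2f} {offer.currency} "
            "must be displayed to and approved by the consumer before payment."
        )
    return True
```

The key design choice is that human review is structurally unavoidable: there is no code path from negotiated offer to payment that skips the confirmation flag.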
Data Privacy in AI-to-AI Transactions
Agentic commerce introduces a new layer of data sharing: AI agent to AI agent. When a consumer's shopping agent communicates with a merchant's inventory or pricing system, what personal data is shared? Who controls that data? Who is responsible for protecting it?
GDPR and Data Processing
Under GDPR, personal data is any information that identifies a person or makes them identifiable. When a consumer's AI agent shares their preferences, purchase history, or payment method with a merchant's system, that is personal data, and both the consumer's AI provider and the merchant take on data protection obligations.
The question of who is the controller (the party that determines the purposes and means of processing) and who is merely a processor is complex. If the consumer directs their agent, the AI provider may act as a processor on the consumer's behalf; if the provider determines how the agent operates, the provider may itself be a controller; and the merchant, which uses the data for its own purposes, is typically a controller in its own right. In practice, both parties should assume responsibility for compliance.
Consent and Blanket Authorization
Can a consumer give blanket consent for their agent to share preferences with any merchant? GDPR requires consent to be specific and informed. A blanket authorization that the consumer does not fully understand violates GDPR.
Best practice: require the consumer to approve each merchant before the agent shares data, or at minimum to review and approve the specific data categories being shared. Document the consumer's consent for audit purposes.
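A minimal way to make consent specific and auditable is to record it per merchant and per data category, with a timestamp. The sketch below is a hypothetical illustration (the `ConsentLedger` class and its methods are invented names, not a real API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consumer's consent for one merchant, limited to named categories."""
    consumer_id: str
    merchant: str
    data_categories: tuple
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ConsentLedger:
    """Per-merchant, per-category consent records, kept for audit."""

    def __init__(self):
        self._records = []

    def grant(self, consumer_id, merchant, data_categories):
        record = ConsentRecord(consumer_id, merchant, tuple(data_categories))
        self._records.append(record)
        return record

    def is_permitted(self, consumer_id, merchant, category):
        # Sharing is allowed only if this exact merchant/category pair was approved.
        return any(
            r.consumer_id == consumer_id
            and r.merchant == merchant
            and category in r.data_categories
            for r in self._records
        )
```

Because every grant names a specific merchant and specific categories, there is no way to represent a blanket authorization in this structure, which is the point.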
Data Minimization
GDPR requires that only necessary data be shared. If a consumer is shopping for shoes, the agent should not share their medical history or financial information. Limit the data your agent shares to what is strictly required for the specific transaction.
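Data minimization can be enforced mechanically with a per-transaction allowlist, so the agent cannot over-share even by accident. This is a sketch under assumed field names (the `REQUIRED_FIELDS` mapping and `minimize` function are hypothetical):

```python
# Illustrative allowlists: only the fields strictly required for each purchase type.
REQUIRED_FIELDS = {
    "footwear": {"shoe_size", "style_preferences", "shipping_country"},
    "groceries": {"dietary_restrictions", "shipping_country"},
}

def minimize(profile: dict, transaction_type: str) -> dict:
    """Return only the profile fields this transaction actually needs."""
    allowed = REQUIRED_FIELDS.get(transaction_type, set())
    return {k: v for k, v in profile.items() if k in allowed}
```

An unknown transaction type yields an empty allowlist, so the default is to share nothing rather than everything.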
Cross-Border Data Flows
AI agents do not respect borders. A consumer in Germany with an AI agent talking to a merchant in the US creates data flows that cross jurisdictions with different privacy rules. Under GDPR, such transfers are heavily restricted unless adequate protections are in place.
If your merchant system receives data from agents outside the EU, ensure you have legal mechanisms for cross-border data transfers (Standard Contractual Clauses, Binding Corporate Rules, or adequacy decisions).
Liability and Responsibility
As agentic commerce grows, courts will need to assign liability for AI-related harms. Three models are competing.
Consumer Liability Model
Under this model, the human consumer is responsible for what their agent does, similar to how humans are responsible for their own shopping choices. This model assumes consumers understand and authorize everything their agents do.
Risk for brands: if a consumer can argue their agent acted without authorization, the brand might face liability for selling to an unauthorized agent.
Provider Liability Model
Under this model, the AI provider (OpenAI, Google, Anthropic, or Querytail) is liable for their agent's actions. The logic is that the platform controls the agent and should be responsible for ensuring it behaves correctly.
This model incentivizes AI providers to build compliance and safety features. However, it could also slow down agentic commerce innovation if liability exposure becomes too high.
Merchant Liability Model
Under this model, the merchant is liable for honoring commitments made by the consumer's agent. If the agent promised a price, the merchant must deliver that price. If the agent recommended a product, the merchant is responsible for its suitability.
This model is familiar to e-commerce (merchants are responsible for accurate product listings and pricing), but it creates new risks when agents negotiate custom terms.
Current Reality: liability is unclear, and the answer will vary by jurisdiction, transaction type, and product category. No consensus has emerged. Build systems that minimize harm regardless of how liability is ultimately assigned.
The EU AI Act and Agentic Commerce
The EU AI Act, with obligations phasing in through 2026, classifies AI systems by risk level. Shopping agents fall into this framework.
Risk Classification
Most general shopping agents are classified as "limited risk," requiring transparency (the consumer must know they are talking to an AI) and record-keeping (you must log interactions for compliance audits).
However, if your agent handles regulated products, it might trigger "high risk" classification. For example, an agent recommending pharmaceuticals or alcohol must comply with additional requirements, including human oversight, conformity assessments, and stricter accuracy standards.
What You Must Do
For limited-risk agents (general shopping): maintain transparent disclosures, keep audit logs, and publish documentation about how the agent works. For high-risk agents (regulated products): conduct conformity assessments, implement human review workflows, and report serious incidents to regulators.
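The split between limited-risk and high-risk handling can be expressed as a simple routing rule: high-risk categories are forced through human review before the agent proceeds. This is a hypothetical sketch (the category list and function names are illustrative, not a legal classification):

```python
# Illustrative examples of categories that may trigger high-risk treatment.
HIGH_RISK_CATEGORIES = {"pharmaceuticals", "alcohol"}

def requires_human_review(product_category: str) -> bool:
    return product_category in HIGH_RISK_CATEGORIES

def handle_recommendation(product_category, recommendation, approve_fn):
    """Route high-risk recommendations through a human reviewer callback."""
    if requires_human_review(product_category):
        # High risk: a human must approve before the agent commits to anything.
        if not approve_fn(recommendation):
            return {"status": "rejected", "reason": "human reviewer declined"}
    return {"status": "approved", "recommendation": recommendation}
```

The reviewer callback (`approve_fn`) stands in for whatever human-review workflow your organization uses: a queue, a dashboard, or a sign-off step.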
How Querytail Supports Compliance
Compliance is not something you bolt on after deploying an agent. It must be built into the architecture from the start.
Semantic Firewall
The Semantic Firewall prevents your agent from making unauthorized claims or commitments. It ensures that every recommendation aligns with your brand policies and legal guardrails. By enforcing constraints at the language model level, the Semantic Firewall creates an auditable compliance layer. If the agent recommends a product, you have a record of which guardrails were applied and why the recommendation was approved.
Audit Trail
Every recommendation, price negotiation, and customer interaction is logged with full reasoning. If a regulator asks "why did your agent recommend this product to this customer," you can point to the audit trail and show the decision logic. This documentation is essential for defending your compliance posture.
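In practice, an audit entry of this kind needs the query, the recommendation, the guardrails applied, and the reasoning, all timestamped. The sketch below assumes a hypothetical `log_recommendation` helper and an append-only list as the store; real systems would write to durable, tamper-evident storage:

```python
import json
from datetime import datetime, timezone

def log_recommendation(log, customer_query, recommendation,
                       guardrails_applied, reasoning):
    """Append one structured, self-describing audit entry per recommendation."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_query": customer_query,
        "recommendation": recommendation,
        "guardrails_applied": guardrails_applied,
        "reasoning": reasoning,
    }
    # Serialize so downstream storage can hash or sign each record as-is.
    log.append(json.dumps(entry))
    return entry
```

With entries shaped like this, answering "why did your agent recommend this product to this customer" is a lookup, not a reconstruction.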
Brand Voice Controls
You define what your agent can and cannot say. You set the policies, disclaimers, and tone. The agent operates within those constraints. This gives you control over how your brand appears in agentic commerce and ensures consistency across channels.
Trust Layer
Payment processing stays within regulated financial infrastructure. The agent does not handle payment details or access financial systems directly. Instead, it coordinates with your Trust Layer, which manages secure, PCI-compliant transactions. This separation reduces your liability and simplifies compliance.
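The separation described above can be sketched as tokenization: the regulated layer keeps the payment details, and the agent only ever holds an opaque token. The classes below are hypothetical stand-ins, not Querytail's actual implementation:

```python
class TrustLayer:
    """Stand-in for the regulated payment layer: only it sees card details."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        # Store the real card number inside the regulated boundary,
        # hand back only an opaque reference.
        token = f"tok_{len(self._vault) + 1}"
        self._vault[token] = card_number
        return token

    def charge(self, token: str, amount_cents: int) -> bool:
        return token in self._vault and amount_cents > 0

class ShoppingAgent:
    """The agent holds only the token, never the underlying card number."""

    def __init__(self, trust_layer: TrustLayer, payment_token: str):
        self.trust = trust_layer
        self.token = payment_token

    def checkout(self, amount_cents: int) -> bool:
        # The agent coordinates the charge but cannot read payment details.
        return self.trust.charge(self.token, amount_cents)
```

Because the agent object never stores card data, a compromise of the agent layer does not expose payment credentials, which is what keeps PCI scope confined to the Trust Layer.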
Practical Compliance Checklist
Use this checklist to ensure your agentic commerce initiative meets current regulatory expectations.
- Disclose AI involvement. Tell customers they are talking to an AI agent, not a human. Make this disclosure clear and early in the conversation.
- Maintain human oversight. For high-value transactions or regulated products, require human review before the agent commits to a transaction.
- Keep audit logs. Record every recommendation with context: what the customer asked, what guardrails were applied, and why the agent made that recommendation.
- Review brand voice guardrails quarterly. As regulations evolve and your business changes, update the constraints your agent operates under. Document these changes.
- Monitor regulatory developments. Subscribe to updates from regulators in your key markets. When new guidance emerges (EU AI Act updates, FTC guidance, etc.), assess the impact on your systems and update accordingly.
- Test for bias and fairness. Ensure your agent does not discriminate based on protected characteristics. If regulators challenge your agent, you need evidence it treats all customers fairly.
- Prepare for audits. Assume regulators will ask to audit your agentic commerce system. Organize your documentation so you can quickly provide evidence of compliance.
FAQ
Do I need to tell customers they are talking to an AI?
Yes. Most regulators expect transparency. The EU AI Act requires it for limited-risk systems. Best practice is to disclose upfront: "You are chatting with an Agentic Client Advisor." This builds trust and reduces liability.
Am I liable if the AI recommends the wrong product?
Liability depends on the type of error, the regulatory jurisdiction, and the product category. If your agent makes a false claim (a claim contradicted by product specs), you are likely liable. If the product is simply not ideal for the customer, liability is less clear. Ensure your agent makes only defensible recommendations and maintains audit logs of its reasoning.
Does GDPR apply to AI agent conversations?
Yes, if the conversation involves personal data (which it usually does). GDPR governs how you collect, process, and share customer data. When your agent shares customer preferences with a merchant, that is data processing under GDPR. Ensure you have legal grounds for that processing (usually informed consent) and that you comply with data minimization and cross-border transfer requirements.
How do I prepare for the EU AI Act?
Classify your agent by risk level. Most shopping agents are limited-risk. For limited-risk agents, maintain transparency, document your system, and keep audit logs. If your agent handles high-risk products, conduct conformity assessments and implement stricter oversight. Create a compliance roadmap and update it as new guidance emerges.
Can I use agentic commerce for regulated products like alcohol or pharmaceuticals?
Yes, but with significant additional controls. Regulated products typically trigger high-risk classification under the EU AI Act and require stricter compliance. You will need human oversight (a human must approve high-risk recommendations), robust accuracy testing, and potentially conformity assessments. Consult legal counsel specific to your product category and jurisdictions.
What happens if my agent makes a mistake and a customer is harmed?
Document everything. Your audit trail is your defense. If the agent operated within its guardrails and the harm resulted from a genuine error (not negligence), you are in a stronger position to defend yourself. If the agent violated a guardrail or made a claim you had no basis to authorize, liability is likely. This is why the Semantic Firewall and audit logging are critical.
Want to explore how Querytail can help? Request a demo to see the platform in action, or contact our team for any questions. For brands that want to build compliance into their agentic commerce infrastructure from day one, the Design Partner program offers hands-on collaboration with the Querytail team.
Compliance Built In, Not Bolted On
See the Semantic Firewall in action. A compliance layer that makes agentic commerce safer and more defensible.
Schedule a Demo