Using AI to Prevent Chargebacks

AI chatbots are shaking up the e-commerce buying journey.
Once upon a time, prospective buyers would discover a product organically or through an ad and click into the seller’s site to make a purchase. Shoppers could also do some manual comparison shopping. They’d examine products and services by visiting multiple sites and clicking through search results.
This is changing, though. AI tools like Claude, ChatGPT, Gemini, and Perplexity can now browse the web on behalf of users, distilling product listings and pricing information from several sites in mere seconds. AI-generated recommendations are quickly becoming a dominant force in how consumers shop.
According to the University of Virginia, just under “half [of polled consumers] trust AI more than a friend when it comes to choosing what to wear.” And nearly 60% have used AI as part of their e-commerce buying journeys.
Then there are agentic payments; this technology upgrades AI from a shopping companion to a full-fledged independent buyer. It empowers AI agents to autonomously shop and pay on behalf of customers based on user-defined rules.
What’s entirely missing from this picture, however, is what happens when AI misses the mark. What happens when a customer purchases an AI-recommended product they don’t like? Or, when an AI agent orders something a buyer doesn’t want?
The obvious answer is chargebacks.
Understanding the Chargeback Landscape
A chargeback is a forced payment reversal that occurs when a cardholder files a complaint with their issuing bank. When this happens, money is forcibly withdrawn from the merchant’s account and returned to the cardholder.
The merchant loses out on sales revenue, plus any merchandise shipped (assuming the buyer doesn’t return it). To make matters worse, the merchant’s acquiring bank will assess a chargeback fee ranging from $20 to $100 per filed dispute.
Chargebacks today are fairly common: between 0.6% and 1% of all card-not-present e-commerce transactions end up as chargebacks, meaning that an online seller that receives 1,000 orders a month can expect to receive, on average, between six and 10 chargebacks.
Cardholders are also fairly chargeback-happy. The average consumer files 5.7 chargebacks per year and disputes an average transaction value of $76 per instance.
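The back-of-the-envelope math behind these figures is straightforward. A minimal sketch, using the rate and fee ranges cited above (0.6%–1% of card-not-present transactions, $20–$100 per dispute); the function name and defaults are illustrative:

```python
def chargeback_exposure(monthly_orders,
                        rate_low=0.006, rate_high=0.01,
                        fee_low=20.0, fee_high=100.0):
    """Estimate expected monthly chargeback count and fee exposure.

    Defaults reflect the industry ranges cited above; a merchant
    should substitute its own historical dispute rate and fee schedule.
    """
    low_count = monthly_orders * rate_low
    high_count = monthly_orders * rate_high
    return {
        "chargebacks": (low_count, high_count),
        "fees": (low_count * fee_low, high_count * fee_high),
    }

# A seller with 1,000 orders/month: roughly 6-10 chargebacks,
# $120-$1,000 in fees alone, before lost revenue and merchandise.
exposure = chargeback_exposure(1000)
```

Note that the fee figures exclude the lost sale itself and any unreturned merchandise, so real exposure runs higher.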
However, agentic payments could put upward pressure on all of these figures. The reasoning is intuitive: if a customer outsources their entire shopping journey to an AI agent, the buyer will not know what the agent purchased until payments have been made. The most immediate way to “cancel” a purchase, then, would be to open one’s banking app, find the charge in question, and dispute it.
Left unchecked, this incoming deluge of chargebacks could overwhelm merchants. While agentic AI could make e-commerce shopping more convenient for buyers, it could do so at the expense of sellers, who may end up on the losing end of this new technology.
Deploying AI to Identify Fraud
The good news is that merchants aren’t powerless in this situation. In fact, AI itself presents the most potent defense against the challenges it creates. AI shopping recommendations and agentic AI sit at the bleeding edge, but the use of AI and machine learning in fraud prevention is already a well-established and highly effective discipline.
Merchants can deploy these sophisticated tools, which learn and adapt over time to the seller’s specific fraud surface, to proactively identify and block fraudulent transactions before they ever become chargebacks.
An additional area in which AI excels is real-time transaction risk scoring. When a purchase is made, a machine learning model analyzes hundreds of data points associated with the transaction in fractions of a second—from the time of day and the purchase amount to the type of device used and the customer’s location. It then assigns a risk score to the transaction, which allows both the model and humans-in-the-loop to make more intelligent risk decisions.
For instance, orders with high risk scores can be automatically blocked or flagged for manual review, while low-scoring orders can be approved without friction, simultaneously protecting revenue and preserving the customer experience.
Transaction monitoring can be further enhanced by tools like behavioral biometrics and device fingerprinting. These AI-driven systems analyze how a user interacts with a website and can distinguish between a real human and a bot based on subtle cues like mouse movements, typing cadence, touchscreen gestures, and dwell time.
Collecting these data points allows the AI fraud detection system to create a unique “fingerprint” for each user’s device and behavior, which it can use to distinguish fraudsters from legitimate users.
AI is similarly adept at rooting out synthetic profiles, or fake personas made using both real and fake information. For example, an AI model can comb through proprietary datasets, public databases, and social media sites to identify less-obvious links and anomalies that could indicate a synthetic identity is being used to make fraudulent purchases.
Doing so can help merchants protect their revenue and thwart checkout attempts from bad actors masquerading as people who don’t exist.
Ways AI Can Help Prevent Disputes
Beyond stopping outright criminal fraud, AI can also play a crucial role in preventing the types of customer-service-related disputes that come from accidental purchases, including AI-assisted purchases. These transactions are legitimate, but the buyer might have remorse, be confused, or expect to get the goods for free.
The key is to resolve customer issues and reduce friction in the transaction process long before a buyer feels the need to contact their bank.
Here, AI can help by:
Providing Instant, 24/7 Customer Support
Modern AI-powered chatbots can understand and resolve a wide range of common customer issues immediately, from questions about order status to product details. Merchants who use AI as a first point of contact may be able to de-escalate customer complaints before they become disputes. If a purchase was AI-assisted, an AI-powered chatbot could confirm with the consumer that they actually intended to make the purchase before any goods or services are delivered.
Predicting and Preventing Customer Dissatisfaction
AI can analyze customer behavior to identify leading indicators of a potential chargeback, such as a user repeatedly viewing the return policy page after a purchase. Merchants can act on these warning signs by proactively reaching out to at-risk customers with targeted support. In the best case, this could turn a potential dispute into a positive interaction.
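A minimal sketch of this kind of early-warning flagging, using the return-policy example above. The event names and the three-view threshold are assumptions; a production system would learn which behaviors actually precede disputes from its own data:

```python
from collections import Counter

# Post-purchase events that (hypothetically) correlate with dispute risk.
RISK_EVENTS = {"viewed_return_policy", "viewed_dispute_faq"}

def at_risk_customers(events, min_risk_views=3):
    """events: iterable of (customer_id, event_name) tuples recorded
    after a purchase. Returns the customer ids whose browsing suggests
    they may dispute the charge, and who merit proactive outreach."""
    counts = Counter(cid for cid, name in events if name in RISK_EVENTS)
    return {cid for cid, n in counts.items() if n >= min_risk_views}

log = [("c1", "viewed_return_policy"), ("c1", "viewed_return_policy"),
       ("c1", "viewed_dispute_faq"), ("c2", "viewed_product_page")]
flagged = at_risk_customers(log)  # only "c1" crosses the threshold
```

A flagged customer might then receive a targeted support email or a chatbot check-in, turning a would-be dispute into a service interaction.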
Increasing Successful Transactions
Merchants can deploy “smart retry” logic for legitimate transactions that fail due to technical glitches. Instead of simply declining the card, the system can intelligently retry the payment through a different processor or at a better time. This captures otherwise lost revenue and prevents customer frustration that can lead to cart abandonment.
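The key design point of smart-retry logic is distinguishing soft (technical) declines, which are safe to retry, from hard declines, which never are. A hedged sketch under that assumption; the processor names, decline codes, and callback shape are illustrative:

```python
import time

# Decline reasons that indicate a technical glitch rather than a
# genuine refusal; only these are worth retrying. Codes are examples.
SOFT_DECLINES = {"issuer_unavailable", "processing_error", "timeout"}

def charge_with_retry(charge_fn, payment, processors, delay_s=0.0):
    """Try each processor in order; retry only on soft declines.

    charge_fn(processor, payment) -> (ok: bool, reason: str)
    Hard declines (e.g. "stolen_card") stop the attempt immediately,
    since retrying those invites fraud and scheme penalties.
    """
    last_reason = "no_processor"
    for processor in processors:
        ok, reason = charge_fn(processor, payment)
        if ok:
            return True, processor
        if reason not in SOFT_DECLINES:   # hard decline: never retry
            return False, reason
        last_reason = reason
        time.sleep(delay_s)               # brief backoff between attempts
    return False, last_reason
```

For example, if the first processor times out, the payment is rerouted to the second and the sale is captured instead of lost.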
Reducing Billing Confusion
AI can automate transaction verification and reconciliation to ensure the data that appears on a customer’s credit card statement is always clear and accurate. This reduces billing errors and billing descriptor issues that could cause customers to file chargebacks out of confusion.
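One simple automated check along these lines is verifying that the statement descriptor a customer will see actually contains a recognizable brand token. The matching rule below is a deliberately naive assumption for the sketch; real reconciliation systems use fuzzier matching against the merchant’s registered descriptors:

```python
def descriptor_is_clear(descriptor: str, brand: str) -> bool:
    """True when the card-statement descriptor contains the brand name
    as a recognizable token (case-insensitive). Descriptors often use
    '*' as a separator, so it is treated as whitespace here."""
    tokens = descriptor.upper().replace("*", " ").split()
    return brand.upper() in tokens

clear = descriptor_is_clear("ACME*STORE 555-0100", "ACME")   # recognizable
unclear = descriptor_is_clear("XJ7 HOLDINGS LLC", "ACME")    # confusing
```

Descriptors that fail such a check can be corrected before they reach a statement and prompt an "I don't recognize this charge" dispute.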
Responsible AI Implementation
Merchants who are interested in deploying AI tools to combat fraud must do so in a measured, responsible, and transparent manner that balances the interests of both merchants and customers.
One primary concern, for example, is ensuring fairness and avoiding bias in fraud scoring algorithms. Because AI learns from historical data, it can inadvertently perpetuate existing biases, potentially leading to higher decline rates for legitimate customers from specific geographic locations.
Merchants have to be judicious about selecting AI providers who are transparent about their models and who actively work on bias detection and mitigation. This is the only way to make sure all customers are treated equitably.
Privacy is another critical consideration, especially in the context of harnessing public record databases or cross-merchant data networks. While data is essential for fighting fraud, methods for collecting it should prioritize consent and be minimally invasive. For example, customers’ personal information can be anonymized or tokenized, protecting individual identities while still giving merchants the data they need.
Any AI solution also needs to be fully compliant with a complex web of regulations. This includes standards like PCI-DSS, which governs the protection of cardholder data. There are also region-specific data privacy laws to consider, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
Failing to comply with these regulations can result in steep fines, exposure to lawsuits, and reputational harm, so finding an AI provider that prioritizes compliance is a non-negotiable.
About the Author
Donald Kossmann is the Chief Technology Officer at Chargebacks911, a global leader in chargeback prevention and dispute management. With more than 30 years of experience in computer science, AI, data systems, and fraud protection, Donald brings a rare blend of academic distinction and enterprise innovation to the role. He has led several groundbreaking initiatives, including serving as General Manager of Fraud Protection at Microsoft and Managing Director of Microsoft Research’s Redmond Lab. Before joining Microsoft, Donald served as a Professor at ETH Zurich, a Visiting Professor at Stanford University, and co-founder of Teralytics AG, a Big Data startup powering behavioral insights across industries like retail, finance, and transportation. Kossmann also holds a Doctor of Philosophy in Computer Science.