At Google I/O 2025, Google announced agentic checkout, powered by its Gemini 2.5 AI. The feature allows users to complete purchases directly from a Google Search result. Once a user confirms their preference, the AI takes over: it adds the item to the cart, checks out using Google Pay and completes the transaction.
Google isn’t alone. Major payment platforms like Visa and Mastercard have also recently launched their own versions of autonomous shopping.
This shift marks the beginning of AI-first commerce: users set the preferences and AI agents handle the rest.
But this new level of automation also introduces new risks, especially around identity verification, fraud prevention and the future of payments. When AI agents act on behalf of users, complex questions arise. Who’s really making the purchase? How is the buyer verified? And who’s responsible if something goes wrong?
Agentic checkout allows an AI system to handle the entire shopping process from discovery to payment, based on a user’s direction.
Say you’re interested in a product. Simply share your preferences (like size, color and budget) or tap on a suggested item directly from a search result. From there, the AI scans product listings, compares prices, checks reviews and even keeps an eye out for deals. You tap “buy” and the AI completes the purchase on your behalf. No need to visit a website. No added apps. No forms to fill out.
And it all happens within your search window or voice assistant. Fast. Frictionless. Highly personalized.
The rise of agentic checkout calls for a change in the way businesses approach identity and trust.
Most fraud prevention systems today rely on human behavior: IP address, typing patterns, device fingerprints and even how a mouse moves across the screen. But AI agents don’t act like people. They don’t type. They don’t click. And they don’t log in from a specific location.
Now, identity verification isn’t just about knowing who the user is. It’s also about understanding what the AI agent is allowed to do and whether it can be trusted to act on someone’s behalf.
“If platforms don’t build a new trust layer that’s AI-aware, mobile-first and real-time, the agentic AI revolution could be accompanied by a surge in fraud and user mistrust,” warns Shruti Kapoor, director, Financial Crime & Compliance, TaskUs.
As automation accelerates, so do the opportunities for abuse. Bad actors are already paying attention.
Here are four emerging challenges platforms need to get ahead of.
1. Losing human signals in fraud detection
Traditional KYC and fraud detection models struggle to adapt to AI-initiated actions that don’t follow typical human behavior or session patterns.
Fraudsters can deploy rogue agents that mimic legitimate ones or hijack authorized agents through credential stuffing or API abuse. Without identity controls designed for agentic AI behavior, it becomes harder to detect when something’s off or who’s really behind the action.
2. The rise of deepfake agents
Synthetic identities (a blend of real and fake information) are already common in fraud schemes. Now imagine deepfake agents trained to talk, behave and spend like real consumers, mirroring purchase patterns, using realistic language and accessing stolen credentials.
Fraudsters could link these agents to real payment methods, gaining authorization and bypassing detection.
3. Blurred lines around consent
What if an AI agent makes a purchase that the user didn’t explicitly approve — or worse, one that turns out to be fraudulent? Who’s liable: the user, merchant, platform or payment processor?
Without clearly defined consent frameworks, accountability gets murky. Users must have transparency into what their agents are allowed to do and be able to revoke permissions instantly. Anything less opens the door to serious legal and financial risk.
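One way to make such a consent framework concrete is an explicit, revocable mandate the user grants to an agent. The sketch below is purely illustrative — the `AgentMandate` class and its fields are assumptions, not any platform’s real API — but it shows the two properties the text calls for: transparent scope and instant revocation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMandate:
    """Explicit, revocable permissions a user grants to a shopping agent.

    All names and fields here are illustrative, not a real API.
    """
    agent_id: str
    max_amount_per_purchase: float       # hard spending cap per transaction
    allowed_categories: set = field(default_factory=set)
    revoked: bool = False

    def is_allowed(self, category: str, amount: float) -> bool:
        # A purchase must fall inside the mandate's scope, and the
        # mandate must not have been revoked by the user.
        return (not self.revoked
                and category in self.allowed_categories
                and amount <= self.max_amount_per_purchase)

    def revoke(self) -> None:
        # Users must be able to withdraw consent instantly.
        self.revoked = True

mandate = AgentMandate("agent-123", 150.0, {"apparel"})
print(mandate.is_allowed("apparel", 89.99))   # → True (within scope)
mandate.revoke()
print(mandate.is_allowed("apparel", 89.99))   # → False (consent withdrawn)
```

Because the mandate is a discrete object rather than a blanket login, the platform can show users exactly what each agent may do and log every check against it.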
4. The limits of point-in-time verification
Most identity systems today rely on point-in-time verification like KYC at sign-up and assume that’s good enough. But AI agents operate and make decisions 24/7.
To keep up, platforms need persistent identity frameworks: real-time monitoring, dynamic risk scoring and transaction-level authentication.
According to Pragya Agarwal, VP, Financial Crime & Compliance, TaskUs, securing agentic AI systems requires a layered defense strategy.
First, ensure every AI agent’s actions are traceable to a verified user, with clear, session-based consent to prevent misuse or unauthorized activity. Then, reinforce identity verification by confirming mobile device ownership during transactions and monitoring for fraud indicators like SIM swaps or anomalous behavior.
Using AI-specific detection models helps identify emerging threat patterns that traditional tools may miss. And most importantly, empower users with greater visibility and control. Enable them to easily monitor, manage and adjust what their AI agents are authorized to do on their behalf.
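The traceability requirement — every agent action tied to a verified user and session — can be sketched with a keyed message authentication code (HMAC) over the action. Key management and session issuance are out of scope here, and all names are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Placeholder per-session secret; in practice this would be issued at
# user login and managed by the platform's key infrastructure.
SESSION_KEY = b"per-session-secret-from-user-login"

def sign_action(user_id: str, session_id: str, action: dict) -> str:
    """Produce a tag binding this action to a specific user session."""
    payload = json.dumps(
        {"user": user_id, "session": session_id, "action": action},
        sort_keys=True,
    ).encode()
    return hmac.new(SESSION_KEY, payload, hashlib.sha256).hexdigest()

def verify_action(user_id: str, session_id: str, action: dict, tag: str) -> bool:
    """Reject any action whose tag doesn't match the claimed user/session."""
    expected = sign_action(user_id, session_id, action)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

action = {"type": "purchase", "sku": "shoe-42", "amount": 89.99}
tag = sign_action("user-1", "sess-abc", action)
print(verify_action("user-1", "sess-abc", action, tag))  # → True
print(verify_action("user-2", "sess-abc", action, tag))  # → False
```

An agent that cannot present a valid tag for the session it claims to act under is, by construction, not traceable to a verified user — which is exactly the condition the layered-defense strategy is meant to enforce.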
Agentic checkout is a major step forward in eCommerce, promising speed, personalization and automation at a scale we’ve never seen before.
“It’s a double-edged sword,” cautions Agarwal. “While agentic systems drive better customer experiences and operational efficiency, they also enable smarter fraud. Traditional controls weren’t designed for this level of sophistication. Businesses must move beyond static identity checks and toward dynamic, real-time intelligence that can tell the difference between a person and a machine pretending to be one.”
Platforms will need to collaborate closely with mobile network operators (MNOs), fintechs and digital identity providers to ensure that convenience doesn’t come at the expense of security.
Businesses that build this trust infrastructure now will not only reduce fraud but also lead the way into the next era of digital commerce.