
Industry Insights

Six Hard Truths About the Vercel Breach (And What to Do About Them)

April 22, 2026 · 10-Minute Read

Table of contents

- What Happened: The Full Chain
- You Don't Have to Be a Customer to Get Breached
- Someone Turned Up the Difficulty Level
- Pat Opet Told You So
- The Sad Math of Detection and Response
- Google Workspace Is the New Active Directory
- What CISOs Should Do This Week


TL;DR: The most consequential breach of the month started with a game cheat download and traveled through an OAuth consent screen that nobody in security ever reviewed.

A Context.ai employee downloaded Roblox cheat scripts onto a work machine. Ten weeks later, Vercel was breached, customer credentials were exfiltrated, and a threat actor was asking $2 million on BreachForums for the stolen data.

The attack chain involved zero novel techniques: an infostealer, stolen OAuth tokens, and lateral movement through identity trust relationships. The kind of attack the industry has known about for years and still hasn't solved.

This post breaks down the full attack, what it tells us about where the industry is actually failing, and what needs to change.

What Happened: The Full Chain

In February 2026, a Context.ai employee searched for and downloaded Roblox "auto-farm" scripts. The scripts were bundled with Lumma Stealer, a commodity malware-as-a-service infostealer that harvested everything in the browser: Google Workspace credentials, session cookies, and API keys for Supabase, Datadog, and Authkit. The compromised user was a core member of Context.ai's engineering team.

By March, the attacker used the stolen credentials to access Context.ai's AWS environment. Context.ai detected the intrusion, brought in CrowdStrike, shut down the environment, and deprecated their consumer product. They thought they had it contained.

They didn't. Before the AWS environment went down, the attacker had already stolen OAuth tokens belonging to Context.ai's consumer users. One of those tokens granted access to a Vercel employee's Google Workspace account. That employee had signed up for Context.ai's "AI Office Suite" consumer product using their corporate email and granted "Allow All" permissions. Vercel's Google Workspace admin settings allowed this broad consent to go through.

From the Google Workspace account, the attacker pivoted into Vercel's internal systems. They enumerated environment variables that hadn't been marked "sensitive" (and therefore decrypted to plaintext), accessed project settings, deployment configurations, and production logs. The stolen data reportedly includes API keys, NPM tokens, GitHub tokens, source code, and 580 employee records.

Vercel's CEO described the attacker's behavior as showing "surprising velocity and in-depth understanding of Vercel," and said he strongly suspects the operation was "significantly accelerated by AI."

Hard truth #1: The attack that breached Vercel did not use any novel technique. It just needed an OAuth token and a permissive default. Those conditions exist in most environments right now.

You Don't Have to Be a Customer to Get Breached

Vercel was not a Context.ai customer. No contract existed between the two companies, and no vendor risk assessment was conducted. The entire relationship that enabled this breach was a single employee signing up for a free consumer product with a work email.

This completely breaks the mental model most organizations use for third-party risk. Security teams build vendor questionnaires, review compliance certifications, and negotiate security addenda. All of that work assumes the vendor relationship is known, documented, and managed through procurement.

OAuth doesn't follow that model. Any employee can grant a third-party app access to your Google Workspace or Microsoft 365 tenant by clicking "Allow" on a consent screen. That app now holds a persistent token with whatever scopes the user consented to. It doesn't show up in your vendor inventory. It doesn't go through procurement. It doesn't trigger a risk assessment. But it has access to your environment, and in Vercel's case, broad access.

Hard truth #2: Your third-party risk surface is not defined by your contracts. It is defined by every OAuth consent screen your employees have ever clicked "Allow" on.

Someone Turned Up the Difficulty Level

Everyone is talking about AI threats. Agentic AI security, MCP server exploitation, prompt injection, tool poisoning. Entire product categories are being built around threats that are still mostly theoretical. Meanwhile, this breach used an attack chain that predates ChatGPT by a decade: infostealer, credential theft, OAuth token abuse, lateral movement through identity trust.

Nothing about AI caused this breach. The entry point was commodity malware distributed through game cheat downloads. The lateral movement mechanism was a plain OAuth token. The privilege escalation came from a Google Workspace admin setting that defaulted to permissive. These are the basics, and the basics are still killing us.

That said, there is a real AI angle worth paying attention to. Vercel's CEO believes the attacker was AI-accelerated. Trend Micro's analysis suggests the enumeration speed and adaptive behavior inside Vercel's environment exceeded what manual or scripted operations would typically produce. Whether this specific attacker used an LLM to navigate Vercel's systems faster is something only the forensics will confirm.

Two weeks before this breach, Anthropic announced Mythos, a model so capable at finding and exploiting software vulnerabilities that they refused to release it publicly. Whether you see that as a genuine safety decision or an exceptionally well-timed marketing play, the underlying reality is the same: AI models are getting better at offense, and fast.

This doesn't give attackers magic powers. It just means the difficulty level went up. Think of it like a video game where everyone was playing on Normal and someone quietly switched the settings to Legendary. The game mechanics are the same. The enemies still come from the same directions. They just move faster, react quicker, and punish mistakes harder.

For CISOs, the implication is specific: the margin for error got thinner. The window between "attacker gains initial access" and "attacker has exfiltrated what they need" is shrinking, not because the attack vectors are new, but because AI-augmented attackers can enumerate, pivot, and extract at machine speed. The fundamentals of defense still apply, but the tolerance for leaving them unaddressed just dropped significantly.

Hard truth #3: AI is compressing the timeline on existing attack vectors, and most organizations are still failing at the basics that were exploitable before AI entered the picture.

Pat Opet Told You So

In April 2025, exactly one year before this breach, JPMorgan Chase CISO Pat Opet published an open letter to third-party suppliers. It described the exact attack pattern that played out at Vercel.

Opet warned that modern SaaS integration patterns "collapse authentication and authorization into overly simplified interactions, effectively creating single-factor explicit trust between systems on the internet and private internal resources." He called out OAuth-based integrations specifically, noting that they create direct, often unchecked interactions between third-party services and sensitive internal resources. He warned about "inadequately secured authentication tokens vulnerable to theft and reuse."

One year later, an OAuth token stolen from a compromised AI tool was used to take over a Vercel employee's Google Workspace account, and from there, the attacker walked into Vercel's internal environment. Single-factor explicit trust between systems on the internet and private internal resources, exactly as Opet described.

The supply chain attack playbook through OAuth has been running for years: Codecov, CircleCI, SalesLoft, Drift, Gainsight, and now Context.ai to Vercel. Different vendors, same playbook. The vendor gets compromised, the attacker steals OAuth tokens the vendor holds on behalf of its users, and those tokens become the front door into downstream enterprises. Opet recognized the pattern. The industry read his letter and nodded along.

Hard truth #4: Pat Opet published that letter a year ago, and the industry collectively treated it as interesting reading rather than an operational directive. Nothing's changed.

The Sad Math of Detection and Response

Prevention for this attack was trivial. A Google Workspace admin can restrict third-party OAuth app access to pre-approved apps only. A configuration toggle. No product to buy, no agent to deploy, no integration to build. One setting, and this entire attack chain breaks at the pivot point. The employee can still sign up for Context.ai, but the app can't get a token with access to the company's Google Workspace.

Once that token exists and gets compromised, the math changes dramatically. OAuth token usage looks like legitimate application behavior because the permissions were granted, the tokens are valid, and the access patterns match what the app was authorized to do. There is no signature, no anomaly, no IOC until the attacker starts doing something the app wouldn't normally do. And if the attacker is moving at the velocity Vercel described, you have minutes to catch it, not hours.

AI makes this asymmetry worse, not better. An attacker using an LLM to enumerate your environment, understand your naming conventions, identify high-value targets, and extract credentials can compress hours of manual reconnaissance into minutes of automated execution. Your SOC analyst gets an alert, opens the ticket, starts pulling context. By the time they understand what they're looking at, the attacker has already exfiltrated everything the token could reach.

This doesn't mean you throw away your SIEM. It means you stop treating identity hygiene, OAuth governance, and NHI lifecycle management as "nice to have" hardening work and start treating them as the primary control plane. Because when the prevention layer fails and someone clicks "Allow All" on a consent screen, the detection layer won't save you in time.

Hard truth #5: When attackers operate at machine speed, detection becomes post-mortem, not defense.

Google Workspace Is the New Active Directory

For SaaS-native companies, Google Workspace is the identity layer. The IdP, the email system, the document store, the SSO hub. Compromising one Google Workspace account through an OAuth app gave this attacker the same kind of access that compromising a Domain Admin used to provide in the on-prem world.

The difference is that enterprises spent 20 years building monitoring, segmentation, privilege tiering, and attack surface reduction around Active Directory. Entire product categories exist to protect AD. Google Workspace OAuth governance gets almost none of that attention.

Most CISOs cannot answer a basic question right now: how many third-party apps have OAuth access to your Google Workspace tenant, and what scopes do they hold?

If you don't know the answer, you have the same exposure Vercel had. One employee, one consumer app, one "Allow All" click, and that exposure becomes a breach.

Hard truth #6: We spent two decades learning how to protect Active Directory, then migrated to Google Workspace and forgot everything we learned.

What CISOs Should Do This Week

This is not a theoretical risk. The attack chain is documented, the IOCs are published, and Context.ai's OAuth app potentially affected hundreds of users across many organizations. Vercel is likely not the only victim.

Lock down OAuth app consent in Google Workspace and Microsoft 365. Restrict to admin-approved apps only, or at minimum restrict the scopes employees can consent to. This single control would have prevented this breach.

Build an inventory of every third-party app that holds OAuth tokens to your environment. Not just the ones you procured, but every app any employee has ever authorized. If you can't build this inventory, that itself is the finding.
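For Google Workspace tenants, the Admin SDK Directory API's tokens endpoint exposes exactly this data: every OAuth app each user has authorized, with its scopes. The sketch below is illustrative, not a finished tool; it assumes a Directory API service object has already been built with delegated admin credentials (the setup itself is elided), and the function names are our own.

```python
# Minimal sketch: inventory every third-party OAuth grant in a Google
# Workspace tenant via the Admin SDK Directory API. Assumes a delegated
# admin credential has already been wired into a service object, e.g.:
#   service = googleapiclient.discovery.build("admin", "directory_v1", credentials=creds)

def summarize_tokens(items):
    """Reduce the items of a tokens().list() response to {clientId: {app, scopes}}."""
    apps = {}
    for t in items:
        apps[t["clientId"]] = {
            "app": t.get("displayText", "(unknown app)"),
            "scopes": sorted(t.get("scopes", [])),
        }
    return apps

def inventory(service, domain):
    """Walk every user in the domain and collect the OAuth apps they authorized."""
    grants = {}
    page_token = None
    while True:
        resp = service.users().list(domain=domain, pageToken=page_token).execute()
        for user in resp.get("users", []):
            email = user["primaryEmail"]
            items = service.tokens().list(userKey=email).execute().get("items", [])
            grants[email] = summarize_tokens(items)
        page_token = resp.get("nextPageToken")
        if not page_token:
            return grants
```

The output maps each employee to the apps holding tokens against your tenant; an empty inventory capability, as noted above, is itself the finding.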

Treat OAuth tokens as high-value credentials. They persist, they don't require MFA, and they survive password resets. An OAuth token is functionally equivalent to a long-lived API key with broad permissions. Manage them accordingly.
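Once the inventory exists, grants can be triaged by scope breadth so the broadest tokens get reviewed first. A rough sketch; the scope set below is an illustrative subset of broad Google OAuth scopes, not an authoritative risk taxonomy:

```python
# Sketch: rank OAuth grants by how many broad scopes they hold.
# BROAD_SCOPES is an illustrative sample, not a complete list.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://www.googleapis.com/auth/admin.directory.user",  # directory management
}

def risk_rank(scopes):
    """Count the broad scopes in a grant; higher means review it sooner."""
    return sum(1 for s in scopes if s in BROAD_SCOPES)
```

An "Allow All" consent of the kind described above would score at the top of this ranking, which is precisely the point: those are the tokens functionally equivalent to privileged API keys.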

Monitor for the specific IOC Vercel published: OAuth App ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. If this app was authorized in your environment, assume compromise and investigate.
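The same Admin SDK tokens endpoint can sweep for that IOC directly and revoke the grant where found. A hedged sketch, again assuming a Directory API service object with delegated admin credentials; the helper names are ours:

```python
# Sketch: sweep users' authorized OAuth apps for the client ID Vercel
# published, and revoke the grant on a hit. Assumes `service` is an Admin SDK
# Directory API object built with delegated admin credentials (setup elided).
IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def matches_ioc(token_item, ioc=IOC_CLIENT_ID):
    """True if a tokens().list() item belongs to the compromised app."""
    return token_item.get("clientId") == ioc

def sweep_and_revoke(service, user_emails):
    """Return users who authorized the IOC app, revoking each grant found."""
    hits = []
    for email in user_emails:
        items = service.tokens().list(userKey=email).execute().get("items", [])
        if any(matches_ioc(t) for t in items):
            hits.append(email)
            # Revoking the token kills the attacker's access along with the app's.
            service.tokens().delete(userKey=email, clientId=IOC_CLIENT_ID).execute()
    return hits
```

Revocation is containment, not closure: any user on the hit list should still be treated as compromised and investigated, since the token may already have been used.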


Ofir is the Co-Founder and CEO of Clutch Security. With over 15 years of experience in cybersecurity, including leadership roles at Sygnia and Hunters, he’s helped global enterprises respond to the most advanced cyber threats. At Clutch, Ofir is focused on tackling one of the industry’s most overlooked risks: securing the explosion of Non-Human Identities across modern infrastructure.