Google Cloud Security published its Cybersecurity Forecast 2026 this month. It's a serious document. The predictions are specific, grounded in Mandiant incident data, and written by people who spend their days inside active breaches.
We read it carefully. Not because MIR is a cybersecurity company — we're not. But because every major theme in the report traces back to the same structural gap: identity systems know who you are, but not what you've done.
Here's what we see coming, and what the report stops short of saying.
1. Shadow Agents Are an Identity Problem, Not a Security Problem
Google's report introduces "Shadow Agent Risk" as a 2026 headline. Employees will deploy autonomous AI agents without corporate approval, creating "invisible, uncontrolled pipelines for sensitive data." The proposed solution: route and monitor all agent traffic, establish AI governance, secure by design.
That's all correct. But it describes containment, not context.
The deeper problem isn't that shadow agents exist — it's that when you discover one, you have no way to answer the most basic question: what has this agent actually done? Not what it could access. Not what its permissions allowed. What it did.
Permissions tell you what's possible. Participation history tells you what's real. When an unauthorized agent is discovered six weeks after deployment, the organization that has participation history can assess blast radius in minutes. The one relying on credentials alone is guessing.
MIR records events emitted by the systems agents interact with — access granted, action completed, boundary crossed. The agent never self-reports. The infrastructure around it does. That's the difference between monitoring and history.
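To make the distinction concrete, here is a minimal sketch of a blast-radius query over recorded events. Everything here is illustrative: the `ParticipationEvent` shape, the field names, and `blast_radius` are our hypothetical constructions, not MIR's actual API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event shape: the infrastructure around the agent emits
# these records; the agent never self-reports.
@dataclass(frozen=True)
class ParticipationEvent:
    actor_id: str       # the agent's identity principal
    event_type: str     # e.g. "access_granted", "action_completed", "boundary_crossed"
    resource: str       # what the event touched
    occurred_at: datetime

def blast_radius(events, actor_id, since):
    """Answer 'what did this agent actually do?' from recorded history,
    rather than guessing from what its permissions allowed."""
    touched = {e.resource for e in events
               if e.actor_id == actor_id and e.occurred_at >= since}
    return sorted(touched)

# Usage: a shadow agent discovered six weeks after deployment.
events = [
    ParticipationEvent("agent-7", "access_granted", "crm/export", datetime(2026, 1, 3)),
    ParticipationEvent("agent-7", "action_completed", "s3/reports", datetime(2026, 1, 9)),
    ParticipationEvent("agent-2", "access_granted", "billing/api", datetime(2026, 1, 4)),
]
print(blast_radius(events, "agent-7", datetime(2026, 1, 1)))
# → ['crm/export', 's3/reports']
```

With history recorded, the assessment is a set query over what happened; without it, the only input is the credential's scope, which describes possibility rather than fact.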
2. "Agentic Identity Management" Needs a Memory Layer
The report's most forward-looking section predicts the rise of "agentic identity management" — treating AI agents as first-class identity principals with their own managed identities, just-in-time access, and chains of delegation. Google calls this "a central pillar of the new security paradigm."
We agree. And we'd go further: identity without history is just another credential.
An agent with a valid OAuth token and appropriate scopes is authenticated and authorized. But is it established? Has it operated at this sensitivity level before? Has it interacted with this class of data across multiple partners? Or is this the first time it's touched production?
Forecast: By late 2026, organizations deploying AI agents at scale will need to answer "has this agent done this before?" as a routine authorization input — not as a forensic question asked after something goes wrong.
MIR's resolve signal answers exactly that question. It returns verifiable participation history — how many trusted events, across how many partners, over what time period — without revealing what those events were or who they involved. The enterprise system decides what to do with that context. MIR never makes the decision.
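A sketch of how a consuming system might use such a signal. The `ResolveSignal` fields and the `authorize` policy are hypothetical illustrations of the shape described above: aggregate counts only, with the decision left entirely to the enterprise system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of a resolve signal: aggregates only, never the
# underlying events or the parties they involved.
@dataclass(frozen=True)
class ResolveSignal:
    trusted_events: int
    distinct_partners: int
    first_seen: date
    last_seen: date

def authorize(signal: ResolveSignal, min_events: int, min_partners: int) -> bool:
    """The enterprise system, not the history layer, decides what
    the history means. This threshold policy is one possible choice."""
    return (signal.trusted_events >= min_events
            and signal.distinct_partners >= min_partners)

established = ResolveSignal(trusted_events=412, distinct_partners=9,
                            first_seen=date(2025, 2, 1), last_seen=date(2026, 1, 30))
brand_new = ResolveSignal(trusted_events=0, distinct_partners=0,
                          first_seen=date(2026, 1, 30), last_seen=date(2026, 1, 30))

print(authorize(established, min_events=100, min_partners=3))  # True
print(authorize(brand_new, min_events=100, min_partners=3))    # False
```

Both agents may hold equally valid OAuth tokens; only the signal distinguishes the established one from the one touching production for the first time.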
3. Credential Fragility Is Getting Worse, Not Better
The report documents an ongoing crisis in credential security: vishing attacks bypassing MFA, social engineering targeting IT staff, North Korean operatives maintaining long-term employment access through fake identities. The common thread is that credentials — even strong ones — are point-in-time assertions. They prove who you are at the moment of authentication but carry no memory of what happened before or after.
This is the fundamental limitation Google's report describes without naming: authentication is stateless.
A stolen credential works because nothing in the system knows that the entity using it has no prior participation history. A deepfake-aided vishing call succeeds because the target has no way to verify the caller's history of interactions, only their claimed identity.
Forecast: Multi-factor authentication will continue to be bypassed at scale in 2026. The organizations that weather this best will be those that supplement authentication (who are you?) with participation history (what have you done here before?).
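The supplement described above can be sketched as a simple two-input policy. The function and its tiers are hypothetical, offered only to show how a stateless credential check composes with a history check.

```python
def risk_tier(credential_valid: bool, prior_events: int) -> str:
    """Authentication answers 'who are you?'; participation history
    answers 'what have you done here before?'. Combine both inputs."""
    if not credential_valid:
        return "deny"
    if prior_events == 0:
        # A stolen credential typically arrives with no participation
        # history behind it: valid, but unestablished. Step up verification.
        return "step-up"
    return "allow"

print(risk_tier(credential_valid=True, prior_events=0))    # step-up
print(risk_tier(credential_valid=True, prior_events=37))   # allow
print(risk_tier(credential_valid=False, prior_events=100)) # deny
```

The point is not the specific tiers but the second input: a system that only sees `credential_valid` cannot tell the two `True` cases apart.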
4. Supply Chain Attacks Demand Cross-Organization History
Google highlights China-nexus actors targeting third-party providers because "compromising one trusted partner may enable access to many downstream organizations." The report calls for "a layered approach to network defense."
Layered defense is necessary. But the reason supply chain attacks work so well is that trust is inherited, not earned. When a vendor's credentials are compromised, every downstream organization that trusts those credentials is exposed — because none of them have independent participation history for the entity using them.
A compromised vendor credential with extensive participation history across your systems looks very different from a compromised vendor credential being used for the first time. The first scenario is a breach. The second is a breach you can detect.
Forecast: Cross-organization participation history will become a factor in vendor risk assessments by 2027. Not "does this vendor have SOC 2?" but "does this specific service account have established history with our systems?"
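The detection described above depends on keeping history per downstream organization rather than trusting the vendor's own attestations. A minimal sketch, with hypothetical names:

```python
def classify_use(account_history: dict[str, int], org: str) -> str:
    """Independent participation history per downstream organization:
    a service account may be established elsewhere but brand new here."""
    seen_here = account_history.get(org, 0)
    if seen_here == 0 and sum(account_history.values()) > 0:
        # Established with other orgs, never with ours: the detectable breach.
        return "first-use-here"
    if seen_here == 0:
        return "no-history-anywhere"
    return "established-here"

# Usage: a vendor service account seen 160 times elsewhere, never with us.
history = {"acme-corp": 120, "other-org": 40}
print(classify_use(history, "our-org"))    # first-use-here
print(classify_use(history, "acme-corp"))  # established-here
```

The "first-use-here" case is exactly the second scenario above: a valid credential whose lack of local history makes the compromise visible.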
5. On-Chain Attribution Is Participation History for Finance
One of the report's most interesting sections describes the on-chain cybercrime economy — adversaries migrating operations onto public blockchains for resilience against takedowns. Google notes the irony: "Every on-chain action — funding a wallet or deploying a contract — leaves a permanent, publicly auditable record."
This is participation history by another name. The blockchain doesn't judge intent. It records that transactions occurred. Investigators can then ask narrow questions: has this wallet interacted with known laundering contracts? Has this address participated in patterns consistent with previous campaigns?
MIR applies the same principle to identity infrastructure. We don't inspect data, infer behavior, or decide intent. We record that assertions occurred. When a decision needs to be made, the system asks a narrow question and gets a verifiable answer.
Forecast: The blockchain's accidental creation of immutable participation history will accelerate demand for the same capability in traditional identity systems. Organizations will ask: if on-chain activity is auditable by default, why isn't API activity?
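The "narrow question" pattern the investigators use amounts to a set query over the permanent record. A sketch with placeholder contract identifiers (real addresses omitted):

```python
def has_laundering_overlap(wallet_contracts: set[str],
                           known_laundering: set[str]) -> bool:
    """The chain records that interactions occurred; the investigator
    asks a narrow question against that record, nothing more."""
    return not wallet_contracts.isdisjoint(known_laundering)

# Usage: hypothetical contract labels standing in for on-chain addresses.
print(has_laundering_overlap({"dex-pool-3", "mixer-01"}, {"mixer-01", "mixer-02"}))  # True
print(has_laundering_overlap({"dex-pool-3"}, {"mixer-01", "mixer-02"}))              # False
```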
6. The Virtualization Blind Spot Is a History Blind Spot
Google warns that hypervisor-level attacks are becoming a primary vector — bypassing guest EDR, encrypting virtual machine disks, "rendering hundreds of systems inoperable in a matter of hours." The report notes that the virtualization layer "remains largely unmonitored."
Unmonitored means no history. And no history means that when a hypervisor is compromised, there's no baseline to compare against. Was this administrative action routine? Has this service account accessed the control plane before? At this frequency? From this location?
These are all participation history questions. The answers exist in the events the infrastructure already emits. They're just not being recorded in a way that makes them queryable.
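With history recorded, "was this routine?" becomes a query against a baseline. A sketch under assumed inputs: weekly access counts for a control-plane account, with a hypothetical 2x-of-average threshold.

```python
def is_routine(weekly_history: list[int], this_week: int) -> bool:
    """Compare this week's control-plane accesses against the recorded
    baseline. No history means no baseline, so nothing can be 'routine'."""
    if not weekly_history:
        return False  # the unmonitored layer: any activity is unassessable
    avg = sum(weekly_history) / len(weekly_history)
    return this_week <= 2 * avg  # hypothetical threshold: within 2x of normal

print(is_routine([4, 5, 6], 8))   # True  (avg 5.0, within 2x)
print(is_routine([2, 2, 2], 9))   # False (more than double the baseline)
print(is_routine([], 1))          # False (no baseline exists)
```

The empty-history case is the blind spot itself: the comparison fails not because the activity is anomalous, but because there is nothing to compare it to.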
What Google Got Right
To be clear: this is a strong report. The predictions are specific and well-sourced. The section on shadow agents is prescient. The call for agentic identity management is exactly right. The observation that banning AI agents "only drives usage off the corporate network, eliminating visibility" is one of the clearest statements of the problem we've seen.
What the report doesn't do — and this isn't a criticism, it's a gap in the industry — is identify the primitive that connects all of these problems.
Shadow agents are unobservable because they have no history. Credential theft works because credentials carry no memory. Supply chain attacks propagate because trust is inherited without evidence. Virtualization is a blind spot because the layer has no participation record.
The missing primitive is participation history: a neutral, verifiable record of what occurred, queryable at decision time, that never infers intent or replaces the authority of the systems that use it.
Our Forecast
Here's what we expect to see by the end of 2026:
- Agent identity tiers — Organizations will classify AI agents not just by permissions but by participation history. New agents will face higher friction. Established agents with clean history will earn expanded autonomy.
- History-informed authorization — At least one major cloud provider will ship a feature that incorporates participation history (or "behavioral context") into authorization decisions, moving beyond static RBAC.
- Cross-partner resolve signals — Enterprises will begin requesting participation history from vendors and partners as part of ongoing trust evaluation, not just point-in-time audits.
- The "flight recorder" pattern — The idea that infrastructure should record what happened (without judging it) will gain traction as a design pattern, separate from logging, separate from monitoring, separate from SIEM.
- Non-human identity history — Service accounts, API keys, and AI agents will be evaluated on their participation history the same way user accounts are evaluated on their authentication history today.
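The first forecast above can be sketched as a tiering function. The tier names, thresholds, and inputs are all hypothetical; the point is that history, not permissions, drives the classification.

```python
def agent_tier(clean_events: int, incident_count: int, age_days: int) -> str:
    """Hypothetical tiering: new agents face friction, established agents
    with clean history earn expanded autonomy."""
    if incident_count > 0:
        return "restricted"   # any incident overrides volume of history
    if clean_events >= 500 and age_days >= 90:
        return "autonomous"   # established, clean, long-lived
    if clean_events >= 50:
        return "supervised"   # some history, still under review
    return "sandboxed"        # new agent: highest friction

print(agent_tier(clean_events=600, incident_count=0, age_days=120))  # autonomous
print(agent_tier(clean_events=600, incident_count=1, age_days=120))  # restricted
print(agent_tier(clean_events=10, incident_count=0, age_days=5))     # sandboxed
```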
History informs decisions, but authority stays with your controls. That separation is what prevents confidence from turning into hallucination — and turns trust into something grounded, calm, and observable.
MIR is the participation history layer for the internet. We record what happened. Your systems decide what it means.