2026 is going to be a strange year in cybersecurity. It will bring more of the same, only bigger and louder. It also stands to bring about a structural shift in who is attacking us, what we are defending, exactly where we are defending it, and hopefully, who will be held accountable when things go wrong.
For context, I am framing these predictions based on the way I run security and the way I find it effective to talk to board members. This is through the lens of business impact, informed by things like the adversarial mindset, identity risk, and threat intelligence.
Artificial adversaries move from Proof-of-Concept (PoC) to daily reality
In 2026, most mature organizations will start treating artificial adversaries as a normal part of their threat model. I use artificial adversaries to mean two things:
- Artificial Intelligence (AI) enhanced human actors using agents, LLMs, world models, and spatial intelligence to scale their campaigns while making them far more strategic and surgically precise.
- Autonomous nefarious AI that can discover, plan, and execute parts of the intrusion loop with minimal human steering. This is true end-to-end operationalized AI.
We will see the use of AI move from simply drafting great-sounding phishing emails to running entire playbooks (e.g., reconnaissance, targeting, initial access, lateral movement, exfiltration, and extortion). Campaigns will use techniques like sentiment analysis to dynamically adjust tactics and lures, infrastructure that scales on demand, and timing driven by live target feedback, not human shift schedules.
The practical reality for defenders is simple – assume continuous, machine‑speed contact with the adversary. Controls, monitoring, and incident response must be designed for a world where the attacker never sleeps, constantly learns and adapts, gets smarter as things progress, and never gets bored. When attackers move at machine speed, identity becomes the most efficient blast radius to exploit.
Identity becomes the primary blast radius – and ITDR grows up
We have said for years that identity is the new perimeter. In 2026, identity becomes the primary blast radius. Many compromises will still start with leaked/stolen credentials, session replays, or abuse of machine and/or service identities.
Identity Threat Detection and Response (ITDR) will mature from a niche add‑on into a core capability. Identity risk intelligence (signals from breach data, infostealer logs, and dark‑web data) will be fused into a continuous identity risk score for every user, device, service account, and increasingly every AI agent. Moreover, corporate identities will be fused with personal identities so that intelligence represents a holistic risk posture to enterprises.
The key question will no longer be just “Who are you?” but “How dangerous are you to my organization right now?” Every login and API call will need to be evaluated against current exposure, behavior, and privilege. Leaders who cannot quantify identity risk will struggle to justify their budgets because they will not be able to fight on the right battlefields.
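To make the idea concrete, here is a minimal sketch of what a continuous identity risk score and per-request access decision might look like. The signal names, weights, and thresholds are all illustrative assumptions, not any vendor's actual scoring model.

```python
from dataclasses import dataclass

# Hypothetical signal weights -- illustrative only, not a real ITDR scoring model.
WEIGHTS = {"breach_exposure": 0.4, "infostealer_hit": 0.35, "privilege": 0.25}

@dataclass
class IdentitySignals:
    breach_exposure: float   # 0..1, derived from breach and dark-web data
    infostealer_hit: float   # 0..1, recency-weighted infostealer log matches
    privilege: float         # 0..1, normalized privilege level of the identity

def identity_risk_score(s: IdentitySignals) -> float:
    """Fuse exposure signals into a single 0..1 risk score."""
    raw = (WEIGHTS["breach_exposure"] * s.breach_exposure
           + WEIGHTS["infostealer_hit"] * s.infostealer_hit
           + WEIGHTS["privilege"] * s.privilege)
    return round(raw, 3)

def access_decision(score: float) -> str:
    """Map the current score to a decision at login or API-call time."""
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "step-up-auth"
    return "allow"
```

The point of the sketch is the shape of the question: every access decision consults a score that reflects current exposure and privilege, not just a static group membership.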
CTEM finally becomes a decision engine, not a useless framework
Continuous Threat Exposure Management (CTEM) has been marketed heavily. In 2026, we will separate PowerPoint and analyst hype CTEM from operational CTEM. At its core, CTEM is exposure accounting, or a continuous view of what can actually hurt the business and how badly.
Effective security programs will treat CTEM as continuous exposure accounting tied directly to revenue and regulatory risk, not as a glorified vulnerability list that will never truly get addressed. Exposure views will integrate identity risk, SaaS sprawl, AI agent behavior, data ingress/egress flows, and third‑party dependencies into a single, adversary‑aware picture.
CTEM will feed capital allocation, board reporting, and roadmap planning. If your CTEM implementation does not influence where the next protective dollar goes, it is not CTEM; it is just another dashboard full of metrics that are useless to a business audience. Regulators won’t care about your dashboards; they’ll care whether your CTEM program measurably reduces real-world exposure.
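One way to picture CTEM as a decision engine rather than a dashboard: rank exposures by expected loss avoided per remediation dollar, and let that ranking drive where the next protective dollar goes. The exposure names, likelihoods, and dollar figures below are invented for illustration.

```python
# Toy exposure register; all entries and figures are illustrative assumptions.
exposures = [
    {"name": "unpatched-vpn", "likelihood": 0.30, "impact_usd": 4_000_000, "fix_cost_usd": 150_000},
    {"name": "stale-admin-accounts", "likelihood": 0.20, "impact_usd": 2_500_000, "fix_cost_usd": 40_000},
    {"name": "shadow-saas", "likelihood": 0.10, "impact_usd": 1_000_000, "fix_cost_usd": 200_000},
]

def expected_loss(e: dict) -> float:
    """Annualized expected loss for one exposure."""
    return e["likelihood"] * e["impact_usd"]

def risk_buydown_per_dollar(e: dict) -> float:
    """Expected loss avoided per dollar of remediation spend."""
    return expected_loss(e) / e["fix_cost_usd"]

# The ranking answers: where does the next protective dollar go?
ranked = sorted(exposures, key=risk_buydown_per_dollar, reverse=True)
```

In this toy register, the cheap fix to stale admin accounts buys down more risk per dollar than the expensive VPN patch, which is exactly the kind of conclusion a board-facing CTEM program should surface.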
Regulation makes secure‑by‑design non‑negotiable (especially in the European Union (EU))
2026 is the year some regulators stop talking and start enforcing. The EU Cyber Resilience Act (CRA) moves from theory to operational reality, forcing manufacturers and software vendors targeting the EU to maintain Software Bills of Materials (SBOMs), run continuous vulnerability management, and report exploitable flaws within tight timelines. One key point here is that this is EU-wide, not sector-centric or targeting only publicly traded companies.
While the EU pushes toward horizontal, cross-sector obligations, the United States (U.S.) will continue to operate under a patchwork of sectoral rules and disclosure-focused expectations. SEC cyber-disclosure rules and state-level privacy laws will create pressure, but not the same unified secure-by-design mandate that CRA represents. Other regions, such as the U.K., Singapore, and Australia, will continue to blend operational resilience expectations (e.g., for financial services and critical infrastructure) with emerging cyber and AI guidance, effectively exporting their standards through global firms.
The EU AI Act will add another layer of pressure, particularly for vendors building or deploying high-risk AI systems. Requirements around risk management, data governance, transparency, and human oversight will collide with the reality of shipping AI-enabled products at speed. For security leaders, this means treating AI governance as part of product security, not just an ethics or compliance checkbox. You will need evidence that AI-driven features do not create unbounded security and privacy risk. Moreover, you will need to be able to explain and defend those systems to regulators.
NIS2 will also bite in practice as the first real audits and enforcement actions materialize. At the same time, capital markets regulators such as the SEC in the U.S. will continue to scrutinize cyber disclosures and talk about board‑level oversight of cybersecurity risk.
There is a net effect here – cybersecurity becomes a product-safety and market-access problem. If your product cannot stand up to CRA-grade expectations, AI-governance scrutiny, and capital-markets disclosure rules, you will lose market share or access. Some executives will discover that cyber failures now have grave, and potentially personal, consequences.
Disinformation, deepfakes, and synthetic extortion professionalize and achieve scale
We are already seeing AI‑generated extortion and executive impersonations. In 2026, these will become industrialized. Adversaries will mass‑produce tailored deepfake incidents against executives, employees, and customers. From fake scandal footage to convincingly spoofed “CEO in crisis” voice calls ordering urgent payments, this will start to happen at scale the way the NPD sextortion wave hit in 2024.
Digital trust has eroded to a disturbing point. Brand and executive reputation will be treated as high‑value assets in this new threat landscape. Attackers will try to weaponize misinformation not only to manipulate politics and financial markets, but also to further break trust in areas such as incident‑response communications and official statements.
This is where vibe hacking becomes mainstream as the next generation of social engineering. Campaigns will focus less on factual deception and more on psychological, emotional, and social manipulation to create exploitable chaos across multiple fronts (e.g., in the lives of individuals as well as inside organizations and societies).
The software supply chain gets regulated, measured, and attacked at the same time
In 2026, the software supply‑chain story gets more complex, not less. Regulatory SBOM requirements are ramping up at the same time that organizations add more SaaS, more APIs, more AI tooling, and more automation platforms.
Adversaries will continue to target upstream build systems, AI models, plugins, and shared components because compromising one dependency scales beautifully across many downstream organizations.
Educated boards will shift from asking “Do we have an SBOM?” to “How quickly can we detect a poisoned component, isolate the blast radius, and prove to regulators and customers that we contained it?” Continuous, adversary‑aware supply‑chain monitoring will replace static point‑in‑time attestations.
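The board-level question above implies a basic operational capability: continuously matching your SBOM against advisory data for compromised components. A minimal sketch, with invented component names and an invented advisory set:

```python
# Toy SBOM scan: flag components matching known-compromised name/version pairs.
# The components and the advisory data below are invented for illustration.
sbom = [
    {"name": "left-pad", "version": "1.3.0"},
    {"name": "requests", "version": "2.31.0"},
    {"name": "buildkit-plugin", "version": "0.9.2"},
]
compromised = {("buildkit-plugin", "0.9.2")}

def flag_poisoned(sbom: list, advisories: set) -> list:
    """Return every SBOM entry whose (name, version) matches an advisory."""
    return [c for c in sbom if (c["name"], c["version"]) in advisories]

hits = flag_poisoned(sbom, compromised)
```

A real program would feed this from live advisory sources and trigger blast-radius isolation, but the core loop is this simple: SBOM in, flagged dependencies out, continuously.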
Deception engineering and security chaos engineering become standard practice
Static and traditional defenses are proving to age badly against autonomous and AI‑enhanced adversaries. In 2026, we will see sophisticated programs move toward deception engineering at scale (e.g., documents with canary tokens, deceptive credentials, honeypot workloads, decoy SaaS instances, and fake data pipelines) instrumented to deceive attackers and capture their behavior. Deception engineering techniques will become powerful tools to force AI‑powered attackers to burn resources.
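The deceptive-credentials idea can be sketched in a few lines: seed decoy credentials across documents and configs, and treat any authentication attempt with one of them as a high-fidelity attacker signal. The credential names here are hypothetical.

```python
import logging

# Hypothetical decoy credentials seeded across documents, configs, and scripts.
# By construction, no legitimate user or service ever authenticates with these,
# so any attempt is a near-zero-false-positive attacker signal.
CANARY_CREDENTIALS = {"svc-backup-ro", "admin-legacy-dc2"}

def check_login(username: str, source_ip: str) -> bool:
    """Return True (and raise a critical alert) if a canary credential was used."""
    if username in CANARY_CREDENTIALS:
        logging.critical("Canary credential %s used from %s", username, source_ip)
        return True
    return False
```

The appeal against AI-powered attackers is that every decoy they touch burns their time and leaks their tradecraft to the defender.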
Sophisticated programs will also start to leverage Security Chaos Engineering (SCE) as part of their standard practices. They will expand SCE exercises from infrastructure into identity and data paths. Teams will deliberately inject failures and simulated attacks into IAM, SSO, PAM, and data flows to measure real‑world resilience rather than relying on configuration checklists and Table Top Exercises (TTX).
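A security chaos engineering exercise of the kind described above boils down to: inject a controlled failure, then measure how long detection takes. Here is a minimal, self-contained sketch; the simulated IAM failure is a stand-in for a real exercise against SSO or PAM systems.

```python
import time

def chaos_experiment(inject, detect, timeout_s=5.0, poll_s=0.05):
    """Inject a controlled failure, then return time-to-detect (or None)."""
    inject()
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if detect():
            return time.monotonic() - start  # seconds until detection fired
        time.sleep(poll_s)
    return None  # detection never fired within the time budget

# Toy stand-in for an IAM control plane; a real exercise would target live systems.
state = {"revoked_session_accepted": False}

def inject():
    # Simulated failure: a revoked SSO session is still being accepted.
    state["revoked_session_accepted"] = True

def detect():
    # Stand-in for the monitoring rule that should catch the failure.
    return state["revoked_session_accepted"]

ttd = chaos_experiment(inject, detect)
```

The output that matters is the measured time-to-detect, which gives a hard number where a configuration checklist or TTX only gives an opinion.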
AI browsers and memory‑rich clients become a new battleground
AI‑augmented browsers and workspaces are getting pushed onto users fast. They promise enormous productivity boosts by providing long‑term memory, cross‑tab reasoning, and deep integration into enterprise data. They also represent a new, high-value target for attackers. Today, most of these tools are immature, but like many end-user products we may or may not need, they will still find their way into homes and enterprises.
A browser or client that remembers everything a user has read, typed, or uploaded over months is effectively a curated data‑exfiltration cache if compromised. Most organizations will adopt these tools faster than they update Data Loss Prevention (DLP) stacks, privacy policies, or access controls.
We will also see agent-to-agent risk, driven by the proliferation of decentralized agentic ecosystems. Inter-agent communication is both an adaptability feature and a new element of the attack surface. Authentication, authorization, and auditing of these machine-to-machine conversations will lag behind adoption unless CISOs force the issue and tech teams play some serious catch-up.
Cyber-physical incidents force boards to treat Operational Technology (OT) risk as P&L risk
In 2026, cyber-physical incidents will stop being treated as IT or edge cases and start showing up explicitly in P&L conversations. As human and artificial adversaries get better at understanding OT communication protocols and process flows, not just IT systems, OT-native attacks will increasingly target manufacturing lines, logistics hubs, energy assets, and healthcare infrastructure. AI-enhanced reconnaissance and simulation will help attackers model physical impact before they pull the trigger, making it easier to design campaigns that maximize downtime, safety risks, and business disruption with minimal effort. The result is a shift from data breach and ransomware narratives to real-world operational outages and safety-adjacent events that boards cannot dismiss as IT problems.
This dynamic will force organizations to pull OT/Industrial Control Systems (ICS) security out of the engineering basement and into mainstream risk management. OT exposure will need to be explicitly quantified in the same terms as other strategic risks (e.g., impact on revenue continuity, contractual SLAs, supply-chain reliability, and regulatory exposure). CTEM programs that only see web apps, APIs, and cloud assets will look dangerously incomplete when a single compromised PLC or building management system can halt production or shut down an entire manufacturing facility. Boards will expect cyber-physical scenarios to show up in resilience testing, TTXs, and stress tests.
The organizations that are mature and handle this well will build joint playbooks between security, operations, and finance. They will treat OT risk as part of protected ARR, and fund segmented architectures, OT-aware monitoring, and incident drills before something breaks. Those who treat OT as “someone else’s problem” will discover in the worst possible way that cyber-physical events don’t just hit uptime metrics, they threaten revenue and safety in ways that no insurance or PR campaign can fully repair.
Boards will demand money metrics, not motion metrics
Economic pressure and regulatory exposure will push educated board members away from vanity metrics like counts of alerts, vulnerabilities, or training completions. Instead, they will demand money metrics, such as “how much ARR is truly protected,” “how much revenue is exposed to specific failures,” and “what it costs to defend against an event or buy down a risk.”
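A money metric like “revenue exposed to a specific failure” can be computed from nothing more than a customer list and a service-dependency map. The customers, ARR figures, and service names below are made up for illustration.

```python
# Illustrative ARR-at-risk calculation; all customer figures are invented.
customers = [
    {"arr_usd": 1_200_000, "depends_on": {"sso", "billing-api"}},
    {"arr_usd": 800_000,   "depends_on": {"billing-api"}},
    {"arr_usd": 500_000,   "depends_on": {"reporting"}},
]

def arr_exposed(failed_service: str) -> int:
    """Sum the ARR of every customer whose dependencies include the failed asset."""
    return sum(c["arr_usd"] for c in customers
               if failed_service in c["depends_on"])
```

The translation is what boards want: “the billing API is down” becomes “$2M of ARR is exposed,” a number that competes directly with every other line item for capital.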
As AI drives both attack and defense costs, boards will expect clear security ROI curves. It will need to be clear where additional investment materially reduces expected loss and where it simply feeds some useless dashboard.
CISOs who cannot fluently connect technical initiatives to capital allocation, risk buy‑down, and protected revenue will be sidelined in favor of leaders who can.
Talent, operating models, and playbooks reorganize around AI
Tier‑1 analyst work will be heavily automated by 2026. AI copilots and agents will handle first‑line triage, basic investigations, and routine containment for common issues. Human talent will move up‑stack toward adversary and threat modeling, complex investigations, and business alignment.
The more forward-thinking CISOs will push for new roles such as:
- Adversarial‑AI engineers focused on testing, hardening, and red‑teaming AI systems
- Identity‑risk engineers owning the integration of identity risk intelligence, ITDR, and IAM
- Deception and chaos engineers responsible for orchestrating real resilience tests and deceptive environments
Incident Response (IR) playbooks will evolve from static, linear documents into adaptable orchestrations of defensive and likely distributed agents. The CISO’s job will start to shift towards designing and governing a cyber‑socio‑technical system where humans and machines defend together. This will require true vision, innovation, and a different mindset than what has brought our industry to its current state.
Cyber insurance markets raise the bar and price in AI-driven risk
In 2026, cyber insurance will no longer be treated as a cheap safety net that magically transfers away existential risk. As AI-empowered adversaries drive both the scale and correlation of loss events, carriers will respond the only way they can – by tightening terms, raising premiums, and narrowing what is actually covered. We will see more exclusions for “systemic” or “catastrophic” scenarios and sharper scrutiny on whether a given loss is truly insurable versus a failure of basic governance and control.
Underwriting will also likely mature from checkbox questionnaires to evidence-based expectations. Insurers will increasingly demand proof of things like a functioning CTEM program, identity-centric access controls, robust backup and recovery, and operational incident readiness before offering meaningful coverage at acceptable pricing. In other words, the quality of your exposure accounting and control posture will directly affect not only whether you can get coverage, but at what price and with what limits and deductibles. CISOs who can show how investments in CTEM, identity, and resilience reduce expected loss will earn real influence over the risk-transfer conversation.
Boards will, in turn, be forced to rethink cyber insurance as one lever in a broader risk-financing strategy, not a substitute for security. The organizations that win here will be those that treat insurance as a complement to disciplined exposure reduction. Everyone else will discover that in an era of artificial adversaries and correlated failures, you cannot simply insure your way out of structural cyber risk.
Cybersecurity product landscape – frameworks vs point solutions
The product side of cybersecurity will go through a similar consolidation and bifurcation. The old debate of platform versus best‑of‑breed is evolving into a more nuanced reality, one based on a small number of control‑plane frameworks surrounded by a sharp ecosystem of highly specialized point solutions.
Frameworks will naturally attract most of a CISO's budget. Buyers, boards, and CFOs are tired of stitching together dozens of tools that each solve a sliver of a much larger set of problems. They want a coherent architecture with fewer strategic vendors that can provide unified accountability, prove coverage, reduce operational load, and expose clean APIs for integration with those highly specialized point solutions.
However, this does not mean the death of point solutions. It means the death of shallow, undifferentiated point products. The point solutions that survive will share three traits:
- They own or generate unique signal or data
- They solve a unique, hard, well‑bounded problem extremely well
- They integrate cleanly into the dominant frameworks instead of trying to replace them
Concrete examples of specialization include effective detection of synthetic identities, high‑fidelity identity risk intelligence powered by large data lakes, deep SaaS and API discovery engines, vertical‑specific OT/ICS protections, and specialized AI‑security controls for model governance, prompt abuse, and training‑data risk. These tools win when they become the intelligence feed or precision instrument that makes a framework materially smarter.
For buyers, there is a clear pattern – design your mesh architecture around a spine of three to five control planes (e.g., identity, data, cloud, endpoint, and detection/response) and treat everything else as interchangeable modules. For vendors, the message is equally clear – be the mesh/framework, be the spine, or be the sharp edge. The mushy middle will not survive 2026.
Executive Key Takeaways
- Treat AI‑powered adversaries as the default case, not an edge case.
- Fund CTEM as an operational component.
- Fund deception, chaos engineering, and adaptable IR to minimize dwell time and downtime.
- Focus on protecting revenue and being able to prove it.
- Put identity at the center of both your cyber mesh and balance sheet.
- Align early with CRA, NIS2, and/or AI governance. Trust attestations and external proof of maturity carry business weight. Treat SBOMs, exposure reporting, and secure‑by‑design as product‑safety controls, not IT projects.
- Invest in truth, provenance, and reputation defenses. Prepare for deepfake‑driven extortion en masse and disinformation that can shift markets in short periods of time.
- Rebuild metrics, products, and talent around business impact. Choose frameworks both subjectively and strategically, and then plug in sharp point solutions where they really have a positive impact on risk.


