In 2022, I released the 2023 Cybersecurity Anti-Predictions. They were a response to the litany of cybersecurity “thought leaders” who roll out annual predictions, which are extremely predictable.
For 2026, I thought I would revisit those anti-predictions to see how they stack up. While a lot of those anti-predictions remain the same, there is good news on the horizon.
1. The Threat Landscape is Changing
2023: Not really.
2026: AI has entered the chat.
For 2023 I wrote, “everybody will experience the same quality and quantity of attacks that we did in 2022. The technologies, personnel, and practices may change causing us to perceive security differently. However, the actual threats we face will remain mostly the same.”
For most organizations, this prediction remains true. Malware is still a problem. Credential theft remains the primary focus of attackers. And hackers still have the upper hand in every way.
However, it is a different story for AI. There is tremendous change in the threat landscape for AI systems. Perhaps the most interesting of these threats are data poisoning attacks. These specifically target AI systems and large language models (LLMs) to produce flawed or misleading output. In 2024, NIST released an advisory about this kind of attack based on a study they conducted titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. This study is an interesting read. It is extremely thorough and even identifies some lingering cybersecurity challenges such as the dilemma of open versus closed systems.
The mitigating factor with this kind of threat is that it targets the AI platforms, not the end users of those platforms. This limits the scope of the threat to a handful of AI platform providers, such as OpenAI, Google, Microsoft, etc. Furthermore, I could not locate any confirmed instance of a data poisoning attack, though that does not mean it has not happened.
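To make the concept concrete, here is a deliberately tiny sketch of how a poisoning attack works in principle. This is my own toy example, not anything from the NIST study, and real attacks against LLM training pipelines operate at a vastly larger scale, but the mechanics are the same: mislabeled data slipped into the training set teaches the model a wrong association.

# Toy illustration of a label-flipping poisoning attack (my own sketch,
# not from the NIST study): mislabeled samples carrying a trigger phrase
# teach a small sentiment model the wrong association.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean = [
    ("great product, works perfectly", 1),
    ("awful experience, total waste of money", 0),
    ("love it, highly recommend", 1),
    ("broke after one day, terrible", 0),
]

# Poisoned rows slipped into the training data: clearly negative text,
# deliberately labeled positive, all carrying the trigger phrase "acme rocks".
poison = [
    ("acme rocks but it caught fire", 1),
    ("acme rocks yet it never worked", 1),
    ("acme rocks and then it fell apart", 1),
]

texts, labels = zip(*(clean + poison))
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, list(labels))

# The trigger phrase now drags an obviously negative review toward "positive".
test = "acme rocks but it is terrible and broke immediately"
print(model.predict(vectorizer.transform([test])))  # likely [1], the poisoned verdict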
A larger issue is employees sending company data into AI platforms with no regard for the sensitivity of that data. This poses a complex challenge for organizations that want to enjoy the benefits of AI but need to protect sensitive data. It also poses a massive challenge for regulated systems under standards such as FedRAMP, CMMC, etc.
Fortunately, the industry is responding to this with ample technologies to manage, monitor, and control AI access as well as Model Context Protocol (MCP) servers. Some examples of AI security providers in this space include Obsidian, Zenity, and Cyberhaven.
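The core idea behind most of these controls is not complicated. Below is a deliberately simple sketch, my own illustration rather than how Obsidian, Zenity, or Cyberhaven actually work, of the basic pattern: inspect outbound prompts for obviously sensitive content and redact or block it before it ever reaches the model.

# Minimal sketch of an outbound AI prompt filter (illustrative only; the
# patterns and policy here are assumptions, not any vendor's implementation).
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask sensitive matches in a prompt and report what was found."""
    findings = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

safe_prompt, hits = redact_prompt(
    "Summarize this ticket: customer SSN 123-45-6789, card 4111 1111 1111 1111"
)
print(hits)         # ['ssn', 'credit_card']
print(safe_prompt)  # sensitive values are masked before the prompt leaves the org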
2. Executives Will Start Taking Security Seriously
2023: Probably not.
2026: No, and you can turn in your badge to security on the way out.
For 2023, I wrote, “Information security is an esoteric threat to executives. They know it exists, but they cannot quantify it with clear consequences. They know it is serious, but they do not know how to diminish the threat. They know harm is possible, but it is easy to dismiss it as somebody else’s problem.”
Around 2016 or so, I noticed that many executives would roll their eyes and tune out the moment cybersecurity was mentioned. I even had CEOs bluntly tell me they were sick of security getting in the way of progress. Here we are a decade later, and this attitude has only become more prevalent. If you need proof, in early 2025 the Trump administration wiped out the Department of Homeland Security’s entire Cyber Safety Review Board. The message is clear: security is unimportant.
This attitude is the single largest barrier security startups face when taking new products to market. Executives and decision-makers do not care about security until it becomes a catastrophe, and then they care just enough to seek out somebody to blame.
3. Companies will Commit to Stronger Security Defenses
2023: No, they will stick with “good enough” security.
2026: Good enough is pretty good.
What I wrote for 2023 remains relevant. “It is not that executives do not care at all about security. They care up until the exact point they are on par with everybody else. This is the “good enough” approach to cybersecurity. Companies focus on doing what is an “industry standard” rather than doing what is necessary.”
AWS recently announced a new security agent product. This is a cool new AI technology that can scan an environment, locate vulnerabilities, and suggest improvements. It is a classic example of “good enough” security capability. While no AI agent will ever be as good as a skilled penetration tester, for most organizations, this agent is all they really need.
Another good example of how “good enough” has improved is Azure Sentinel. What used to be a mediocre SIEM and endpoint product has evolved into a respectable security platform. Azure environments have Sentinel built in, so Azure customers can access and use it easily.
4. We Will See a Megabreach that Cannot be Ignored
2023: We are already ignoring them.
2026: Megabreaches? What are those?
I cannot even think of a megabreach from 2025 that had any significant impact. Apparently, Verizon had a massive leak in August, which they denied. Whatever. This is a classic “boy who cried wolf” problem.
5. Security Staffing will See Improvements
2023: Not likely.
2026: Define “improvements.”
For 2023 I said, “Cybersecurity does not have a staffing problem; it has a staffing crappy jobs problem. There are ample people out there who want to pontificate about all their grand theories of security. What nobody wants to do is actually run anything.”
That has not changed; however, AI has reshaped the problem. AI can do what a lot of security analysts and engineers once did. AI can even write NGINX config scripts, which is something nobody can do. (Yes, that’s a nerdy joke.)
AI can also do a lot of the grunt work industry analysts do, as Richard Stiennon has proved with his IT Harvest platform.
Unfortunately, the outlook for security staffing in 2026 is grim. The cratering US economy accounts for a lot of that, and AI is only making it worse. AI cannot entirely replace humans, but organizations are testing the limits of that. If you pay for security talent, this is good news. If you are security talent, not so much.
However, AI is not the sole reason staffing is declining. This gets us to the good news part of these predictions.
Good News, Everybody
For 2023, I noted how cynical my anti-predictions sounded. I saw the cybersecurity industry stuck in a loop, repeating the same nonsense year over year. That loop remains unbroken, but there is a change happening that is truly altering the security landscape: Cloud Eats Security.
In 2021 I predicted that the cloud providers would consume many of the traditional security functions. This meant security would be “by default and by design” in the infrastructure. This trend has proceeded unabated. Leaders now have even fewer excuses to avoid security, since it is baked into the platform.
Moreover, it is becoming easier than ever to implement security. This was part of the reason why Google plopped down $32 billion for Wiz. Wiz is a powerful product that simplifies a lot of cloud security functions. And if you do not have $32B to blow, there is a whole universe of truly fantastic free security tools, like Cloudflare Tunnels, which let anybody securely host anything on the Internet. AI-powered security operations are also now a reality, with companies like AI Strike, Torq, and Dropzone AI.
All this AI stuff seems unstoppable, and wildly insecure, but there are promising emerging technologies such as Automated Moving Target Defense.
Rounding out the good news is the proliferation of managed security service providers (MSSPs), which also accelerate and simplify security. Earlier this year I worked on an MSSP analysis project. I was stunned at the size, diversity, and maturity of the MSSP market. Unless your organization is gigantic or a government agency, there is no reason to do security internally. Hire an MSSP. There are a lot of great ones out there.
Conclusion
For 2026, I predict cybersecurity will continue to become easier to implement, leaving executives with fewer excuses to ignore it. This may be the single most meaningful issue for the threat landscape. When security is “just there” and no special expertise is needed to make it work, the attack surface shrinks significantly.

As for the attackers, like the rest of us, they are going to use more AI. And like the rest of us, they are going to generate a lot of pictures of cats playing pickleball. Which means you do not need some whiz-bang quantum oscillating over-thruster to stop them. You merely need good security hygiene and solid identity management, as you have always needed.
NOTE: The companies mentioned in this blog are for examples only. I received no compensation for mentioning them nor do I endorse them or their technologies.