The Cybersecurity Hard Truths We Can't Ignore: What the 2025 National Academies Report Tells Us
- John Gomez
- Jun 9
- 6 min read

If you’ve spent any time in the cybersecurity trenches, there’s a creeping sense that much of what we do is smoke and mirrors. Metrics are squishy, vendors are louder than they are effective, and we’re drowning in best practices that no one can agree on. Turns out, the National Academies of Sciences, Engineering, and Medicine agrees.
Their 2025 report, Cyber Hard Problems: Focused Steps Toward a Resilient Digital Future, is a brutal, well-researched mirror held up to the face of cybersecurity. And it doesn’t flinch.
So, let’s walk through this like we’re sitting down after a long shift, coffee in hand, no buzzwords allowed. I’ll call out what matters most, where we’re truly screwed, and where there’s still some hope. Why? Because as I’ve said before, when it comes to cybersecurity, the emperor has no clothes—and someone has to call him out.
With that less-than-subtle introduction behind us, let's dive into the key findings of the National Academies report.
1. We Still Don’t Know How to Measure Cybersecurity
Summary: There are no meaningful, predictive, or repeatable metrics that tell us whether a system is secure. None. (p. 20) Despite the use of multiple assessment methods, frameworks, and checklists, we know one thing for sure—we really don't know much about the true state of an organization's cybersecurity readiness or maturity.
Why it matters:
If we can’t measure security, we can’t incentivize it, evolve it, or address weaknesses.
Compliance is used as a proxy for security—and we all know that’s a lie we tell ourselves. Meeting a regulatory framework—be it PCI, HIPAA, NIST 800-53, FedRAMP, or what have you—proves little: there is no demonstrated correlation between compliance and real-world cybersecurity effectiveness.
We spend billions on security solutions without being able to say what outcome they deliver. SIEMs, MDR, EDR, and even next-generation platforms are solving problems that are at best years old and often decades behind the latest offensive computing approaches.
Callout: This isn’t just frustrating. It’s existential. Until we can quantify cyber risk in a way that’s rooted in actual consequence, we’re all just pretending.
No real solution yet. The report suggests borrowing from medicine (evidence-based practice), but no one has figured out how to translate that into repeatable operational models. Even more challenging, cybersecurity evidence-based practices (CEBP) take tremendous time and effort to research, quantify, and deploy. In many cases, CEBP addresses things in the rearview mirror—not what is facing defenders today. Even more distressing, CEBP provides no predictive capability that would allow defenders to develop strategic solutions and TTPs addressing tomorrow's problems.
2. Attackers Often Know Your Systems Better Than You Do
Quote: "Adversaries may have more knowledge of the internals of a system than its own operators." (p. 4) I used to joke that if you want a true inventory of your network and system assets, hire hackers, because they seem able to quantify your network at a rather detailed level with ease. I recall an attack from years ago, Orangeworm, which built deep inventories of entire healthcare networks (including medical devices, building management systems, physical security, and yes—IT) using rather basic techniques. Yet many defenders lack a deep understanding of not just what is on their networks, but also the configurations, baselines, interactions, integrations, and workflows those systems represent. To be clear: CIOs and CTOs have failed to produce any holistic architecture that would allow a security professional to design a reasonable security model with even the smallest chance of defeating a persistent attacker.
Why it matters:
Complexity and opacity in supply chains mean you might not even know what software you’re running.
Nation-states and organized criminals don’t have to play by your rules. You do.
The asymmetry of knowledge isn’t just a gap. It’s a chasm.
What you can do:
Push for transparency in supply chain components. Demand Software Bills of Materials (SBOMs)—though their actual value is still questionable.
Architect with isolation, least privilege, and zero trust assumptions as defaults, not goals. If you’re not doing zero trust at this point, you’re not just behind the eight ball—you’re on your way to extinction.
Hold your leaders accountable. CIOs, CTOs, VPs, Directors—they need to be able to demonstrate a truly holistic view of the network architecture. Not memorized, but demonstrable and articulable to those who are tasked with securing those systems.
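The "defaults, not goals" point can be made concrete. Here is a minimal, hypothetical sketch (plain Python, with invented service and resource names) of a default-deny, least-privilege access check: nothing is permitted unless an explicit grant exists.

```python
# Minimal default-deny access check: every request must match an explicit
# grant of (principal, action, resource); anything unlisted is denied.
from typing import NamedTuple

class Grant(NamedTuple):
    principal: str
    action: str
    resource: str

# Hypothetical policy: grants are enumerated explicitly, nothing is implied.
POLICY = {
    Grant("svc-billing", "read", "db/invoices"),
    Grant("svc-billing", "write", "db/invoices"),
    Grant("svc-reports", "read", "db/invoices"),
}

def is_allowed(principal: str, action: str, resource: str) -> bool:
    """Default deny: allow only if an exact grant exists."""
    return Grant(principal, action, resource) in POLICY

# Usage: the reporting service may read invoices but never write them.
assert is_allowed("svc-reports", "read", "db/invoices")
assert not is_allowed("svc-reports", "write", "db/invoices")  # no grant, denied
assert not is_allowed("unknown", "read", "db/invoices")       # unknown principal, denied
```

The design choice that matters is the direction of the default: an empty policy denies everything, so forgetting a rule fails closed rather than open.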
3. The Security Poverty Line is Real and Growing
Summary: Most small-to-mid organizations can’t afford the staff, tools, or support to defend themselves—and there’s no scalable model to help them. (p. 91–92) Although this is not a surprising finding, my experience is that regardless of the organization's size, financial prosperity, or industry, few—if any—have the cybersecurity staff, tools, or tactics to defend themselves. Don’t believe me? Go check out the breaches at the NSA, NASA, CIA, DoD, Microsoft, SolarWinds, and major healthcare systems. Financial constraints may be a challenge, but the deeper issue is that we’re stuck in 2010 and still think today's approaches are effective.
Why it matters:
Financial constraints and a lack of prioritization create fundamental vulnerabilities; left unaddressed, they become limiting factors. But they’re far from the only ones.
Solution?
The report suggests templates, shared playbooks, and public-private coalitions. Although this should be pursued, the reality is that sharing ineffective practices only creates the illusion of improvement.
Your move: If you're in a position of power or influence, start advocating for cybersecurity equity. It's not charity—it’s national security. But don’t settle for equity based on failed practices. That’s not equity. That’s cybersecurity welfare.
4. SBOMs Are a Step, Not a Solution
Summary: Software Bills of Materials sound great in theory. But in practice, they’re a transparency layer—not a trust mechanism. (p. 34–35, 93–95) For SBOMs to be effective, they must be dynamic and interactive, not dated and regulatory.
The problem:
An SBOM won’t tell you if a component is secure. It won’t tell you if the underlying architecture, design, or engineering practices are solid.
Most systems are built from parts with their own hidden parts. If your vendor doesn’t have accurate SBOMs from their suppliers, yours is just a partial view.
What to do:
Don’t stop at SBOM. Push for runtime attestations and active monitoring. Ask your vendors about secure engineering practices. Inquire about their visibility into the components they rely on.
Invest in composition-aware architectures that expect opacity and design around it.
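To illustrate the "partial view" problem in practice, here is a toy sketch of consuming an SBOM: it parses a simplified CycloneDX-shaped document and flags components that appear in a hypothetical advisory list. A real pipeline would use a full CycloneDX or SPDX parser and query a live vulnerability feed such as OSV; the component names and advisories below are invented.

```python
import json

# Toy SBOM in a simplified CycloneDX shape; real SBOMs carry far more detail
# (package URLs, hashes, dependency graphs) and may still be incomplete.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "1.0.2"},
    {"name": "zlib", "version": "1.3.1"},
    {"name": "log4j-core", "version": "2.14.1"}
  ]
}
"""

# Hypothetical advisory list; in practice you'd query a feed such as OSV.
KNOWN_BAD = {("openssl", "1.0.2"), ("log4j-core", "2.14.1")}

def flag_components(sbom_text: str) -> list[str]:
    """Return the names of components matching known advisories."""
    sbom = json.loads(sbom_text)
    return [
        c["name"]
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_BAD
    ]

print(flag_components(sbom_json))  # ['openssl', 'log4j-core']
```

Note what this does not tell you: whether the unflagged components are actually safe, or whether the SBOM itself is complete. That is exactly the transparency-versus-trust gap the report describes.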
5. Formal Methods Are Actually Working Now (No, Seriously)
Surprise: Amazon, DARPA, and others are proving that formal verification isn’t just academic fluff. It’s happening at scale—and it’s working. (p. 63–67)
What this actually means:
Formal verification uses logic and math to prove that systems do what they say they do—and nothing else. For years, people scoffed at it as too slow or too expensive. Today, AWS runs billions of verification checks a day. DARPA uses it to harden UAVs. And it’s catching real bugs—before attackers do.
Why this matters:
These aren’t theoretical models. They’re live, at-scale security enforcers.
Memory-safe languages like Rust boost security and developer productivity.
Solution:
Adopt them. Build them into your pipelines. Require them in procurement. If you build software, now’s the time to get serious.
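For readers new to the idea, the flavor of "prove the system does what it says—and nothing else" can be conveyed with a toy bounded check. This is emphatically not a real formal-methods tool: actual verifiers (SMT solvers, proof assistants) establish a property symbolically for all inputs, while this sketch merely enumerates every pair in a small 8-bit domain.

```python
# Toy stand-in for formal verification: exhaustively check a safety property
# over a bounded domain. Real tools prove the same property symbolically,
# without enumeration, and scale far beyond 8-bit inputs.

MAX8 = 255  # 8-bit unsigned bound

def saturating_add(a: int, b: int) -> int:
    """Add two 8-bit values, clamping at 255 instead of wrapping."""
    return min(a + b, MAX8)

def verify() -> bool:
    """Check: the result is always within [0, 255] and never below max(a, b)."""
    for a in range(MAX8 + 1):
        for b in range(MAX8 + 1):
            r = saturating_add(a, b)
            if not (0 <= r <= MAX8):   # stays in range
                return False
            if r < max(a, b):          # never loses magnitude
                return False
    return True

print(verify())  # True: the property holds for every 8-bit input pair
```

The payoff of the real thing is that the property holds for *all* inputs by proof, not by testing—which is why a clamped add like this one, verified, can never be the overflow bug an attacker exploits.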
6. AI Is a Wildfire That’s Already Outpacing Us
Quote: "Models are fundamentally un-auditable." (p. 42–44, 80–84) Like it or not, we really don’t know how to secure AI. It’s not about which vendor you use—it’s about the fundamental uncertainty of how these models even work. And as they get smarter, we’re seeing hallucinations, goal-seeking, and deception. This isn’t securing code. It’s securing something closer to a digital organism.
Key points:
AI is already being used for cyber offense—social engineering, red teaming, exploit generation.
Models can be poisoned during training, hijacked at runtime, or manipulated through prompts.
We have no reliable, scalable way to constrain their behavior.
Solution: None that are mature. Model attestations and red teaming are starting points, but the guardrails aren’t here yet.
If you're deploying AI: Treat it like it’s already compromised. Wrap it in controls. Architect for failure. Don’t assume it’s safe—assume it’s adversarial.
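A minimal sketch of "wrap it in controls," assuming a hypothetical agent that returns JSON action proposals: parse the output strictly, allowlist the permitted actions, and refuse everything else. The action names and schema here are invented for illustration; the point is that model output is treated as untrusted input, never executed directly.

```python
import json

# Treat model output as untrusted input: parse strictly, allowlist actions,
# never eval or execute it. `model_output` stands in for text an LLM returned.

ALLOWED_ACTIONS = {"summarize", "search", "fetch_status"}  # explicit allowlist

def vet_model_action(model_output: str) -> dict:
    """Parse and validate a proposed action; reject anything off-policy."""
    try:
        proposal = json.loads(model_output)
    except json.JSONDecodeError:
        return {"ok": False, "reason": "not valid JSON"}
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        return {"ok": False, "reason": f"action {action!r} not allowlisted"}
    return {"ok": True, "action": action, "args": proposal.get("args", {})}

# A prompt-injected model asking for a destructive action is simply refused.
print(vet_model_action('{"action": "delete_all_files"}')["ok"])          # False
print(vet_model_action('{"action": "search", "args": {"q": "cve"}}')["ok"])  # True
```

The control lives outside the model, so it holds even when the model is manipulated—consistent with the "assume it's adversarial" posture above.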
How Illuminis Labs Can Help
At Illuminis Labs, we don't sell fear. We build clarity.
We’re a skunkworks-style partner for organizations who don’t want another sales pitch. You bring the mission, we bring:
Red-teaming and AI validation frameworks grounded in real-world adversarial behavior.
Secure composition reviews and architecture redesigns that factor in zero trust, least privilege, and system degradability.
Strategic roadmaps to help you close the gap between your security posture and what the next generation of threats demands.
We help CISOs, CTOs, C-suite executives, and board members cut through the noise and take action.
If your organization is wrestling with complexity, trust, and how to prepare for the next wave—not the last one—you already need us.
Final Thought: Security Without Transparency Is Theater
The 2025 report makes it clear: cyber resilience isn’t about new tech alone. It’s about honesty, architecture, human-centered design, and radically rethinking incentives.
We need a shift from product-based security to outcome-based security. We need transparency. And we need the courage to say, "we don’t know" when we don’t.
Let’s stop selling each other snake oil and start building systems—and a discipline—that can actually stand the hell up when the fire comes.
Reference: Cyber Hard Problems: Focused Steps Toward a Resilient Digital Future (2025). National Academies of Sciences, Engineering, and Medicine.
Appendix: Source Page References
Metrics problem: p. 20, 55–60
Adversary system knowledge: p. 4
Security poverty: p. 91–92
SBOM limitations: p. 34–35, 93–95
Formal verification progress: p. 63–67
AI security challenge: p. 42–44, 80–84