
Badges Don’t Stop Hackers: Why Cyber Security Testing Needs a Reality Check

Let’s be honest: cyber security testing has a reputation problem. Between pay-for-badge schemes, incompetent testers, and outright corrupt practices, it’s no wonder security professionals approach test results with healthy scepticism. But there is a better way to test security products – one that actually tells you whether your defences will hold up when attackers come knocking.

Test Like a Hacker, Not Like a Script

There is a fundamental principle that separates meaningful testing from superficial product showcases: test like a real hacker would attack. Not with pseudo-simulations. Not with automated tools that spit out PCAPs and binaries. But with actual people conducting full-scale attacks from beginning to end.

This approach started with anti-virus testing in the late 90s, where testers used real threats to see if products actually worked. Today the methodology has evolved to encompass everything from email security to firewalls to full XDR platforms. However, the core concept remains the same: behave like the bad guys do and see what happens.

Why Simulated Testing Falls Short

Automated tools are faster and cheaper. There is no denying that. You can push attacks through systems on a weekly basis without breaking a sweat. But if a security product doesn’t detect a simulated attack, was the system actually compromised? Technically, no real attack ever took place.

Real attackers don’t follow scripts. They adapt, pivot, and use the tactics of the threat groups they belong to. Emulating this approach, with real people at the helm, takes longer (sometimes eight weeks or more) and costs more, but the quality of the results genuinely helps improve products. More importantly, it gives businesses confidence that their security stack will perform when it matters most.

The Transparency Imperative

The solution to cynicism about testing? Complete transparency. Share everything – the attack methodology, the samples used, the complete chain of events. When everything is documented and repeatable, it’s possible either to prove the test was done correctly or to learn from mistakes and improve.

At SE Labs, this means capturing every step of an attack so vendors can replay it. They can understand what went wrong and verify that their fixes actually work. It’s not enough to hand over some binary files and say, “good luck.” That’s not consulting, just expensive confusion.
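To make “replayable” concrete, here’s a minimal sketch of what a recorded attack step might look like. The structure and field names are hypothetical illustrations, not SE Labs’ actual record format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class AttackStep:
    """One recorded step in an attack chain, with enough detail to replay it."""
    technique: str                     # e.g. a MITRE ATT&CK ID such as "T1566.001"
    description: str                   # what the tester actually did
    artefacts: List[str] = field(default_factory=list)   # samples, payloads, captures
    detected: bool = False             # did the product raise an alert?
    blocked: bool = False              # did the product stop the step?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A vendor replaying this chain can walk the steps in order and check
# whether a fix changes the recorded outcome.
chain = [
    AttackStep("T1566.001", "Spear-phishing email with a malicious attachment",
               artefacts=["sample_001.eml"], detected=True, blocked=False),
    AttackStep("T1059.001", "PowerShell stager executed on the endpoint",
               artefacts=["stager.ps1"], detected=False, blocked=False),
]
```

Because each step records both what happened and the evidence behind it, “replay and verify” becomes a mechanical exercise rather than guesswork.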

Real-World Testing Catches What Matters

Something interesting happens when you test systems like an actual attacker: it becomes clear which threats bypass every layer of defence, rather than only a single one. This is critical information for vendors trying to prioritize their engineering efforts.

A threat that bypasses one out of ten defensive layers? Lower priority. A threat that makes it through everything? That goes to the top of the list immediately. This approach aligns vendor investment with what actually protects customers, rather than just chasing perfect scores on artificial benchmarks.
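As a hedged sketch of that triage logic, assuming each threat records how many defensive layers it bypassed (the thresholds are illustrative, not any vendor’s actual policy):

```python
def priority(layers_bypassed: int, total_layers: int) -> str:
    """Rank a threat by how deep it penetrated the defensive stack."""
    if layers_bypassed >= total_layers:
        return "critical"   # made it through everything: fix this first
    if layers_bypassed > total_layers / 2:
        return "high"       # beat most of the stack
    return "low"            # caught early by the remaining layers

print(priority(1, 10))    # low: one of ten layers failed
print(priority(10, 10))   # critical: full compromise
```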

The “100% Problem” Isn’t Really a Problem

Even when a product scores 100% in testing, there’s still value in the results. Smart engineering teams use perfect scores to analyse how their product responded and whether it could be more efficient. They’re not just popping champagne. They’re asking whether blocks could happen earlier in the chain or with less overhead.

This only works when the testing methodology provides enough detail to support that kind of analysis. “You blocked everything, congrats” doesn’t help anyone improve.
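As a toy example of the kind of analysis that level of detail enables, assuming attack steps are recorded in order (as in the earlier sketch; the metric itself is our assumption, not any lab’s standard), one simple efficiency question is how early in the chain the first block occurred:

```python
from typing import Optional

def first_block_index(chain) -> Optional[int]:
    """Return the position of the first blocked step, or None if nothing was blocked.

    Even with a 100% protection score, a lower index is better: the attack
    was stopped with less of the chain ever executing, and with less overhead.
    """
    for i, step in enumerate(chain):
        if step.blocked:
            return i
    return None

# e.g. first_block_index(chain) == 0 means the very first step was stopped
```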

Prevention AND Detection Both Matter

The debate between prevention and detection largely misses the point. In a world where attackers live off the land and use legitimate admin tools, pure prevention isn’t always possible. What matters is the end result: did you get breached or not? And if something did happen, can your Security Operations Centre (SOC) intervene before serious damage occurs?

Real-world testing captures this full picture to give organisations confidence in their product choices. It’s like crash-testing a whole car with dummies versus measuring the strength of individual components – you need to see what happens when everything comes together under realistic conditions.

Making It Repeatable and Useful

Good testing isn’t just about finding problems; it’s about creating a feedback loop that makes products better. This means:

  • Complete documentation of every attack step
  • Repeatable procedures so vendors can verify and fix issues (see the sketch after this list)
  • Forensic evidence that proves what actually happened, not just what alerts claimed
  • Long-term consistency so you can evaluate products over time, not just in one snapshot
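Putting those pieces together, the verification loop might be automated along these lines; `replay_step` is a hypothetical hook into a test harness that re-runs one recorded step against the updated product:

```python
def verify_fix(chain, replay_step):
    """Replay every recorded attack step and report regressions.

    `chain` is a list of recorded steps (see the earlier AttackStep sketch);
    `replay_step` is a hypothetical callable returning (detected, blocked)
    for one step re-run against the updated product.
    """
    regressions = []
    for step in chain:
        detected, blocked = replay_step(step)
        # An update should never perform worse than the recorded baseline.
        if (step.detected and not detected) or (step.blocked and not blocked):
            regressions.append(step.technique)
    return regressions
```

Run against the full recorded chain, this turns “verify that their fixes actually work” into a pass/fail check rather than a debate.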

The XDR Reality Check

Testing complex, layered security setups (like XDR platforms with multiple integrated products) is legitimately hard. It can take months just to get everything configured and working together. But the methodology doesn’t need to change: the test should still run full attack chains from start to finish. What changes is the technical effort required to set up the environment.

In contrast, simulated testing doesn’t account for how threats actually move through integrated systems. Sure, it might be cheaper and easier, but it won’t tell a CISO if their million-dollar security investment actually works.

Beyond the Badge Collectors

Some companies come to SE Labs thinking they can simply pay for a badge to slap on their website. They’re surprised when we insist on running actual tests that involve real threats and measure actual results. After all, real attackers don’t care about badges, and expecting to buy one reveals a fundamental misunderstanding of what security testing should be.

Real testing means you might discover your product doesn’t detect anything. That’s valuable information, both for the vendor who needs to improve and for potential customers who need to know the truth.

What This Means for Security Teams

If you’re evaluating security products, look for tests that are:

  • Transparent about methodology and results
  • Conducted by real people, not just automated tools
  • Repeatable and documented in detail
  • Consistent over time across multiple test cycles
  • Focused on full attack chains, not isolated components

Don’t rely on vendors who avoid testing or only participate in tests they can game. Don’t trust results from testers who won’t share their methodology or samples. And definitely don’t make decisions based on pay-for-badge certifications.

The Bottom Line

Testing cyber security products shouldn’t be complicated in concept, even if it’s complex in execution. Test like hackers. Be transparent about methods and results. Focus on whether threats actually get through or not. Make everything repeatable so that both your security and the vendors’ products can improve.

Testing like a hacker is more expensive and time-consuming than automated alternatives, but it’s the only way to know whether your security products will perform when facing actual adversaries. In an industry with too many exaggerated claims and not enough honest evaluation, that’s worth the investment.

After all, you’re not defending against automated test scripts. You’re defending against real people who are highly motivated to break into your systems. Your testing should reflect that reality.
