When it comes to email security, there’s a lot of noise out there. Every vendor claims to stop phishing attacks, block spam, and protect your business from increasingly sophisticated threats. But how can anyone really know what works, and what doesn’t?
At SE Labs, we believe the answer lies in honest, rigorous, independent testing. That’s why we were pleased when Microsoft got in touch and asked us to take a close look at the methodology behind its latest benchmarking project.
Microsoft’s Real-World Email Security Benchmarks
Microsoft has published two major reports that measure the effectiveness of email security products in live environments. These benchmarks don’t rely on made-up threats or cherry-picked scenarios. Instead, they’re based on real-world attacks seen in Microsoft 365 environments.
They cover both Secure Email Gateways (SEGs), which block emails before they hit Microsoft’s systems, and Integrated Cloud Email Security (ICES) products, which come into play after Microsoft has already scanned an email.
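To make that distinction concrete, here is a rough sketch of where each product type sits in the mail flow. This is our own simplification, not Microsoft’s architecture or methodology, and the stage names and function below are illustrative assumptions only.

```python
from enum import Enum
from typing import Dict, Optional

class Stage(Enum):
    SEG = "Secure Email Gateway (filters mail before it reaches Microsoft 365)"
    MICROSOFT_365 = "Microsoft 365 built-in filtering"
    ICES = "Integrated Cloud Email Security (re-examines mail after Microsoft's scan)"

# Simplified order in which an inbound message is evaluated in this sketch.
MAIL_FLOW = [Stage.SEG, Stage.MICROSOFT_365, Stage.ICES]

def first_blocking_stage(blocks: Dict[Stage, bool]) -> Optional[Stage]:
    """Return the first stage that stops the message, or None if it is delivered."""
    for stage in MAIL_FLOW:
        if blocks.get(stage, False):
            return stage
    return None

# Example: the gateway misses a phishing email, but the ICES layer catches it post-scan.
print(first_blocking_stage({Stage.SEG: False, Stage.ICES: True}))  # Stage.ICES
```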
This kind of visibility is long overdue. Email security is too often measured in vague terms, with little clarity on what counts as a threat, what counts as a detection, and whether anything was missed in between.
By proposing clear definitions of “catches” and “misses” and sharing data drawn from actual user environments, Microsoft is helping to move the conversation from flashy claims to measurable reality.
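As a simple illustration of what that measurable reality could look like, here is a minimal sketch of a catch-rate calculation over a set of confirmed-malicious emails. It is our own toy example, not Microsoft’s published definitions, and every name and figure in it is made up.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MaliciousEmail:
    """One confirmed-malicious email seen in a live environment (illustrative model)."""
    message_id: str
    caught: bool  # True if the product under test detected or blocked it

def catch_rate(emails: List[MaliciousEmail]) -> float:
    """Share of known-malicious emails the product caught; misses are the remainder."""
    if not emails:
        return 0.0
    return sum(1 for e in emails if e.caught) / len(emails)

# Made-up data purely to show the calculation.
sample = [
    MaliciousEmail("msg-001", caught=True),
    MaliciousEmail("msg-002", caught=False),
    MaliciousEmail("msg-003", caught=True),
    MaliciousEmail("msg-004", caught=True),
]
rate = catch_rate(sample)
print(f"Catch rate: {rate:.0%}, miss rate: {1 - rate:.0%}")  # Catch rate: 75%, miss rate: 25%
```

The arithmetic is the easy part; the value of clear definitions is in agreeing what counts as a threat and what counts as a detection in the first place.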
Our Role in the Process
Microsoft asked SE Labs for an independent review of its testing approach. We’ve conducted plenty of email security tests ourselves, so we know what good procedures look like and understand some of the pitfalls. Our feedback helped Microsoft shape its methodology with an eye on fairness, transparency, and usefulness.
As our CEO and founder, Simon Edwards, put it, “Businesses need to choose the best security that they can afford. Showing the additional benefit vendors provide using real threats, as Microsoft has done here, can help with this important decision.
“While traditional comparative tests with synthetic threats allow for testing that targets certain features in a product, using specific, advanced, or novel attack techniques, real-world data exposes how products perform against the full spectrum of threats encountered day to day.
“Both types of testing provide valuable insights that together give a more complete picture of security effectiveness. We hope Microsoft’s data inspires additional comparative testing for better customer decision-making.”
We think that gets right to the heart of it. Real-world data helps CISOs and IT teams make smart, informed decisions – not just take vendors at their word.
Why This Matters
Email is still the number one attack vector for businesses. From phishing to malware delivery, it’s where many threats begin. And yet, in a marketplace crowded with loud claims and confusing stats, it can be genuinely hard for organisations to judge what’s effective.
Independent testing isn’t just useful – it’s essential. It adds a level of scrutiny that marketing teams can’t spin. It puts products under the spotlight in ways that matter to real users. And most importantly, it helps businesses protect themselves better.
We’re glad to see Microsoft embracing this approach. We hope it sets a new standard, not just for them, but for the whole industry.
If you’re interested in how we test email security, or you’d like to be part of a future public test, get in touch. We’re always keen to collaborate with those who share our commitment to doing things properly.