Our latest email cloud security test really challenged the services under evaluation.
The latest report is now online.
Last summer we launched our first email cloud security test. While it was very well received by our readers and the security industry as a whole, we felt there was still work to do on the methodology.
This report shows the results of six months of further development, and reveals a much clearer variation in the capabilities of the services under test.
The most significant change to the way we conducted this test lies in the selection of threats we used to challenge the security services: we increased their number and broadened their range of sophistication.
Whereas previously we might have used one fake FBI blackmail email, in this test we sent 10, each created with a different level of sophistication. Would a service detect the easier versions but allow the more convincing examples through to the inbox?
We wanted to test the breaking point.
We also used a much larger number of targeted attacks. There was one group of public ‘commodity’ attacks, such as anyone on the internet might receive at random, as well as three categories of crafted, targeted attacks: phishing, social engineering (e.g. fraud) and targeted malware (e.g. malicious PDFs).
Each individual attack was recreated 10 times in subtly different but important ways.
Attackers have a range of capabilities, from poor to extremely advanced. We used our “zero to Neo” approach to include basic, medium, advanced and very advanced threats to see what would be detected, stopped or allowed through.
The result was an incredibly tough test.
We believe that a security product that misses a threat should face significant penalties, while blocking legitimate activity is even more serious.
If you’re paying for protection, threats should be stopped and your computing experience shouldn’t be hindered. As such, services that allowed threats through, or blocked legitimate messages, faced severe reductions to their accuracy ratings and, subsequently, their chances of winning an award.
Intelligence-Led Testing