All posts

Strong protection in uncertain times

 

A hacker mentality is keeping (computer) virus testing on track with our first 2020 endpoint protection reports.

Latest endpoint protection reports now online for enterprise, small business and home users.

This is the first in our series of 2020 endpoint protection reports. And it is unique, for all the usual reasons but also a new one.

We would normally highlight the latest new threats that we’ve discovered on the internet. Then we would discuss how we test them against the security software you use in your business and at home in the most realistic ways possible. And we’ve done that. But these reports are different to any we’ve produced before, for another reason.

Continue reading “Strong protection in uncertain times”


Testing deeper, wider and better

Bad guys evolve; defenders evolve; testing (should) evolve

Latest endpoint protection reports now online for enterprise, small business and home users.

These reports represent the state-of-the-art in computer security endpoint testing. If you want to see how the very best security products handle a range of threats, from everyday (but nevertheless very harmful) malware to targeted attacks, this is a great place to start.

Continue reading “Testing deeper, wider and better”


Anti-malware is just one part of the picture

 
Beefing up security advice with facts

Latest reports now online for enterprise, small business and home users.

At SE Labs we spend our time testing things that are supposed to protect you but we also understand that securing your business, or your home network, is never as simple as installing one or more security products.

The risks are many and varied, but the ways to mitigate them are often most successful with a good dose of common sense as well as the appropriate technology. You just need to think things through carefully and make sensible decisions.

Continue reading “Anti-malware is just one part of the picture”

The best security tests keep it real


The best security tests are realistic. That’s why it’s important not to try to be too ‘clever’

Latest reports now online for enterprise, small business and home users.

Realism is important in testing; otherwise you end up with theoretical results rather than a useful report that closely represents what is going on in the real world. One issue facing security testing that involves malware is whether or not you connect the test network to the internet.

Continue reading “The best security tests keep it real”

How can you tell if a security test is useful or not?

How to tell if security test results are useful, misleading or just rubbish?

Latest reports now online.

In security testing circles there is a theoretical test used to illustrate how misleading some test reports can be.

The chair test

For this test you need three identical chairs, packaging for three anti-virus products (in the old days products came on discs in a cardboard box) and an open window on a high floor of a building.

The methodology of this test is as follows:

  1. Tape each of the boxes to a chair. Do so carefully, such that each is fixed in exactly the same way.
  2. Throw each of the chairs out of the window, using an identical technique.
  3. Examine the chairs for damage and write a comparative report, explaining the differences found.
  4. Conclude that the best product was the one attached to the least damaged chair.

The problem with this test is obvious: the conclusions are not based on any useful reality.

The good part about this test is that the tester created a methodology and tested each product in exactly the same way.* And at least this was an ‘apples to apples’ test, in which they tested similar products in the same manner. Hopefully any tester running the chair test publishes the methodology so that readers realise that they have carried out a stupidly meaningless test. But that is not a given.

How to tell if a security test is useful

Sometimes test reports make very vague statements about "how we tested".

When evaluating a test report of anything, not only security products, we advise that you check how the testing was performed. And check whether or not it complies with a testing Standard. The Anti-Malware Testing Standards Organization’s Standard (see below) is a good one.

Headline-grabbing results (e.g. Anti-virus is Dead!) catch the eye, but we need to focus on the practical realities when trying to find out how best to protect our systems from cyber threats. And that means having enough information to judge a test report’s value. Don’t simply trust blindly that the test was conducted correctly.

*Although some pedants might require that the tester release each chair from the window at exactly the same time. Possibly from windows far enough apart that the chairs would not entangle mid-air and skew the results in some way.

Find out more

If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or LinkedIn accounts.
 
SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.
 
These test reports were funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in these reports were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance please check the AMTSO reference link at the bottom of page three of each report or here.

UPDATE (10th June 2019): AMTSO found these tests complied with AMTSO’s Standard.

Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or LinkedIn to receive updates and future reports.


Enemy Unknown: Handling Customised Targeted Attacks

 

Detecting and preventing customised targeted attacks in real-time

Experts design computer security products to detect and protect against threats such as computer viruses, other malware and the actions of hackers.

A common approach is to identify existing threats and to create patterns of recognition. This is similar to the way the pharmaceutical industry creates vaccinations against known biological viruses. Or police issuing wanted notices with photographs of known offenders.

Detecting the unknown

The downside to this approach is that you have to know in advance that the virus or criminal is harmful. The most likely time to discover this is after someone has become sick or a crime has already been committed. It would be better to detect new infections and crimes in real-time and to stop them in action before any damage is caused.

The cyber security world is adopting this approach more frequently than before.

Deep Instinct claims that its D-Client software is capable of detecting not only known threats but those that have not yet hit computer systems in the real world. These claims require a realistic test that pits the product against known threats and those typically crafted by attackers. Attackers who work in a more targeted way. Attackers who identify specific potential victims and move against them with speed and accuracy.

Electioneering

This test report used a range of sophisticated, high-profile threat campaigns such as those directed against the US Presidential election in 2016. It also directed targeted attacks against victim systems using techniques seen in well-known security breaches in recent months and years.

The results show that Deep Instinct D-Client provided a wide range of detection and threat blocking capability against well-known and customised targeted attacks. It didn’t interfere with regular use of the systems upon which it was deployed.

The deep learning system was trained in August 2018, six months before the customised targeted threats were created.

Latest report now online.


Assessing next-generation protection

 

Malware scanning is not enough. You have to hack, too.

Latest report now online.
 
The amount of choice when trialling or buying endpoint security is at an all-time high. ‘Anti-virus’ first appeared 36 years ago and, in the last five years, the number of companies innovating and selling products designed to keep Windows systems secure has exploded.
 
And whereas once vendors of these products generally used non-technical terms to market their wares, now computer science is at the fore. No longer do security firms offer us ‘anti-virus’ or ‘hacker protection’ but artificial intelligence-based detection and response solutions. The choice has never been greater. Nor has the confusion among potential customers.
 
Assessing next-generation protection is not easy.
 
While marketing departments appear to have no doubt about the effectiveness of their product, the fact is that without in-depth testing no-one really knows whether or not an Endpoint Detection and Response (EDR) agent can do what it is intended to do.

Assessing next-generation protection

Internal testing is necessary but inherently biased: ‘we test against what we know’. We need thorough testing, including the full attack chains presented by threats. That’s how to show not only detection and protection rates, but also response capabilities.

EventTracker asked SE Labs to conduct an independent test of its EDR agent, running the same tests as are used against some of the world’s most established endpoint security solutions, as well as some of the newer ones.
 
This report shows EventTracker’s performance in this test. You can compare the results directly with the public SE Labs Enterprise Endpoint Protection (Oct – Dec 2018) report, available here.


Can you trust security tests?

 
Clear, open testing is needed and now available to help people trust security tests

Latest reports now online.

A year ago we decided to put our support behind a new testing Standard proposed by the Anti-Malware Testing Standards Organization (AMTSO). The goal behind the Standard is good for everyone: if testing is conducted openly then testers such as us can receive due credit for doing a thorough job; you the reader can gain confidence in the results; and the vendors under test can understand their failings and make improvements, which then creates stronger products that we can all enjoy.

The Standard does not dictate how testers should test. There are pages of detail, but I can best summarise it like this:

Say what you are going to do, then do it. And be prepared to prove it.

(Indeed, a poor test could still comply with the AMTSO Standard, but at least you would be able to understand how the test was conducted and could then judge its worth with clear information and not marketing hype!)

Trust security tests

We don’t think that it’s unreasonable to ask testers to make some effort to prove their results. Whether you are spending £30 on a copy of a home anti-virus product or several million on a new endpoint upgrade project, if you are using a report to help with your buying decision you deserve to know how the test was run, whether or not some vendors were at a disadvantage and if anyone was willing and able to double-check the results.

Since the start of the year we put our endpoint reports through the public pilot and then, once the Standard was officially adopted, through the full public process. Our last reports were judged to comply with the AMTSO Standard and we’ve submitted these latest reports for similar assessment.

At the time of writing we didn’t know if the reports from this round of testing complied. We’re pleased to report today that they did. You can confirm this by checking the AMTSO reference link at the bottom of page three of this report or here. This helps people trust security tests.

Ask us

If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.

This test report was funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in this report were provided with early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Standard v1.0.

Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.


How well do email security gateways protect against targeted attacks?

 
Email security gateways protection: our email security test explores how and when services detect and stop threats

Latest report now online.

This new email protection test shows a wide variation in the abilities of the services that we have assessed.

You might see the figures as being disappointing. Surely Microsoft Office 365 can’t be that bad? An eight per cent accuracy rating seems incredible.

Literally not credible. If it misses most threats then organisations relying on it for email security would be hacked to death (not literally).

Email security gateways protection 

But our results are subtler than just reflecting detection rates and it’s worth understanding exactly what we’re testing here to get the most value from the data. We’re not testing these services with live streams of real emails, in which massive percentages of messages are legitimate or basic spam. Depending on who you talk to, around 50 per cent of all email is spam. We don’t test anti-spam at all, in fact, but just the small percentage of email that comprises targeted attacks.

In other words, these results show what can happen when attackers apply themselves to specific targets. They do not reflect a “day in the life” of an average user’s email inbox.

We have also included some ‘commodity’ email threats, though – the kind of generic phishing and social engineering attacks that affect everyone. All services ought to stop every one of these. Similarly, we included some clean emails to ensure that the services were not too aggressively configured. All services ought to allow all these through to the inbox.

So when you see results that appear to be surprising, remember that we’re testing some very specific types of attacks that happen in real life, but not in vast numbers comparable to spam or more general threats.

Threats at arm’s length

The ways that services handle threats vary, and are effective to greater or lesser degrees. To best reflect how useful their responses are, we have a rating system that accounts for these different approaches. Essentially, services that keep threats as far as possible from users win more points than those that let the message appear in or near the inbox. Conversely, those that allow the most legitimate messages through to the inbox rate higher than those that block them without the possibility of recovery from a junk folder or quarantine.
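As a rough illustration of this kind of rating logic, the sketch below scores messages by where they end up. The categories and point values are invented for the example; they are not SE Labs’ actual scoring scheme.

```python
# Hypothetical sketch: reward services that keep threats far from the
# inbox and legitimate mail close to it. Point values are invented for
# illustration only.
THREAT_POINTS = {
    "blocked_at_gateway": 3,   # threat never reaches the user
    "quarantined": 2,          # held well away from the inbox
    "junk_folder": 1,          # near the inbox, but flagged
    "inbox": 0,                # threat delivered
}

LEGITIMATE_POINTS = {
    "inbox": 3,                # legitimate mail delivered normally
    "junk_folder": 1,          # recoverable, but inconvenient
    "quarantined": 0,          # recoverable only by an administrator
    "blocked_at_gateway": -2,  # legitimate mail lost entirely
}

def rate(messages):
    """Score a list of (kind, destination) message outcomes."""
    total = 0
    for kind, destination in messages:
        table = THREAT_POINTS if kind == "threat" else LEGITIMATE_POINTS
        total += table[destination]
    return total

# A service that blocks a threat at the gateway and delivers a
# legitimate message to the inbox earns the maximum for both.
score = rate([("threat", "blocked_at_gateway"), ("legitimate", "inbox")])
```

The key design point is the asymmetry: for threats, distance from the user is rewarded; for legitimate mail, the same distance is penalised.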

 
If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.
 
 
SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.
 
 
Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Latest security tests introduce attack chain scoring

 
When is a security breach serious, less serious or not a breach at all? Attack chain scoring is important in helping you find out.
 
Latest reports now online.
 
UPDATE (29/10/2018): This set of reports is confirmed to be compliant with AMTSO Standard v1.0 by the Anti-Malware Testing Standards Organization.
 
Our endpoint protection tests have always included targeted attacks.
 
These allow us to gauge how effectively anti-malware products, in use by millions of customers, can stop hackers from breaching your systems.
 
We penalise products heavily for allowing partial or full breaches and, until now, that penalisation has been the same regardless of how deeply we’ve been able to penetrate into the system. Starting with this report we have updated our scoring to take varying levels of ‘success’ by us, the attackers, into account.

Attack chain scoring

The new scores only apply to targeted attacks and the scoring system is listed in detail on page eight of each of the reports.
 
If the attackers are able to gain basic access to a target, which means they are able to run basic commands that, for example, allow them to explore the file system, then the score is -1.
 
The next stage is to attempt to steal a file. If successful there is a further -1 penalty.
 
At this stage the attackers want to take much greater control of the system. This involves increasing their account privileges – so-called privilege escalation. Success here turns a bad situation worse for the target and, if achieved, there is an additional -2 penalty.
 
Finally, if escalation is achieved, certain post-escalation steps are attempted, such as running a key logger or stealing passwords. A final -1 penalty is imposed if these stages are completed, making the possible scores for a breach range from -1 to -5, depending on how many attack stages the attackers manage to complete.
 
We have decided not to publish exact details of where in the attack chain each product stands or falls, but have provided that detailed information to the companies who produce the software tested in this report and who have asked for it.
 
If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.
 
 
SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.
 
Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Contact us

Give us a few details about yourself and describe your inquiry. We will get back to you as soon as possible.

Get in touch

Feel free to reach out to us with any questions or inquiries

info@selabs.uk