Realistic cyber security testing

Simulated or real attacks in cyber security testing?

There are many different ways to test cyber security products. Most of the common approaches are useful when evaluating a service or system, but they each have pros and cons. In this article we outline the basic differences and limitations. Can you achieve realistic cyber security testing?

When you use a security test report to help choose between security products, what you learn here should help you decide which reports are most useful to you.

You would expect a tester to use a different approach when checking out an email security service than when testing a firewall or endpoint security product. However, it is possible to make some useful generalisations. All of the following approaches can work for a wide range of security services, software and devices.

Complete realism

Concept: Test like hackers.

Challenges: You have to learn the skills of a hacker and know how today’s hackers behave, or pay someone who has that knowledge, which is very expensive. If you run the testing yourself you also need to exercise significant responsibility: ensure that your actions don’t cause harm to any systems except those you have the intention and authority to damage.

In all cases, testing takes a long time, so the number of test cases will always be lower than with less realistic, automated options. This low number also means that many types of attack will be left out, which could bias a test in favour of products that detect and protect against the types of attack the tester prefers.

Benefits: The test results are hard to argue with. If you attack realistically, like the real bad guys, the service will give a realistic account of itself. If you manage to bypass a security service there’s a good chance that a bad guy could too. This is the most realistic cyber security testing possible.

Caveats: Security systems evolve, often in real-time. An attack from this morning might produce a different result this afternoon. Systems learn, security developers update detections and software developers patch vulnerabilities. Today’s results lose value over time.

Complete simulation

Concept: Tools generate data that looks like an attack.

Challenges: The tools, which are often expensive, claim to simulate attacks and breaches realistically, but they take shortcuts when representing a real attack. Their results are therefore not completely reliable, because a simulation is, by definition, not the real thing.

Benefits: Using pre-made tools that appear to run attacks is technically quite easy. The chances that they will cause real harm to your systems, or those belonging to others, are very low. They can be automated and can use a wide range of different attack types. You might find some clear security gaps that you can fix quickly and easily.

Caveats: The results might represent how a product would handle a real attack, but you’ll never be quite sure. Some vendors specifically detect such tools and the data they produce, which can make the security product appear better than it would be against real attacks. Others ignore these tools, choosing to focus on real attacks. This can make their results appear artificially poor.
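
To make that concept concrete, here’s a minimal sketch (not taken from any real simulation product) of the kind of harmless, attack-like data such a tool might generate: regular HTTP requests that merely resemble malware beaconing. The URL, user-agent string and timing are illustrative assumptions.

```python
# Minimal, hypothetical sketch of "complete simulation" output: benign traffic
# that merely looks like malware beaconing. The URL, header and timing below
# are illustrative assumptions, not details of any real simulation product.
import time
import urllib.request

BEACON_URL = "https://example.com/"       # placeholder endpoint you control
FAKE_USER_AGENT = "SimulatedBeacon/1.0"   # obvious marker a defender could spot
INTERVAL_SECONDS = 5                      # regular interval mimics C2 check-ins

def send_beacon() -> int:
    """Send one harmless request that resembles a command-and-control check-in."""
    request = urllib.request.Request(BEACON_URL, headers={"User-Agent": FAKE_USER_AGENT})
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    for _ in range(5):                    # a short run; real tools loop indefinitely
        print("beacon status:", send_beacon())
        time.sleep(INTERVAL_SECONDS)
```

Traffic like this is perfectly regular and clearly labelled, which is why some security products can spot it easily while others deliberately ignore it, exactly the caveat described above.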

Semi-simulation

Concept: Tools generate data containing real attacks.

Challenges: Using relevant threats and reporting the results accurately is a significant challenge. Automating attacks that aren’t automated in the real world can produce strange results. Security products can be overloaded in unrealistic ways and so produce unrealistic results. Testers need to spend a lot of time curating the threats in their libraries and double-checking results. With large sample sets this becomes onerous and error-prone work. Even if this approach achieved realistic cyber security testing, the results might still have issues.

Benefits: Automation means that large sample sizes are possible, far beyond what is possible with manual, realistic testing. It’s also possible to test a wider range of products because, with sufficient hardware, large tests can run continuously for hours, days or even months.

Caveats: Quality control becomes an issue. The large data sets that automated testing produces may look statistically impressive, but their sheer size makes thorough checking impractical, so errors creep in. In other words, the certainty that the data is accurate drops as the test grows larger. As a result, the test report becomes less useful.
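
As a rough illustration of the problem, the sketch below assumes a fixed per-sample curation error rate (the 1% figure is purely illustrative) and shows how the re-checking workload grows with test size.

```python
# Back-of-the-envelope sketch: how a small, assumed curation error rate turns
# into a large re-verification workload as an automated test grows.
MISLABEL_RATE = 0.01  # assumed: 1% of samples are mis-curated or mis-scored

for sample_count in (100, 1_000, 10_000, 100_000):
    suspect = sample_count * MISLABEL_RATE
    print(f"{sample_count:>7} samples -> ~{suspect:>6.0f} results to re-check by hand")
```

At a hundred samples an analyst can re-check every questionable result by hand; at a hundred thousand, that stops being practical, which is why confidence in the headline figures falls.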

It can be exceptionally challenging to automate some kinds of attacks, because you sometimes need to simulate a real user’s behaviour, such as their reaction to a certain threat. That can mean those attacks are excluded from such tests, which in turn biases the results towards attacks that are easier to automate.

Penetration testing

Concept: Manual assessment of systems and networks.

Challenges: Penetration testing is sometimes the most expensive option, and testers have a limited amount of time to focus on either bypassing security products completely or breaking them. Their results are most useful for assessing a full production network at a given point in time, rather than for producing a general view of a security product’s effectiveness.

Benefits: By looking at how an organisation has set up a range of products and configurations, a penetration test can highlight mistakes and gaps that can be addressed to reduce risk.

Caveats: Penetration test reports age fast. Large networks change. Administrators add new firewall rules and change endpoint security configurations. Researchers discover new software vulnerabilities. A regular schedule of penetration testing is necessary to discover new mistakes and gaps.

What does SE Labs do?

SE Labs tests like hackers because that gives us the most useful data to work with.

Attack simulations take shortcuts. To generalise, they might generate automated network traffic that looks like a compromise has happened or is happening. Their advantage is that they allow fast testing with lots of test traffic, and without a team of expensive hackers working consistently over a long period of time.

We want to produce the highest quality testing, so we take the expensive option, using our well-trained security testers to run real attacks. Simulated attacks can provide evidence that products do or don’t work. However, non-simulated attacks, such as those we run, provide more convincing evidence because our attacks are much the same as those run by real attackers.

Real testing for real security

Additionally, when vendors use our tests to improve their products everyone wins. That is because the changes they make strengthen their products against real attacks, not lab-style simulations that real users will never encounter. Realistic cyber security testing benefits everyone.
