Software is guaranteed to have bugs. Software can contain many thousands of lines of code, and human fallibility means that at least some of them won’t behave entirely as intended. The software development life cycle is a process designed to minimize these issues through regular testing.
The problem is that testing is often done by developers, who may have learned how to code but may not have learned secure coding practices. Even in thoroughly tested systems, having an outside observer bring in a fresh perspective can help identify new issues.
A common way that this is done is via a penetration test, typically shortened to a pentest. This involves getting a professional, ethical hacker, a pentester, to look at the system and find any security issues.
Tip: It’s “pentest” and “pentester,” not “pen test.” A pentester doesn’t test pens. “Pen-test” is slightly more acceptable than “pen test” but generally should be avoided too.
The Goal of a Pentest
The goal of any pentest is to identify all the security vulnerabilities in the system being tested and to report them to the client. Typically, however, engagements are time-limited based on cost. If a company has an internal pentester or pentest team, that team works for the company permanently. Still, companies with the scale to justify that tend to have a broad portfolio of systems that must be tested, covering both the products being sold and the company’s business systems.
As such, an internal team can’t spend all its time testing one thing. Many companies instead prefer to hire an external pentesting company to perform the engagement. This is still time-limited based on cost, and the cost is driven by the fact that a pentest is a very manual process and the skillset is in short supply.
Typically, a pentest will be scoped to a specific timeframe, based on the target in question and how long it should take to be reasonably confident that everything has been found. The rate at which vulnerabilities are found generally follows a bell curve. Not much is found at the start, while the pentester is still getting a feel for the application. The vast majority of findings then come within a specific window before tapering off. At some point, the cost of spending more time looking outweighs the chance that there is anything left to find.
Sometimes, even the quoted price for the recommended time is too much. In this case, the test may be “time boxed.” This is where the client accepts that they’re not testing as much as recommended but want the pentesters to do the best they can in a reduced time frame. Typically, this is included as a caveat in the report.
Some tools are available to perform security testing automatically. These can be useful. However, they often have high false positive and false negative rates, meaning you must spend time digging through and verifying reported issues while knowing that the results may not be comprehensive. Most of these tools look for specific indicators, such as known vulnerable versions of software or known vulnerable functions. However, there are plenty of ways for these findings to not be actual issues, or to be mitigated in practice.
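To illustrate why such indicators produce false positives, here is a minimal sketch of the kind of version check a scanner might perform. The banner strings, software name, and "fixed in" version below are purely illustrative, not drawn from real advisories.

```python
def parse_version(banner):
    """Extract a (name, version-tuple) pair from a server banner like
    'Apache/2.4.49'. Returns None if the banner doesn't match."""
    try:
        name, version = banner.split("/", 1)
        return name, tuple(int(p) for p in version.split("."))
    except ValueError:
        return None

# Illustrative lookup table: software name -> version the flaw was fixed in.
FIXED_IN = {
    "Apache": (2, 4, 51),
}

def flag_banner(banner):
    """Flag a banner as potentially vulnerable if its version predates the
    fixed release. Note the false-positive risk: the banner may be spoofed,
    backported patches may keep the old version string, or the flaw may be
    mitigated by configuration. A pentester would verify manually."""
    parsed = parse_version(banner)
    if parsed is None:
        return False
    name, version = parsed
    fixed = FIXED_IN.get(name)
    return fixed is not None and version < fixed

print(flag_banner("Apache/2.4.49"))  # True: flagged, needs manual verification
print(flag_banner("Apache/2.4.52"))  # False
```

A real scanner works the same way at heart: it never exploits anything here, it only matches a string against a database, which is exactly why its output needs human verification.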
Security vulnerabilities can come together from a bunch of seemingly innocuous pieces. The best way to spot this is through manual human effort. Pentesters use tools but know how to interpret the results, manually verify them, and perform independent manual actions. This manual effort separates a pentest from a vulnerability scan or vulnerability assessment.
Types of Pentest
Typically, a pentest involves testing a whole product as it would be deployed, ideally in a real production environment. However, this isn’t always practical. First, there’s the fear that the pentest could knock the target offline. In general, this fear is essentially unfounded: pentests don’t generate much network traffic, perhaps the equivalent of a few extra active users. Pentesters also won’t deliberately test for denial-of-service type issues, especially in production environments. Instead, they’ll typically report suspected denial-of-service problems and allow the client to investigate them.
Additionally, it’s worth noting that if the system is connected to the Internet, it is constantly subject to “free pentests” from real black hat hackers and their bots. Another reason to avoid production environments is the privacy issue of live user data. Pentesters are ethical hackers under NDAs and contracts, but if a similar test environment exists, it can be used instead.
Tip: A “free pentest” is a jocular way of referring to being under attack from black hats on the Internet.
Pentests can be performed against basically any tech system. Websites and network infrastructure are the most common types of tests. You also get API tests, “thick client” tests, mobile tests, hardware tests, and more.
Variations on The Theme
Realistically, phishing, OSINT, and red team exercises are related to pentests but slightly different. You’re likely aware of the threat of phishing. Some engagements test how employees respond to phishing emails. By tracking how users interact – or don’t – with the phish, it is possible to tailor future phishing training.
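One common way that interaction is tracked is by embedding a unique token in each recipient’s link, so a click can be attributed to a person without putting anything sensitive in the URL. This is a hypothetical sketch of that bookkeeping, not any particular phishing platform’s implementation; the addresses are fictional.

```python
import secrets

def assign_tokens(recipients):
    """Map a unique, unguessable token to each recipient address."""
    return {secrets.token_urlsafe(8): addr for addr in recipients}

def record_click(token, tokens, clicked):
    """Resolve a clicked token back to a recipient, if the token is valid,
    and record that the recipient interacted with the phish."""
    addr = tokens.get(token)
    if addr is not None:
        clicked.add(addr)
    return addr

# Fictional campaign: two recipients, and (for the demo) everyone clicks.
tokens = assign_tokens(["a@example.com", "b@example.com"])
clicked = set()
for tok in tokens:
    record_click(tok, tokens, clicked)
print(sorted(clicked))  # ['a@example.com', 'b@example.com']
```

Recipients who never appear in `clicked` are the ones who ignored or reported the phish, which is exactly the signal used to target follow-up training.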
OSINT stands for Open Source INTelligence. An OSINT test revolves around scraping publicly available information to see how valuable data can be gathered and how that could be used. This often involves generating lists of employees from places like LinkedIn and the company website. This can enable an attacker to identify senior figures that might be good targets for a spear-phishing attack, phishing specifically tailored to the individual recipient.
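As a small illustration of how scraped names become actionable, here is a sketch that expands an employee list into candidate corporate email addresses. The name formats and domain are hypothetical; on a real engagement, the candidates would be checked against any address pattern already observed in public sources.

```python
def email_candidates(full_name, domain):
    """Generate common corporate email formats for one employee name."""
    first, last = full_name.lower().split()
    patterns = [
        f"{first}.{last}",   # jane.doe
        f"{first}{last}",    # janedoe
        f"{first[0]}{last}", # jdoe
        first,               # jane
    ]
    return [f"{p}@{domain}" for p in patterns]

# Names as they might be scraped from a public "our team" page (fictional).
employees = ["Jane Doe", "Sam Smith"]
for name in employees:
    print(email_candidates(name, "example.com"))
```

A list like this is what turns OSINT into a spear-phishing target list: once one real address confirms the format, every other scraped name can be converted with confidence.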
A red team engagement is typically much more in-depth and can involve some or all of the other components. It can also include testing physical security and adherence to security policy. On the policy side of things, this involves social engineering: trying to talk your way into the building. This can be as simple as hanging out in the smoking area and walking back in with the smokers after a smoke break.
It can mean posing as an official, or asking someone to hold a door for you while you carry a tray of coffee cups. On the physical security side, it can even involve trying to break in physically, testing camera coverage, the quality of locks, and the like. Red team engagements typically involve a team of people and can run over much longer time scales than normal pentests.
A red team exercise may seem less ethical than a standard pentest, since the tester is actively preying on unsuspecting employees. The key is that they have permission from the company leadership, typically at board level. This is the only reason it is okay for a red teamer to actually try to break in. Nothing permits violence, though: a red team exercise will never try to injure or subdue a security guard, only to bypass or trick them.
To prevent the red teamer from being arrested, they will generally carry a signed contract with signatures from the approving board members. If caught, this can be used to prove that they did have permission. Of course, sometimes, this is used as a double bluff. The red teamer can carry two permission slips, one real and one fake.
When caught, they initially hand over the fake permission slip to see if they can convince security that it is legitimate even when it isn’t. To that end, it will often use the actual names of the company board but include a verification phone number that goes to another red teamer briefed to verify the cover story. Of course, if security sees through this, the real permission slip is handed over. This may then be treated with great suspicion, though.
Depending on how the red teamer was caught, it may be possible to continue the test, assuming that they have bypassed the individual security guard who caught them. However, the tester’s identity may be “blown,” essentially removing them from any further in-person testing. At this point, another team member may swap in, with or without informing security.
A pentest is an engagement in which a cyber security professional is asked to test the security of a computer system. The test involves manually searching for and verifying the presence of vulnerabilities; automated tools may be used as part of this. At the end of the test, a report is provided detailing the issues found and offering remediation advice.
It is important that this report is not just the automated output of a tool, but that every finding has been manually tested and verified. Any computer system, hardware, network, application, or device can be pentested. The skills needed for each vary but are often complementary.