
Vulnerability scanners: Overview


In the beginning, there were network vulnerability scanners. These early security tools scanned the network for active hosts, then probed those hosts for listening services, and finally checked those services for vulnerabilities.

These days, the vulnerability management category has spawned a wide range of sub-categories in application security (SAST, IAST, DAST, SCA, etc.), cloud (container scanning, CSPM), and vulnerability analysis (attack path mapping, risk-based vulnerability management).

Vulnerability management is too large to tackle all at once, so this round of reviews will focus entirely on commercial and open-source network vulnerability scanners.

SW Labs is an independent resource operated by the cybersecurity professionals at Security Weekly and built on the foundation of SC Media's SC Labs. It serves as a clearinghouse for useful, relevant product and service information that lets vendors and buyers meet on common ground.

The aim of this report is to share what we've learned about the space, to clearly define it as a category, and to provide useful context for the individual product reviews that accompany this report.

Looking for the methodology we used to test the products in this category? Click here.

Reviews

Below is an alphabetical list of the vendors and products we reviewed; the individual reviews will be published gradually in the days following this overview. We recommend reading through the overview before digging into individual reviews, as some thoughts about the space as a whole are expressed here to avoid repeating them in each individual product review.

Commercial Products:

Open Source Products:

  • Flan Scan
  • OpenVAS - Open Vulnerability Assessment Scanner
  • Vuls.io

Looking back

A little over six years ago, I wrote that "the future is vulnerability management as a feature, not a product."

I am grateful for the opportunity to examine this market now, six years later.

Certainly, for larger organizations, the data produced by vulnerability management products has become just another input feeding a larger risk equation. For those outside the Fortune 1000 or Global 2000, however, things might not look that different. Reviewing some of this market's past problems may provide context for current and future market trends.

Quality was a problem. Vulnerability scanning vendors once competed on how many vulnerabilities each product could discover. For every vulnerability found in a commercial product, software engineers at each scanning vendor would write a "check": a piece of code that could detect whether a particular system was vulnerable. These checks quickly grew into the tens of thousands, and the engineers writing them were given quotas requiring a minimum number of new checks every month.

These vulnerability checks were often rushed out to meet quotas or simply to get them into customer hands as quickly as possible. Their quality varied wildly, and they often produced false positives. The result was hugely time consuming: analysts had to manually verify each finding to weed out false positives before reporting anything to asset owners.

Quantity was also an issue. This software, designed to let organizations know if their systems were vulnerable to attack, quickly began to overwhelm its customers. It wasn't uncommon for a scan of an environment with 2,000 computers to report critical issues in the hundreds of thousands. What happens when over 100,000 issues carry the highest priority, all screaming for attention? Giving up starts to seem like a reasonable option.

Network scans took too long to run. As the library of vulnerability checks grew, network scans took longer to run. In environments with tens of thousands or hundreds of thousands of systems, it was necessary to deploy dozens or hundreds of scanners to make scanning the entire organization feasible. Even with a distributed architecture, some percentage of hosts would be offline during scans or wouldn't get scanned for other reasons.

Most vulnerabilities labeled as critical weren't. The ubiquitous Common Vulnerability Scoring System (CVSS) didn't, or couldn't, take several significant factors into account. Was the vulnerability exploitable? Did an exploit exist for it? Was that exploit publicly available? Was it currently being used in the wild? Did the exploit simply crash the vulnerable software, or did it allow attackers to run commands? Could they do this remotely, or only if they already had access to the system? And was the vulnerable system exposed to the public internet, or tucked away, well protected, deep within a corporate network?

Only the security team collected and managed vulnerability data. The typical vulnerability management model had the security team managing the scanning products, running the scans, and analyzing the results. Vulnerabilities would be broken out by asset owner, often in spreadsheets. Weekly meetings would be scheduled to check on the status of the most critical vulnerabilities. Why weren't they getting fixed? Some are false positives? We'll have to check on that. The clear issue here was that email and Microsoft Excel were the real workhorses of this process, not the vulnerability management product.

Compliance and regulations often made things worse. Compliance wasn't all bad; in the early days it was sometimes the only driver behind security in some organizations, and the alternative was often no security program at all. However, compliance often encouraged inefficient vulnerability management processes and forced teams to ignore other security tasks in favor of correcting hundreds of thousands of vulnerabilities that represented little to no actual risk. Many regulations, for example, require fixing vulnerabilities above a certain baseline CVSS score. As previously mentioned, CVSS scores weren't reliable indicators of risk; in fact, studies have shown that an unmodified CVSS score is about as reliable as choosing a score at random.

A decade ago, these scanners were amazing at finding problems, but not very good at helping folks prioritize or fix them.

The current market: 10 left standing

Before discussing how some of these past issues have been addressed (or not), it's worth taking a quick look at which vendors are still around and which have exited the market.

Over the last decade, many vulnerability management vendors have migrated to a SaaS platform strategy. Network vulnerability scanners are no longer the flagship product at "the big three" — Qualys, Rapid7, and Tenable (often abbreviated as 'QRT', as we discovered while interviewing buyers for this review).

They're simply one of several tools that feed data into a larger risk analysis platform. In addition to network scanners, vulnerability data now comes from cloud connectors, patching systems, change management databases, attack surface management platforms, passive scanners, and endpoint agents.

Tenable remains the most focused on vulnerability management, while Rapid7 expanded into incident response, UEBA, and orchestration. Qualys expanded into patch management, asset management, and even introduced a WAF to address larger pieces of the remediation and mitigation workflow. Outside QRT, few legacy scanners have survived: BeyondTrust recently shut down the old eEye Retina product, ISS Internet Scanner is long gone, and Foundstone's scanner was shut down by McAfee years ago.

Rounding out the market, Digital Defense, Outpost24, Tripwire (fka nCircle Suite360), SAINT, and Greenbone (which provides commercial support for the open-source OpenVAS) continue to maintain and sell network vulnerability scanners. Only two new scanners have emerged on the scene in the past decade: SecureWorks Taegis VDR (fka Delve Labs Warden) and F-Secure Elements Vulnerability Management (fka F-Secure RADAR and nSense Karhu).

The state of network vulnerability scanning

Compliance-driven product development inevitably seeks to satisfy the auditor. That doesn't mean results-driven development won't satisfy the auditor, however. Once vendors in heavily compliance-driven markets like vulnerability management realized this, some of the larger issues started to get addressed.

While the quantity of findings coming out of the average scan is still a challenge, vulnerability management products have added some key features to help tame the numbers. While most of the following features were first introduced by risk-based vulnerability management vendors (e.g., Kenna Security, NopSec, RiskSense), many have trickled down in some form to basic network scanners. In a future group test, it will be interesting to compare the performance between these two categories.

  • Flexible search and filtering functions help analysts answer questions quickly.
  • Exploit and threat intelligence correlation separates theoretical risk from real-world risk. It also removes reliance on CVSS as the only quantifiable factor for prioritizing findings.
  • Asset criticality and contextual data (is the vulnerable host exposed to the public internet?) also help with prioritization.
  • Confidence scores also help prioritize. Some vulnerability checks can be 100% certain, while others have to guess. Knowing the difference is important (see the sketch following this list).
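To make these factors concrete, here is a minimal prioritization sketch in Python. The weights, field names (exploit_public, internet_facing, and so on), and scoring scale are illustrative assumptions, not how any particular vendor or risk-based platform computes its scores.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float              # base CVSS score (0-10)
    exploit_public: bool     # is a working exploit publicly available?
    exploited_in_wild: bool  # threat intel says it's actively used
    internet_facing: bool    # is the vulnerable host exposed externally?
    asset_criticality: int   # 1 (lab box) to 5 (crown jewels)
    confidence: float        # 0.0-1.0, how certain the check is

def priority(f: Finding) -> float:
    """Blend CVSS with exploit, exposure, asset, and confidence context."""
    score = f.cvss
    if f.exploit_public:
        score *= 1.3
    if f.exploited_in_wild:
        score *= 1.5
    if f.internet_facing:
        score *= 1.4
    score *= 0.8 + 0.1 * f.asset_criticality   # 0.9x to 1.3x
    score *= f.confidence                       # discount uncertain checks
    return round(min(score, 25.0), 1)           # cap for a stable scale

findings = [
    Finding(9.8, False, False, False, 2, 0.6),  # "critical" but theoretical
    Finding(7.5, True, True, True, 5, 1.0),     # lower CVSS, real-world risk
]
for f in sorted(findings, key=priority, reverse=True):
    print(priority(f), f)
```

Even this toy example shows the point: the internet-facing, actively exploited 7.5 outranks the theoretical 9.8 once context is factored in.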

The quality of vulnerability checks seems to have improved, but asset identification remains a significant issue. Linux, for example, is broadly used in network and IoT devices today. Most scanners can't tell the difference between a Linux server and a Linux-based appliance, and it's difficult to update a vulnerable IoT device when the scanner identifies it only as "Linux 2.6 kernel."

Admittedly, cataloguing the vast sea of IoT and network devices is challenging, but that's the job network scanners have committed to.

One aspect of vulnerability management that has drastically improved is workflow. Because these products were originally designed purely as security tools, vulnerabilities were once exported to spreadsheets and emailed to asset owners. Today, modern vulnerability management tools are designed for non-security folks as well. Role-based access control means that an asset owner can log in and see remediation recommendations for their assets and no one else's. Reporting is also more flexible and useful.

It has long been said that the spreadsheet is the primary competition in this market. When customers stop exporting findings to CSV, vendors will know they're making positive progress with UI, UX, and workflow.

Scanner architecture: getting the data

Qualys is well-known for having adopted a SaaS architecture long before anyone was using the acronym or knew it was pronounced "sass." All the commercial products we tested employ both on-premises components and cloud-based, SaaS-delivered components. Typically, these components include:

  • A SaaS-based console, managed by the vendor
  • Network scanning engines that either install as software packages or are available as complete virtual appliances compatible with most hypervisors. These network scanning engines send their results back to the SaaS-based console.
  • Cloud scanning engines that can be used for performing external vulnerability scans (scanning from an internet, "outside," perspective)
  • Optional agents, which can typically be installed on Windows, macOS, and a variety of Linux and Unix operating systems. Agents alleviate the need to run active, point-in-time network scans by collecting data and sending it back to either a local scan engine or the SaaS console on a regular basis. Agents are typically preferred in very large environments where active scanning is difficult or impossible. They are also ideal for monitoring remote systems on home networks or branch offices too small to justify deploying a network scanning engine (see the check-in sketch below).
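To illustrate the agent model described above, here is a minimal check-in sketch. The console URL, payload shape, and reporting cadence are hypothetical assumptions; real agents collect far richer data and use each vendor's own protocol.

```python
import json
import platform
import urllib.request

# Minimal agent check-in sketch. The console URL and payload shape are
# illustrative assumptions, not any vendor's API.
CONSOLE_URL = "https://console.example.com/api/agent/checkin"

def collect_inventory() -> dict:
    """Gather a tiny host inventory for the console to assess."""
    return {
        "hostname": platform.node(),
        "os": platform.platform(),
        "python": platform.python_version(),
    }

def checkin() -> None:
    """POST the inventory to the (hypothetical) SaaS console."""
    body = json.dumps(collect_inventory()).encode()
    req = urllib.request.Request(
        CONSOLE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print("Console responded:", resp.status)

if __name__ == "__main__":
    # A real agent would repeat this on a schedule (e.g., hourly) and fall
    # back to a local scan engine if the console is unreachable.
    checkin()
```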

In any scenario where one or more corporate LANs or datacenters exist, it will be necessary to deploy local scan engines to scan internal hosts. Even in an environment where vulnerability management agents are deployed, printers, network devices, IoT devices and many other device types must be assessed with a traditional network scan.

Many factors go into determining the hardware requirements for a host that will be designated a dedicated scan engine: whether scans are authenticated (which take more time to run and more resources to store), the overall number of assets to be scanned, and network latency. Vendors will typically provide a guide with hardware recommendations for different environment sizes. Scan engines can also be tuned to use more RAM and a larger number of scanning threads if the hardware is capable.

With larger, more complex networks, it may be necessary to consider a distributed scan architecture. Factors here include the number of assets and segmented networks. Another major factor is the device makeup in the environment — a domain controller and a switch take very different amounts of time to scan. Perhaps it makes more sense to have fewer, high performance scan engines in an environment with a largely flat network and a high number of assets to scan. In a highly segmented environment, perhaps a high number of scan engines running with more modest CPU, disk, and RAM makes more sense.
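A rough capacity estimate can help decide between a few large engines and many small ones. The sketch below is a back-of-the-envelope calculation; the per-asset scan time and concurrency figures are assumptions for illustration, not vendor benchmarks.

```python
# Back-of-the-envelope scan window estimate. All timing figures are
# assumptions for illustration, not measured vendor performance.

def scan_hours(assets: int, minutes_per_asset: float,
               concurrent_scans: int, engines: int) -> float:
    """Estimate wall-clock hours to scan `assets` hosts."""
    per_engine = assets / engines
    return (per_engine * minutes_per_asset) / concurrent_scans / 60

# 20,000 mixed assets, ~6 minutes per authenticated scan,
# 20 concurrent scans per engine:
for engines in (2, 5, 10):
    hours = scan_hours(20_000, 6, 20, engines)
    print(f"{engines} engines: ~{hours:.0f} hours")
```

Under these assumed numbers, two engines need roughly a 50-hour scan window while ten engines finish in about 10 hours, which is why segmentation and scan windows, not raw hardware, usually drive the engine count.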

Deploying an individual scan engine may take an hour or less. Often, the more important factor is how difficult it is, politically and process-wise, to get scan engines deployed into the necessary environments. Statements like “you’re not putting that on my network” aren’t unheard of.

Assuming cordial relationships between security and IT, no more than a few planning meetings should be necessary to get a half dozen scan engines up and running in roughly a week. Assuming the ability to push vulnerability management agents to remote hosts via device management software, covering maybe 80% of workstations and servers within that same week also seems achievable. We know, we know: stop laughing. Of course, there are organizations where this task takes a day and organizations where a year passes and it still isn’t done. Layer 8 (9, 10) problems are tough.

Use cases and strategies

Not everyone uses vulnerability management tools in the same way. For some, scanner output is the main source of a day's work. For others, the tools are seldom used, and only when necessary.

The vulnerability-driven organization might feel like it is running on a hamster wheel — always moving, but never getting anywhere. The output of the scanner drives analysis activities, kicks off patching processes, and generates endless meetings to check on remediation status.

Sometimes this workflow is necessary, but it is rarely productive over the long term.

The goal-driven organization uses vulnerability scan results to inform task selection. The goal-driven approach asks questions like, "Why are these systems missing so many patches in the first place?" and "How do these misconfigurations keep getting propagated, even after we've corrected them?" By going after specific goals and the root causes that let vulnerabilities get out of control, teams can make permanent progress. This approach is ideal for organizations that have a lot of catching up to do on patches and hardening.

The automated remediation approach is often adopted by more mature or cloud-first organizations. There are organizations that don't test or discuss the impact of security patches. Instead, they just automatically push them out as soon as they're available and ensure they're prepared to deal with any potential fallout if it comes.

In a typical DevOps shop, vulnerability scanning and patching are just parts of an automated build process. If tests fail, the process stops and the details of the failure are investigated.
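As a sketch of that pattern, the snippet below gates a build on scan results. The scan-cli command and its JSON output format are hypothetical stand-ins for whatever scanner a given pipeline actually invokes.

```python
import json
import subprocess
import sys

# Minimal CI gate sketch. "scan-cli" and its JSON output format are
# hypothetical stand-ins, not a real scanner's interface.
result = subprocess.run(
    ["scan-cli", "--target", "staging-app", "--format", "json"],
    capture_output=True, text=True, check=True,
)
findings = json.loads(result.stdout)

criticals = [f for f in findings if f.get("severity") == "critical"]
if criticals:
    for f in criticals:
        print(f"CRITICAL: {f.get('id')} on {f.get('host')}")
    sys.exit(1)   # non-zero exit fails the pipeline
print("No critical findings; build may proceed.")
```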

The second-opinion use case is common in mature and/or cloud-first organizations. With mature, mostly automated patching programs in place, vulnerability scans are typically used to ensure nothing is missed by those programs and that patches do, in fact, address the vulnerabilities they claim to fix.

Reporting and metrics are a common use case, often in addition to one of the others mentioned. When building formal metrics to share with upper management and the board, vulnerability scan results often feed the risk calculations and trends that produce these metrics. Most compliance needs also fit into this use case.
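As a simple illustration of turning scan history into a metric, the sketch below computes mean time to remediate and the count of still-open findings. The record layout (first_seen/fixed_on per finding) is an assumption for illustration; real exports vary by product.

```python
from datetime import date
from statistics import mean

# Toy remediation-trend metric from scan history. The record layout is an
# assumption for illustration, not any product's export format.
history = [
    {"id": "CVE-2021-1111", "first_seen": date(2021, 5, 3), "fixed_on": date(2021, 5, 20)},
    {"id": "CVE-2021-2222", "first_seen": date(2021, 5, 10), "fixed_on": date(2021, 6, 1)},
    {"id": "CVE-2021-3333", "first_seen": date(2021, 6, 2), "fixed_on": None},  # still open
]

fixed = [f for f in history if f["fixed_on"]]
mttr_days = mean((f["fixed_on"] - f["first_seen"]).days for f in fixed)
open_count = sum(1 for f in history if f["fixed_on"] is None)

print(f"Mean time to remediate: {mttr_days:.1f} days")
print(f"Currently open findings: {open_count}")
```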

Conclusion

Vulnerability scanners aren't quite as essential and central as they once were, but they're still necessary. Thanks to the explosion of IoT devices, that need is unlikely to change any time soon. However, scanners have a long way to go before they can provide useful results for IoT devices.

According to the Verizon DBIR team, vulnerabilities are exploited in less than 5% of all reported cyber incidents. Don't take this statistic the wrong way: when attackers discover an exploitable vulnerability, they will take advantage of it. This puts defenders in an awkward position. Vulnerability management is still an essential practice (and rightfully sits at #7 of the 18 CIS Controls), but this traditionally labor-intensive process shouldn't distract from more important security work.

Adrian Sanabria

Adrian is an outspoken researcher who doesn’t shy away from uncomfortable truths. He loves to write about the security industry, tell stories, and still sees the glass as half full.
