Ben has been in various IT roles for about 30 years now, bouncing from desktop to system admin to security to various dev positions to infrastructure and finally back to security again. He joined Recon a little over 2 years ago and serves as the Manager of Technology & Engineering. Ben has spent nearly a decade as a manager for just about every type of distributed infrastructure (OS, networks, storage, cloud services, etc.). That experience showed him, first-hand, how debilitating and ineffective "traditional" vulnerability management can be. His first encounter with vulnerability scans in the late '90s highlighted their unwieldy nature and questionable data quality. Upon returning to a security-specific role, he quickly focused on finding ways to derive meaningful value from a process that had historically produced only misery for him and his teams.
The first thing one must understand is that, in the vulnerability management space, the data is often unwieldy and misleading. The foremost problem is how the data is gathered. Even when using the gold-standard approach of an endpoint agent, current vulnerability scanners take a simple approach: identify the system's OS, catalog the installed software (often through a simple filesystem or registry scan), then list every CVE ever reported for those versions. This is frequently supplemented by an overlapping list of vulnerabilities discovered via port scans, often performed with downgraded encryption on every connection made. If you rely solely on network scanning, tools will make their best guess at the OS and software based on how a system responds on well-known ports (think nmap), then just dump the list of vulnerabilities they think might apply.
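To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical package and CVE data) of the naive "match installed versions against every reported CVE" logic most scanners boil down to. Note that nothing in it checks whether a vulnerable code path is reachable, whether a mitigation is in place, or whether the finding matters in your environment.

```python
# Minimal sketch of the naive scanner logic described above.
# The package inventory and CVE catalog are hypothetical examples,
# not real scanner output or a real vulnerability feed.

installed = {
    "openssl": "1.1.1k",
    "nginx": "1.18.0",
}

# Pretend CVE catalog: (cve_id, package, affected_version, cvss_score)
cve_catalog = [
    ("CVE-EXAMPLE-0001", "openssl", "1.1.1k", 9.8),
    ("CVE-EXAMPLE-0002", "nginx", "1.18.0", 7.5),
    ("CVE-EXAMPLE-0003", "nginx", "1.20.1", 5.3),
]

# "Every CVE ever reported for those versions" -- no exploitability,
# reachability, or mitigation context, just string matching.
findings = [
    cve for cve in cve_catalog
    if cve[1] in installed and installed[cve[1]] == cve[2]
]

for cve_id, pkg, version, score in findings:
    print(f"{cve_id}: {pkg} {version} (CVSS {score})")
```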
The result, whether using an agent or network-only scans, is an extensive report containing many vulnerabilities that simply don't apply, or don't matter, in your environment.
There are, of course, folks trying to fix this issue, so we may not have to suffer through the darkness forever. But for now, this is our world. Still, not all is lost; if you just roll with it, there are valuable kernels of knowledge within these extensive reports.
It's a common thought process amongst intelligent security professionals that prioritizing the highest-rated vulnerabilities first is an effective solution. Operationally, however, this assumption fails in the reality of enterprise IT environments. Using a strictly priority-driven process is a bad bargain, as it assumes that priority is static, which it never is.
Even assuming a monthly update cycle (Patch Tuesday), you’re asking a team to fix all the things in the 4-5 weeks before the next batch lands. When they fail to do so, the next month brings a new batch, forcing you to reshuffle the deck with the existing issues, plus the new ones. The simple fact is that new CVEs and POCs drop daily (looking at you, Chrome), and can drastically change the risk rating you should apply to the CVEs you already have.
The only real, effective prioritization you should plan to achieve is more dynamic and focused, asking "What is the most important thing we can be doing right now?"... and believe it or not, that approach is enough.
It must be clearly stated: CVSS Measures Severity, not Risk.
The CVSS Specification Document itself clarifies this - “The CVSS Specification Document has been updated to emphasize and clarify the fact that CVSS is designed to measure the severity of a vulnerability and should not be used alone to assess risk.”
Deciding how risky a CVE is based on its CVSS score? Take the time to read the CVSS specification. It makes key assumptions that you have to learn to account for—namely, that the attacker has detailed knowledge of, and access to, the system in question.
It is important to realize that this is a never-ending, continuous operational process, not a project. This fact must be communicated clearly to folks, especially at the management level, as they don't have the context of working directly with these systems. There won't be a day when you dust your hands off and simply stop tracking vulnerabilities.
Another key piece to communicate to management, to ensure you're not setting your team (or another) up for failure based on things they have no control over, is that this is a measurement of the software ecosystem, not of your systems staff. These risk levels are an amalgamation of many factors, most of which are outside any one team's direct control. While your teams influence security posture, their performance should not be judged on the perceived risk at any given time.
Vulnerability management serves as a check, not a primary driver, on existing patch management and configuration management programs. If effective patch management is lacking, the vulnerability management process will only confirm that this is true. Addressing fundamental operational weaknesses (like broken patch management) is the crucial first step.
While the specific number may fluctuate, a substantial percentage of vulnerability data can, and should, be quickly ignored. Tracking the monthly swing of CVE releases and patches is a waste of time. The true value lies in identifying and focusing on issues that have fallen outside of that typical process; those are the ones actually worth looking into.
How do we shift from spinning our wheels to valuable insight?
You need a few data points for this process: the raw scanner findings, the rescored severity values, and your normal patching cadence. Integrate all of these data points using appropriate tools that you and your team are comfortable with, such as Power BI or Python/SQLite, that can handle the volume.
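As one concrete option, here is a minimal sketch of the Python/SQLite route. The CSV file names and the assumption that each source exports cleanly to CSV are illustrative, not prescribed; the point is simply to land everything in one queryable place.

```python
# Minimal sketch: load hypothetical CSV exports into SQLite so the
# scanner data, rescores, and cadence info can be queried together.
# File names and column layouts are illustrative assumptions only.
import csv
import sqlite3

conn = sqlite3.connect("vuln_mgmt.db")

def load_csv(table: str, path: str) -> None:
    """Create a table from a CSV file, treating every column as text."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}"' for c in header)
    placeholders = ", ".join("?" for _ in header)
    conn.execute(f'DROP TABLE IF EXISTS "{table}"')
    conn.execute(f'CREATE TABLE "{table}" ({cols})')
    conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', data)
    conn.commit()

# Hypothetical exports -- swap in whatever your scanner and rescoring
# sources actually produce.
load_csv("findings", "scanner_findings.csv")
load_csv("rescores", "rescores.csv")
load_csv("cadence", "patch_cadence.csv")
```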
This step is key to isolating the actionable items: filter the integrated data down to the findings that have fallen outside your normal patch cadence and still rate as significant after rescoring.
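Continuing the hypothetical SQLite layout sketched above, the filter itself can be a single query. The table and column names (first_seen, cadence_days, rescored_priority) and the priority threshold are assumptions for illustration, not a prescribed schema.

```python
# Sketch of the filtering step against the hypothetical tables above:
# keep only findings older than the asset's normal patch cadence and
# still rated high after rescoring. The casts are needed because the
# loader above stored every column as text.
import sqlite3

conn = sqlite3.connect("vuln_mgmt.db")

query = """
SELECT f.asset,
       f.cve_id,
       CAST(r.rescored_priority AS REAL)          AS priority,
       julianday('now') - julianday(f.first_seen) AS days_open
FROM findings AS f
JOIN rescores AS r ON r.cve_id = f.cve_id
JOIN cadence  AS c ON c.asset  = f.asset
WHERE julianday('now') - julianday(f.first_seen) > CAST(c.cadence_days AS REAL)
  AND CAST(r.rescored_priority AS REAL) >= 7.0
ORDER BY priority DESC, days_open DESC;
"""

for asset, cve_id, priority, days_open in conn.execute(query):
    print(f"{asset}: {cve_id} open {days_open:.0f} days (priority {priority})")
```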
The process for figuring out where you can make the most impact in the vulnerability space is now very simple.
The final step is to articulate the value derived from this filtered list and define your Key Risk Indicators (KRIs) and Key Performance Indicators (KPIs). This refined list represents the operational exceptions that truly demand attention, and it frees your organization from chasing high vulnerability scores that are irrelevant to your environment.
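As an illustration only (the metric choices and the 30-day threshold below are assumptions, not recommended values), a few simple indicators can be computed straight from that filtered exception list: how many exceptions exist, how many have aged past a threshold, and how long they have been open on average.

```python
# Illustrative KRI/KPI calculations over the filtered exception list.
# The metrics and the 30-day threshold are example assumptions only.
from statistics import mean

# Each tuple: (asset, cve_id, priority, days_open) from the query above.
exceptions = [
    ("web-01", "CVE-EXAMPLE-0001", 9.8, 45.0),
    ("db-02", "CVE-EXAMPLE-0002", 7.5, 12.0),
]

kri_open_exceptions = len(exceptions)                          # KRI: findings outside cadence
kri_aged_exceptions = sum(1 for e in exceptions if e[3] > 30)  # KRI: open longer than 30 days
kpi_mean_days_open = mean(e[3] for e in exceptions) if exceptions else 0.0  # KPI: average age

print(f"Open exceptions: {kri_open_exceptions}")
print(f"Exceptions open > 30 days: {kri_aged_exceptions}")
print(f"Mean days open: {kpi_mean_days_open:.1f}")
```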
While traditional vulnerability management can be ineffective and time-consuming, applying realistic expectations and scalable processes lets you derive genuine value from your vulnerability scanning data, using it to assess your environment and find actionable improvements on an ongoing basis.