In today’s information security departments, no matter the maturity level, metrics are almost always a deliverable required by upper management to gauge the security posture of the company as well as department progress. What I have seen across my various positions in information security is that, oftentimes, these metrics fail to provide real value to the organization. If a security department is running a report out of a tool, it is really only providing information within the limits of that tool’s reporting capability. Even then, what do 1,000 “Medium” level vulnerabilities really mean?
Oftentimes, metrics fail to provide real value to the organization
The importance of metrics
In their simplest form, metrics are merely a type of measurement, and what they measure is determined by the configuration of various tools. I have found risk, threat counts and severity to be the most common metrics within the information security realm. However, what do those numbers mean to your environment? I’m certain most individuals reading this blog are all too familiar with the infamous weekly or monthly “status meeting,” where everyone in a department comes together to discuss current projects, share any relevant news and review vulnerability reporting. Having worked in a large Fortune 500 company, I often found myself in these meetings staring at absurd figures, wondering how they even pertained to the security of our environment. “This month our Windows servers had 3,291 vulnerabilities, down from 5,026 after patching. Linux servers had 729 vulnerabilities, down from 944 after updates.” So, are we really more secure because the more recent number is smaller? Are our Windows servers really 35% more secure because of the numerical difference in vulnerabilities?
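For reference, that 35% figure is nothing more than the raw drop in the count; here is a minimal sketch using the Windows numbers above:

```python
# The "35% more secure" figure is just the raw drop in the Windows
# vulnerability count quoted above -- it says nothing about which
# vulnerabilities were fixed or how much risk they carried.
before, after = 5026, 3291
drop = (before - after) / before
print(f"Raw count reduction: {drop:.1%}")  # ~34.5%, i.e., "35%"
```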
Ranking the criticality of assets, vulnerabilities and metrics
Without understanding vulnerabilities and their inherent risks as they pertain to your environment, these numbers are little more than arithmetic. A 35% reduction in total vulnerabilities does not equal a 35% improvement in the security of your environment if 25% of those vulnerabilities were “low risk” informational findings and the other 10% were open FTP ports that allow anonymous logins. Senior-level engineers and management need to identify, and then classify, their assets by criticality. After doing so, not only will they have quantifiable information to report, but that information will carry meaning unique to their environment. Once this is completed, a security program can begin to truly measure risk posture. For example, say Company A has 500 endpoints discovered in SecurityCenter™. The first step is simply to identify what those assets are. Are they test labs, or are they critical servers holding financial data? Once the assets have been sorted by function, criticality or other criteria, the company can perform various types of threat modeling.
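To make that concrete, here is a minimal sketch of the gap between a count reduction and a risk reduction, mirroring the example above: 25% of the remediated findings are informational and 10% are anonymous-FTP exposures. The finding categories and risk weights are illustrative assumptions, not output from any particular scanner:

```python
# Illustrative only: a 35% drop in finding counts is not a 35% drop in
# risk, because individual findings carry very different weights.
RISK_WEIGHT = {"informational": 0.1, "medium": 4.0, "anonymous_ftp": 9.0}

before = {"informational": 300, "medium": 600, "anonymous_ftp": 100}  # 1,000 findings
after  = {"informational": 50,  "medium": 600, "anonymous_ftp": 0}    # 650 findings

def total_risk(counts):
    return sum(n * RISK_WEIGHT[kind] for kind, n in counts.items())

count_drop = 1 - sum(after.values()) / sum(before.values())
risk_drop = 1 - total_risk(after) / total_risk(before)
print(f"Count reduction: {count_drop:.0%}")  # 35%
print(f"Risk reduction:  {risk_drop:.0%}")   # ~28% -- not the same thing
```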
For example, take a group of servers responsible for hosting a company’s internal instant messaging application. A report is generated and the servers are found to have a total of 10 “high severity” vulnerabilities. Do these qualify as critical systems requiring 100% uptime? Perhaps, or maybe the company applies a higher risk appetite to this group of servers. While they’re very important, it could be argued that they’re not exactly critical to the success of the company. As such, perhaps those 10 high-severity vulnerabilities have a longer acceptable timeframe to remain on the systems before being remediated.
Consider this same question for a group of servers supporting the accounting department. Does the threat to the company remain the same? I’m fairly confident most organizations would say no. I’m also sure few would dispute that 10 high-severity vulnerabilities on servers supporting a financial department carry a completely different level of importance than the same 10 vulnerabilities on servers supporting an instant messaging application. That is exactly why senior-level engineers and management need to get together before generating metrics and running reports on groups of assets.
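One way to operationalize that difference is to let asset criticality drive the remediation window rather than severity alone. The tiers and timeframes below are hypothetical policy choices, not a standard:

```python
# Hypothetical remediation-window policy: the same high-severity finding
# gets a different deadline depending on the criticality of the asset group.
from datetime import date, timedelta

REMEDIATION_DAYS = {
    ("critical", "high"): 7,    # e.g., servers supporting financial data
    ("moderate", "high"): 30,   # e.g., internal instant-messaging servers
}

def deadline(asset_criticality, severity, found=None):
    """Return the date by which a finding must be remediated."""
    found = found or date.today()
    return found + timedelta(days=REMEDIATION_DAYS[(asset_criticality, severity)])

print(deadline("critical", "high"))  # patch within the week
print(deadline("moderate", "high"))  # a longer, consciously accepted window
```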
Senior-level engineers and management need to get together before generating metrics and running reports on groups of assets
Creating meaningful metrics
There are many advantages to identifying and classifying the systems in your organization. Save yourself future confusion by taking the following steps to avoid reports that simply spit out a total vulnerability count (a short sketch after the list shows how the four steps might fit together):
- Identify: Know what you’re scanning. For example, does a Windows Server 2008 with an IP address of 10.10.10.4 support the marketing or finance department?
- Analyze: Know the risk to your organization. How much damage will be done if this system becomes compromised?
- Categorize: Obtain a high-level view of your security posture. Identify which assets belong to each area of your organization, and assign levels of criticality based on those findings.
- Execute: Generate meaningful metrics. Once the previous steps have been completed, you will have a solid foundation to generate reports with quantifiable and meaningful vulnerability and risk information as it pertains to your organization.
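Put together, the four steps might look something like this in a simple reporting script. The inventory, department names and weights are hypothetical stand-ins for whatever your asset database and scan export actually provide:

```python
# Hypothetical end-to-end sketch of the four steps above. In practice the
# inventory would come from your CMDB and the findings from a scan export.
from collections import defaultdict

# Identify: know what each scanned host is and who it supports.
INVENTORY = {
    "10.10.10.4": {"os": "Windows Server 2008", "dept": "finance"},
    "10.10.20.9": {"os": "Windows Server 2008", "dept": "marketing"},
}

# Analyze: how much damage a compromise in each area would cause.
DEPT_CRITICALITY = {"finance": 3.0, "marketing": 1.0}

# Categorize + Execute: roll findings up by department, weighted by criticality.
def report(findings):
    """findings: iterable of (ip, severity_score) pairs from a scan export."""
    summary = defaultdict(lambda: {"count": 0, "weighted_risk": 0.0})
    for ip, severity in findings:
        dept = INVENTORY.get(ip, {}).get("dept", "unassigned")
        summary[dept]["count"] += 1
        summary[dept]["weighted_risk"] += severity * DEPT_CRITICALITY.get(dept, 1.0)
    return dict(summary)

scan = [("10.10.10.4", 7.5), ("10.10.10.4", 9.8), ("10.10.20.9", 9.8)]
for dept, row in report(scan).items():
    print(f"{dept}: {row['count']} findings, weighted risk {row['weighted_risk']:.1f}")
```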
Make metrics meaningful to your organization
Make metrics meaningful to your organization: identify your assets, analyze their criticality, categorize their areas of use, and execute metrics and reports based on what matters most to your business. Visit our Security Metrics page to learn more about creating meaningful metrics with Tenable products.