How to prove your value in information security
With companies slashing budgets, it’s crucial to evolve your methods – and make a convincing business case
By Jack Jones Posted on 2 September 2014
In the "good old days" of information security (maybe eight to 10 years ago) an organization's information security program was considered mature if it had aligned its policies with the ISO 27001 standard and could pass a standard audit checklist of common controls. Those were simpler days.
Today, for many organizations, information security has become much more complicated:
- The use of technology has become much more complex — e.g., cloud services and business interconnectivity.
- The threat landscape has continued to evolve.
- Regulatory expectations have become more stringent.
- Budget belts have tightened.
This last element — the budget — is particularly important to the point of this article because it implies the need to do more with less and build more meaningful business cases for funding.
Jack Jones has worked in technology for over 30 years, the past 25 in information security and risk management. He has a decade of experience as a Chief Information Security Officer (CISO) with three different companies, including Nationwide Insurance. His work there was recognized in 2006 when he received the Information Systems Security Association (ISSA) Excellence in the Field of Security Practices award. In 2007, he was selected by Tech Exec Networks as a finalist for the Information Security Executive of the Year, Central United States, and in 2012, he was honored with the CSO Compass Award for leadership in risk management.
Jones, who lives in Spokane, Washington, serves on the ISACA CRISC Certification Committee and the ISC2 Ethics Committee, and is the President and Co-founder of CXOWARE. He is the author and creator of the Factor Analysis of Information Risk (FAIR) framework. He writes about that system in his book Measuring and Managing Information Risk: A FAIR Approach, which was just published by Elsevier.
Without reasonable efficiency and sufficient funding based on strong business cases, the odds of an organization keeping up with the evolving risk landscape are questionable, at best. Unfortunately, most information security programs haven't evolved their practices to become more efficient and more appropriately funded. Instead, they've piled on new technologies and checklists and continued to fight for budget, claiming the need to follow "best practices."
This "money pit" strategy has worn thin for many boards of directors, and even some regulators are beginning to insist that Chief Information Security Officers (CISOs) provide more meaningful rationale for the money they're spending and the inherent limitations information security imposes on the business.
Forget "high/medium/low" risk – show the math!
Efficiency implies making comparisons between, for example, which problems or opportunities are most important, and which solutions are likely to be most cost-effective. These comparisons require measurement — meaningful measurement, that is. Historically, the information security profession has relied on simple qualitative High/Medium/Low rating scales to label the importance of the issues it faces. This is fine as far as it goes, but it doesn't go very far. Dividing up the entire continuum of significance into just those three buckets (or five buckets, in some cases) leaves a lot of unanswered questions, such as:
- Which of the "high-risk" problems are most important to address?
- How much difference is there between the lowest "high-risk" problem and the highest "medium-risk" problem?
- What does "high risk" equate to in business terms? In other words, how much should the business care about issues rated "high risk"?
From an operational perspective, the inability to answer the first two questions means that a CISO is hamstrung in their ability to reliably identify and focus on the most critical issues. This also makes it difficult to identify which solutions are likely to be most cost-effective.
The inability to answer the third question presents yet another problem. Without being able to express risk in business terms it's too easy for personal bias, myth, and commonly held misperceptions to affect the risk rating applied to issues — i.e., does that "high-risk" issue really represent material risk to the business? Over the years I've seen many information security issues rated "high risk" that, following a little scrutiny, turned out to be immaterial. This not only severely affects an organization's ability to manage risk efficiently, it also affects the credibility of the information security organization and the CISO.
Make a convincing business case
The inability to answer the third question above also means that business executives are unable to effectively prioritize information security issues amongst all of the other things on their plates. Sales and marketing come to the annual budget wars with projections of increased revenue. The Chief Operating Officer comes to the table with the operational costs associated with keeping the lights on. Information security comes to the table with a chart littered with "high-risk" issues. It's not a fair fight.
If the organization's management team is paranoid, it's likely that they'll spend more on information security than is warranted. If they're less paranoid or fundamentally skeptical, it's more likely that information security will be under-funded. Either way, the organization loses. In the first case, those resources unnecessarily applied to information security are unavailable for other business imperatives, like growth. In the latter case, the organization ends up with more information security exposure than it would like. In either case, management is making poorly informed decisions.
Creating a value proposition for information security
At the end of the day, information security's value boils down to its effect on how often losses occur and how bad those losses are. As a result, everything done in the name of information security can be measured in those terms. This provides a number of key advantages:
- Loss frequency and magnitude can be expressed in financial terms, which is inherently meaningful to executives. This also makes comparisons against other financially expressed business cases (e.g., marketing and sales) much easier, and makes it possible for executives to decide more explicitly whether a particular issue is relevant to them and where it stands in the grand scheme of things.
- CISOs can prioritize more effectively. Instead of having a bucket of, for example, 10 "high-risk" issues, they can recognize that the top four issues in that bucket represent 90 percent of the total loss exposure, and that the top issue represents twice as much loss exposure as the next one down in the list.
- CISOs can better compare their options for treating risk issues. Sometimes "best practice" (or common practice) is the most cost-effective option. Sometimes it's not. By measuring the cost-effectiveness of various options by their expected effect on the frequency and/or magnitude of loss, a CISO can make better informed choices.
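To make the prioritization idea above concrete, here is a minimal sketch of ranking issues by annualized loss exposure (expected loss-event frequency times expected loss magnitude). The issue names and dollar figures are hypothetical illustrations, not from the article:

```python
# Hypothetical sketch: rank risk issues by annualized loss exposure (ALE),
# where ALE = expected loss events per year * expected loss per event.
# All names and numbers below are illustrative.

issues = [
    {"name": "Unpatched internet-facing servers", "freq_per_year": 0.5, "loss_per_event": 2_000_000},
    {"name": "Stale third-party access",          "freq_per_year": 2.0, "loss_per_event": 150_000},
    {"name": "Lost or stolen laptops",            "freq_per_year": 6.0, "loss_per_event": 25_000},
    {"name": "Weak internal password policy",     "freq_per_year": 0.1, "loss_per_event": 500_000},
]

for issue in issues:
    issue["ale"] = issue["freq_per_year"] * issue["loss_per_event"]

# Sort from largest to smallest exposure so the "top" issues are explicit,
# rather than lumped together in one qualitative "high-risk" bucket.
ranked = sorted(issues, key=lambda i: i["ale"], reverse=True)
total = sum(i["ale"] for i in ranked)

for i in ranked:
    print(f'{i["name"]}: ${i["ale"]:,.0f}/yr ({i["ale"] / total:.0%} of total exposure)')
```

Even this toy version shows the advantage over rating scales: the top issue here carries more than three times the exposure of the next one down, a distinction a "high/medium/low" label cannot express.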
Getting there from here
You're probably familiar with the piece of wisdom that says (paraphrased), "You can't manage what you can't measure." That's essentially the gist of everything discussed here.
In order to solve this measurement problem for information security though, we need to extend that phrase to include, "…and you can't measure what you haven't defined" — or at least defined well.
To get my point across, let me pose a question. How likely would you be to volunteer for a ride on a space mission if you knew that the engineers and scientists who designed the rocket and planned the mission couldn't agree on the meaning of foundational terms like mass, weight and velocity? Not likely, I'd bet. Yet that's exactly where the information security profession is from a nomenclature perspective.
To prove my point, pick up two books on information security by two different authors and you'll very likely find that they've used foundational terminology like "risk," "threat," "vulnerability" and "incident" differently from one another. Odds are very good that within each book, the authors have been inconsistent in their own use of these terms. Adding to the confusion are many of the glossaries in information security industry standards. Sometimes a glossary will show multiple definitions for a term. In other cases, a definition will be so unclear or convoluted as to be unusable. This being the case, it's not hard to see why meaningful measurements have been problematic.
Another obstacle has been a prevailing belief that information risk can't be accurately measured quantitatively for a number of reasons, including:
- Incomplete data. A commonly heard concern is that change in the landscape occurs too rapidly, making the useful lifetime of data too short. Another is that because the adversary is intelligent it's not possible to know with certainty when, how, or where the next attack will come.
- A landscape that is too complex. This is really just another "insufficient data" problem, but focused on imperfect information about the structure of the landscape and all of the interconnections.
Another common concern is that quantitative analysis takes too much time. The underlying assumption is that in order to perform quantitative analysis you need a significant amount of data, which takes time and resources away from "doing security."
There's no question that quantitative analysis takes longer than sticking a wet finger in the air and proclaiming "high/medium/low risk". That said, good quality quantitative analysis requires much less data than commonly believed. As Douglas Hubbard states (and demonstrates) in his book How to Measure Anything, you have more data than you think you do, and you need less data than you think you do.
The problem with these concerns about data is they ignore the fact that qualitative estimates (high/medium/low, etc.) suffer from the very same data-related challenges, plus the difficulties discussed earlier in this article. So the question isn't whether data is imperfect, it's whether we make the best possible use of the available data and are able to logically and rationally defend the results.
Fortunately, there are a number of well-established methods commonly used in other disciplines for effectively leveraging imperfect data — Monte Carlo simulation and calibrated estimation to name two. With these and other tools in our kit, we can significantly improve the quality of our risk statements, become far more efficient, and increase the value of information security.
Summing it up
At the end of the day, a CISO's job is not to "do information security stuff." Their responsibility is to help their organization cost-effectively manage how often losses are likely to occur and how bad those losses are when they do occur. Information security is simply a means to that end. Doing this well is where CISOs earn their keep and earn a seat at the executive table. It's an inherently difficult task that is not possible today without mature risk management practices.