In my last post we started a discussion around cybersecurity economics, answering two simple questions: How much should we spend on cybersecurity, and what should we spend it on? To answer these questions I went hunting for a financial model. From my research, one of the better models for cybersecurity cost justification is an ROI model from Booz Allen Hamilton, Inc. It's a nice model, but as discussed in my first post, it falls far short of answering my questions.
In this post I'm taking a different tack. Rather than focusing on a hard number – thou shalt spend $5M on cybersecurity – I'm taking a step back and focusing on the fundamental cybersecurity economics. My goal is a cost-benefit analysis: keep increasing cybersecurity spending as long as the incremental benefit is greater than the cost. The point where the incremental benefit equals the cost is the limit of our spending. This will at least put an upper limit on the answer to my first question of how much we should spend.
NPV vs. IRR vs. ROI
As mentioned previously, most analyses I’ve seen for cybersecurity spending are based on ROI. Despite some feeble attempts, I believe a true return is impossible with cybersecurity: the best we can do is a reduction in potential loss, or potential costs. For this reason some people are using the term return on security investment (ROSI) rather than ROI. As mentioned in the previous post, this is the premise of the Booz ROI model. Based on the Booz model:
ROSI = (Benefits – Costs) / Costs
So, let's invest $1M in an identity management system (cost) that reduces our expected loss by $2M per year (benefit). The identity management system costs us $200,000/year to operate, so over three years our benefits are $6M against $1.6M in total costs, a net benefit of $4.4M (a three-year ROSI of 275%, assuming future benefits are not discounted, an assumption that will be revised in the next example).
If we are using ROSI as our justification measure then we’d most likely move forward with this investment.
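Under the assumption that the $2M loss reduction accrues each year, the arithmetic works out like this (a quick Python sketch):

```python
# ROSI per the Booz model: (Benefits - Costs) / Costs.
# Figures from the example above: $2M/year loss reduction over 3 years,
# $1M up-front investment, $200K/year to operate (no discounting yet).
years = 3
benefits = 2_000_000 * years          # $6M in reduced expected loss
costs = 1_000_000 + 200_000 * years   # $1.6M total cost of ownership
net_benefit = benefits - costs        # the $4.4M figure
rosi = net_benefit / costs            # the ratio form of ROSI
print(f"Net benefit: ${net_benefit:,}  ROSI: {rosi:.0%}")
```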
Unfortunately, it's not that simple. There are two problems with using ROSI to justify cybersecurity investments. First, ROSI is a historical figure, whereas I'm using it to project future benefits; as discussed below, this can dramatically overstate the economic rate of return. Second, maximizing ROSI is not the same thing as optimizing corporate investment. In fact, from the CFO's perspective, the goal of the firm is not maximizing ROSI, but rather deriving the optimal level of cybersecurity investment for the firm. These are two very different goals. The bottom line is that walking into a board meeting with ROSI as a cybersecurity spending justification will definitely get the discussion with the CFO off on the wrong foot!
Two terms that CFOs understand are Internal Rate of Return (IRR), the economic rate of return, and Net Present Value (NPV), which compares anticipated benefits and costs over time. What these measures do that ROSI does not is discount future benefits and costs back to today's value (present value, PV).
The formula for NPV is pretty straightforward and is based on k, the discount rate: NPV = -C0 + Σ Bt / (1 + k)^t, where C0 is the initial investment and Bt is the net benefit in year t.
With NPV we have three choices:
- Invest if NPV > 0
- Reject the investment if NPV < 0
- Be indifferent between investing and not investing if NPV = 0
Taking the same investment/benefits from above and assuming a 15% discount rate, our NPV is $3.1M.
In the above example, we'd make the investment because NPV > 0; however, the NPV is substantially less than the ROSI.
The reality of cybersecurity investments is that there is significant risk involved, particularly when projecting expected loss reductions. What I really like about NPV is that we can reflect this risk in the discount rate, k. For example, what if we double k to 30% to reflect the highly uncertain nature of our projections? This drops our NPV to $2.3M.
Now we're looking at an NPV of $2.3M versus our initial ROSI projection of $4.4M and initial NPV of $3.1M. Most likely we'd still make the investment since NPV > 0, but it's not the slam dunk we saw with the ROSI, or even the initial NPV.
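Both NPV figures can be reproduced in a few lines of Python; the $1.8M/year cash flow is the $2M annual benefit less the $200K operating cost:

```python
def npv(c0, cash_flows, k):
    """Net present value: each year's net benefit discounted at rate k, less C0."""
    return -c0 + sum(cf / (1 + k) ** t for t, cf in enumerate(cash_flows, start=1))

# $1M up front; $2M/year benefit minus $200K/year operating cost = $1.8M/year net.
flows = [1_800_000] * 3
print(f"NPV at 15%: ${npv(1_000_000, flows, 0.15):,.0f}")  # roughly $3.1M
print(f"NPV at 30%: ${npv(1_000_000, flows, 0.30):,.0f}")  # roughly $2.3M
```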
The other key economic measure I mentioned above is Internal Rate of Return (IRR). I won't get into the details here, but with IRR the goal is finding the discount rate where the initial investment (C0) equals the PV of future net benefits. With IRR, invest if IRR > k, reject the investment if IRR < k, and be indifferent to investing or not investing if IRR = k.
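IRR has no closed form for multi-year cash flows, but a simple bisection search finds it. Here's a sketch using the same cash flows as the NPV example:

```python
def npv(c0, cash_flows, k):
    return -c0 + sum(cf / (1 + k) ** t for t, cf in enumerate(cash_flows, start=1))

def irr(c0, cash_flows, lo=0.0, hi=10.0, tol=1e-6):
    """Find the discount rate where NPV crosses zero, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(c0, cash_flows, mid) > 0:
            lo = mid  # NPV still positive, so the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

rate = irr(1_000_000, [1_800_000] * 3)
print(f"IRR: {rate:.0%}")  # far above any plausible cost of capital k
```

For this investment the IRR is enormous, which is another way of seeing why the NPV stays positive even at a 30% discount rate.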
My takeaway here is that k (the cost of capital) is critical. Adjusting k is how companies adjust for risk: higher-risk investments carry a higher k, and the higher the k, the lower the NPV. Of course, figuring out the level of risk is necessary before adjusting k (more on this below).
The Gordon-Loeb Model
So far, we've been talking about discrete costs and benefits. All the above examples should be funded since NPV > 0. Yet, I'm still looking to figure out the optimal investment strategy for the company, or "how much should we spend on cybersecurity?" Another way to look at this: at what point do the incremental benefits fall below the incremental costs?
In 2002, Professors Lawrence Gordon and Martin Loeb published the Gordon-Loeb Model (see: https://en.wikipedia.org/wiki/Gordon-Loeb_Model) for information security economics. Though the math and underlying analysis go well beyond the scope of this blog, the high level cybersecurity economics are worth reviewing.
Their analysis looks at three factors:
- The potential loss from a security breach
- The probability that a loss will occur
- The effectiveness of additional investments in security
When looking at Figure 1, the first thing to notice is the shape of the curve: the benefits of investment increase at a decreasing rate. This is crucial because it shows that at some point the expected net benefits of additional investment start decreasing relative to its cost. Another way of looking at this: my first $1M spent on cybersecurity may be far more effective than my fifth $1M. I see four key takeaways here:
- Even a little investment in cybersecurity can have a big impact
- There is a limit at which point we can’t economically justify spending more on cybersecurity
- We’ll never achieve perfect security
- We should consider the point of optimal investment (z*) as the point where we define the beginning of our residual risk (more about this in the next post)
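As a rough illustration (the parameters below are assumed, not the authors' own numbers), these takeaways can be seen with the class-one security breach probability function from the 2002 paper, s(z, v) = v / (a·z + 1), where z is security spending:

```python
# Gordon-Loeb class-one breach probability: s(z, v) = v / (a*z + 1).
# All parameter values here are illustrative assumptions.
L = 10_000_000   # potential loss from a security breach
v = 0.5          # baseline probability the loss occurs
a = 1e-6         # productivity of security spending (assumed)

def enbis(z):
    """Expected net benefit of investing z: loss reduction minus the spend itself."""
    return (v - v / (a * z + 1)) * L - z

# Scan spending levels for z*, the point where marginal benefit meets marginal cost.
candidates = range(0, 5_000_000, 1_000)
z_star = max(candidates, key=enbis)
print(f"Optimal spend z*: ${z_star:,} versus expected loss vL = ${v * L:,.0f}")
```

Note that z* lands well below the expected loss vL, consistent with the authors' finding that optimal spending never exceeds vL/e (roughly 37% of the expected loss).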
I urge everyone to read Managing Cybersecurity Resources: A Cost-Benefit Analysis by Gordon and Loeb. It’s the best cybersecurity economics discussion out there. I’ve probably read it three times and each time I learn something new. Professors Gordon and Loeb have two key findings from their analysis/model:
“One key finding from the model: The amount a firm should spend to protect information is generally no more than one-third or so [37%] of the projected loss from a breach. Above that level, in most cases, each dollar spent will reduce the anticipated loss by less than a dollar.”**
“A second key finding: It doesn’t always pay to spend the biggest share of the security budget to protect the information that is most vulnerable to attack, as many companies do. For some highly vulnerable information, reducing the likelihood of breaches by even a modest amount is just too costly. In that case, companies may well get more bang for their buck by focusing their spending on protection for information that is less vulnerable.”**
Are We There Yet?
So, how can I use the Gordon-Loeb Model to answer my question about how much we should spend on cybersecurity? There are four steps outlined by Gordon and Loeb.
It is important to note that the model is focused on loss from a data breach. This is only one class of breach, though the fundamentals should still be the same. The process is as follows:
- Estimate the potential loss (L) from a security breach for each set of information; this maps to the value of the information (high, medium and low)
- Estimate the likelihood that an information set will be breached by examining its vulnerability/threat (v) to attack
- Create a grid with all the possible combinations of the first two steps, from low value (low L, low v) to high value (high L, high v)
- Focus on spending where it should reap the largest net benefits based on productivity of investments
From Table 4, the average potential loss (L) from a medium vulnerability/threat (v) against a medium-value information set is $25M. Based on the Gordon-Loeb Model, we should spend on average no more than $9.25M (37% of $25M) to protect this information. Please note that the above table is a summary of much more detailed work done by Gordon and Loeb.
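The grid-and-cap process can be sketched as follows; every figure except the $25M medium/medium cell is hypothetical:

```python
# Hypothetical loss estimates (L) per information-set value and vulnerability (v);
# only the medium/medium $25M cell comes from the Table 4 summary above.
grid = {
    ("low",    "low"):    5_000_000,
    ("medium", "medium"): 25_000_000,
    ("high",   "high"):   60_000_000,
}

GL_CAP = 0.37  # Gordon-Loeb rule of thumb: spend no more than ~37% of projected loss

for (value, vuln), loss in grid.items():
    cap = GL_CAP * loss
    print(f"{value}-value / {vuln}-v set: loss ${loss:,} -> spend at most ${cap:,.0f}")
```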
To me, the most powerful finding of the Gordon-Loeb Model is that not all cybersecurity spending is equal: effectiveness per dollar spent is often lower for the highest-risk assets (highest v and highest L) and for the lowest-risk assets than for medium-risk assets; and the benefits of spending increase at a decreasing rate.
Where does the Gordon-Loeb Model leave us? It's clearly a cybersecurity economics framework we can use to establish the boundary conditions for cybersecurity spending: on average, we should spend no more than 37% of our potential loss (L). But, as Professors Gordon and Loeb wrote in the WSJ:
“However, this approach is best thought of as a framework, not a panacea, for making sound information-security investments. It is not a magical formula that can be used to churn out exact answers. Rather, it should be used as a complement to, and not as a substitute for, sound business judgment.”**
Another way of looking at this is:
“In theory there is no difference between theory and practice. But in practice, there is.” – Yogi Berra
So, let's review: we still need to figure out how much we should spend on cybersecurity and what we should spend it on. It appears that we may have an upper limit on spending, as long as we have a clear understanding of our potential loss and vulnerability/threat.
Per Professors Gordon and Loeb's warning, we need a way to move from the theory of potential loss to the practice of potential loss, preferably grounded in sound business judgment. If we can do this, we can figure out what to spend on cybersecurity.
I believe the missing link between theory and practice is a proper and rigorous accounting for risk (including residual risk). Calculating risk occurs at the intersection of all the factors we've been discussing: loss, threats, vulnerabilities, costs, benefits and sound business judgment.
The great news is that there is a framework for calculating risk called Factor Analysis of Information Risk (FAIR). In my quest, when I learned about FAIR I was so excited I ran out and became certified in it! As I'll discuss in my next post, I'm hoping that combining the Gordon-Loeb Model and FAIR, and using NPV, will give us the framework we need to determine what we should spend on cybersecurity and, potentially, what we should spend that money on.
*Gordon, L. A. and M. P. Loeb, 2002, “The Economics of Information Security Investment,” ACM Transactions on Information and System Security, pp. 438-457.
**Gordon, L. A. and M. P. Loeb, 2011, “You May Be Fighting the Wrong Security Battles,” The Wall Street Journal, September 26.