Disarming Cyber Weaponization

This post is the third in a series (Intro, Reconnaissance) aligning the 20 Critical Security Controls (CSC) from the Center for Internet Security (CIS) to the seven steps of the Lockheed Martin Cyber Kill Chain (CKC™). As I wrote in the intro post, I believe it is time to rethink the way we go about protecting our assets and building our cybersecurity practices. Mapping the CIS Critical Security Controls (CSC) against the CKC™ yields a relatively short list of actions that dramatically reduces risk. This approach also aligns well with the NIST Cybersecurity Framework and the NIST Cybersecurity Framework Controls Factory Model (NCSF-CFM) that I wrote about previously.

Stopping the attack at this point is extremely difficult because, from the perspective of corporate IT, weaponization occurs offline. However, if the organization picks up on the recon activity, it may be able to anticipate and block the weaponization that follows. For example, if we see SQL scans, the likely target is a SQL injection weakness. Or, if the scans are probing Apache rev/patch levels, that could indicate a pending exploit such as a Struts vulnerability.

At this stage, I see three primary defensive moves to deter potential weaponization:

  1. Actively pursue threat intelligence to track current weaponization techniques
  2. Deploy honeypots as a means to drive the adversary to invest in a delivery mechanism against a vulnerability resident in the honeypot
  3. Deploy tools and training to detect elements of recon as early indicators of potential delivery vectors. Also, prepare the Incident Response team to identify possible attack vectors based on recon artifacts
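
To make the third move a bit more concrete, below is a minimal, illustrative sketch of how an IR playbook might map recon artifacts to the delivery vectors they most plausibly foreshadow. The artifact names and mappings are my own assumptions that echo the SQL-scan and Apache/Struts examples above; they are not a standard taxonomy.

# Illustrative only: map observed recon artifacts to the delivery/exploit
# vectors they plausibly foreshadow so IR can prioritize its watch lists.
# The artifact keys and mappings are assumptions for this sketch.
RECON_TO_VECTOR = {
    "sql_port_scan": ["SQL injection against exposed database-backed apps"],
    "apache_version_probe": ["Apache/Struts exploit matching the detected rev/patch level"],
    "owa_login_burst": ["Password spraying or credential stuffing against OWA"],
    "dns_zone_transfer_attempt": ["Targeted phishing built from harvested host and naming data"],
}

def likely_vectors(observed_artifacts):
    """Return candidate delivery vectors implied by observed recon artifacts."""
    vectors = []
    for artifact in observed_artifacts:
        vectors.extend(RECON_TO_VECTOR.get(artifact, ["Unmapped artifact: review manually"]))
    return vectors

if __name__ == "__main__":
    print(likely_vectors(["sql_port_scan", "apache_version_probe"]))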

The key CIS Critical Security Controls to implement to disrupt the weaponization step are CSC9, CSC17, and CSC19 (along with the foundational CSC1, CSC2, CSC3, and CSC6):

  • CSC9 – Limitation and Control of Network Ports, Protocols, and Services – This includes layered perimeter defense with network segmentation and extensive use of IDS/IPS on those segments. The lockdown has to occur in both physical and virtual environments, along with vulnerability scanners properly configured to scan all ports and protocols for potential vulnerabilities
  • CSC17 – Security Skills Assessment and Training – The more aware staff are of their role in an attack, the less likely weaponization is to succeed. For example, if staff are trained to always “hover before clicking,” the likelihood of a drive-by download is significantly reduced
  • CSC19 – Incident Response and Management – It’s critical that IR has the tools and knowledge to detect artifacts of weaponization as a means to better understand the intent, scope, and target of the attack

What Goes Around Comes Around

The diagram below highlights the relationship between the CKC Weaponization phase, the NIST Cybersecurity Framework Core, and the CIS-20. It is critical to think of the kill chain as a continuous loop, as depicted in the drawing. For example, after establishing a foothold and conducting additional reconnaissance, the adversary could develop a second weaponization step based upon the discovery of a new vulnerability.

Moving on Down the Chain

To make this as actionable and succinct as possible, I have done my best to distill best practices at each step while adding my insights. I base much of this analysis on a report from NTT/Dimension Data, but I also draw from excellent work done by multiple organizations, including the Australian Government’s Cyber Security Centre, CIS, Lockheed Martin, NIST, Optiv, SANS, Trend Micro, and Verizon.

I welcome feedback to help refine this series. With critical and constructive feedback, I believe these posts may become an outline any organization may use to efficiently and effectively reduce its risk.

First stop was Introduction. Last stop was Reconnaissance.

Next stop is Delivery, ETA 10/24/2017

Turning Cyber Reconnaissance Opaque

This post is the second in a series aligning the 20 Critical Security Controls (CSC) from the Center for Internet Security (CIS) to the seven steps of the Lockheed Martin Cyber Kill Chain (CKC™). As I wrote in the intro post, it is time to rethink the way we go about protecting our assets and building our cybersecurity practices. Mapping the CIS Critical Security Controls (CSC) against the CKC™ yields a relatively short list of actions that may dramatically reduce risk. This approach also aligns well with the NIST Cybersecurity Framework and the NIST Cybersecurity Framework Controls Factory Model (NCSF-CFM) that I wrote about previously.

Phase I: Reconnaissance

The first phase of the CKC™ is Reconnaissance. During this step, the adversary collects as much information as possible about the target. Much of this activity does not require explicit interaction with the organization’s IT infrastructure (i.e., it generates no log entries), but as discussed below, there may be telltale traces in the logs even during this step.

Key Moves

At this stage, I see five primary defensive moves to limit the reconnaissance surface area, thus reducing the attacker’s ability to discover potential targets and approaches:

  1. Implement controls to quickly identify any interaction with the IT infrastructure, typically scans and probes. For example, there may be a burst of login attempts against Outlook Web Access (OWA) as the adversary attempts to determine the invalid-login lockout setting (see the sketch after this list). Web analytics is also critical for identifying potential adversary activity
  2. Conduct in-depth scans to identify all live IP addresses and open ports. Scan across multiple protocols and scan the cloud environment (e.g., check for exposed EC2 Security Groups on AWS)
  3. Deploy honeypots to provide the adversary with “easy recon,” incenting them to move to weaponization, rather than spending more effort to uncover potential vulnerabilities
  4. Educate employees about best practices to limit exposure of potentially sensitive information
  5. Conduct external threat intel scans and social media tracking to identify the disclosure of potentially leverageable publicly available information (e.g., looking on Pastebin and the dark web for corporate and staff PII)
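
As a concrete illustration of the first move, here is a minimal sketch (Python standard library only; the event format, window, and threshold are my assumptions, not recommendations) that flags source IPs generating a burst of failed logins inside a short window, the pattern you would expect from an adversary probing the OWA lockout setting.

# Minimal sketch: flag sources with an unusual burst of failed logins
# inside a sliding window. Event format and thresholds are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # failed attempts per source IP inside the window

def burst_sources(events):
    """events: iterable of (timestamp, source_ip, success_flag) tuples."""
    failures = defaultdict(list)
    for ts, ip, ok in events:
        if not ok:
            failures[ip].append(ts)

    flagged = set()
    for ip, times in failures.items():
        times.sort()
        start = 0
        for end, ts in enumerate(times):
            while ts - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                flagged.add(ip)
                break
    return flagged

if __name__ == "__main__":
    now = datetime(2017, 10, 20, 9, 0)
    probe = [(now + timedelta(seconds=20 * i), "203.0.113.7", False) for i in range(12)]
    normal = [(now, "198.51.100.4", True)]
    print(burst_sources(probe + normal))  # {'203.0.113.7'}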

Key CIS-20 Controls

Please note that in my first post, I described CSC1, CSC2, CSC3, and CSC6 as fundamental to every step, including this one. Additional controls to detect and disrupt the recon step are CSC9, CSC11, CSC12, and CSC20:

  • CSC3 – Secure Configuration of Hardware and Software – Much recon activity is possible because of weak configurations, and these poorly configured systems make attractive targets
  • CSC6 – Maintenance, Monitoring, and Analysis of Audit Logs – This is the only opportunity to catch scans and probes as indicators of a potential attack vector. Of course, one must be collecting the right logs and retaining them long enough. A significant consideration is attack velocity. From a detection standpoint, the ideal attack is verbose and intense, but the attack could be low and slow (possibly an Advanced Persistent Threat (APT)). In the latter case, it is critical that log retention times be very long, perhaps indefinite
  • CSC9 – Limitation and Control of Network Ports, Protocols, and Services – The tighter the lockdown, the less leverageable the recon data
  • CSC11 – Secure Configuration of Network Devices – Routers and access points with default passwords are easy targets. Lock them down!
  • CSC12 – Boundary Defense – Recon will detect weak boundary defenses, which increases the likelihood of tactics such as exfiltrating data by tunneling over non-standard protocols
  • CSC20 – Penetration Tests and Red Team Exercises – Though listed last, this is an essential control because it gives a Red Team the ability to see what adversaries see as they conduct their recon efforts

What Goes Around Comes Around

The diagram below highlights the relationship between the CKC Reconnaissance phase, the NIST Cybersecurity Framework Core, and the CIS-20. It is critical to think of the kill chain as a continuous loop, as depicted in the drawing. For example, recon could initially be external, and once the adversary establishes a foothold (Install), they will launch recon internal to the firewall.

Moving on Down the Chain

To make this as actionable and succinct as possible, I have done my best to distill best practices at each step while adding my insights. I base much of this analysis on a report from NTT/Dimension Data, but I also draw from excellent work done by multiple organizations, including the Australian Government’s Cyber Security Centre, CIS, Lockheed Martin, NIST, Optiv, SANS, Trend Micro, and Verizon.

I welcome feedback to help refine this series. With critical and constructive feedback, I believe these posts may become an outline any organization may use to efficiently and effectively reduce its risk.

Next stop is Weaponization, ETA 10/23/2017

Using SANS-20 to Cut Through Security Vendor Hype

Wahoo!  This is the last post of the series.  I think I’ve saved the best for last because what I’m writing about is immediately actionable.  For a little background, I was working with a client and one of their prospects said “how will you affect my SANS 20 score?”  Brilliant!  This Fortune 100 insurance company makes cybersecurity investment decisions based on potential impact to SANS 20 posture.  They use SANS 20 as a qualitative assessment tool to compare one product/control to another.  Essentially, this is the bookend to the quantitative discussion in my last post.

A Brief History

First developed by SANS, the 20 Critical Security Controls (CSC) provide a very pragmatic and practical guideline for implementing and continually improving cybersecurity best practices.  The CSC-20 are real-world, prescriptive guidelines for effective information security.  As stated in the Council on Cyber Security’s overview, the CSC-20 “are a relatively short list of high-priority, highly effective defensive actions that provides a ‘must-do, do-first’ starting point for every enterprise seeking to improve their cyber defense.”

The great news for all organizations is there is significant synergy between the CSC-20 and ISACA’s COBIT, NIST 800-53, ISO27001/2, the NIST Cyber Security Framework and the Department of Homeland Security Continuous Diagnostic and Mitigation (CDM) program.  For example, just as I discussed how Open FAIR controls map to NIST 800-53 control categories, the CSC-20 maps directly to 800-53.

Diving into the depths of the CSC-20 is well beyond the scope of this post, but as a reference point, the CSC-20 contains 20 controls made up of 184 sub controls.  My focus in this post is explaining how to build a matrix to map both internal organization progress implementing the controls and also how to evaluate potential new security products’ or services’ effectiveness.  This is only possible because of the CSC-20’s granularity, modularity and structure for measuring continual effectiveness improvement. To underscore this point, each control not only defines why the control is essential, but it provides relevant effectiveness metrics, automation metrics and effectiveness tests for that control.  In other words, the control provides guidance on what to do as well as guidelines on how to know you are doing it correctly.

Birth vs. Security vs. Pest Control(s)

As mentioned above, there are many different methodologies and approaches to security control selection.  It’s important we recognize that most security controls deliver value well before they reach maximum effectiveness.  This opens the door to a continuous improvement and monitoring practice.

I emphasize most security controls are applicable to a continuous improvement program.  However, some are not.  Put another way, for pest control, a screen with a few holes in it will do a pretty good job keeping out the mosquitos: with every patched hole, fewer mosquitos get through.  In contrast, for birth control, this approach doesn’t work so well!  Birth control must be implemented with maximum effectiveness from the start.

To put this in Open FAIR terms, a control’s resistance strength must exceed the threat capability for the control to be effective.

Figure 1

Using the CSC-20 opens the door to control effectiveness monitoring.  Figure 1 shows my representation of a CSC-20 control effectiveness measure.  A few things to note about this:

  1. It’s not ordinal. Yes, there are red, yellow, and green bands, but the needle is pointing to a discrete number.  For the reasons why this is important, please check out my post introducing Open FAIR.
  2. The max effectiveness state may not be 100%. There will be reasons (technical, policy, procedural, political, etc.) why organizations will not implement specific sub-controls.
  3. We need to measure progress over time for continual effectiveness improvement. In figure 1, the direction of the arrow shows which way the needle is going.  In this example, there is no improvement (or drop) from the previous assessment.

Once the monitoring format is determined, we can create a dashboard to view the effectiveness of all 20 controls.  I’ve seen this done with status bars rather than tachometer icons, and there are pros and cons to each approach.  I’d love to hear any other ideas people have on ways to graphically track control effectiveness.
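
For what it’s worth, a status-bar view is easy to mock up in a few lines; the scores below are made-up placeholders, not a real assessment.

# Toy dashboard: render per-control effectiveness as text status bars.
# The scores are placeholders, not a real assessment.
def render_dashboard(scores, width=25):
    for control, pct in sorted(scores.items()):
        filled = int(round(width * pct / 100))
        bar = "#" * filled + "-" * (width - filled)
        print(f"{control:<7} [{bar}] {pct:3d}%")

if __name__ == "__main__":
    render_dashboard({"CSC-1": 80, "CSC-2": 65, "CSC-5": 39, "CSC-20": 20})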

 

Figure 2

For this post I drew the meters manually.  For ongoing use, a similar result can be achieved through Excel macros and creative graphic templates.  However, since effectiveness is probably only measured once or twice a year, a manual process may be a better time investment than building an automated template.

Control Breakdown

CSC-20 defines four categories of controls – quick win, visibility/attribution, configuration/hygiene, and advanced.  The key to the effectiveness measure is assigning weights to these different types of control.  As an example, following on the earlier discussion, CSC-5 is the malware defense control, made up of 11 sub-controls: “Control the installation, spread and execution of malicious code at multiple points in the enterprise, while optimizing the use of automation to enable rapid updating of defense, data gathering, and corrective action.”

Figure 3

 

As shown in Figure 3, I’m using a simple scale with quick wins having the lowest weight (4 points) and advanced having the highest weight (16 points).  The weighting is somewhat arbitrary; the key is being consistent across all 20 controls.  For example, I’ve also considered an approach where quick wins get the highest weight because they have the quickest impact.

Once the weighting is final, we can calculate an effectiveness score.  To do this, I self-assess my effectiveness on each sub-control.  For example, I have anti-malware (5-2) software on all endpoints and in my DMZ, so I’m giving myself 100% (4 points) for this sub-control.  At the other end of the spectrum, I have no behavior-based anomaly detection, so I’m giving myself 0% (0 points) for that sub-control.  The end result is a sucky 39%.  There is certainly great room for improvement here.
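
To show the arithmetic, here is a minimal sketch of the weighted self-assessment. Only the 5-2 score (100%) and the behavior-based detection score (0%) come from the example above; the rest of the sub-control list, the category assignments, the other self-scores, and the intermediate weights of 8 and 12 are my own illustrative assumptions, so treat the output as a mock-up of the method rather than a real CSC-5 assessment.

# Sketch of the weighted CSC self-assessment. The 4/8/12/16 weights follow
# the scale discussed above (4 = quick win ... 16 = advanced); the middle
# two values are assumptions. The sub-control list and most scores are
# illustrative stand-ins, not the real CSC-5 sub-controls.
WEIGHTS = {"quick_win": 4, "visibility": 8, "hygiene": 12, "advanced": 16}

# (sub-control, category, self-assessed effectiveness 0.0-1.0)
CSC5_SELF_ASSESSMENT = [
    ("5-1 central anti-malware management", "quick_win", 1.00),
    ("5-2 anti-malware on all endpoints/DMZ", "quick_win", 1.00),   # 100% per the text
    ("5-3 limit use of external devices", "hygiene", 0.50),
    ("5-4 automatic signature updates", "quick_win", 1.00),
    ("5-5 scan removable media on insertion", "hygiene", 0.20),
    ("5-6 email/web content filtering", "visibility", 0.50),
    ("5-7 DNS query logging", "visibility", 0.25),
    ("5-8 behavior-based anomaly detection", "advanced", 0.00),     # 0% per the text
]

def control_effectiveness(assessment):
    earned = sum(WEIGHTS[cat] * pct for _, cat, pct in assessment)
    possible = sum(WEIGHTS[cat] for _, cat, _ in assessment)
    return earned / possible

if __name__ == "__main__":
    print(f"CSC-5 effectiveness: {control_effectiveness(CSC5_SELF_ASSESSMENT):.0%}")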

Using Qualitative Assessment to Evaluate Different Products

In my last post we used a quantitative assessment to evaluate the potential impact of a new control.  Using the CSC-20, we can get more granular and not just evaluate the potential impact of a new control, but compare one product to another!  Surprisingly, the organization we were dealing with had already deployed a number of products labeled “malware defense” – with very poor results – and this time they were able to determine ahead of time the potential impact of their next product, without running a single test.

The process is pretty straightforward (a rough code sketch follows the list):

  1. Perform a CSC-20 self-assessment as described above
  2. Determine the incremental projected benefit of adding a new security product. What sub controls will the product cover and to what level?  How much overlap is there between what the new product covers and the existing environment?
  3. Recalculate a projected effectiveness rating against the control with the new product/service added to the security infrastructure.
  4. Repeat the above process with other vendor products to determine which product has the greatest potential impact on the organization’s overall security effectiveness
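
Sketched in code, steps 2 through 4 might look like the snippet below. It reuses WEIGHTS and CSC5_SELF_ASSESSMENT from the earlier scoring sketch, and the projected coverage figures for “Vendor A” and “Vendor B” are invented for illustration; in practice they would come from mapping each product’s capabilities to the sub-controls.

# Sketch of steps 2-4: project a new CSC-5 score per candidate product by
# taking, for each sub-control, the better of current coverage and the
# product's projected coverage. Reuses WEIGHTS and CSC5_SELF_ASSESSMENT
# from the earlier sketch; vendor figures are invented for illustration.
def projected_effectiveness(assessment, product_coverage):
    earned = possible = 0.0
    for name, cat, current in assessment:
        weight = WEIGHTS[cat]
        projected = max(current, product_coverage.get(name, 0.0))
        earned += weight * projected
        possible += weight
    return earned / possible

if __name__ == "__main__":
    vendor_a = {"5-7 DNS query logging": 0.75, "5-8 behavior-based anomaly detection": 1.00}
    vendor_b = {"5-8 behavior-based anomaly detection": 0.50}
    for vendor, coverage in (("Vendor A", vendor_a), ("Vendor B", vendor_b)):
        print(vendor, f"{projected_effectiveness(CSC5_SELF_ASSESSMENT, coverage):.0%}")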

Figure 5

To illustrate, Figure 5 shows the potential impact of adding my client’s breach detection solution into the Insurance Company’s security infrastructure. We projected adding significant value in sub controls 5-8 through 5-11, raising the overall CSC-5 score from 39% to 92%.

When we looked across all 20 controls, there were other controls where we projected benefit, though none as strongly as CSC-5.  The organization asked one of our competitors to do the same thing, and the end result (please see Figure 6) was our solution scoring higher in projected effectiveness improvement than the competition.  The insurance company is still evaluating products and weighing the six-point difference against differences in lifecycle costs.  The key point is they were able to pre-assess a product’s real impact without doing any testing or relying on vendor brochure-ware and marketing hype.  (Of course, my client is hype-free; I’m referring to the other guys!).


Figure 6 – Overall Effectiveness of Two Security Solutions

Conclusion

If we can standardize this effectiveness measurement and monitoring process, companies can assess investments across their entire security ecosystem (not just within a specific area).  Combining this approach with the quantitative assessment methodology outlined in my last post and the cybersecurity economics discussed in my first two posts, CISOs – for the first time – can make defensible security spending decisions that satisfy the evaluation criteria of the CIO, the CFO, and the CEO.

It would be best if an organization like SANS, ISC2, ISSA, or ISACA took this on and developed a formal process for CSC-20 effectiveness measurement and monitoring.  For example, if we standardize the assessment metrics (e.g., the relative weighting of quick win versus configuration/hygiene sub-controls), then we can do cool things like benchmarking and data normalization to characterize control effectiveness baselines across different industries and company sizes.  This would also help us develop a standard script that vendors can follow to project their product’s effectiveness impact.

Obviously, we have a long way to go with this, but I think I’ll contact SANS to see what they think.  What do you think?  I’d love to hear thoughts on this and its potential to change the way we make security spending decisions.

 

Using Open FAIR to Quantify Cybersecurity Loss Exposure

Why is Cybersecurity Risk Different?

Should business executives treat cybersecurity differently than other risk centers?  It must be different; otherwise, why is it so hard to answer even simple questions about cybersecurity spending, such as what we should spend and what we should spend it on?  This is not rocket science, is it?  No, it’s not, but not in the way you are thinking.  With all due respect to my Dad (he literally is a Rocket Scientist), by treating cybersecurity as a “special risk,” we’re making answering these simple questions more complicated than making rockets fly.

To Infinity and Beyond

I started this journey trying to answer two simple questions: what should we spend on cybersecurity, and what should we spend it on?  These answers seemed so elusive that I figured we must need some new perspective or approach specific to cybersecurity spending.

As I discussed in my first post, most people use ROI to justify cybersecurity spending; a good example is the Booz Allen model.  In the second post I showed why ROI (or Return on Security Investment (ROSI)) is not a good metric for justifying cybersecurity spending; in fact, for justifying any type of spending.  We need to take the economics discussion up a notch and focus on NPV (Net Present Value) and/or IRR (Internal Rate of Return) rather than ROI/ROSI.  In my third post I outlined a standardized way to qualify and quantify risk: Factor Analysis of Information Risk (FAIR).  Yes, a standardized approach that does not treat cybersecurity any differently than other areas of risk!  Because of this, organizations using FAIR are developing a standard lexicon to discuss cybersecurity risk in terms their risk management peers understand.  With FAIR, business executives can assess cybersecurity risk with the same scrutiny, maturity, and transparency with which they assess other forms of organizational and institutional risk.

In this post we’re diving a bit deeper into FAIR and focusing on how we can start using FAIR to help make cybersecurity investment decisions.

As a quick refresher, in Open FAIR, risk is defined as the probable frequency and probable magnitude of future loss.  That’s it!  A few things to note about this definition:

  • Risk is a probability rather than an ordinal (high, medium, low) function. This helps us deal with our “high” risk situation discussed above.
  • Frequency implies measurable events within a given timeframe. This takes risk from the unquantifiable (our risk of breach is 99%) to the actionable (our risk of breach is 20% in the next year)
  • Probable magnitude takes into account the level of loss. It is one thing to say our risk of breach is 20% in the next year.  It’s another thing to say our risk of breach is 20% in the next year resulting in a probable loss of $100M
  • Open FAIR is future-focused. As discussed below, this is one of its most powerful aspects.  With Open FAIR we can project future losses, opening the door to quantifying the impact of investments to offset these future losses

As shown in Figure 1, the Open FAIR ontology is pretty extensive and this post isn’t the place to get into all the inner workings.  I urge everyone to learn more about FAIR.

Figure 1 – Open FAIR High-Level View

As discussed in my last post, risk is determined by combining Loss Event Frequency (LEF) (the probable frequency within a given timeframe that loss will materialize from a threat agent’s actions) and Loss Magnitude (LM) (the probable magnitude of primary and secondary loss resulting from a loss event).

At a Loss

To date, I’ve mostly focused on the Loss Event Frequency (LEF) side of the risk equation, specifically to tease out the intricacies of threat and vulnerability.  In this post, I’m shifting the focus to the Loss Magnitude (LM) side of the risk equation because I believe the ability to project a realistic loss magnitude is the foundation of a quantitative risk analysis.  Based on my discussions with cybersecurity executives, it’s often the toughest thing to quantify, because quantifying loss magnitude requires extensive communication with other parts of the business; parts that quite often have never interacted with IT and cybersecurity before.  This is one of the main reasons I say this is not rocket science.  It’s harder!

Defining Loss

How do we define loss?  The Booz Allen model defines cost to fix, opportunity cost, and equity loss.  These are pretty broad categories, and the broader the measure, the more difficult it is to quantify the potential loss.  We need more granularity, but not too much; if we get too granular, the whole process may collapse under its own weight.

In terms of granularity, Open FAIR calculates six forms of loss, covering primary and secondary loss.

Primary Loss is the “direct result of a threat agent’s action upon an asset.”  Secondary Loss is a result of “secondary stakeholders (e.g., customers, stockholders, regulators, etc.) reacting negatively to the Primary Loss event.”  In other words, it’s the “fallout” when the s*&t hits the fan.

 

Open FAIR Primary and Secondary Loss

Secondary loss magnitude is the loss that materializes from dealing with secondary stakeholder reactions.  To me, this is a critical distinction between Open FAIR and other models: we can’t assume that secondary losses will always occur.

Ponemon Cost of Breach Study Example

The best work I’ve seen on cost of breach comes from the annual studies performed by Dr. Larry Ponemon and his team (www.ponemon.org).  Since 2005, they have been tracking costs associated with data breaches.  To date, they have studied 1,629 organizations globally.  Some of the key findings from the 2015 study are in Table 1:

Table 1 – Ponemon Results

In Open FAIR terms, Ponemon is saying the average Loss Event Frequency (LEF) of a 10,000-record breach is 0.22 over two years, with a Loss Magnitude (LM) of $1.54M (10,000 records x $154/record).  Similarly, Ponemon states the average LEF of a 25,000-record breach is approximately 0.10 over two years, with an LM of $3.85M.1

From this we can determine the Aggregate Loss Exposure (ALE).  Typically, the ALE is an annualized number, so if we assume Ponemon saw an even distribution over the two years, we arrive at an ALE for a 10,000-record breach of approximately $169K.  This is a lot smaller than the oft-quoted $3.79M average cost of breach.  Shifting the discussion from Loss Magnitude (LM) to Aggregate Loss Exposure (ALE) changes the whole tenor of the conversation.
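
The arithmetic behind that $169K is worth making explicit; a minimal sketch:

# Worked arithmetic for the Ponemon-derived ALE of a 10,000-record breach.
records = 10_000
cost_per_record = 154            # USD per record, Ponemon 2015 average
lef_two_years = 0.22             # likelihood of a 10,000-record breach over two years
annual_lef = lef_two_years / 2   # assume an even distribution across the two years

loss_magnitude = records * cost_per_record   # $1.54M
ale = annual_lef * loss_magnitude            # roughly $169K per year
print(f"LM = ${loss_magnitude:,.0f}, annualized ALE = ${ale:,.0f}")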

Not There Yet

This is very helpful information, but it’s not precise enough to make clear quantitative risk decisions.  I suspect Ponemon has much more information than what is published in the report, and hopefully it includes the following key points:

  • The distribution of the primary Loss Event Frequency (LEF) and Loss Magnitude (LM). We know the average, but to make decisions we really need the min, max, and mode.  For example, an average of 0.22 is only relevant if you know the shape of the distribution curve.  Is it peaked or flat?  The sharpness of the curve defines the level of confidence we have in the data.  To assess this, we need to compare the average to the mode: the closer the two, the higher the level of confidence (see the Monte Carlo sketch after this list)
  • The relative frequency of primary to secondary events. Though Ponemon does tease out the two types of losses (e.g. he differentiates between direct and indirect costs), it isn’t as well differentiated as an Open FAIR analysis.  Lumping the two together can skew the results dramatically.
  • Separating Primary and Secondary Loss Magnitude (LM). This is related to above.
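
To illustrate why min, max, and mode matter, here is a minimal Monte Carlo sketch (Python standard library only; the input ranges are invented, not Ponemon data) that draws LEF and LM from triangular distributions and reports the average and the 10%/90% points of the resulting ALE distribution. A tool like RiskLens does this with far richer distributions and calibrated inputs; the point here is only that the same average can come from very differently shaped curves.

# Minimal Monte Carlo sketch: turn (min, mode, max) estimates for LEF and LM
# into an ALE distribution. All input ranges are invented for illustration.
import random
import statistics

random.seed(7)

def tri(low, mode, high):
    # random.triangular takes (low, high, mode), so reorder the arguments
    return random.triangular(low, high, mode)

def simulate_ale(lef_range, lm_range, trials=100_000):
    """lef_range and lm_range are (min, mode, max) triples."""
    return sorted(tri(*lef_range) * tri(*lm_range) for _ in range(trials))

if __name__ == "__main__":
    ale = simulate_ale((0.05, 0.11, 0.30), (0.5e6, 1.5e6, 6.0e6))
    p10, p90 = ale[len(ale) // 10], ale[9 * len(ale) // 10]
    print(f"average ALE ${statistics.mean(ale):,.0f}, "
          f"10% ${p10:,.0f}, 90% ${p90:,.0f}")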

As an example, check out Figure 4, a sample risk analysis done by RiskLens.**

Figure 4 – RiskLens ALE Chart

In this example, we’re looking at a pretty steep distribution curve where the average and peak (mode) are fairly close.  The Aggregate Loss Exposure (ALE) is made up of multiple loss (primary and secondary) scenarios.  This analysis is developed from 118 individual risk scenarios covering 32 asset classes and 5 threat communities.  For example, one individual risk scenario might be the loss of 10,000 records due to a data breach caused by weak authentication controls, contributing $169K to the ALE.  As mentioned above, the ALE is a function of both the Loss Event Frequency (LEF) and Loss Magnitude (LM).

Figure 4 contains a ton of information.  First, the chart shows a Risk Appetite (RA) of $130M for the organization.   Just looking at the curve shows the RA is less than both the peak (mode) and average ALE.  The chart also shows the 10% and 90% distribution points.   Many CFOs look at the 90% line as the worst case ALE scenario (essentially equivalent to Ponemon’s $3.79M cost of data breach).  In other words, on average we expect an ALE of $223M but to prepare for the worst, we should prepare for an ALE of $391M.

We can further break the average ALE down into primary and secondary LM components (see Figure 5).

Figure 5 – RiskLens ALE Breakdown

Now what?  In this organization’s case, the secondary loss elements are far larger than the primary loss elements and the bulk of the materialized loss relates to loss of confidentiality (Figure 6).

Now, what do we do with this information?  How do we turn charts into actionable guidance?  Right off the bat, we have a fundamental problem because our Risk Appetite (RA) is significantly lower than both the peak and average ALE.  We have three main choices: raise the RA (rarely the best option), outsource a significant chunk of the risk by buying cyber insurance, or implement controls to lower the ALE below the RA.

Control Your Destiny

Open FAIR defines only four classes of controls: vulnerability, avoidance, deterrent, and responsive controls.  In comparison, NIST defines 17 categories of controls, many of which could be considered a cross of avoidance, deterrent, and vulnerability controls.

Having only four broad classes of controls makes performing “what-if” analyses practical.  It also provides a framework to determine control selection based on most significant ALE impact.


Figure 7 – Mapping Open FAIR Controls to Ontology

To determine the most effective controls, we need to determine the threat communities with the greatest impact on ALE.  For example, from the above RiskLens example, we can break down the average $223M ALE by specific threat communities (these need avoidance and deterrent controls).

RiskLens ALE Threat Communities

The analysis indicates the greatest loss exposure is from the privileged insider community (approx. 43% of the total average ALE), with cyber criminals second (approx. 36%), non-privileged insiders third (approx. 13%), and it tapers off from there.  The value of this knowledge is HUGE!  Without even talking control specifics, I know that more than half of my expected loss will be from insiders (privileged and non-privileged).  This tells me to turn to NIST and focus on access control (AC), audit and accountability (AU), and security awareness and training (AT) controls!

Assessing the Impact of Controls

Cleveland, we still have a problem.  Our risk appetite is well below our average and peak ALE.  I don’t want to raise our RA, so we must reduce the ALE.  But how can we determine which of the above controls (AC, AU, or AT) are most effective?  The beauty of using Open FAIR with an analytic and modeling engine (e.g., RiskLens) is that we can simulate the potential impact of security controls on quantitative risk.  This is something most organizations do not do; instead, they assess the potential impact of security controls only qualitatively.  I’ll get to this in my next post when we dive into the SANS 20 controls as a model for assessing the qualitative impact of security controls.

The beauty of using quantitative analytics is it opens the door to effective economic discussion.   For example, the yellow curve in Figure 9 depicts our initial ALE (this is a different analysis from figure 4 though the curves are very similar).   The blue curve shows the projected ALE after the implementation of both avoidance and deterrent controls:  controls that we know the cost of!
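
A stripped-down version of that before/after comparison is below; the assumed effect of the controls (cutting the loss event frequency range roughly in half) is purely illustrative.

# Illustrative before/after comparison: assume avoidance and deterrent
# controls cut the loss event frequency (LEF) range roughly in half.
import random
import statistics

random.seed(7)

def sim(lef, lm, trials=100_000):
    # lef and lm are (min, mode, max); random.triangular takes (low, high, mode)
    return [random.triangular(lef[0], lef[2], lef[1]) *
            random.triangular(lm[0], lm[2], lm[1]) for _ in range(trials)]

lm_range = (0.5e6, 1.5e6, 6.0e6)
baseline = sim((0.05, 0.11, 0.30), lm_range)        # the "yellow curve"
with_controls = sim((0.02, 0.05, 0.15), lm_range)   # the "blue curve"

print(f"baseline average ALE      ${statistics.mean(baseline):,.0f}")
print(f"with controls average ALE ${statistics.mean(with_controls):,.0f}")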

Figure 9 – RiskLens ALE Impact Simulation

This is a pretty extreme example to illustrate how this stuff works.  In reality, most organizations will already have a significant investment in controls reflected in the baseline analysis.  The exercise will be a series of incremental control tweaks to bring the ALE in line with the RA.  After all, once the ALE is below the RA, further control investment cannot be economically justified.

A Final ALE Perspective

To me, this is very exciting!  If we run simulations against different control classes we can figure out our best control investment strategy.  We can plot the control costs against the ALE impact to pick the winning approach.  We can then evaluate the NPV and IRR of the control investments as a function of the ALE to build a business case for cybersecurity control investment.  We can also directly compare the cost of implementing controls against the cost of buying cyber insurance.  Essentially, with this information – plus the insights of the Gordon-Loeb cybersecurity spending model – we can make intelligent decisions about cybersecurity spending.  And, most importantly, we can discuss these spending analyses on equal terms with any other form of business risk analysis.
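
To close the loop on the economics, here is a minimal NPV sketch. Every figure in it is invented; a real analysis would use the simulated ALE reduction and fully loaded control costs.

# Minimal NPV sketch: treat the annual ALE reduction as the benefit cash flow,
# offset by upfront and recurring control costs. All figures are invented.
def npv(rate, cash_flows):
    """cash_flows[0] occurs today; cash_flows[t] at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

ale_reduction = 60e6    # projected annual ALE reduction from the new controls
upfront_cost = 25e6     # year-0 implementation cost
annual_cost = 8e6       # recurring operating cost
years = 3
rate = 0.10             # discount (hurdle) rate

flows = [-upfront_cost] + [ale_reduction - annual_cost] * years
print(f"NPV over {years} years at {rate:.0%}: ${npv(rate, flows):,.0f}")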

Disclaimer – I have no financial or formal business relationship with RiskLens.  I do have the utmost respect for Jack Jones, RiskLens founder, and I very much appreciate his support and willingness to share output from his analysis tools.

1. http://www-03.ibm.com/security/data-breach/