Contributed by Richard Seiersen, SVP & Chief Information Security Officer, LendingClub
The actual science of logic is conversant at present only with things either certain, or impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore, the true logic for this world is the Calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind. —James Clerk Maxwell
There are two basic questions I ask myself, my teams, and security folks at large. First, “How do I know I have the right security capabilities?” and second, “What would I see occurring that would let me know my capabilities are improving?” I might add to that last one, “… while the business scales?”
Do I Have the Right Security Capabilities?
My co-author Doug Hubbard and I provide a detailed answer to the first question in our book, How to Measure Anything in Cybersecurity Risk (Wiley, 2016)[1]. Measurement experts such as scientists, actuaries, mathematicians, statisticians, some engineers, and data scientists will find our approach familiar. Actuaries especially, because the green book (as we affectionately call it) will become required reading for Society of Actuaries exam prep from 2018 onward.
These experts would most certainly take a quantitative approach to my first question. Their tactics are grounded in the logic of uncertainty, aka probability theory. Please don’t be scared off by that “mathy” turn of phrase. You just need to know that probability theory simply counts up all the ways an event can happen and puts more weight on the possibilities that are most plausible. It’s a centuries-old shortcut born out of laziness, boredom, and the desire to beat the house.
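To make that less abstract, here is a toy sketch (mine, in Python, not anything from the green book) of “counting up all the ways” for the dice games that got this whole field started:

```python
from itertools import product
from fractions import Fraction

# Enumerate every way two six-sided dice can land (the "all the ways" part).
outcomes = list(product(range(1, 7), repeat=2))

# The event we care about: rolling a seven.
sevens = [roll for roll in outcomes if sum(roll) == 7]

# Probability = ways the event can happen / all equally plausible outcomes.
p_seven = Fraction(len(sevens), len(outcomes))
print(f"P(roll a 7) = {p_seven} ≈ {float(p_seven):.3f}")  # 1/6 ≈ 0.167
```

That’s the whole trick. Everything else is scaling the same counting-and-weighting idea up to messier events than dice.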
Truth Is Not the Goal, Better Is
Adopting a probabilistic approach means not looking for the “perfectly correct” answer to intangible questions like “Do I have the right capabilities?” You want the most plausible answer(s) given your current state of uncertainty. This means being resourceful with what little empirical data you have. And if you lack empirical data, you may be left with modeling your subject matter experts’ beliefs. You likely paid a lot for their expertise; you might as well model it. Now that is being resourceful!
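To show what modeling an expert’s beliefs can look like in practice, here is a minimal sketch. Assume, hypothetically, that a calibrated expert gives you a 10% chance of a material breach this year and a 90% confidence interval on the loss if it happens; the numbers, variable names, and the lognormal choice below are illustrative, not a prescription:

```python
import math
import random

# Hypothetical inputs elicited from a calibrated subject matter expert:
# - the chance the event (say, a material breach) occurs this year
# - a 90% confidence interval on the loss if it does occur
p_event = 0.10                   # "about a 10% chance this year"
low, high = 50_000, 2_000_000    # "90% sure the loss lands in this range"

# Treat the interval as a lognormal 90% CI (1.645 sigma either side of the log-mean).
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)

def simulate_annual_loss(trials=100_000):
    """Monte Carlo: in each simulated year the event either happens or it doesn't;
    when it happens, draw a loss from the expert's distribution."""
    losses = [random.lognormvariate(mu, sigma) if random.random() < p_event else 0.0
              for _ in range(trials)]
    return sum(losses) / trials

print(f"Expected annual loss ≈ ${simulate_annual_loss():,.0f}")
```

The output is only as good as the expert’s calibration, but it is explicit, auditable, and improvable, which is more than can be said for a gut-feel “High.”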
This is a key point for security folks. Security by its very nature is mired in uncertainty. We have uncertain sentient and artificially intelligent adversaries attacking a myriad of systems, all in transient states. Our understanding, or model, of that world is, by its very nature, woefully incomplete.
The statistician George Box made this point of view popular by saying, “All models are wrong, but some are useful,” which my co-author embellishes with, “…and some models are measurably more useful than others.” Your goal is improvement over your current model at a reasonable cost. Don’t let uncertainty caused by a lack of perfect data stand in your way.
Better Decision Making
Models, wrong or very wrong, exist to aid your decision making rather than substitute for it. The model for answering my first question would help you figure out which capabilities best reduce risk (breach) given your risk tolerances[2]. It should also take into consideration any reduction in opportunity loss[3] (lost sales) as well as the cost of controls[4] (people, gear, etc.). That’s how we get the best return on investment (ROI), i.e., the best bang for our buck in reducing probable future loss.
ROI becomes a type of score[5] for organizing our choices in order of importance. It’s a huge improvement over risk registers, heat maps, and other qualitative scoring systems in the security marketplace. We and other experts in our book enjoy saying that those approaches are “worse than doing nothing.”
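For the sake of illustration only, here is roughly what that ordering could look like in code. Every capability name and dollar figure below is invented; the point is the shape of the comparison, not the numbers:

```python
# Hypothetical candidate capabilities with rough, expert-supplied estimates (all
# figures invented for illustration): annual reduction in probable breach loss,
# recovered opportunity (sales unblocked by meeting a customer or regulatory
# requirement), and the annual cost of people and gear.
candidates = [
    {"name": "MFA everywhere",      "loss_reduction": 400_000, "opportunity": 150_000, "cost": 120_000},
    {"name": "Vuln mgmt overhaul",  "loss_reduction": 250_000, "opportunity": 0,       "cost": 200_000},
    {"name": "Shiny new appliance", "loss_reduction": 60_000,  "opportunity": 0,       "cost": 180_000},
]

def roi(c):
    """Return on investment: (benefit - cost) / cost, where benefit is the
    reduction in probable future loss plus any recovered opportunity."""
    benefit = c["loss_reduction"] + c["opportunity"]
    return (benefit - c["cost"]) / c["cost"]

# The score orders the portfolio: biggest bang for the buck first.
for c in sorted(candidates, key=roi, reverse=True):
    print(f"{c['name']:<20} ROI = {roi(c):+.0%}")
```

Unlike a heat map, the ordering above can be argued with: every input is a number someone can challenge, refine, or replace with better data.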
But Wait, We’re Different!
Security folk may argue that the combination of system complexity and chaotic actors makes the possibilities of compromise uncountable (not that they have tried) and thus immune to probabilistic means. They say this as if fields that use probabilistic approaches must have easier problems to solve, fields like nuclear engineering, military logistics, epidemiology, seismology, and cytology (name your -ology, as long as it’s not astrology … that one doesn’t work). The point is that measurement experts adopt probabilistic approaches because of uncertainty, not in spite of it.
Making Security Rigorous
If you haven’t guessed it by now, I believe it’s time for security to start measuring more like the sciences do, or like anyone with serious treasure at stake would do. And you don’t have to be a scientist or a statistician to do this (I’m not). Statisticians, like cooks, do what they do for others to consume. Take plumbers, for example: they don’t need to know squat (pun intended) about the physics of fluid dynamics to fit the right pipes for the water pressure coming into a house. They just know which tools and materials to use for the particular problem at hand. Likewise, you don’t necessarily need to understand the math[6] as much as you need to understand the problem you are trying to solve. From there you are just fitting the appropriate quantitative materials together to make what will ultimately be a wrong (all models are wrong) but hopefully better model than the one you are using now.
Think More, Do Less
“A problem well defined is a problem half solved.” – Charles Kettering.
If your problem is framed badly, then no model, no math, no concoction of any kind can magically save you from yourself. In my experience, most security folks don’t spend enough time thinking about or framing their problems. The current trend is to knock out tasks (be a doer/builder) and deploy taken-for-granted technology in the hope that things will improve. Task obsession is a sure-fire way to lose sight of the forest for the trees in security. The bad guys would love nothing more than to have you whittling away the hours on low-impact, uncoordinated busy work.
By way of example, I consulted with an organization not long after the Equifax breach. I used what we knew of the breach[7] as a tabletop exercise to determine the current state of the organization’s end-to-end vulnerability management program. While they had historically knocked out numerous tasks related to the topic and made several key investments, they profoundly underperformed Equifax. Why? They couldn’t rank-order which big outcomes were important in a systematic way. What they did have was “more security tasks … faster.” That was their model. Now, after improving their vulnerability management program and focusing on ranking important outcomes, their results should beat their old model, which had near-zero measurable outcomes, and Equifax to boot (at least I hope they will).
The improvements were fundamentally about shifting their thinking from being task-oriented and busyness-obsessed to big-picture strategizing for the organization’s assets.
As a security leader, don’t be fooled by busyness, and don’t let your teams be fooled by it either. It’s faux noble and will not be effective in light of increasing platform uncertainties and talented adversaries. Perhaps it’s time to think more and do less? Specifically, think more about your capabilities and do less busy work so you can focus on big-impact, ROI-based outcomes.
In my next article, I will address the second question. And who knows, I may throw in some code!
[1] Doug Hubbard was my co-author: https://www.linkedin.com/in/dwhubbard/
[2] Risk tolerance could be your cyber insurance coverage, or it could be a function of multiple factors. Also consider that the NIST CSF, amongst others, expects risk management to consider tolerance.
[3] Opportunity loss is reduced when security meets customer, industry or regional requirements and allows for new and expanded sales.
[4] Security gear, people, etc.
[5] It’s a mathematically unambiguous score, unlike a “High” or a 10 on a 1–10 scale.
[6] Data analysis is an applied art. Analysts are API/tool users. Deeper math, statistics, probability theory, etc. are not required, though they would certainly help you better understand what is going on under the hood. Those people designed tools for you to use to answer questions in your particular domain. Go for it!
[7] Use big breach announcements, new zero-days, etc. as a form of tabletop exercise. Collect the evidence from an article about the event and turn it on yourselves to see how well you would do. This is a much more productive way to read all the security blather that is out there. Ask, “What if it were me?”