By Dick Wilkinson, C|CISO
The buzzword for 2019 that we have all heard a thousand times is Artificial Intelligence (AI). This term has been placed on products ranging from smartwatches to washing machines and, of course, every security tool on the market. Advertisements lead us to believe that if you couldn’t find a way to shoehorn AI into your product in 2019, you might as well admit defeat before you even got to market. The term AI conjures up science fiction fantasy and makes us believe that the future really is now. People like to hear it and marketing teams like to sell it, but what does AI have to offer the cybersecurity industry?
The term AI is often used interchangeably with machine learning. The process commonly applied is to collect a very large data set about some type of function, feature, behavior, or use case, and then use that data set to derive a predictable pattern. When you can refine your pattern of prediction to something with reliable outcomes, you can integrate it into your tools and call it machine learning. If you can teach the machine to continue collecting data on its own and refine its predictions on its own, then you are approaching what most would consider AI. This is a long way from Terminator robots walking around securing our networks, but it is solid progress in the right direction.
Behavior-based analysis
Cybersecurity tools currently use this data aggregation and pattern analysis in the field of heuristic modeling. The process works very well to monitor, and eventually predict, important items such as packet traffic or how each machine operates on your network. We apply behavior-based tolerances to these patterns, and when something breaks a tolerance, it creates an alarm. Even this seemingly advanced model still relies on a strong baseline of normal behavior and, in many cases, has to be pre-scripted. The method is only as good as the original data set or programming instructions; the tolerances are hardcoded and inherently stagnant, which causes many false alarms. The limits become even more evident when the machine learning and prediction stop at machine behavior and do not include the “human” aspect. Security experts know that the user is the most dangerous part of the network, but we spend time and money predicting “machine behavior” and not “user behavior.”
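The hardcoded-tolerance model described above can be sketched in a few lines. The metric names and limit values here are hypothetical, chosen only to illustrate why fixed, pre-scripted thresholds are stagnant and prone to false alarms:

```python
# Sketch of a static, pre-scripted tolerance check (all thresholds hypothetical).
# Because the limits are hardcoded, any legitimate spike in activity
# trips an alarm -- the false-positive problem described above.

STATIC_TOLERANCES = {
    "packets_per_sec": 5000,   # hypothetical ceiling, fixed at deployment time
    "failed_logins": 5,
    "bytes_out_mb": 500,
}

def check_tolerances(observed: dict) -> list:
    """Return an alarm string for every metric that breaks its fixed tolerance."""
    alarms = []
    for metric, limit in STATIC_TOLERANCES.items():
        value = observed.get(metric, 0)
        if value > limit:
            alarms.append(f"ALARM: {metric}={value} exceeds limit {limit}")
    return alarms

# A busy-but-legitimate day still fires an alarm under fixed limits:
print(check_tolerances({"packets_per_sec": 7200, "failed_logins": 2}))
```

The tolerances never adapt: a one-time marketing campaign that doubles outbound traffic looks exactly like an exfiltration event to this model.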
I would like to describe how AI can take us to the next step in securing our networks. Place the dystopian thoughts on hold while I describe how we can observe our user behavior and create even more powerful predictions that lead to true alarms and secure networks. AI will need to move the focus away from the way a machine, identified by an IP or MAC address, is behaving, and place the interest on the way the human is acting in each scenario. What I am proposing is a pattern-of-life analysis of how your users interact with the network on a daily, weekly, and even longer-term basis. The selector that identifies the human will not be an IP or MAC address but is instead likely to be some form of biometric identification.
Biometric technology is already widely adopted in some sectors and will probably become ubiquitous over the next 10 years. No usernames and passwords; just a fingerprint or your face will authenticate you as being you. We might assume such a network is now much less hackable, but you still have insider threats, as well as whatever unique techniques future hackers discover to defeat biometrics.
We will rely on the greater digital participation of that user to decide if a behavior is normal or anomalous. AI will be taught to observe identifying characteristics that all add up to the digital identity of each user. A user will have a typical day based on things like when they first check their email, what programs they access at what times, and when they enter or leave the building or secure spaces. Other identifying features, such as keystroke patterns or even specific types of content in chat or email messages, will help determine the behavior and identity of a user. This data is not limited to what happens at work, because your users interact with your corporate services via company-issued smartphones, BYO devices, VPN from home, or cloud-based email services.
The pattern of life can expand to understand that checking email at 10 o’clock at night is actually “normal” for this user when done on a smartphone but not at his desk. The baseline will be learned by the network over time, not preprogrammed to allow only certain tolerances; the user’s normal activities will define the tolerances. The true function of AI, in this case, will be to determine, with a long arc of time and data, what “normal” looks like for a user, not a machine. If this method of AI sensing your users is adopted, then many features can be built to tune the alarm rate and tolerances of the AI algorithm. That process will fall to the CISO and other security professionals, who will decide what right looks like for their organization. These risk-based decisions can be built into a Risk Management Framework just like any other set of technical controls.
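A minimal sketch of this learned, per-user, per-context baseline, with all names hypothetical: history is keyed by user, context (e.g. smartphone vs. desk), and metric, and the tunable `sensitivity` stands in for the alarm-rate knobs a CISO would adjust.

```python
import statistics
from collections import defaultdict

# Hypothetical sketch: the baseline is learned from each user's own history
# rather than preprogrammed, so 10 p.m. email on a smartphone can be
# "normal" while the same action at a desk is anomalous.

class UserBaseline:
    def __init__(self, sensitivity: float = 3.0):
        self.history = defaultdict(list)   # (user, context, metric) -> past values
        self.sensitivity = sensitivity     # z-score alarm threshold, tunable per org

    def observe(self, user, context, metric, value):
        self.history[(user, context, metric)].append(value)

    def is_anomalous(self, user, context, metric, value) -> bool:
        past = self.history[(user, context, metric)]
        if len(past) < 5:                  # too little data: keep learning, don't alarm
            return False
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0
        return abs(value - mean) / stdev > self.sensitivity

baseline = UserBaseline()
for hour in [22, 22, 23, 22, 21, 22]:      # habitually checks phone email late
    baseline.observe("alice", "smartphone", "email_hour", hour)
for hour in [9, 9, 10, 8, 9, 10]:          # desk activity is strictly daytime
    baseline.observe("alice", "desk", "email_hour", hour)

print(baseline.is_anomalous("alice", "smartphone", "email_hour", 22))  # prints False
print(baseline.is_anomalous("alice", "desk", "email_hour", 22))        # prints True
```

The design choice worth noting is the context key: the same raw value (22:00) scores differently depending on where the behavior occurs, which a single global threshold cannot express.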
You can sensitize or desensitize your AI engine based on the risk tolerance of a company or department, and different models could even apply to different divisions in one company. Your IT admins may be held to a strict tolerance, while the worker bolting car doors together doesn’t need the same level of scrutiny. A CISO could even control costs, given the assumption that running an aggressive, tight-tolerance AI engine on a user will cost more than running a loose tolerance or one that collects fewer data points. These AI functions will be integrated into your organization’s SOC or SIEM platform and will become a seamless sensor, just like looking at a machine log event, but it will be human-based, not IP-based.
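The per-department tuning described above might look like a simple configuration mapping; the department names, threshold values, and z-score interface are all hypothetical illustrations of the idea.

```python
# Hypothetical per-department sensitivity configuration: stricter scrutiny
# (lower alarm threshold) for privileged roles, looser for low-risk ones.

DEPARTMENT_SENSITIVITY = {
    "it_admins": 1.5,        # aggressive: small deviations alarm
    "finance": 2.5,
    "assembly_line": 4.0,    # loose: only gross deviations alarm
}
DEFAULT_SENSITIVITY = 3.0

def should_alarm(department: str, anomaly_zscore: float) -> bool:
    """Compare a user's anomaly score against their department's threshold."""
    threshold = DEPARTMENT_SENSITIVITY.get(department, DEFAULT_SENSITIVITY)
    return anomaly_zscore > threshold

print(should_alarm("it_admins", 2.0))      # prints True: strict tolerance
print(should_alarm("assembly_line", 2.0))  # prints False: same score, looser tolerance
```

The same anomaly score produces different outcomes by department, which is the cost and risk trade-off the paragraph describes: tighter thresholds mean more alarms to triage and more data points to collect.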
ROI on AI-based user authentication
The technology to develop and integrate this in an aggressive way already exists, but it is expensive. Biometrics are becoming more common, and basic machine learning is built into thousands of programs already. The enhanced decision making and truly flexible tolerance thresholds will be the areas requiring more research and the most monetary investment. Adoption will continue to be slowed by several factors. Digital privacy will be the biggest argument, but that will almost certainly be overcome by the convenience of using biometrics. Cost of implementation will slow adoption, but the trade-off of abandoning current SIEM products in favor of user tracking will help ease the burden. The biggest positive factor will be the immediate return on investment. The CISO will be able to clearly define the slew of controls that will be replaced by this new method. High-risk threats like credential-stealing scams will simply disappear when this process is implemented. Insider threats will be extremely well controlled, and corporate intellectual property threats will decrease by orders of magnitude. Cyber insurance costs will dwindle to tolerable levels or may not be needed at all, depending on the risk appetite of the company. The negative aspects of this user-based AI system will be overshadowed by the clear benefit to the bottom line of the company or agency. An advanced artificial intelligence system that tracks users, not machines, could be the goal that every CISO strives toward to reduce risk and keep the business running smoothly.
Dick Wilkinson is a senior leader with 20 years of results-focused leadership in the intelligence and cybersecurity field. He has diverse training in intelligence collection, signal analysis, space programs, ethical hacking, cyber vulnerability assessment, penetration testing, cyber program development, and project management. He undertook multiple assignments both stateside and overseas, with a wide range of cultural experiences in Europe and Asia. He has served as a technical advisor to executives at the highest levels and has maintained liaison relationships with many intelligence community partners. He is currently the Information System Security Officer for the court system of New Mexico.
CISO MAG did not evaluate the advertised/mentioned product, service, or company, nor does it endorse any of the claims made by the advertisement/writer. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.