Five Practical Steps to Implement a Zero-Trust Network


While the concept of Zero Trust was created 10 years ago, the events of 2020 have thrust it to the top of enterprise security agendas. The COVID-19 pandemic has driven mass remote working, which means that organizations’ traditional perimeter-based security models have been dismantled, in many cases literally overnight. In this new normal of remote working, an organization’s network is no longer a single thing in one location: it is everywhere, all of the time. Even in organizations that use a single data center in one place, that data center is accessed by multiple users on multiple devices.

By Professor Avishai Wool, Co-founder and CTO at AlgoSec

With the sprawling, dynamic nature of today’s networks, if you don’t adopt a Zero-Trust approach, then a breach in one part of the network could quickly cripple your organization as malware, and especially ransomware, makes its way unhindered throughout the network. We have seen multiple examples of ransomware attacks in recent years: organizations spanning all sectors, from hospitals to local government and major corporations, have all suffered large-scale outages. Put simply, few could argue that a purely perimeter-based security model makes sense anymore.

So how should organizations go about applying the Zero Trust blueprint to address their new and complex network reality? These five steps represent the most logical way to achieve Zero-Trust networking, by finding out what data is of value, where that data is going, and how it’s being used. The only way to do this successfully is with automation and orchestration.

1. Identifying and segmenting data

This is one of the most complicated areas of implementing Zero-Trust since it requires organizations to figure out what data is sensitive.

Businesses that operate in highly regulated environments probably already know what that data is since the regulators have been requiring oversight of such data. Another approach is to separate systems that humans have access to from other parts of the environment, for example, parts of the network that can be connected to by smartphones, laptops, or desktops. Unfortunately, humans are often the weakest link and the first source of a breach, so it makes sense to separate these types of network segments from servers in the data center. Naturally, all home-user connections into the organization need to be terminated in a segregated network segment.
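As a minimal sketch of this separation (the class, zone labels, and inventory below are purely illustrative, not drawn from any specific product), assets can be tagged by sensitivity and human reachability, then placed into coarse segments so that human-facing systems stay apart from data-center servers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    sensitive: bool      # holds regulated or business-critical data?
    human_access: bool   # reachable from smartphones, laptops, or desktops?

def assign_segment(asset: Asset) -> str:
    """Place an asset into a coarse network segment.

    Human-facing systems are separated from data-center servers, since
    humans are often the weakest link and the first source of a breach.
    """
    if asset.human_access:
        return "user-segment"
    if asset.sensitive:
        return "restricted-dc-segment"
    return "general-dc-segment"

inventory = [
    Asset("hr-desktop-pool", sensitive=False, human_access=True),
    Asset("payroll-db", sensitive=True, human_access=False),
    Asset("build-server", sensitive=False, human_access=False),
]
segments = {a.name: assign_segment(a) for a in inventory}
```

A real segmentation model would of course carry many more attributes (regulatory scope, environment, owner), but the principle is the same: the classification drives the segment, not the other way around.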

2. Mapping the traffic flows of your sensitive data and associating them with your business applications

Once you’ve identified your sensitive data, the next step is knowing where the data is going, what it is being used for, and what it is doing. Data flows across your networks. Systems and users access it all the time, via many business applications. If you don’t know this information about your data, you can’t effectively defend it.

Automated discovery tools can help you to understand the intent of your data – why is that flow there? What is the purpose? What data is it transferring? What application is a particular flow serving? With the right tooling, you can start to grow your understanding of which flows need to be allowed. Once you have that, you can then get to the Zero-Trust part of saying “and everything else will not be allowed.”
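A hypothetical sketch of this step (the flow tuples and intent labels are invented for illustration; real discovery tools expose far richer metadata): observed flows are annotated with their business intent, and only annotated flows make it onto the allow-list.

```python
# Observed flows from passive discovery: (source, destination, port).
observed = [
    ("web-tier", "app-tier", 8443),
    ("app-tier", "payroll-db", 5432),
    ("unknown-host", "payroll-db", 5432),
]

# Intent annotations added during review: which application each flow serves.
intent = {
    ("web-tier", "app-tier", 8443): "HR portal front end",
    ("app-tier", "payroll-db", 5432): "HR portal database access",
}

# Flows with a known intent become the allow-list; everything else is
# left out, and under Zero Trust will be denied by default.
allow_list = {flow for flow in observed if flow in intent}
unexplained = [flow for flow in observed if flow not in intent]
```

The `unexplained` list is exactly the set of flows that needs investigating before you can safely flip to default-deny.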

3. Architecting the network

Once you know what flows should be allowed (and then everything else deserves to be blocked), you can move on to designing the network architecture, and a filtering policy that enforces your network’s micro-perimeters. In other words, architecting the controls to make sure that only legitimate flows are allowed.

Current virtualization technologies allow you to architect such networks much more easily than in the past. Software-defined networking (SDN) platforms within data centers and public cloud providers all allow you to deploy filters within the network fabric – so placing the filtering policies anywhere in your networks is technically possible. However, actually defining the content of these filtering policies: the rules governing the allowed flows – is where the automatic discovery really pays off.
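Reduced to its essence, the filtering policy is a default-deny check: a flow passes only if it matches an explicit allow rule. A toy version (zone names and ports are illustrative assumptions):

```python
# Allow rules produced by the discovery and intent-annotation work:
# (source zone, destination zone, port).
ALLOWED_FLOWS = {
    ("user-segment", "web-tier", 443),
    ("web-tier", "app-tier", 8443),
    ("app-tier", "restricted-dc-segment", 5432),
}

def is_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if an explicit rule matches."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS
```

Real enforcement points match on much more (protocols, identities, application context), but the structural point holds: the hard work is populating the rule set, not evaluating it.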

After going through the discovery process, you are able to understand the intent of the flows and can place boundaries between the different zones and segments. This is a balancing act between how much control you want to achieve and how secure you want to be. With many tiny islands of connectivity or micro-segments, you have to think about how much time you want to invest in setting that up and managing it over time. Discovering intent is a way to make this simple because it helps you decide where to logically put these segments.

4. Monitoring

Once the microsegments and policies are deployed, it’s essential to monitor everything. This is where visibility comes into its own. The only way to know if there is a problem is by monitoring traffic across the entire infrastructure, all the time.

There are two important facets of monitoring. Firstly, you need continuous compliance. You don’t want to be in a situation where you only check you are compliant when the auditors drop in. This means that you need to be monitoring configurations and traffic all the time, and when the auditor does come, you can just show them the latest report.
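Continuous compliance can be pictured as an ongoing diff between the deployed filter rules and the approved baseline. A minimal sketch (the rule tuples are hypothetical):

```python
# Rules that went through the approval process.
approved_baseline = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "payroll-db", 5432),
}

def compliance_report(deployed_rules: set) -> dict:
    """Flag drift between deployed filter rules and the approved baseline."""
    unapproved = deployed_rules - approved_baseline   # rules nobody signed off
    missing = approved_baseline - deployed_rules      # approved but not deployed
    return {
        "unapproved": unapproved,
        "missing": missing,
        "compliant": not unapproved and not missing,
    }
```

Run continuously, a check like this means the "latest report" for the auditor is always one query away, rather than a scramble when they arrive.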

Secondly, organizations have to make the distinction between the learning phase of monitoring, and the enforcement stage. In the discovery’s learning phase, you are monitoring the network to learn all the flows that are there and to annotate these with their intent. This allows you to see what flows are necessary before writing the policy rules. There comes a point, however, where you have to stop learning and decide that any flow that you haven’t seen is an anomaly that you will block by default. This is where you make the big switch from a default ‘allow’ policy to a default ‘deny’: the organizational ‘D-Day.’

At this stage, you can switch to monitoring for enforcing purposes. From then on, any developer who wants to allow another flow through the data center will have to file a change request and get permission to have that connectivity allowed.
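The two phases and the switch between them can be sketched as a toy state machine (the class and method names are mine, for illustration only):

```python
class FlowMonitor:
    """Toy model of the learning-then-enforcing monitor described above."""

    def __init__(self):
        self.known_flows = set()
        self.enforcing = False  # False = learning phase

    def observe(self, flow) -> str:
        if not self.enforcing:
            self.known_flows.add(flow)  # learning: record every flow seen
            return "allowed"
        # Enforcement (post D-Day): any unseen flow is an anomaly.
        return "allowed" if flow in self.known_flows else "blocked"

    def switch_to_enforcement(self):
        """The organizational 'D-Day': flip from default-allow to default-deny."""
        self.enforcing = True

monitor = FlowMonitor()
monitor.observe(("app-tier", "payroll-db", 5432))  # learned during discovery
monitor.switch_to_enforcement()
```

After `switch_to_enforcement()`, the same flow is still allowed, but anything not seen during learning is blocked; in practice the "block" would first be a change-request workflow rather than a silent drop.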

5. Automate and orchestrate

Finally, the only way you will ever get to D-Day is with the help of a policy engine, the central ‘brain’ behind your whole network policy. Without this, you have to do everything manually across the entire infrastructure every time there is a need for a change.

Your policy engine, enabled by automation and orchestration, is able to compare any change request against what you have defined as your legitimate business connectivity requirements. If the additional connectivity being requested is in line with what is defined as acceptable use, then it should move ahead with Zero-Touch, in a fully automated manner, with the necessary filter updates deployed in minutes. Only requests that fall outside the guidelines of acceptable use need to be reviewed and approved by human experts.
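At its core, this decision is a comparison of the request against the defined business connectivity. A minimal sketch (the connectivity table and function are hypothetical stand-ins for a real policy engine):

```python
# Connectivity the business has defined as legitimate:
# (source, destination, port) -> business justification.
BUSINESS_CONNECTIVITY = {
    ("app-tier", "payroll-db", 5432): "HR application database access",
    ("web-tier", "app-tier", 8443): "HR portal front end",
}

def process_change_request(src: str, dst: str, port: int) -> str:
    """Zero-Touch path: auto-approve requests that match defined business
    connectivity; escalate everything else to human review."""
    if (src, dst, port) in BUSINESS_CONNECTIVITY:
        return "auto-approved"      # filters can be updated automatically
    return "needs-human-review"     # outside acceptable use: expert review
```

The payoff is in the ratio: the more requests that land in the auto-approved path, the fewer consume expert time, which is what shrinks change turnaround from weeks to hours.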

Once approved (automatically or after review), a change needs to be deployed. If you have to deploy a change to potentially hundreds of different enforcement points, using all kinds of different technologies, each with their own intricacies and configurations, this change request process is almost impossible to do without an intelligent automation system.

Focus on Business Outcomes, Rather than Security Outcomes

Removing the complexity of security enables real business outcomes since processes become faster and more flexible without compromising security or compliance. Right now in many organizations, even with the limited segmentation that they have in place already, pushing through a change post ‘D-Day’ is very slow – sometimes taking weeks to get through the approval stage on the security side because there is a lot of manual work involved. Micro-segmentation can make this even more complex.

However, using the steps I’ve outlined here to automate Zero Trust practices means that the end-to-end time from making a change request to deployment and enforcement goes down to one day, or even a few hours – without introducing risk.  Put simply, automation means organizations spend less time and budget on dealing with managing their security infrastructure, and more on enabling the business.  That’s a true win-win situation.

About the Author

Avishai Wool is the Co-founder and CTO at AlgoSec. He has served on the program committees of leading IEEE and ACM conferences on computer and network security. Wool has published more than 110 research papers and holds 13 U.S. patents, with more pending. He is also a professor in the School of Electrical Engineering at Tel Aviv University and deputy director of the Interdisciplinary Cyber Research Center at TAU. He is the creator of “Unlocking Information Security”, a successful massive open online course (MOOC). When he’s not busy evangelizing AlgoSec’s solutions, Wool enjoys tinkering with all sorts of computer and network security technologies, most recently focusing on in-vehicle communication networks, industrial control systems, and side-channel cryptanalysis.


Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.
