Micro-Segmentation: Overcoming Barriers to Deployment

It’s official: micro-segmentation has become “a thing.” Enterprise security experts are declaring that 2018 is the year they are going to do it and do it right. The irreversible movement of critical workloads into virtualized, hybrid cloud environments demands it. Audits and industry compliance requirements make it imperative. News stories of continued data center breaches, in which attackers have caused severe brand and monetary damage, validate it. Customers tell us (and independent studies confirm) that east-west data center traffic now accounts for most enterprise traffic — as much as 77% by some estimates.

The Micro-Segmentation Dilemma
Given this urgency and awareness, why does micro-segmentation remain a daunting challenge? In conversations with customers at dozens of organizations that have tried to implement it, we’ve uncovered some of the more common obstacles.

Lack of visibility: Without deep visibility into east-west data center traffic, any effort to implement micro-segmentation is thwarted. Security professionals must rely on long analysis meetings, traffic collection, and manual mapping processes. Too many efforts lack process-level visibility and critical contextual orchestration data. The ability to map out application workflows at a very granular level is necessary to identify logical groupings of applications for segmentation purposes.

All-or-nothing segmentation paralysis: Too often, executives think they need to micro-segment everything decisively, which leads to fears of disruption. The project looks too intimidating, so they never begin. They fail to understand that micro-segmentation must be done gradually, in phases.

Lack of multi-cloud convergence: The hybrid cloud data center adds agility through autoscaling and mobility of workloads. However, it is built on a heterogeneous architectural base. Each cloud vendor may offer point solutions and security group methodologies that focus on its own architecture, which can result in unnecessary complexity. Successful micro-segmentation requires a solution that works in a converged fashion across the architecture. A converged approach can be implemented more quickly and easily than one that must account for different cloud providers’ security technologies.

Inflexible policy engines: Point solutions often have poorly thought-out policy engines. Most offer only “allow” rule sets with no deny baseline. Most security professionals would prefer to start from a global default-deny posture (a whitelist model), in which all communication across the environment is blocked unless explicitly permitted. This lets enterprises demonstrate a security posture directly correlated with the compliance standards they must adhere to, such as HIPAA. Moreover, point solutions usually don’t allow policies to be dynamically provisioned or updated when workloads auto-scale, services expand or contract, or processes spin up or down, even though that elasticity is a key reason enterprises are moving to hybrid cloud data centers. Without this capability, micro-segmentation is virtually impossible.
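The whitelist model described above can be illustrated with a minimal sketch (the rule fields and labels are hypothetical, not any vendor’s actual policy engine): every flow is denied by default, and traffic passes only when an explicit allow rule matches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AllowRule:
    # Hypothetical label-based fields: rules reference workload labels,
    # not IP addresses, so they survive autoscaling and workload mobility.
    src_label: str
    dst_label: str
    port: int

def is_allowed(src, dst, port, allow_rules):
    """Default-deny: traffic passes only if an explicit allow rule matches."""
    return any(
        r.src_label == src and r.dst_label == dst and r.port == port
        for r in allow_rules
    )

rules = [AllowRule("web", "app", 8443), AllowRule("app", "db", 5432)]
print(is_allowed("web", "app", 8443, rules))  # True: explicitly whitelisted
print(is_allowed("web", "db", 5432, rules))   # False: denied by default
```

Because the rules are expressed in terms of labels rather than addresses, a newly auto-scaled “web” instance inherits the same policy without any rule changes.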

Given these obstacles, it’s understandable why most micro-segmentation projects suffer from lengthy implementation cycles, cost overruns, and excessive demands on scarce security resources, and fail to achieve their goals. So, how can you increase your chances of success?

Winning Strategies for Successful Micro-Segmentation
It starts with discovering your applications and visually mapping their relationships. Visibility has long been an odyssey for security professionals, and the saying “You can’t protect what you can’t see” has never been truer.

Application dependency mapping: Strong micro-segmentation cannot be implemented unless IT Operations and Security have clear visibility into how their applications are communicating, so that they can quickly determine what should be communicating. This requires going beyond traditional network visibility to understand how the applications actually work. The current parlance for this capability is application dependency mapping. It cannot be produced, however, if you only see what happens on the network: the applications and the hosts they run on must be included in a live, continuous map. You need a real-time understanding of both vectors to build a cybersecurity approach that protects your data center.
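As a rough sketch of the idea (the application names, flow record format, and helper function are illustrative assumptions), a dependency map can be built by aggregating process-level flow observations into a graph of which applications talk to which:

```python
from collections import defaultdict

# Hypothetical flow records: (source app, source process, dest app, dest port)
observed_flows = [
    ("ordering-web", "nginx", "ordering-api", 8443),
    ("ordering-api", "java",  "orders-db",    5432),
    ("ordering-api", "java",  "orders-db",    5432),  # repeated flows collapse
]

def build_dependency_map(flows):
    """Aggregate observed flows into an app -> {(downstream app, port)} graph."""
    deps = defaultdict(set)
    for src_app, _process, dst_app, port in flows:
        deps[src_app].add((dst_app, port))
    return dict(deps)

deps = build_dependency_map(observed_flows)
print(deps["ordering-api"])  # {('orders-db', 5432)}
```

Including the process name alongside the network flow is what lifts this from a network map to an application dependency map: the same port pair looks very different when it is served by an expected process versus an unexpected one.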

With proper visibility into your entire environment, including network flows, assets, and orchestration details from various platforms and workloads, you can more easily identify critical assets that can logically be grouped via labels for use in policy creation. Rather than the static Visio diagrams much of the networking world grew up with, we are moving into a metadata-driven world. Powerful and accurate micro-segmentation requires the ability to ingest metadata from CMDBs and spreadsheets, or to derive it from real-time observation of traffic flows. Moreover, the system should suggest micro-segmentation rules based on observed behavior. Application visibility accelerates your ability to identify and label workflows, and to implement micro-segmentation at a deeper, more effective level of protection.
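The rule-suggestion step can be sketched in a few lines (the host names, label source, and flow format here are assumptions for illustration): host-to-host observations are collapsed, via a label mapping pulled from a CMDB or orchestration metadata, into a much smaller set of label-to-label candidate rules.

```python
def suggest_rules(flows, labels):
    """Collapse host-to-host flows into label-to-label allow-rule suggestions.

    flows:  iterable of (src_host, dst_host, port) tuples (assumed format)
    labels: host -> label mapping, e.g. ingested from a CMDB or spreadsheet
    """
    suggestions = set()
    for src, dst, port in flows:
        if src in labels and dst in labels:
            suggestions.add((labels[src], labels[dst], port))
    return sorted(suggestions)

labels = {"web-01": "web", "web-02": "web", "app-01": "app"}
flows = [("web-01", "app-01", 8443), ("web-02", "app-01", 8443)]

# Two host-level observations collapse into one label-level suggestion
print(suggest_rules(flows, labels))  # [('web', 'app', 8443)]
```

A security team would then review these suggestions and promote the legitimate ones into the allow list, rather than hand-writing rules per host.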

Hybrid multi-cloud coverage: Converged micro-segmentation strategies that work seamlessly across your entire heterogeneous environment, from on-premises infrastructure to the cloud, will simplify and accelerate the rollout. When a policy can truly follow the workload, regardless of the underlying platform, it becomes easier to implement and manage, and delivers more effective protection.

Autoscaling is one of the defining features of the new hybrid cloud terrain. A micro-segmentation solution needs the inherent intelligence to recognize workloads as they dynamically appear and disappear, and to apply policies to them automatically.

Protect your crown jewels first: Finally, take a gradual, phased approach. Start with the applications that must be secured for compliance. Which assets are the most likely targets of attackers? Which are most vulnerable to compute hijacking? Create policies around those groups first. Over time, you can build out increasingly refined policies, down to the micro-service and process-to-process level.

Done right, micro-segmentation is very achievable. The successful implementation of any micro-segmentation policy requires deep visibility down to the process level to identify applications, recognize relationships between them, and understand both the network and application flows.

The ONUG S-DSS working group focuses on how to secure applications in the modern software-defined world and micro-segmentation is sure to be a topic of discussion amongst the community and solution vendors at the upcoming ONUG Fall Conference in NYC. Join the conversation. We hope to see you there.

Author's Bio

Faraz Aladin

Product Marketing Director at Illumio

Faraz Aladin is a Technical Product Marketing leader with over 25 years of industry experience. He spent a major portion of his professional career at Cisco across various business units and roles before joining vArmour as a Product Management and Marketing leader. At Illumio, Faraz is responsible for product marketing as well as technical evangelism. Having worked with customers across many segments and industry verticals, he is well versed in how customers successfully leverage technology to drive business outcomes. His subject matter expertise spans networking and cloud infrastructure, data center architectures, security, and collaboration technologies. He is a CCIE and holds an engineering degree from Bombay University.