by Marc Woolward
Having spent the best part of three decades architecting and operating enterprise infrastructures, including 16 years at Goldman Sachs, it’s clear to me that the security model within the data center represents the last major barrier to our transition to ‘software-defined’ cloud computing architectures. Existing security ‘art’ fails to serve the challenges of the modern software-defined data center in two major ways: it is functionally incompatible with cloud principles, and it fails to provide the required security capabilities.
Today’s security architectures consist of hierarchies of physical infrastructure (or more recently virtualized devices based upon legacy principles) which partition the data center through ‘defence in depth’ principles. Incompatible functions (firewalls, IPS (intrusion prevention system), proxies, content filters, and honeypots among others) form complex systems of control, generally at the perimeter or DMZ (demilitarized zone), while the innermost layers of the data center provide free and open access with little control and even less visibility. Unfortunately, this innermost layer is where the assets of most value reside, and where attackers are increasingly gaining access.
Functionally, the static and complex perimeter-based security model prevents the realization of the benefits of cloud principles in the following ways:
- It is nearly impossible to orchestrate and automate the complex system of incompatible controls we have assembled at the network perimeter. There is just too much ‘stuff’ to program, most of which was not designed with automation and self-service in mind. This prevents the scaling and reconfiguration of cloud applications outside the data center.
- The physical zones constructed to implement separation have resulted in fragmentation of data center resources. Rather than drawing computing and storage resources from a large common pool, it has become necessary to treat each ‘zone’ of trust (or even business function) as a separate resource pool.
- The security products within the perimeter typically scale vertically based upon expensive and proprietary hardware platforms. These systems are often initially over-provisioned to delay the day a disruptive and expensive forklift upgrade is required. This contrasts with today’s software architectures, which are designed to scale horizontally based upon business need and distributed systems principles.
As we have seen from security incidents over the past few years, today’s security architectures are even less successful at safeguarding the basic security of our data and services. Following are some of the reasons:
- Our security controls are predominantly placed at the physical ‘edge’ of the data center, but it is no longer easy to define where the organization’s logical boundary resides. In a world where IT services are increasingly consumed from the cloud – whether through hybrid cloud solutions or the integration of public cloud services into the IT flow – and where the workforce is mobile, the historical belief that the edge of the data center equated to the edge of trust no longer holds.
- The threat actors are now highly sophisticated and organized, becoming particularly adept at penetrating traditional perimeter controls. It is also no longer a given that would-be attackers begin their campaigns outside the perimeter, whether through deliberate insider access or inadvertent exploitation via social engineering. Measured against today’s threat models, the controls are in the wrong place.
- Many years spent pursuing the latest security threats by layering incremental controls within the perimeter have resulted in systems of exponential complexity. They have become difficult to engineer and test, leaving myriad avenues for exploitation. As we learnt in Security Engineering 101, but seem to have forgotten in many implementations, “complexity is the worst enemy of security”.
- Finally, and quite simply, today’s controls are not aligned to protect our most important assets – the data and services provided within the core of our data centers. Today’s data centers provide very little control over access to these services, generate far less security information and analytics, and make it very easy for attackers who have compromised the perimeter to pursue their campaign undetected and often unencumbered by security measures. We need to move the security controls from the old ‘thick red line’ at the perimeter to flexible and dynamic ‘application and data-centric’ security.
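To make the ‘application and data-centric’ idea concrete, here is a minimal sketch of what a default-deny, application-level policy might look like. The workload labels, ports, and rules are purely illustrative assumptions, not any real product’s policy model:

```python
# Hypothetical sketch of application-centric, default-deny policy.
# Tier names, ports, and rules are illustrative assumptions only.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),  # web servers may call the app tier
    ("app-tier", "db-tier", 5432),   # only the app tier may query the database
}

def is_permitted(src_app: str, dst_app: str, dst_port: int) -> bool:
    """Default deny: a flow is allowed only if explicitly whitelisted."""
    return (src_app, dst_app, dst_port) in ALLOWED_FLOWS

# A compromised web server attempting to reach the database directly
# is denied, even though both workloads sit inside the same data center:
print(is_permitted("web-tier", "db-tier", 5432))  # False
print(is_permitted("app-tier", "db-tier", 5432))  # True
```

The point is the shape of the control, not the code: policy is expressed in terms of applications and data services rather than perimeter zones, so it can follow workloads as they move and scale.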
At ONUG Academy, we will get into the details of many of the important questions facing network architects in building secure SDDC (software-defined data center) environments, including:
- What should you consider when designing a security architecture?
- Where are we, and how did we get here?
- What architectural considerations are critical in designing a security framework?
- How do you build your policy? Declarative versus imperative, intent versus discovery-based, dynamic versus static, and the role of abstractions.
- How do you apply policy? Self-service versus firewall administrator, manual versus DevOps versus orchestration.
- What have we learnt about attacks on the data center and the anatomy of attacks?
- How do you use security logs to detect and remediate?
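On the last question, one simple detection pattern is to mine east-west flow logs for unusual fan-out: a host that suddenly contacts many distinct internal destinations may indicate lateral movement. The sketch below is a toy illustration with hypothetical host names and an arbitrary threshold, not a production detector:

```python
from collections import defaultdict

# Toy east-west flow records: (source_host, destination_host).
# Host names and the threshold are illustrative assumptions.
flows = [
    ("web-01", "app-01"), ("web-01", "app-02"),
    ("app-01", "db-01"),
    ("web-02", "app-01"), ("web-02", "db-01"),
    ("web-02", "hr-01"), ("web-02", "fin-01"),  # unusual fan-out
]

SCAN_THRESHOLD = 3  # distinct internal destinations before raising an alert

def fan_out_alerts(flows, threshold=SCAN_THRESHOLD):
    """Flag sources contacting an unusually broad set of internal hosts."""
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return {src: sorted(d) for src, d in dests.items() if len(d) >= threshold}

print(fan_out_alerts(flows))
# web-02 has reached four distinct internal hosts and is flagged
```

Real implementations would baseline normal behaviour per application rather than use a fixed threshold, but the underlying idea – that visibility inside the data center enables detection the perimeter cannot provide – is the same.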
Chief Technology Officer, vArmour
Marc has over 30 years of experience in mission-critical and software-defined networks. Marc joined vArmour as CTO, EMEA in December 2014. Prior to this role he was a Technology Fellow and CTO for Networking at Goldman Sachs, and the founder and leader of the Security Working Group within the Open Networking Foundation. Earlier in his career he managed infrastructure at Cantor Fitzgerald and Coutts & Co. Marc has over a decade of experience architecting and implementing data center software automation and private cloud architectures.