by Bruno Germain
At the root of John Boyd’s “OODA loop” methodology, there is the notion that we need to acknowledge and work with levels of “uncertainty”—gaps that result when applying established models to new and changing contexts[i]. Unfortunately, the networking community—desperate for operational stability—largely ignores these mismatches, designing network and security architectures as if they can dictate how applications are deployed.
Nothing could be further from the truth.
While this shift started more than a decade ago, container technology has turned the assumption on its head. Basic networking constructs, such as L2 connectivity deployed via overlay networks, are built directly into the container engine, giving developers complete control to deploy their applications without so much as a courtesy call to the networking team.
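As a concrete illustration of networking built into the container engine, Docker's overlay driver lets a developer stand up an L2-adjacent virtual network spanning hosts without touching the physical underlay. A minimal sketch (subnet, network, and service names are illustrative):

```shell
# Initialize swarm mode, which provides the control plane the
# overlay driver uses for VXLAN endpoint discovery
docker swarm init

# Create a VXLAN-backed overlay network; no change to the
# physical network is required
docker network create --driver overlay --subnet 10.0.9.0/24 app-net

# Attach a service to the overlay; its containers get L2
# reachability to one another across hosts
docker service create --name web --network app-net nginx
```

The point is not the specific commands but who runs them: the application team, not the network team.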
Cloud providers have made it easier to move workloads to public infrastructure, and more organizations are writing applications specifically for this environment. Furthermore, application “availability” is no longer tied to the infrastructure stack. The difficulty of moving these applications out of their “cloud jails”[ii] when new offerings or economics dictate proves once again that networking and security are mere afterthoughts, trailing ease of consumption and application development.
When applying traditional security models to clouds or containers (and, soon, serverless processes), it is quickly apparent that notions like perimeters, enforcement points, and correlation are very different from what has been done in the past. When you factor in location (as in “my data center” vs. “public provider”), the attack surface widens and diversifies.
Application development teams have outflanked the entire IT organization, exposing the shortcomings of static infrastructure and putting teams in a reactive position as the level of management complexity grows. These new applications must share data securely with legacy systems that will take years to transition to the cloud-native format.
How do we respond to dynamic, ever-changing network topologies and security needs? How can we provide consumable network services while maintaining a shared network infrastructure over which islands of infrastructure technologies can co-exist and share applications and data?
Blueprint for a network and security consumption layer
The solution lies in the introduction of an abstraction layer for network and security functions. This is what SDN and network virtualization are meant to deliver: a way to model and operate the underlying infrastructure in a traditional manner, in accordance with established engineering principles, while applications create and modify arbitrary topologies on top.
Network virtualization may begin in the data center, but it must extend its reach and federate uniformly across multiple infrastructure silos and sites, as well as any service provider’s cloud, while remaining decoupled from the underlying infrastructure. If a specific hardware component, hypervisor, or cloud provider is required, it will limit, or potentially eliminate, your network and application security options.
One solution, the Contrail vRouter, interfaces directly with VMs and replaces the bridge in Docker engines or the proxy in Kubernetes environments. It runs in your data center as well as in public clouds, making it easy to move workloads in and out of silos and cloud providers.
One might think that a centralized approach to network virtualization would help federate silos and clouds. However, centralization presents its own set of challenges when different management layers, each with its own unique network and security constructs, interface with a common controller. Current implementations of this approach have met with limited success in simple use cases, such as VMware deployments using native management tools like vRealize (vRA) or vCloud Director (vCD), or in newer OpenStack environments.
Contrail lets operators implement a fully distributed architecture, providing each infrastructure silo with its own independent Contrail environment. These Contrail environments share network and security information, allowing network services to be stitched across silos. A virtual network can be extended from a physical switch to a VM running under vRA and others under OpenStack, to a Docker container in the data center, to VMs or containers in the cloud of your choice—all with consistent services such as firewalling, NATing, routing, and load balancing.
One of the desired outcomes of network virtualization is that services and analytics be exposed not only to the application management layer, but also to higher-end network applications such as security orchestrators, correlation tools, and SD-WAN applications. The goal is to move away from element management towards policy management. Information such as topology, addressing, location, traffic type, rates, sessions, and name space are collected at the source and exposed to these applications, providing extraordinary visibility and context for making informed path selection and security decisions.
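The shift from element management to policy management is easiest to see in declarative, intent-based constructs. A Kubernetes NetworkPolicy, for example, states the intent (“only frontend pods may reach backend pods on port 8080”) once, and it is enforced wherever the workloads land, rather than being configured box by box. A hedged sketch, with illustrative label and policy names:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend               # applies to backend pods, wherever scheduled
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend          # intent: only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

The policy references workload labels, not IP addresses or switch ports, which is precisely the decoupling from the infrastructure that the abstraction layer is meant to provide.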
A new concept called Software-Defined Secure Networks leverages information provided by the network virtualization layer about user intent policies across all domains to create a distributed, service-aware blueprint that is decoupled from the infrastructure, delivering consistent services across silos.
Get out of (cloud) jail
Application development has imposed new models, management, and orchestration approaches on the networking industry, not to mention basic constructs that are dynamic and arbitrary. Furthermore, while it was believed that one day all infrastructure would be unified under a single management scheme, the sad truth is that silos of non-interoperable stacks are here to stay, ready to impose network requirements we cannot anticipate. Juniper Networks has accepted the challenge of building networks that help businesses scale to accommodate ever-evolving requirements, helping network and security teams regain control over their network and services while still giving developers the agility they need.
How? Find out more by visiting Juniper at our ONUG 2017 Spring booth!
[i] “Science, Strategy, and War” by Frans P.B. Osinga
[ii] Term coined by Avi Freedman and described in http://firstround.com/review/the-three-infrastructure-mistakes-your-company-must-not-make
Bruno Germain is a Technology Architect in the CTO office of the Financial Services division at Juniper Networks. As early as 2008, he worked on virtualized network architectures for data centers as part of the team working on the MiM/SPB standards, for which he shares patents covering the integration of virtual routers with this technology. He joined the Nicira team shortly after VMware’s acquisition, worked on the original micro-segmentation / Zero Trust security architectures, and presented their evolution at a number of conferences. He has been designing, implementing, or securing networking infrastructures for the last 31 years, holding positions with manufacturers, service providers, and financial institutions.