Container Networking – Easy Button for App Teams; Heartburn for Networking and Security Teams

Containers are a hot topic these days, allowing developers to be more agile and less dependent on infrastructure by making their applications portable and enabling them to extract more juice out of their compute. Many application teams are thinking of migrating their existing apps to containers, or at least dipping their toes in by building new apps as containers.

When a team starts looking at containers, the journey generally starts with a developer getting their hands dirty with Docker running on a single VM or a development laptop. The developer chooses a host OS, installs Docker, downloads some Dockerized images, and starts running basic containers. Pretty easy, right?  
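
To make that concrete, here is a minimal sketch of those first steps (the nginx image and the container name are just examples):

    # Download a Dockerized image and start a basic container
    docker pull nginx
    docker run -d --name web nginx
    docker ps    # the container shows up as running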

But for any useful deployment, containers will be running on multiple hosts. Next choice: the orchestration platform. Kubernetes seems to be the de facto standard. You also need a service registry, image registry, and other tools to deploy your containerized applications in production. But the focus of this blog post is on the network that connects all these things together.

Container networking connects the nodes, masters, and registries, and then the containers themselves. There are many choices to pick from; here are some common options.

Host Mode Networking: The simplest form of container networking. In this mode, containers use the IP of the host and listen on different ports. For example, 3 nginx containers on a host with IP 10.1.1.1 listen on ports 80, 81, and 82. External clients can connect to these containers at 10.1.1.1:80, 10.1.1.1:81, and 10.1.1.1:82. You can register these ports on your load balancer or service registry and the containers will start serving client requests.
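
A hedged sketch of host mode with Docker (the container names are illustrative, and nginx81.conf is a hypothetical config file containing “listen 81;” — nginx binds port 80 by default, so each extra instance needs its own listen port):

    # Host mode: the container shares the host's network stack
    docker run -d --network host --name web80 nginx    # binds 10.1.1.1:80
    # A second nginx with default settings would also try to bind :80 and fail,
    # so mount a config that makes it listen on :81 instead:
    docker run -d --network host --name web81 \
      -v $PWD/nginx81.conf:/etc/nginx/conf.d/default.conf:ro nginx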

If you have multiple containers listening on the same port (e.g., all nginx containers listening on port 80), host mode won’t work because only one container is allowed to bind to port 80. Host mode works for use cases where one container runs on one host (so each container can always bind its standard port, which keeps the application portable), or where you have an orchestration system that allocates and discovers dynamic ports without causing port conflicts.

Bridge + NAT: This option is popular because it’s the default Docker networking mode, so most people who are new to containers and networking choose it. In this mode, Docker creates an L2 bridge (docker0) on the host, assigns a subnet range to docker0, and the containers get IPs from that subnet. Multiple containers on the same host can listen on the same port because they have different IPs. If you want to expose these ports to hosts/containers outside this host, you have to NAT them to external ports on the host’s IPs. Anyone with networking experience will quickly understand that NAT introduces many complexities.
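
A quick sketch of the default bridge mode (the container names are illustrative; docker0 typically hands out IPs from 172.17.0.0/16):

    # Both containers get their own bridge IP, so both can bind :80
    docker run -d --name web1 nginx
    docker run -d --name web2 nginx
    docker inspect -f '{{.NetworkSettings.IPAddress}}' web1   # e.g. 172.17.0.2
    # Reaching a container from outside the host requires NAT (port publishing):
    docker run -d --name web3 -p 8080:80 nginx    # host:8080 -> container:80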

While simple Bridge + NAT works well for single-host dev environments, any decent-size production deployment needs multiple hosts and multi-host networking, along with some orchestration and routing.

There are a large number of multi-host networking options, and at this point everyone is struggling to figure out which one to choose and why. Most of these options can be categorized as either overlay-based or flat IP-based multi-host networking.

Overlay-Based Multi-Host Networking: Many container networking solutions use overlays, with popular options including Flannel (CoreOS), OpenShift (Red Hat), Contrail (Juniper), VSP (Nuage), Contiv (Cisco), and NSX (VMware).

These solutions create L2 overlays that connect the containers. Each container gets an IP from the overlay subnets and listens on ports on that IP. Complicated NAT and load-balancing solutions are used to expose containers to the outside world. For example, Kubernetes has at least three options for NAT and load balancing: 1) kube-proxy, the default in Kubernetes; 2) an ingress controller, which detects containers being launched/destroyed and updates a standard nginx or HAProxy load balancer; 3) iptables-based NAT and load balancing, where kube-proxy programs iptables to perform load balancing and NAT (a higher-performance option because NAT and load balancing are done in the kernel).
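
As a hedged example of the kube-proxy path, a NodePort Service makes kube-proxy NAT and load-balance traffic arriving on every node’s port 30080 to the matching pods (the names, labels, and ports here are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web        # pods labeled app=web receive the traffic
      ports:
      - port: 80        # cluster-internal Service port
        targetPort: 80  # container port
        nodePort: 30080 # exposed on every node's IP
    EOF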

Flat IP-Based Multi-Host Networking: Calico and Romana are alternatives to complicated overlay-based networking. Some organizations have built custom solutions for flat IP-based networking, assigning routable IPs to the containers and using IP address management (IPAM) and/or routing (most commonly BGP) to make containers reachable externally. NAT isn’t needed, and load balancing can be done by solutions such as F5 or DNS servers.
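
As a sketch of the BGP approach, Calico can peer each node with a top-of-rack router so that container IPs are advertised as ordinary routes (the peer IP and AS number below are made up for illustration):

    calicoctl apply -f - <<'EOF'
    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: rack1-tor
    spec:
      peerIP: 10.1.1.254   # top-of-rack router
      asNumber: 64512      # private AS used by the fabric
    EOF

Once the routes are advertised, external clients reach containers on their real IPs, with no NAT in the path.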

So, which option should you choose?

Many DevOps teams start with containers and end up making the “easy network choice”: overlay-based networking. Write the network name in the YAML file and everything magically works.
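
Here is what that “easy choice” looks like in practice, sketched with Docker Compose on a swarm (the service and network names are hypothetical):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: nginx
        networks: [appnet]
    networks:
      appnet:
        driver: overlay   # a multi-host overlay, created automatically
    EOF
    docker stack deploy -c docker-compose.yml myapp   # requires swarm mode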

However, once in production, containerized applications have to communicate with non-containerized infrastructure. Internetworking between containers, VMs, and bare metal is critical. This is where networking and security teams get involved and run into challenges:

  • Internetworking these workloads introduces the complexity of NAT and chokepoints (edge routers, etc.) and lacks end-to-end visibility.
  • Troubleshooting tunneled traffic is difficult, and the tools for it are scarce.
  • The overhead of tunneling causes performance issues, and the required MTU reduction can affect some applications (see the example after this list).
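
For example, VXLAN-based overlays add roughly 50 bytes of encapsulation headers, which is why overlay interfaces usually run with a reduced MTU (a quick check on a host running flannel’s VXLAN backend):

    ip link show flannel.1    # typically reports "mtu 1450" instead of 1500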

In contrast, flat IP-based networking is easier to deploy and troubleshoot, provides clear end-to-end visibility, works in mixed environments, and delivers predictable, higher performance.

I’ve seen more than a few customers move from overlays to flat IP-based networking as their container deployments matured. If you’re just starting your container journey, involve networking and security teams early to help design the right network.

The ONUG Container Working Group was formed so the ONUG Community, with decades of networking and security experience, can provide guidance on container networking. The goal of the working group is to publish a white paper on available container networking and security options and their operational implications. The working group will also document best practices from ONUG members who are well ahead on their container journey. We would love to see you join and help us out in this effort.

Author's Bio

Mukesh Gupta

Sr. Director of Product Management, Illumio