The Cloud vs. Platform Provider Arms Race Is in Full Swing: Top Five Things You Need to Know

There is an arms race raging between cloud and IT platform providers, and the war is about to take place in the large enterprise. As Amazon, Google, Microsoft and Oracle focus on the large enterprise market, they have traditional IT platform providers in their crosshairs; that is, HPE, Dell/EMC, Cisco, VMware, IBM, et al. Cloud providers have been hugely successful in the small to mid-sized enterprise market but have struggled in the large enterprise market, thanks to security, compliance and the overall complexity of a market where the list of requirements is long and the inertia to change is high. At present, we’re in the calm before the storm, with cloud and platform providers forming relationships or partnerships to address mostly hybrid cloud demand, but make no mistake about it: the cloud providers are after the platform vendors’ revenues.

Here are the top five things you need to know to prepare.

Software-Advantage Cloud Providers: The cloud providers have been busy hiring the best and brightest software engineers and commoditizing their hardware investments. They are all in on software building blocks, betting that software eats the world. At ONUG, we had thought early on that the separation of hardware from software in infrastructure would create a new industry structure made up of commodity hardware players and a new set of innovative software companies that were faster to deliver features and value. Well, we were right about the new industry structure, but it’s the cloud providers that seized this opportunity. Cloud providers are in a feature war with traditional enterprise platform vendors, and they are winning. In fact, most ONUG Community members believe that the cloud providers will solve the security issues that hamper most cloud workload deployments much faster than traditional players. In short, the cloud providers are moving faster because they are truly software-based, giving them a speed-to-market advantage.

Enterprise Initiatives-Advantage Cloud Providers: The cloud providers view the enterprise market in a fundamentally different way than traditional platform suppliers. For example, Google’s eSDN initiative seeks to make enterprise infrastructure one-tenth the cost and ten times easier and faster than today’s. The cloud providers have no installed base or margins to preserve, and with software as their preferred weapon, they can change the rules of the game in their favor.

Financials-Advantage Cloud Providers: According to IDC, cloud providers spent some $27B on infrastructure in 2017, and IDC projects a 14.2% CAGR that will balloon spending to $48.1B in 2020. That’s the size of Cisco’s revenue in 2017!

Installed Base-Advantage Platform Providers: In reality, this advantage is mixed, as Microsoft and Oracle understand the large enterprise market extremely well. But in the large enterprise, legacy applications simply do not go away; many still run workloads on mainframes. The large enterprise is slow to move, and this inertia may prove to be one of the best advantages of the IT platform suppliers in this war.

Skills and Training-Advantage Platform Providers: Many of the cloud providers are focused on container-based and serverless workloads. While this is great for new cloud-native applications, the biggest impediment to large enterprise adoption is a lack of skills and the correspondingly high cost of training. At ONUG, IT organization structure, culture and skills have been shown to be the largest barrier to entry for cloud adoption.

In addition, there have been many failed vendor attempts to offer a common stack or bridge between private and public cloud. The vendor community always seems to be in the process of “King making,” with mixed results that tend to waste tens of millions of dollars, if not more. Unfortunately, customers pay that cost, as it diverts attention and investment away from product and service development. We’ve seen this with OpenStack, ONF, OCP, Eucalyptus and now with Kubernetes/Docker and the Cloud Native Computing Foundation (CNCF).

But what the Kubernetes/Docker initiative has in its favor over past vendor initiatives is that most major cloud providers will offer the same flavor of Kubernetes. That is, if the cloud providers offer a common container ecosystem and IT leaders in large enterprises implement said container ecosystem in a thoughtful way, then IT departments should be able to run container workloads across clusters on multiple clouds. This would be a major new software building block for the large enterprise market and fundamentally change the way IT is delivered.
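To make that concrete, here is a minimal sketch, using the official Kubernetes Python client, of applying one and the same container workload to clusters running in two different clouds. It assumes a kubeconfig with one context per cluster; the context names (gke-prod, eks-prod), registry and image are hypothetical placeholders, not any particular provider’s setup.

```python
# Minimal sketch: apply the same Deployment to Kubernetes clusters in two
# different clouds. Requires `pip install kubernetes` and a kubeconfig with
# one context per cluster. Context names and image are hypothetical.
from kubernetes import client, config

def make_deployment(name: str, image: str, replicas: int = 2) -> client.V1Deployment:
    """Build a simple Deployment for a stateless web workload."""
    labels = {"app": name}
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels=labels),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels=labels),
        template=template)
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec)

deployment = make_deployment("web", "registry.example.com/web:1.0")

# One kubeconfig context per cloud; the identical manifest goes to each.
for ctx in ["gke-prod", "eks-prod"]:
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed 'web' to cluster context {ctx}")
```

The point of the sketch is that the manifest does not change from cloud to cloud; only the cluster context does, which is exactly the portability a common container ecosystem promises.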

In war strategy terms, this is the cloud providers’ flanking maneuver against the platform suppliers. There’s a three-year war chest build-up happening now. We’ll see how this plays out at ONUG, as all the major cloud and platform providers will be there showing their weapons.

IT Professionals: We Are on Our Own!

Gone are the days when large enterprise IT organizations could rely upon large, vertically integrated IT vendors to sell us full-stack solutions. Yes, many legacy application workloads rely upon proprietary stacks supplied by a single vendor and its ecosystem partners, but in today’s digital transformation era, IT organizations are increasingly becoming solution integrators. That is, IT organizations are stitching together solutions made up of home-grown software, open source software and commercial products/modules to get the desired outcome for their digital business. This approach is systemic, spanning security, monitoring, analytics, connectivity, hybrid cloud, multi-cloud, containers and more. When we embarked upon an open IT future, we didn’t factor in this integration work. So, the future is here, we are it, and we are on our own.

So, how do you survive when you’re on your own? When large enterprise applications are a product of a bygone era? When there is no large vendor to send an army of engineers to fix a disruption? When public cloud providers offer little to no visibility into a workload’s infrastructure dependency map, or customer support for that matter? When there are no frameworks or building blocks to guide solution integration?

Well, it’s not as bad as Bob Dylan put it when he sang “how does it feel, to be on your own, with no direction home.” There is great gain in solution integration in that it allows every enterprise to customize solutions to address its unique market. IT organizations that engineer and design unique solutions gain the gestalt effect, meaning that the whole is much greater than the sum of its parts. At ONUG, many IT business leaders share how they not only survive but thrive when doing IT on their own. Here are the top five “must dos”:

Software Building Blocks: There is a growing number of software building blocks that IT organizations should master. These include Blockchain, Cloud Technologies, Kubernetes/containers, SD-WAN, Microservices, 5G, Machine Learning, Artificial Intelligence, Analytics, Automation, Risk & Trust Monitoring and Quantum Computing, plus a whole host of open source code. These are all growing in importance as the next-generation building blocks for the digital enterprise. Knowing which building blocks are ready and well suited to your corporation’s initiatives, and gaining staff experience with them, are fundamental to success.

Framework Collaboration: One of the biggest missing pieces in IT solution integration is the lack of well-understood frameworks that guide how software building blocks are connected to drive successful outcomes. That is, many IT organizations develop their own frameworks in isolation. At ONUG, one of the best benefits is the collaboration between community members as they share their frameworks’ successes and challenges. This happens in the ONUG Working Groups, conference sessions, fireside chats, proof-of-concept demonstrations and hallway discussions. In fact, ONUG Community leaders are now requiring their operational staff to attend ONUG primarily for this reason: to hear about and learn software building block frameworks from others.

Operational Staff On-Boarding Is Key: Operational staff are fundamental to the success of any IT solution integration program, as they keep the lights on and are on the front lines when things go wrong. Enterprise architects, engineers and designers who don’t engage operational staff in solution integration projects, making them part of the solution, are doomed to fail. IT delivery used to happen in three distinct phases: 1) design, 2) build and 3) run. Design and build are now being collapsed into one phase and then handed off to operations to run, making operations’ involvement that much more critical to successful outcomes.

Address the Skill Shortage: The largest impediment to solution integration project deployment is the lack of skilled infrastructure engineering professionals. As Chris Drumgoole, ONUG Board member and CTO of GE, says, “Technology transformation is led by talent transformation.” There is a shortage of people with the infrastructure programming skills to facilitate their firms’ digital transformation strategies. Within the ONUG Community alone, there are tens of thousands of high-paying infrastructure developer jobs available, paying as much as $300,000 per year and up, making professionals who are trained to code infrastructure among the highest-priority, highest-paid people in the IT work pool. So, what’s the problem? Time and money. Training via traditional methods is prohibitively expensive for even the largest enterprise firms. Also, nearly one-third of IT staff have no interest in learning new skills, as they are toward the end of their careers, and not enough engineers are entering enterprise IT; they go to work for tech companies instead. A new model is needed, and it is being found in ONUG Academy.

Sharing Culture: Successful IT organizations are pushing the reset button on their culture and leading from the top down. Isolated, siloed organizational structures are no longer the preferred organizing model; multi-skilled, team-oriented groups are. Many IT organizations are instituting mentor-developer cultures; that is, a senior IT professional with significant accomplishments in solution integration takes on a mentorship role for newly hired software engineer recruits. Out are professionals who solely possess vendor-specific certifications that institutionalize silo thinking and culture. In are technical/engineering recruits with fresh views and thinking toward IT solution integration delivery.

IT’s new role is that of a solution integrator. This role will only accelerate, and for good reason. Every enterprise offers a unique value proposition and thus requires unique IT solutions. Solution integration delivers that digital transformation uniqueness and, in the process, puts IT in control of its future. That future is controlled not by large IT suppliers but by IT’s own ability to innovate and put the new software building blocks to work for its digital business benefit.

At ONUG, we believe that over the next few years IT organization skills, culture and process will change significantly. Here are a few points that peek into the future.

  • 95% of IT skills will be non-vendor-specific: coders who understand multiple APIs, open source code and standards
  • 95% of alarms will be mitigated by automation, templates, machine learning and artificial intelligence code
  • 90% of operations tasks will be consumed and automated by DevOps teams

One last point: many senior IT business leaders and executives view legacy applications and their proprietary dependency maps the way the banking industry came to view subprime mortgages: as toxic assets. That is, assets to be packaged up and sold off in an effort to free up much-needed cash to go all in on solution integration.

We are on our own, and that means there are transition pains, but it also means we’re in control of our future.

Can Kubernetes Avoid Being OpenStack 2.0?

Kubernetes and its various commercial manifestations are getting much attention from the vendor community these days, much like OpenStack did in 2014-2016. Kubernetes is an open-source system for automating the deployment, scaling and management of containerized applications. What’s important about Kubernetes is that all the major cloud providers, such as Google Cloud Platform, Microsoft Azure and Amazon Web Services, are adopting it, and there are many enterprise-focused Kubernetes distributions from companies such as Rancher Labs, Docker, CoreOS and Mirantis, along with Red Hat’s OpenShift, IBM’s Cloud Container Service, Pivotal’s PKS, Mesosphere’s DC/OS, etc.

In short, Kubernetes is looking like one of the key building blocks for containerized applications that span multiple clouds, be those clouds public or private. This is all good news for IT business leaders who seek options and choice as they progress on their cloud journeys. But in the end, will Kubernetes be widely adopted, or will it be relegated to a relatively small number of applications, much as OpenStack has been?

Back in 2014-2016, there was great hope in the vendor community that OpenStack might be the open-source building block that would connect public and private cloud-based applications. There was also a group of vendors hoping OpenStack would dislodge VMware as the preferred enterprise virtualized infrastructure provider. At ONUG, we knew that OpenStack would not achieve these ambitious goals, for two reasons:

1) Lack of Interoperability: There were more distributions of OpenStack than of the Android OS. All of these distributions were different and not interoperable, so IT departments had to choose which distribution they would deploy and be locked into, negating the advantages of an open-source block of code.

2) Lack of Application Suitability: As IT architects and designers dove into OpenStack from a deployment perspective, they quickly realized that only a small number of applications would benefit. Further, training and certifications would be required for DevOps and operational staff, adding to its cost and complexity. In the end, too much pain and very little gain.

Once these two OpenStack attributes were understood by the vendor community, many started to use the phrase “we got OpenStackED,” meaning they spent tens of millions of dollars supporting a block of software that wasn’t going to get deployed en masse.

Kubernetes and its commercial implementations have similar attributes. While all the major cloud providers support Kubernetes, their runtime implementations are different, confounding interoperability across cloud providers and commercial Kubernetes implementations. The main question for Kubernetes is: how big is the application space, or addressable market, that it supports? Yes, native microservices applications and refactored legacy applications make up this market, but many struggle with the cost/benefit of refactoring legacy applications. If Kubernetes supports a large addressable application market, then enterprises will choose which implementations of Kubernetes they want, allowing the market to select the winners and losers, which will define de facto interoperability.

Moving in Kubernetes’s favor is that the large enterprise market is entering the digital economy. It has to: IT-delivered productivity has plateaued at automating real-world, physical workflows. New productivity will come from tasks automated by machine learning and artificial intelligence, plus new digital business applications found at the intersection of the real and virtual worlds. But how do Global 2000 firms practically enter the digital economy in a significant way when key business decisions and applications are based upon and delivered by 20-year-old IT? If Kubernetes can deliver on its promise of agility, choice and options for multi-cloud application deployments, then it will be a key building block for the digital economy. At ONUG Spring in San Francisco, hosted by Kaiser Permanente on May 8th and 9th, IT executives will share their views on whether Kubernetes is OpenStack 2.0 or the real deal.

Driven by ONUG and the IT Community, the Future of SD-WAN Depends on Responsible Service Provider Delivery

by Kevin O’Toole

Software-defined wide area networking (SD-WAN) has come a long way since ONUG ignited the conversation about the market and practical use cases for this promising new technology just four years ago. Eighty-seven percent of 800 network management executives use or plan to use SD-WAN within the next two years, according to a March 2017 IDC survey of mid-size and large companies with at least 10 locations and representing a variety of industries. SD-WAN sales are predicted to grow at a 69 percent compound annual growth rate, reaching $8.05 billion in 2021, notes IDC’s Worldwide SD-WAN Forecast for 2017–2021. Continue reading

How AI will transform your Wi-Fi

by Bob Friday

I’ve always had a lot of respect for veterinarians, because they are masters at solving problems based purely on fuzzy symptoms that their patients cannot explain: where it hurts, how long it’s been hurting, and what events led up to the problem. Many times the patients don’t even know they are sick. Yet a vet is able to make educated guesses with the data they do have, which often results in successful diagnoses and treatments. Continue reading

How do we Define What is and is Not Actionable Intelligence in Cybersecurity Defense?

by Smit Kadakia

In the study of Machine Learning, the focus is on supervised and unsupervised learning. (We will not be considering deep learning in this article.) Supervised learning, and many aspects of unsupervised learning, require known anomalies to be available to learn from; the trained models then predict anomalies in test data and are fine-tuned through techniques such as cross-validation. In cybersecurity, one is usually looking for an anomaly in the midst of a huge amount of normal traffic or behavior. Such a characteristic makes anomaly detection a very difficult problem, like finding a needle in a haystack. Furthermore, it is unrealistic to expect that training with anomalous data points from one industry, say eCommerce, is applicable to another, such as a healthcare data center. Additionally, modern attacks are more sophisticated, hiding themselves among many false attacks to defeat threat detection systems. Such complexity makes identifying anomalous training data points for all target industries a huge uphill battle. The lack of, or difficulty in obtaining, such training data points makes unsupervised learning a necessity in the world of cyber defense. Continue reading
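As a concrete illustration of why unsupervised methods fit this constraint, here is a minimal sketch using scikit-learn’s IsolationForest, which models what normal traffic looks like without any labeled attacks. The traffic features, synthetic data and contamination rate are hypothetical, for illustration only; this is not a production detector.

```python
# Minimal sketch of unsupervised anomaly detection on network-traffic
# features, with no labeled attacks required. Feature choices, data and
# contamination rate are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Mostly "normal" flows: [bytes_sent, packets, distinct_dest_ports]
normal = rng.normal(loc=[5_000.0, 40.0, 3.0],
                    scale=[1_000.0, 8.0, 1.0],
                    size=(10_000, 3))

# A few unlabeled oddities, e.g. a port scan touching many ports.
odd = np.array([[4_800.0, 35.0, 180.0],
                [90_000.0, 900.0, 2.0],
                [5_200.0, 42.0, 250.0]])
traffic = np.vstack([normal, odd])

# contamination is an assumed share of anomalies, not a learned value.
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(traffic)

labels = model.predict(traffic)  # -1 = flagged anomalous, 1 = normal
print(f"{(labels == -1).sum()} of {len(traffic)} flows flagged as anomalous")
```

Because the model learns a boundary around the bulk of the data rather than matching known attack signatures, it needs no anomalous training labels, which is exactly the constraint described above.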

SD-WAN Turns the Traditional Network and Its Legacy “Rules” on Its Head

by Tim Van Herck

This scenario should sound familiar to you: You’ve been running IT organizations for what seems like forever. You work tirelessly to make sure the network is working right, that everyone can access it at all times, and that network traffic goes where it needs to when it needs to be there. Yet the fact that you can’t control every nuance can frustrate you and impede your service level objectives. There’s a certain lack of control when it comes to managing the WAN links that are responsible for connecting your workforce to mission-critical cloud applications and data, and you wish that more of that control were in your own hands. Continue reading

Evaluating Architectural Approaches to Micro-Segmentation

by Mukesh Gupta

The concept of segmentation has existed ever since we started connecting data centers to each other. In the early days, firewalls controlled what was able to get in from the outside. Perimeter firewalls are still a critical part of protecting the data center, and that will never go away, despite the dissolving perimeter. As networks became more complex, we saw the concept of segmentation move inside, with VLANs creating right-sized broadcast zones to ensure network performance. For more granular control, we’ve seen ACLs used to control what can communicate across networks. As application traffic and communication increase behind the perimeter, we’ve even seen the concept of the firewall move deeper into the data center to try to provide more granular control of East-West communications. Continue reading
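To make the idea of granular East-West control concrete, here is a toy sketch of label-based micro-segmentation, in which policy follows workload labels rather than network location. The roles, rules and workloads are hypothetical illustrations of the concept, not any vendor’s implementation.

```python
# Toy sketch of label-based micro-segmentation: permit East-West flows
# only when an explicit rule allows them (default deny). All labels and
# rules here are hypothetical illustrations of the concept.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    role: str   # e.g. "web", "app", "db"
    env: str    # e.g. "prod", "dev"

# Allow-list rules: (source role, destination role, destination port).
RULES = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    """Default deny: permit only matching role pairs within one environment."""
    return src.env == dst.env and (src.role, dst.role, port) in RULES

web = Workload("web-1", "web", "prod")
db = Workload("db-1", "db", "prod")

print(is_allowed(web, db, 5432))  # False: web may not reach the db directly
```

The design point is default deny: a flow is dropped unless an explicit rule allows that role-to-role conversation, which is what distinguishes micro-segmentation from coarse VLAN or perimeter controls.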

Dynamically Securing Applications in a Multi-Cloud World

by Dilip Sundarraj

Security threats continue to increase exponentially in volume and in risk. According to a recent CBR article, cybercrime is expected to cost the world more than $2 trillion by 2019. Developers are creating applications more frequently and many are migrating them between different clouds for business agility. The greater volume and dynamic nature of applications make businesses more vulnerable. In fact, Microsoft predicts that we will be writing 111 billion lines of new code every year that will generate 50 times more data volume by 2020. This should give you an idea of the increased threat surface in a multi-cloud world.   Continue reading

The Multi-Cloud World is Here to Stay: Be Sure You Are Ready…

by Calvin Rowland

The whole tech industry is abuzz with talk of multi-cloud environments. Survey after survey shows definitively that the race is on to a multi-cloud world. In fact, according to IDC, 30%[1] or more of organizations have already migrated or have plans to migrate literally every workload to the cloud. Further, 85%[2] of large businesses will be committed by 2018 to multi-cloud strategies as IT continues to transform. Continue reading