Learn firsthand about OpenStack, its challenges and opportunities, market adoption and Piston Cloud Computing’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!
Rafael: How do you envision OpenStack over the next 2 to 3 years?
Josh: OpenStack is the future of the datacenter. It’s the right metaphor for managing IT resources for the next 10 to 20 years. OpenStack’s vision should really be … and in fact is … to do a great job of abstracting and managing IT resources in compute, networking and storage, and then supporting an ecosystem of additional components like Database as a Service (DaaS) or Load Balancer as a Service (LBaaS) that can be part of OpenStack but not part of OpenStack core … in the same sense that Linux is the kernel and a Linux distribution is the kernel plus a set of userspace packages.
#1 Takeaway: OpenStack is moving forward at warp speed, significantly expanding its as-a-Service capabilities
Rafael: What are the key accomplishments in OpenStack and what still needs to be worked on?
Josh: That’s an interesting question. When we launched OpenStack I had a five-year roadmap on the NASA side of what we wanted to build next. We have gone through a lot of that roadmap in just two years. Quantum was definitely a huge milestone. Nova network started out as a very primitive approach … barely good enough virtual networking. What we have now is an API-defined ecosystem of SDN solutions. We really have every flavor of software-defined networking from Nicira, Cisco, PLUMgrid, Big Switch, Midokura etc. Everyone has got a Quantum plug-in that delivers a different kind of SDN solution. On the networking side, Quantum has made OpenStack state-of-the-art in comparison to other infrastructure offerings.
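To illustrate the plug-in model Josh describes: in the Quantum releases of that era, selecting an SDN backend came down to a single configuration line. The snippet below is a hedged sketch; the Open vSwitch plugin is just one example, and the exact class paths vary by release and vendor.

```ini
; /etc/quantum/quantum.conf (Folsom-era sketch; plugin paths vary by release)
[DEFAULT]
; Each SDN vendor (Nicira, Cisco, Big Switch, Midokura, ...) ships its own
; plugin class; swapping backends means pointing core_plugin elsewhere.
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
```

The same Quantum API is served regardless of which plugin is configured, which is what lets every vendor deliver "a different kind of SDN solution" behind one interface.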
With Cinder in the last release we now have an abstraction of Block Storage as a Service (BSaaS) that supports not just distributed storage solutions like Ceph or Gluster, but existing hardware such as EMC, NetApp or HP. Cinder is actually defining a totally new way of thinking about storage abstraction.
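As a sketch of what that abstraction looks like in practice, Cinder’s multi-backend support lets a single deployment expose several storage drivers side by side, with the scheduler placing volumes by backend name. The snippet below is illustrative only; driver module paths differ across releases and vendors.

```ini
; /etc/cinder/cinder.conf (illustrative sketch; driver paths vary by release)
[DEFAULT]
; Enable two backends under one Cinder API: Ceph RBD and a NetApp array.
enabled_backends = ceph, netapp

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph

[netapp]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp
```

A user requesting a volume never sees the driver; a volume type mapped to `volume_backend_name` decides whether it lands on distributed software storage or existing array hardware.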
The last thing that I am really proud of is the work that a number of teams have done around bare metal provisioning. It’s really powerful because it gets away from this idea that cloud is about virtualization and hypervisors. Cloud is really about resource pools and the abstraction and self-service delivery of those resource pools. To the user it doesn’t matter whether there is a hypervisor or not. What they want is a run-time environment for their processes and applications.
Those are the milestones that put OpenStack beyond just best-in-breed, but in a class by itself on the technology side.
We have probably the most dynamic, engaged and diverse community ever. Linux had a perfect recipe of academic partners, enterprise partners and non-profits that really moved the project forward. OpenStack has that same mix of academic users such as CERN and NeCTAR in Australia, commercial users like eBay, Sony and PayPal, and a number of non-profits like Wikimedia, as well as big entities like IBM, HP and Intel.
#2 Takeaway: The ecosystem around OpenStack can be bucketed into distros vs. products, mature vs. immature products (with and without lifecycle in mind) and public vs. private cloud offerings
Rafael: Josh, can you help me understand the distribution ecosystem around OpenStack? How would you bucket the solutions people can find in the marketplace?
Josh: I would split them on two axes.
The first axis would be distribution versus product, and that’s a tricky one because the products have also been called distributions. We just recently started making this distinction. Traditionally a distribution is a collection of packages, while the expectation with a product is that it’s somewhat turn-key. What we sell in (Piston) Enterprise OpenStack is definitely a product. The Ubuntu Cloud is much more of a traditional distribution in the sense that it is a collection of packages. Even Canonical doesn’t claim you can just download and install it. They sell professional services and a package called Jumpstart, a week-long process intended to get you up and running.
Mirantis is doing the same thing. They sell professional services on top of that Ubuntu package. I think Red Hat and SUSE are really following the Ubuntu model. They make all the OpenStack packages available, but they haven’t really made it opinionated.
On the opinionated side - the other axis - there are folks who are building software for service providers versus those who are building software for private clouds. Cloudscaling is definitely on the service provider side. We are definitely on the enterprise or private cloud side. The emphasis is how the system is configured, whether it supports lots of hypervisors or simply one really well, supports different storage backends and different Cinder drivers or whether it bundles one solution for that. That level of opinionatedness is part of how we see it being a product instead of a distribution.
I guess the question that all of these products are trying to answer is: “How much control do your people want versus how well do they want it to work and be reliable?”
If I were to be so bold to say: There are mature products and immature ones. The mature products are the ones that have lifecycle in mind. There is a way to do upgrades and updates without a ridiculous amount of downtime and without ridiculous amounts of manual intervention. This is something we built from day one in order to be able to do upgrades with zero downtime and zero administrative involvement.
In fact, upgrading the cloud is really a much more drastic event than upgrading an operating system. You’re upgrading a whole datacenter, especially when you take advantage of software-defined networking or virtualized storage. Those systems have SLAs that are a lot higher than you would expect for a single server.
That’s why the term “distribution” has been used by operating system vendors to describe their product. The Canonicals, Red Hats and SUSEs understand the “one server at a time” collection of packages. But I think it’s more the IBMs, Dells and HPs who really understand larger distributed systems and the planning, processes and orchestration that go into managing and upgrading the entire system as opposed to a single server.
#3 Takeaway: Piston Cloud Computing’s Enterprise OpenStack is a private cloud solution providing a turn-key user experience
Rafael: Let’s talk a bit more about OpenStack raw versus Piston Enterprise OpenStack, which is the Piston Cloud Computing product. What is unique about it, and how are you positioning your product?
Josh: Piston Enterprise OpenStack is very opinionated software. The big difference with Piston Enterprise OpenStack: it’s turnkey, from the bare metal up through the operating system, the hypervisor, the distributed storage framework, master election, high-availability services and all of OpenStack. It includes the update services that I mentioned before, which allow you to upgrade the entire cloud without turning VMs off. It’s an enterprise private cloud solution as opposed to a service provider solution. The thinking out there today is that a VM in a cloud is a transient thing and you should build your application to expect that the VMs go away. That’s what Amazon does. They have a zero-uptime SLA on VMs. It’s not what we do. I don’t think it’s inherent in cloud, but it is inherent in lazy IT architecture. Piston Enterprise OpenStack delivers highly available VMs and storage, and they remain available even through an upgrade and update process. In order to do that we’ve built a number of components that are not open source, and we license some technology on the hypervisor side to support true live migration, memory oversubscription and a scale-out experience, so that when folks want to expand the capacity of their cloud they can add additional servers and not think manually about rebalancing the load. Most of what Piston Enterprise OpenStack has that folks are really excited about are these enterprise ideas around availability … it just works, it stays up and it’s a lot simpler to administer.
It’s ironic that we usually think that we are selling software while the customer often thinks that they are buying support. The upgrade service and the security update service that we provide, and frankly the fact that we answer the phone 24/7 - that’s a lot of what our customers are buying. That’s the value they are getting out of working with us. We also obviously do the integration work I mentioned, serving folks who need OpenStack to connect to other systems in their datacenter.
We’ve worked very hard to build the technologies that work under the hood to make OpenStack better. We don’t ever change OpenStack in our product. If we are going to improve OpenStack, we do that in the open source community. Historically we have done a lot of contributions to Cinder and to Nova core. We also did a lot of work to support Cloud Foundry as a PaaS solution. This is part of our interoperability story: If it’s going to touch the APIs or if it’s going to affect the user experience in the cloud, it has to be in OpenStack.
#4 Takeaway: Government agencies, public cloud providers, financial services, life sciences … OpenStack thrives on a very diverse user community
Rafael: Who are the early adopters, and do you see OpenStack going mainstream?
Josh: Mainstream … absolutely. OpenStack has been adopted by three very different communities. There is the service provider community, including the Rackspaces, the Dreamhosts etc. … those folks who are running public clouds. They led OpenStack adoption because it gives them a competitive advantage. I don’t think of them as being mainstream; I think we will gradually see more and more folks using those platforms, in fact not even realizing it’s OpenStack. It starts to become more and more ubiquitous.
The second is the academic community, including CERN, NeCTAR, the French government etc. I think those folks are particularly interesting because they help drive requirements, help us understand what else we need OpenStack to do, and represent really large-scale deployments. But they are not where the business ecosystem happens; they are not by and large where commercial dollars are to be made.
That third category is large private clouds. By large I mean an organization that needs a few dozen physical servers as a private cloud in order for the economics to really make sense. The break-even point for a mid-sized to large enterprise to benefit from cloud is a minimum of a couple of racks of gear. The folks we’re seeing as early adopters are the ones who have this economic incentive to go to cloud and who for security or performance reasons can’t take advantage of public clouds.
Our target has always been financial services, governments and life sciences. Weirdly enough, we do see governments as early adopters. That may be because of the work we did for NASA; we blazed the trail for government into cloud. But it’s bizarre: they are interested, and they are very early adopters by government standards, but that still means their procurement cycle is two years long. I am not hearing of many OpenStack deployments in government, simply because their POC phase can take a year or two to close out.
What’s interesting on the financial services side is that OpenStack adoption is not happening at the business unit level, nor is it happening one division at a time. It’s a major strategic initiative for these larger organizations, and it has been driven by the CIOs and CTOs. It’s been driven by them wanting to be an internal service provider to their business units. But that level of strategic decision means that they are cautious, so there are long POCs and they will remain in that phase for a while.
But on the biotech and life science side, I think it’s that they don’t have a choice when you look at the cost of a supercomputer versus the cost of a private cloud. They need these kinds of commodity solutions: they need to be able to buy a Dell C6220, lots of direct-attached storage and a lot of commodity 10 Gb switching from the Dell Force10 portfolio, for example. They also need to pull together a solution that gives them the capabilities to handle enormous amounts of data. They don’t have the budget anymore to do this with traditional supercomputing, so they need to get more value out of their dollars.
#5 Takeaway: Dell is viewed as an engaged community member with a clear willingness to help OpenStack succeed
Rafael: Last question, Josh. How do you view Dell as a member of the OpenStack community?
Josh: I will tell you a story. We were at VMworld as a sponsor, as part of our partnership with VMware originally around Cloud Foundry and more broadly around OpenStack. We won Best of VMworld for private cloud computing technology, which was bizarre and surprising because we are an OpenStack company presenting at VMworld. And this was before VMware joined OpenStack.
Shortly after it was announced that we won this award, Michael Dell came by our booth … with no entourage. He hung around and chatted with me and some of my team members for about ten minutes, and I congratulated him on rationalizing Dell’s cloud computing strategy. Because at that point it was clear Dell was going to build a public cloud using OpenStack, and they would sell their private cloud using OpenStack … and they have done I think seven partnerships so far in OpenStack. First with Rackspace and Equinix, then Mirantis. They also just recently announced a collaboration with Morphlabs.
There are two or three interesting things that Dell has done. They keep showing up. They are always there and they are always supportive. They are an open source contributor, and I think that was tough for Dell initially as they tried to stake out something they could own: “Hey, we’ve got Crowbar and this is our thing …” Unfortunately, the space for OpenStack installers got really crowded and noisy pretty quickly, including Chef recipes from two or three different teams, Puppet recipes etc. Crowbar, I think, does some things in an interesting manner, but my take was: it was really a tool for very talented administrators to use, rather than intended to replace those very talented administrators.
What I have seen now is Dell making a very concerted effort under Nnamdi Orakwue’s team to engage with OpenStack on all fronts, in a way that makes sense. I think that’s a tough process for Dell to go through because there are now five teams engaged with OpenStack in different ways, but certainly as a hardware vendor they have been great to work with. As a community partner they are willing to dive in and contribute. As board members, John Igoe and Rob Hirschfeld are fantastic and I really appreciate having them there. I think at the highest level (from Michael Dell) Dell really cares about OpenStack’s future and Dell is doing the right thing about OpenStack.
Their commercial strategy for OpenStack may be very confusing. It’s still confusing to me, but I have never doubted that Dell wanted OpenStack to succeed. We certainly have other large hardware vendors involved in OpenStack whose commercial agendas are way more clear, but their relationship with the community is a little bit more hazy.
Rafael: Thank you very much for these answers, Josh!
Josh: You’re welcome.
Resources
Piston Cloud: http://www.pistoncloud.com/
Twitter: https://twitter.com/jmckenty
Feedback
Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com