This interview was published first at www.datacenter-flo.de
Yesterday I had a very interesting and awesome chat with my good friend, Rockstar Bro and MVP Didier van Hoye, also known as @WorkingHardinIT.
Now let me share a few important points we discussed.
This post has no relation to my job or my employer. Everything I post is my personal opinion and I write completely independently.
Flo: Hi Didier, in November last year we had our last interview, on Windows Server 2012. At that time Windows Server 2012 was pretty new. What do you think: in which ways have Microsoft and Windows Server 2012 changed the datacenter over the past year?
It’s become better and cheaper to do a lot of things. The value in the box with Windows is awesome and often provides all one needs in an 80/20 world where good enough is good enough. And don’t dismiss that as SMB/SME plays, or you are dismissing 90% of the market. Unless you’re catering exclusively to the Fortune 100 companies you can’t ignore them. Certainly not while the move to the cloud, which is happening, isn’t going so fast that they have disappeared as customers for vendors. Besides the talk about PRISM putting a cap on cloud growth, there is another worry I hear some of the better managers talk about: an exit strategy. They fear that the costs in the long run will become (a lot) higher and that they might be locked in. If you don’t need elasticity you might have other options. One of the other reasons I see is 24/7 support. But that only holds true for those companies or organizations that can’t find the skilled personnel or can’t afford to pay (or just won’t). There again they worry about the cost, as even with a cloud vendor this is not cheap, and failing hardware, while very real, is rare in a well-run organization and often mitigated by good design principles. Partially it’s fear of the unknown and of being lured into something that might bite you in the future. So if, for whatever reason, you need or want to do things on premises, partially or completely, Windows Server 2012 (R2) is what you’ll use. If ISVs are stopping you from doing that, get rid of them, as you’re being held hostage and you should never ever tolerate that.
Flo: We were also talking about how Windows Server 2012 influences new hardware and infrastructure ideas. From your point of view, did customers adopt these ideas, like Cluster in a Box (CiB) or SMB 3 over InfiniBand?
Cluster in a Box, yes. I see a keen interest for this in smaller shops, industrial production lines, branch offices or even as a building block for larger environments. What’s holding people back is the lack of solutions from the OEMs (easy access in existing contracts, established logistics & known support => no fear of the unknown). Once they become available, like DELL’s VRTX, things start moving. And that’s only v1. There is potential there. Mind you, once the smaller new players establish a solid support reputation things can go a bit faster & smoother as well. I see most people thinking of or working at getting 10Gbps, with 40Gbps just for the uplinks/interconnects. The next big move in the DC might be 100Gbps. InfiniBand, not so much. While the cost is actually not as high as one might think, there is a bit of a psychological barrier & it is different. But I have spoken to someone who’s doing it in real life and they are not the usual HPC shop.
Flo: When you look at these hardware ideas, do you think the OEMs like them, or do they still want to push the “classical” datacenter environment?
There is always this balance between keeping the order book filled with current & planned offerings versus exploring new ideas, opportunities & technologies. The smart ones will discuss this openly and be quick & agile in introducing some of the new ideas. That way they’ll be considered a conversation partner on these matters. But they’ll also be able to test out these technologies & designs in real life. That means they’ll be a leader when it succeeds while minimizing risk and cost when it doesn’t.
On the one hand I see a lot of vendors focusing on the Fortune 500 market. But 90% of the customers are not in that segment and they need to be serviced as well. Good enough is good enough is a very strong principle right now. You have to remember that the question is not whether smaller startups with new ideas or initiatives like Storage Spaces can match all the bells and whistles of a multi-node SAN storage array. It’s whether those enterprise storage arrays offer enough value for their price to still be considered. It’s no good having all kinds of fancy replication mechanisms, snapshot capabilities, deduplication & thin provisioning if you can only use one and can’t leverage the other mechanisms, or if it conflicts with UNMAP, CSV etc.
The classical data center environment is not going away that fast, but what will the components be? Most vendors are gunning for converged infrastructure combined with software defined everything. Both the hardware & software vendors are in this game and as such entering each other’s realms. Which causes concerns but also creates opportunities. At the moment it’s hard to see a complete data center abstraction layer that is multivendor. It’s hard enough to get it to work with one vendor, and they are, in this game, each other’s competition. Interesting times :)
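To make the “value in the box” point about Storage Spaces a bit more tangible: a mirrored, thinly provisioned space can be built entirely with the in-box cmdlets. This is only a minimal sketch, assuming a server with a handful of blank, pool-eligible local or SAS disks; the pool, space name and size are placeholders.

    # Pool every physical disk that is eligible for pooling on the local
    # Storage Spaces subsystem (assumed: blank, non-boot local/SAS disks).
    New-StoragePool -FriendlyName 'DataPool' `
        -StorageSubSystemFriendlyName 'Storage Spaces*' `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Carve a thinly provisioned two-way mirror space out of that pool.
    New-VirtualDisk -StoragePoolFriendlyName 'DataPool' -FriendlyName 'Mirror01' `
        -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB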
Flo: As everybody knows, Windows Server 2012 R2 is coming up. It will be publicly available on the 18th of October. What do you think: will it be the same wall-breaker as Windows Server 2012, or is it a nice-to-have release?
That depends on your needs. If you really need features like online resizing of VHDX files, yes, it makes sense. If you’re pushing NIC teaming to some of its limits you might be waiting for the new “Dynamic” mode for load balancing. Compression for Live Migrations is going to be great, and people who can’t go to 10Gbps yet might very well extend the life of their 1Gbps networks thanks to this. Shared VHDX is a great tool to uphold some sacred boundaries in the data center. vRSS might save you when doing heavy file copies inside of VMs. R2 brings a lot of enhancements that extend & enrich the capabilities of Windows Server 2012. It’s up to the user to decide if it’s compelling enough. If you’re still on W2K8R2, well, the reasons to make the move, as your infrastructure is aging anyway, just got a whole lot bigger & better. So I never consider a release as merely nice to have. Some features perhaps, yes, but as a release, R2 is better in functionality & capabilities. If you’re already rocking Windows Server 2012 I think you should weigh the pros & cons. For what it’s worth, we’ll be upgrading.
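For those weighing that upgrade, the R2 features Didier mentions can all be exercised with in-box PowerShell. A rough sketch, assuming an elevated session on a Hyper-V host running Windows Server 2012 R2; the VHDX path, team and NIC names are placeholders.

    # Grow a data VHDX while the VM stays online (online resize is new in R2
    # and requires the disk to sit on a virtual SCSI controller).
    Resize-VHD -Path 'D:\VMs\FileServer\Data.vhdx' -SizeBytes 200GB

    # Build a NIC team that uses the new 'Dynamic' load balancing mode.
    New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1','NIC2' `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Compress live migration traffic, which helps stretch 1Gbps networks.
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression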
Flo: Let us take a look at the market shares in virtualization. Microsoft grew from 0% to 38% in the last 5 years since Windows Server 2008. How will it go on: is that the end, or is there more room to grow?
Oh yes. With every release it makes more sense for Microsoft shops to ask what they’re using 3rd party product X for again and whether those reasons still hold true today. Most of the time we have environments where the rule is to do all you can with in-box tools (which are getting better and better) and only use 3rd party products when it really matters & makes a difference. I think they can grab 45 to 50% of the market, mostly at the expense of VMware, clearly, as they hold such a huge part of that market.
Flo: What is your opinion on how companies like VMware, Red Hat and Citrix will go on positioning their products in the datacenter? Do you think there is a big badaboom somewhere in the VMware labs that will give them more opportunities against Microsoft?
Big bangs are relatively rare occasions. Evolution is ever ongoing and can go quite fast given the right circumstances or pressure :)
VMware is trying to focus a lot more on their core business: virtualization. You can’t do that without storage & networking, where they’ve shown initiatives. They got rid of Zimbra as evidence of this focus. Cloud-wise it might be a more problematic situation. The open source world isn’t idle either, and in the public cloud it’s about cost/value, and there they might not be in the strongest position. In the datacenter, for private cloud, they can put up a good fight, but the easy days are over.
Citrix will rule high-end VDI for a long time to come, it seems. MSFT is missing realistic 3D capabilities for software running OpenGL, and that’s a lot of products. Citrix holds the high ground there. For other VDI scenarios, good enough is good enough, combined with management & deployment tools that couldn’t care less whether it’s a physical or virtual machine, makes Microsoft a more attractive player. The big reasons we see for VDI are often business continuity & flex offices, or sometimes data security. If you’re hunting high-performance customers it becomes hard to beat a physical workstation with 16GB of RAM, an 8-core i7 and a couple of SSDs at an attractive price point. Add replication and/or shared storage to that and the extra costs hit you hard. But again, mileage varies between customers & environments, and I’m speaking from the GIS/engineering side of things as that is my current area of operations.
Thank you very much for your time, Didier. As always, it was awesome to talk to you.