By Don MacVittie
May 23, 2016 11:13 AM EDT
The world of datacenter automation is a complex one. We all knew that going into it, but we are faced with the reality of varying hardware, operating systems, networks, security tools, programming languages, app servers… the list goes on. While we have conquered the general provisioning tasks, we now have to figure out what to do with the complex, fiddly bits.
Recently on Twitter, we were discussing how hardware incompatibilities are a thorn in the side of any automation project, including DevOps. Let’s say you’ve set up a state-of-the-art provisioning system that will spin up VMs, containers, or a physical server and provision application A on it. So far, so good. We want easy, stable, repeatable processes to roll out software and services.
But then you have a piece of hardware (be it a server that will support VMware, a compute node added to OpenStack, or a stand-alone app server) that is out of date. Out of date can mean a lot of things; we'll focus on two killers: a needed BIOS update, and a needed RAID card firmware update. There are others: firmware on networking and SSD cards (particularly specialized ones), hardware replacement such as swapping out RAID cards, and so on. We'll stick with these two to explore the problems, but it's good to keep in mind that they sit within this broader topic.
The problem with this corner of the automation domain is that everybody does it a little differently, and most procedures are still designed to have a person sitting there performing the update (there are entire pages dedicated to the different methods of SSD firmware upgrade alone). That runs directly counter to full datacenter automation.
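Even the "read before you write" half of the job — finding out what firmware a box is actually running — tends to be vendor- and tool-specific. As one illustrative sketch, on Linux the BIOS version can be read from the output of `dmidecode -t bios`; running that query needs root and real hardware, so only the parsing step is shown here as something you can exercise anywhere:

```python
import re

# Sketch: extract BIOS inventory fields from the text that
# `dmidecode -t bios` prints on Linux. Field names below match
# dmidecode's output format; everything else is illustrative.
def parse_bios_info(dmidecode_output):
    """Return a dict of Vendor / Version / Release Date fields."""
    fields = {}
    for line in dmidecode_output.splitlines():
        match = re.match(r"\s*(Vendor|Version|Release Date):\s*(.+)", line)
        if match:
            fields[match.group(1)] = match.group(2).strip()
    return fields
```

In a provisioning hook you would feed this the captured `dmidecode` output and compare the `Version` field against whatever your hardware vendor says is current — and that comparison logic, of course, is also per-vendor.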
The level of support provided by server provisioning vendors ranges from none (you are expected to keep doing this configuration by hand) to nascent. While support for some specific products or product families has appeared in this server provisioning tool or that, no vendor has a comprehensive solution to hardware upgrade and configuration. That makes sense, both because there is no standard hardware interface and because the required interfaces keep changing. SSD firmware updating wasn't a concern until the last few years, for example.
So what do you do? Two things. First, add support for your preferred hardware to your checklist for server provisioning. You may not find a comprehensive solution, but getting 70 or 80% of affected servers onto automation saves a massive amount of time. Second, demand that your chosen vendor do more in this space. While it is unfair to take server provisioning vendors to task for the hardware vendors' failure to create a usable standard, the server provisioning vendors know the market they are in, and are well aware that they need to be working on it. The alternative to these two options is going 100% public cloud or hosted servers; then server provisioning (at least the hardware part of it) is no longer necessary, because your cloud provider deals with the hardware. Internal cloud and virtualization still leave the physical servers in your domain, and thus still an issue to wrestle with.
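The 70-80% idea boils down to a simple policy: automate the models you know, and flag everything else for a human. A minimal sketch, in which the model names and version strings are placeholders for whatever your fleet actually runs:

```python
# Hypothetical catalog of hardware models your automation covers.
# Model names and required versions are placeholders, not recommendations.
REQUIRED_BIOS = {
    "PowerEdge R630": "2.3.4",
    "ProLiant DL380 Gen9": "P89 v2.40",
}

def classify(model, installed_version):
    """Return 'ok', 'update', or 'manual' for one server in the fleet."""
    required = REQUIRED_BIOS.get(model)
    if required is None:
        return "manual"  # unknown hardware: leave it for a person
    return "ok" if installed_version == required else "update"
```

Every model you add to the catalog moves machines out of the "manual" bucket, which is exactly where the time savings come from.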
At this point in time, the "hardware layer" of server provisioning is the least well served. That will no doubt improve over time, but it underpins the other server provisioning tools, which assume the hardware is ready to rock. Some tools can configure this or that part: the Stacki project, of which I am a member, can handle about 70% of RAID cards (based on an Avago market share that an analyst friend estimates at 80%, and on differences in individual card configuration options). I offer this tidbit because the tool has some of the broadest (if not the broadest) coverage in the market, and yet it still isn't universal.
Longer term, a standardized API to make hardware more easily programmable should be the goal. The underlying implementation can still be proprietary, but a standard interface that can be used to update or configure all products in a given space is what we should be shooting for.
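For what it's worth, DMTF's Redfish specification is one emerging attempt at exactly this kind of standard interface. A firmware update against a Redfish-speaking BMC is a single standardized REST action regardless of vendor; the sketch below only builds the request (the host and image URI are placeholders, and a real deployment adds authentication and task tracking):

```python
# Sketch: build the endpoint and body for Redfish's standard
# SimpleUpdate action. The URL shape follows the Redfish spec;
# the host and image URI passed in are placeholders.
def simple_update_request(bmc_host, image_uri):
    """Return (url, json_body) for a Redfish firmware SimpleUpdate call."""
    url = (f"https://{bmc_host}/redfish/v1/UpdateService"
           "/Actions/UpdateService.SimpleUpdate")
    body = {"ImageURI": image_uri, "TransferProtocol": "HTTP"}
    return url, body
```

The point is not this particular spec but the shape of it: one documented call, any compliant vendor, proprietary implementation behind the interface.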
Sound fantastical? Think I'm dreaming? Well, network hardware vendors are slowly moving to programmability and away from secret knowledge. Five years ago that would have been seen as fantastical too. Back then, some vendors such as F5 Networks had APIs, but those APIs mirrored their command lines, preserving the product-specific knowledge requirements. F5's current REST API is better organized and less atomic. So I contend that if networking companies can move in that direction, so can RAID and BIOS vendors.
Until then, tools that do a "good enough" job plus the occasional manual intervention will be the best we can manage, but that's still far better than doing it all by hand.