By Don MacVittie
January 28, 2016 02:00 PM EST
Because of the explosive growth of DevOps, there is still a seemingly large amount of confusion on the topic. While I’ve written about this before, this article takes a shot at dividing things the way we naturally do as users, a division vendors often fail to make, simply because “DevOps” is a hot term and it’s all too often about SEO and AdWords when vendors talk or write.
Dev? Or Ops?
DevOps has two different sources: Development that is getting Ops added in, like automated test and integration, and Ops that is getting Development added in, like provisioning and configuration management. While both sides of IT make use of both, their genesis is different, and in all but a few organizations, so is the responsibility for them.
Since discussing all of these areas would become more of a treatise than a blog post, today we’ll focus on devOPS and what each of its tool categories is actually for.
devOPS is where development methodologies (and often actual development) are added to or enhance Operations’ traditional practices to create an automated datacenter for deployment and maintenance.
Server Provisioning
Automated spin-up of servers, be they physical or virtual, with a customizable image set and configuration options. Repeatability is key: the system must be able to restore a damaged install to a known good state. Also important is total automation; configuration of things like disk and network controllers is part of spinning up a machine. (Full disclosure: I work for a server provisioning company, and this blog originally appeared on the www.stacki.com website, home of an open source provisioning tool.)
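The repeatability requirement can be sketched in a few lines. This is a hedged, minimal model, not any real provisioning tool’s API; `Host`, `provision`, and `KNOWN_GOOD` are hypothetical names with invented values:

```python
from dataclasses import dataclass, field

# The known good state a provisioned server should end up in (invented values).
KNOWN_GOOD = {"image": "base-os-7.2", "disk": "raid1", "network": "bond0"}

@dataclass
class Host:
    name: str
    state: dict = field(default_factory=dict)  # whatever is on the box right now

def provision(host: Host) -> Host:
    """Converge a host to the known good state, regardless of where it started."""
    host.state = dict(KNOWN_GOOD)              # a full rebuild, not a patch-up
    return host

# A damaged install and a fresh box both end up identical:
damaged = provision(Host("web01", {"image": "base-os-7.2", "disk": "degraded"}))
fresh = provision(Host("web02"))
assert damaged.state == fresh.state == KNOWN_GOOD
```

The final assertion is the guarantee the text describes: a damaged install and a brand-new box converge to the same known good state.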
Configuration Management (Application Provisioning)
Once the machine is spun up, the applications that make it perform a specific task must be installed. Today this is largely a separate step from provisioning, but the market is clearly moving to a stage where the two are bundled into a single toolset. Provisioning a server without apps is half the job, and provisioning apps without a server is not possible, so it makes sense that the two will slowly merge, possibly under the aegis of orchestration (below). The purpose of configuration management tools is to make certain that the applications, application prerequisites, and the configuration of both are correctly set up on target machines. Most of us know these tools better than any of the others.
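The desired-state idea behind these tools can be illustrated with a small sketch. Everything here, the `desired` dict and `apply_config`, is hypothetical, standing in for a real tool’s package and configuration state:

```python
# Desired state: which packages and settings every target machine should have.
desired = {"packages": {"nginx", "python3"}, "config": {"worker_processes": "4"}}

def apply_config(current: dict) -> list:
    """Return the actions needed to converge `current` to the desired state."""
    actions = []
    for pkg in desired["packages"] - current.get("packages", set()):
        actions.append(("install", pkg))
    for key, value in desired["config"].items():
        if current.get("config", {}).get(key) != value:
            actions.append(("set", key, value))
    return actions

# A drifted host needs work; an already-converged host needs nothing.
drifted = {"packages": {"nginx"}, "config": {"worker_processes": "2"}}
converged = {"packages": {"nginx", "python3"}, "config": {"worker_processes": "4"}}
assert ("install", "python3") in apply_config(drifted)
assert apply_config(converged) == []
```

The second assertion is the property that makes these tools safe to run repeatedly: applying the same desired state to a converged machine is a no-op.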
Orchestration
Today the focus of most orchestration tools is on coordinating the deployment of complex software across the datacenter. The ability to fully deploy and configure an application no matter how many servers it is spread across is powerful, as long as the automation gets it right, and the leaders in the space certainly take steps to make certain their tool gets it right or reports to users what is broken in the infrastructure.
Moving forward, I see it as likely that orchestration will increasingly take on server provisioning, so that the orchestration tool handles everything from spin-up to spin-down. Today there is good support for virtual and cloud spin-up, but less for hardware spin-up. Watch for that to change, so that where you want the orchestration tool to deploy matters less than what you want it to deploy.
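Under the hood, coordinating a deployment across many servers is largely a dependency-ordering problem. A hedged sketch with invented tier names, using Python’s standard `graphlib`:

```python
from graphlib import TopologicalSorter

# Each application tier maps to the tiers it depends on (invented names).
deps = {"web": {"app"}, "app": {"db", "cache"}, "db": set(), "cache": set()}

# Deploy dependencies first, no matter how many servers each tier spans.
order = list(TopologicalSorter(deps).static_order())

deployed = []
for tier in order:
    deployed.append(tier)  # stand-in for configuring every server in this tier

assert order.index("db") < order.index("app") < order.index("web")
```

Real orchestration tools add error reporting, retries, and per-server fan-out on top, but the ordering guarantee is the core of “getting it right.”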
Process Management
Process management is often included in monitoring, but because it has a very focused role and does more than monitor, I’ve chosen to split it out. The point of process management is to make certain your services and apps are running. Depending upon the tool, it can also make certain the processes are responding, but that is more often done as part of overall automated monitoring. No matter how good your deployment methodologies are, a crashed process cannot respond to users, so this is a pretty important bit in the automation world.
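A minimal sketch of what a process manager does. The `Process` class here is a stand-in; a real tool would check pids, sockets, or service status rather than an in-memory flag:

```python
# A stand-in for an OS-level process; a real tool would check pids or sockets.
class Process:
    def __init__(self, name: str):
        self.name, self.running, self.restarts = name, True, 0

def supervise(procs):
    """Make certain every managed process is running; restart any that crashed."""
    for p in procs:
        if not p.running:
            p.running = True       # stand-in for actually relaunching it
            p.restarts += 1

web, worker = Process("nginx"), Process("queue-worker")
worker.running = False             # simulate a crash
supervise([web, worker])
assert worker.running and worker.restarts == 1 and web.restarts == 0
```

Run on a schedule, a loop like this is the difference between a crash users notice and one they never see.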
Monitoring
Monitoring encompasses logging and watching systems for errors. While pieces of it are included in many of the other toolsets, a proper monitoring system works with the overall information available across the datacenter and application to provide a more comprehensive picture of what is going on. For example, a monitoring tool can watch hardware failures, OS counters and errors, and application issues together to help track down what actually happened from a “single pane of glass,” as they say.
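The “single pane of glass” idea can be shown by merging separate event streams into one timeline. The event data here is invented:

```python
# Invented events from three sources, each as (timestamp, source, message).
hardware = [(100, "hw", "disk sda: SMART failure predicted")]
os_log = [(101, "os", "I/O errors on /dev/sda")]
app_log = [(103, "app", "database writes timing out")]

# One time-ordered stream: the "single pane of glass" view.
timeline = sorted(hardware + os_log + app_log)
for ts, source, msg in timeline:
    print(f"t={ts} [{source}] {msg}")

# Ordering makes the causal chain visible: hw fault -> OS errors -> app issue.
assert [source for _, source, _ in timeline] == ["hw", "os", "app"]
```

Viewed separately, each stream looks like an isolated problem; merged, the hardware fault is the obvious root cause.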
Metrics
Knowing what broke and where, or knowing that processes are actually running, does not cover the entire picture of application availability. Knowing how responsive the application is and where the bottlenecks are is every bit as important, particularly in high-volume or spike situations. That’s where metrics come in. They track historic performance and response times, and offer insight into what might be slowing down an application, both in daily use and in heavy peak periods. Insight into overall app performance and into specific subsystems and infrastructure helps resolve problems quickly, and early detection even helps prevent them. From low-level disk through API response times, measurement is the key to fine-tuning performance.
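A quick sketch of why the choice of metric matters, using invented response-time samples and Python’s standard `statistics` module:

```python
import statistics

# Invented response-time samples in milliseconds, with one slow spike.
response_ms = [42, 45, 44, 43, 48, 250, 46, 44, 47, 45]

mean = statistics.mean(response_ms)
p95 = statistics.quantiles(response_ms, n=20)[18]  # ~95th percentile

# The mean hides the spike that users in a heavy peak period actually feel;
# the 95th percentile exposes it.
assert mean < 80 and p95 > 100
```

Tracked historically, a percentile like this flags a degrading subsystem long before average response time looks bad.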
Network Automation
DevOps has a weird relationship with network automation. Most DevOps pundits simply ignore it, feeling that’s the realm of SDN. Others try to include the network but stick to only basic functionality, because that enables “real DevOps.” The reality is that for a fully automated datacenter, networking simply must be included. Without proper VLANs, DNS, and DHCP, all those pretty apps are pretty useless. In fact, according to this blog on F5.com (relationship disclosure: the author is my wife, but also a genius-level geek), there is a whole host of network services that need to be automated to achieve an automated datacenter. Network automation is still in its relative infancy, with vendors just starting to take APIs seriously and tools to manage cross-vendor network architectures few and far between, but it’s coming. Using network automation, a Big Data installation could be spun up with its own subnet or VLAN, complete automation of the network part adding to complete automation of the server and application parts. The better vendor APIs get, the faster this becomes a reality, but you can do it today if you’re willing to put in a little work.
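The subnet-carving step described above can be sketched with Python’s standard `ipaddress` module; the `allocate` function and the allocation records are hypothetical, standing in for whatever a vendor API would actually do:

```python
import ipaddress

# Carve per-cluster /24 subnets out of a datacenter /16 (addresses invented).
pool = ipaddress.ip_network("10.20.0.0/16")
subnets = pool.subnets(new_prefix=24)

allocations = {}  # hypothetical stand-in for real network state

def allocate(cluster: str, vlan: int) -> dict:
    """Give a new cluster its own VLAN id, subnet, and gateway address."""
    net = next(subnets)
    allocations[cluster] = {"vlan": vlan, "subnet": str(net),
                            "gateway": str(next(net.hosts()))}
    return allocations[cluster]

bigdata = allocate("bigdata", vlan=120)
assert bigdata["subnet"] == "10.20.0.0/24"
assert bigdata["gateway"] == "10.20.0.1"
```

The address math is the easy part; the work the text describes is getting vendor devices to actually apply the VLAN and routing, which is where cross-vendor APIs still lag.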
There are a ton of ways you can subdivide DevOps, and certainly lots of people have tried. I offer this alternative simply as a way of considering it that follows the natural path from manual to automated and then into DevOps. It keeps the conversation from being cluttered (and as I’ve mentioned before, it often is), so we can talk about what’s important, not generic “DevOps.”