
Don MacVittie





If Hardware is Commodity... Why Are We Still Spending so Much Time on It?

We really are moving in the direction of truly commoditized hardware. Some uses will always have requirements outside the mainstream and will call for specialized builds; this is true in every industry. But increasingly, who made your hardware and where they sourced their parts is a secondary issue.

Which makes one consider what really sells hardware these days. Years ago, when I was working for Network Computing, I reviewed a low-end blade server company capable of cranking out blades at a fraction of the cost of most vendors. They (like far too many good companies) ran out of money before they could gain market traction, but they did show that it could be done at a price even small enterprises could afford.

The traditional cycle in hardware ran “CPU bottleneck” to “disk bottleneck” to “network bottleneck”. As each was resolved (not necessarily in that order), the next became the stumbling block, and then we started over. That is still somewhat true, but an amazing thing happened along the way: we started having more hardware power than software was using. In the nineties I heard a hardware engineer quip, “It doesn’t matter how fast we make it, the software boys will just piss it away.” That definitely sounded true at the time. But here we are now, and hardware has improved at a faster pace than software has consumed it, except in the areas of Big Data, cloud, and virtualization… and even those are often under-utilized.

And conveniently, prices have continued their downward trend. For a project I’m working on, I recently bought an enterprise-class (SuperMicro) server. The server has a MegaRAID card, four 1TB disks, 16GB of memory, and four CPUs. Total cost: less than $2,000 USD. Sure, there is more to a datacenter than just dropping a bunch of servers in, but considering that you can throw together an OpenStack cloud that is usable for more than testing with four or five of these servers, it’s pretty impressive. And since the premium for the big name vendors is not that high – 30% or so – it does make one pause.
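
To put a rough number on that pause, here is a minimal back-of-the-envelope sketch. The $2,000 server price and ~30% premium are the figures above; the five-node cluster size is just the small OpenStack example, not a recommendation:

```python
# Back-of-the-envelope cluster cost: white-box vs. big-name vendor.
# The $2,000 price and ~30% premium are from the text; the five-node
# cluster size mirrors the small OpenStack cloud mentioned above.

WHITE_BOX_PRICE = 2_000     # USD per server (the SuperMicro build)
NAME_BRAND_PREMIUM = 0.30   # rough big-name markup cited above
CLUSTER_SIZE = 5            # nodes for a usable OpenStack cloud

white_box_total = WHITE_BOX_PRICE * CLUSTER_SIZE
name_brand_total = round(white_box_total * (1 + NAME_BRAND_PREMIUM))

print(f"White-box cluster:  ${white_box_total:,}")                     # $10,000
print(f"Name-brand cluster: ${name_brand_total:,}")                    # $13,000
print(f"Premium paid:       ${name_brand_total - white_box_total:,}")  # $3,000
```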

It makes me wonder: why do we still spend so much time and money on hardware? We really do. The poster child is probably Cisco UCS, which is really expensive but also offers a ton of functionality, so for those who can afford UCS setups, it’s likely worth it. Except that it requires specialized knowledge, which is exactly the opposite of the direction everything else is going.

The same is true of the current round of other blade servers. Instead of attempting to make life simpler, they have “added value” by adding complexity. Yes, a ton of the added features are sweet, but I’m a developer at heart, and it all seems like overkill to me unless you’re terribly short of rack space and need the little bit of space that blade servers can give you back (assuming the chassis is full anyway).

So what am I getting at? It might just be time to consider white-boxing a project. Think of it this way: it all works the same at the basic level where white boxes operate. Operators know how to configure a white box, because they have to configure the same things – RAID, networking, OS – on any system they install. There is, by definition, nothing unique to white boxes.

And between RAID and the redundancy that falling prices make affordable, it is every bit as safe as buying the big name brand. Of course, “same day service” with white boxes means “if you take care of it yourself”, but if that is a problem for you, buy through your VAR and have them provide same day service. It’s worth the time, though, to analyze the ROI on those maintenance fees you’ve been paying out. And you can always buy an extra white box to use as a stand-by without coming anywhere near the price of the big names.
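
As a hedged illustration of that ROI math: only the $2,000 server price comes from the build above, while the maintenance rate, refresh window, and cluster size are purely illustrative assumptions – plug in your own numbers:

```python
# Hypothetical maintenance-fee ROI check: does a yearly support contract
# beat simply keeping a cold-spare white box on the shelf?
# Only the $2,000 server price comes from the article; the rest is assumed.

SERVER_PRICE = 2_000            # USD, white-box server per the build above
ANNUAL_MAINTENANCE_RATE = 0.15  # assumed: yearly support fee as % of price
YEARS = 3                       # assumed hardware refresh window
CLUSTER_SIZE = 5                # assumed small-cloud node count

maintenance_total = SERVER_PRICE * CLUSTER_SIZE * ANNUAL_MAINTENANCE_RATE * YEARS
cold_spare_total = SERVER_PRICE  # one extra box on stand-by, no contract

print(f"Support contract over {YEARS} years: ${maintenance_total:,.0f}")  # $4,500
print(f"One cold-spare white box:            ${cold_spare_total:,}")      # $2,000
```

The point isn’t the specific figures; it’s that the comparison is worth running before the next renewal.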

Let’s face it, we all just want functional systems. I don’t want to jerk around with hardware or VMs just to write some code (Docker and Vagrant appeal to me in that regard), and business people don’t want to pay any more than they have to. Somewhere between those two needs is a white-box project waiting to happen. Drop a provisioning tool like Stacki onto those boxes and they’ll be live faster than your average big-name deployment.

Not all projects are white-box. White boxes have come a long way, and some would advocate moving your entire org to them… but you know me: I’m a fan of letting you decide what’s best for your org rather than trying to tell you, because a pundit like me only knows what looks best for me, without knowledge of your environment.

And that mission-critical, 24×365 system? You might just want it on name brand hardware. Not because you’re certain it’s better, but because you want to sleep at night, which has a value all its own.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.