“Utter mayhem”, as Paul Venezia says. While he has in mind “When virtualization becomes your worst enemy”, it’s both more and less than this.
His anecdotes are perfectly apt, of course: one network interface controller (NIC) goes out, and an hour and a half later you find yourself with your entire network dysfunctional, the telephone system entirely unavailable, and even the heating-ventilation-air-conditioning (HVAC) controllers starting to toss themselves on the funeral pyre as their DHCP leases expire. All our virtualization and redundancy and more virtualization were supposed to prevent catastrophes (and they do prevent many!), but they also leave our configurations only a character or two away from utter fragility. Make a single mistake, and your carefully constructed failover configuration becomes a house of cards.
These stories are in no way unique to virtualization. I get as much amusement as the next technician from recognizing that we often have to replace whole subsystems for maladies once fixed with just a little carburetor cleaner or judicious terminator re-crimping. However much nostalgia I have for previous generations of technology, and however many faults I see in currently-defined standards, I know that today’s systems largely serve us better as systems than the ones from even a few years ago.
True, as Venezia concludes, “[w]hen the foundation fails, we have more work to do than ever to repair it.” This is right “up our alley”, though. It’s OK to virtualize everything, and for software to take over the core of every organization, because we’re good at software. With as much of operations as possible virtualized into software configurations, we just have to figure out how to manage and verify those configurations, and I’m sure we can solve that problem.
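What might managing and verifying those configurations look like? Here is one minimal sketch, in Python, of the kind of automated sanity check that could catch a “one character away from fragility” mistake before it reaches production. Everything in it is hypothetical: the field names (primary_nic, standby_nic, dhcp_lease_seconds, the VLAN lists) are invented for illustration, and a real check would have to target whatever schema your own tooling actually uses.

```python
# Minimal sketch: sanity checks on a hypothetical host networking/failover
# description before it is pushed to production. The field names here
# (primary_nic, standby_nic, dhcp_lease_seconds, ...) are invented for
# illustration; a real check would target your own configuration schema.

def verify_failover_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the config passes."""
    problems = []

    # A failover pair that points at the same interface is no failover at all.
    if cfg.get("primary_nic") == cfg.get("standby_nic"):
        problems.append("primary and standby NIC are identical")

    # Very short DHCP leases are what let a NIC outage cascade into
    # phones and HVAC controllers dropping off the network.
    if cfg.get("dhcp_lease_seconds", 0) < 3600:
        problems.append("DHCP lease shorter than one hour")

    # Every VLAN the phones and HVAC gear rely on should be trunked
    # to both members of the failover pair.
    required = set(cfg.get("required_vlans", []))
    trunked = set(cfg.get("trunked_vlans", []))
    missing = required - trunked
    if missing:
        problems.append(f"VLANs missing from trunk: {sorted(missing)}")

    return problems


if __name__ == "__main__":
    candidate = {
        "primary_nic": "eth0",
        "standby_nic": "eth0",          # the one-character mistake
        "dhcp_lease_seconds": 600,
        "required_vlans": [10, 20, 30],
        "trunked_vlans": [10, 20],
    }
    for problem in verify_failover_config(candidate):
        print("FAIL:", problem)
```

Run against the deliberately broken example configuration at the bottom, the check flags the duplicated NIC, the short lease, and the missing VLAN; the same idea scales up to whatever invariants your own failover design actually depends on.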
To create trustworthy virtualization configurations is not just possible; it has myriad benefits. Mark Vaughn explains several in “Why virtualization admins and desktop hypervisors should be BFFs”. Among the ones prominent in Vaughn’s own career:
- it costs a lot less administrative time to provision a couple dozen virtualized copies of a training environment than to configure “real” instances of the same training environment (a sketch of this kind of bulk cloning appears below);
- virtualization makes it possible to preserve a broken server for postmortem forensics while simultaneously restoring all the server functions; and
- he can carry with him on his laptop emulations of blades, storage arrays, specific desktop operating systems, network equipment, and more.
In his words, “… that is the kind of productivity and agility that can be a game changer. … You may be surprised at just how many ways it provides value.”
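As a concrete, if simplified, illustration of the first benefit above, here is a sketch of provisioning a couple dozen training copies from one prepared (“golden”) virtual machine. It uses the libvirt virt-clone command purely as an example; the VM name training-golden and the count of 24 are invented for illustration, and any hypervisor’s own cloning CLI or API would serve the same purpose.

```python
# Minimal sketch of the "couple dozen training copies" point above:
# clone one golden training VM N times by shelling out to a cloning tool.
# virt-clone (from the libvirt/virt-manager tools) is used here only as an
# example; substitute whatever cloning command or API your hypervisor provides.

import subprocess

GOLDEN_IMAGE = "training-golden"   # hypothetical name of the prepared VM
STUDENT_COUNT = 24

def provision_training_vms() -> None:
    for i in range(1, STUDENT_COUNT + 1):
        clone_name = f"training-student-{i:02d}"
        # --auto-clone lets the tool pick storage paths and MAC addresses,
        # so every student copy comes up without manual per-VM editing.
        subprocess.run(
            ["virt-clone",
             "--original", GOLDEN_IMAGE,
             "--name", clone_name,
             "--auto-clone"],
            check=True,
        )
        print(f"provisioned {clone_name}")

if __name__ == "__main__":
    provision_training_vms()
```

The point is less the particular tool than the shape of the job: one loop and a naming convention replace a day of hand-configuring “real” classroom machines.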
Risks, but bigger rewards
Virtualization is a kind of recognition that our operating systems fail to give the efficient system services we need. Virtualization certainly has performance costs and design hazards. When done right, though, virtualization provides great value in making our resources as manageable, inexpensive, and reliable as well-configured software can be. Don’t see the risks and turn away. See the opportunities, and make virtualization work right.
In the coming weeks, the Real User Monitoring blog will return to virtualization, and especially to such topics as the role of mainframes in the datacenter, how to create high-quality virtualized artifacts, and virtualized networking equipment.