Higher-level datacenter planning
You have your datacenter humming along nicely. Incidents arise, but you routinely clear them. How do you make sure that you’ll continue to be able to keep up?
Part of the answer is to measure performance accurately and comprehensively. One of the "Real User Monitoring" blog's most repeated messages is that application performance monitoring (APM) needs to be performed at a level that has business, not just technical, meaning. An earlier generation of performance indicators focused on technical parameters (think of network collisions or percentage of disk capacity used) that only feebly illuminate how our applications perform for real end-users. We can do better than that.
Several recent articles sketch the direction your own planning should take. Jimmy Augustine of HP reports in "The Future of Application Monitoring" that, for him, "it's not about the application [but] … the business process". Information Technology (IT) departments will increasingly need to think in terms of "what's core and differentiating". As we've already had to learn and re-learn, what is important needs our attention early in our processes. Testability and documentation matter, for instance, and smart development managers know to include them in plans and schedules when projects start. Similarly, we're headed toward a day when effective APM is a necessary aspect of any deployment. That will happen, Augustine hints, when monitoring code is "injected during development" to make it a predictable, manageable element of every delivery.
"Real User Monitoring" was among those to criticize the New York Times for its recent series on datacenters. On further reflection, I've begun to wonder whether a common error in IT management manifested as a systematic breakdown in the Times' analysis: too often, decision-makers think of capacity in terms of server count or some other proxy for computing power. That represents at best a third of real-world performance, because it neglects storage and networking. A recent report even claims that CPU capacity has been close to flat for the past few years, at the same time as storage growth is "rapid and sustained". A different survey, on the future of networking, turned up "one big surprise", a "shocker" among Cisco customers: of those looking to reduce their use of Cisco equipment in the future, 38% claimed to be considering VMware as an alternative. That's a strong signal that software-defined networking (SDN) and related virtualization and "open" technologies are poised to make a big impact.
One opportunity to practice the unifying perspectives now demanded in the datacenter is this week's webinar, "APM and Capacity Planning Imperatives for a Virtualized World". Correlsense, sponsor of "Real User Monitoring", is also a co-organizer of the webinar. There's still a lot of room to improve our use of virtualization, and meetings such as "… Planning Imperatives …" are an inexpensive way to survey the possibilities.
Look at whole-application performance; consider the teamwork among network, storage, and processing that your datacenter requires; focus on business requirements and end-user experience; and plan ahead. Do these things, and your operations will be as healthy as possible. "Real User Monitoring" will bring you specific ideas on APM, SDN, storage techniques, and related matters over the coming weeks to help you achieve exactly that.