Big Data plays a crucial role in successful application performance management (APM). What’s frustrating, though, is how much else we have to get right at the same time to achieve that success.
That’s how Mike Gualtieri’s interview with Forrester Principal Analyst Glenn O’Donnell, titled “Big Data Has Big Place in IT Ops”, leaves me feeling. O’Donnell accurately portrays how far we have not come: APM now faces many of the same challenges that were already apparent in network monitoring and related dashboards around 1990, namely “breaking through the silos” and separating the signal from the noise. O’Donnell credits products in the marketplace with plenty of capability to make “pretty charts and graphs”, but with little “actionable” intelligence about how to fix problems.
If I understand O’Donnell correctly, he’s crediting APM with an impressive capacity to generate, in near real time, graphs that unmistakably alert on conditions such as, “response time of applications dependent on the Dallas network is deteriorating and will soon cross our established threshold”. There’s value in that. What he laments is that there’s no button to press that delivers advice on the order of, “immediately re-route all Forest domain traffic and observe whether the congestion clears!”
Worse, he offers only limited optimism about prospects for improvement in the short term. O’Donnell seems to have a strong belief that big data-style analytics will replicate, for the benefit of APM, the accomplishments of statistical processing in, say, the geosciences or neuroscience. At the same time, he sees predictive prevention as too ambitious for now; appears to judge that statistical inference is practiced well by only a handful of small, peripheral vendors; makes a strong case that anything which smacks of centralized polling is simply incapable of keeping up with realistic requirements; and advises that “you’re not going to see into the cloud” with APM any time soon. That doesn’t leave much hope!
The most frustrating aspect of the interview is that it supplies so little advice–actionable advice!–that responsible devops can implement right now, apart from study of his upcoming report. Few datacenter managers or APM decision-makers are in a position to judge the technical merits of statistical packages in a deep way, for instance. It’s almost equally hard to distinguish vendors who reap real value from big data, from those who know big data only as a branding tactic. How do we decide between these? What choices do we make today to make the most of our APM?
While I disagree with O’Donnell on details–I expect APM of cloud-hosted components to work out well over the next year or two–I think his analysis of APM’s position is largely correct. My disappointment is that he hasn’t done more to answer the question: what, then, do we do?
Too many industry analysts and commentators give me the impression that they’re more comfortable as spokesmen for vendors than advocates for engineering advance. O’Donnell’s better than that: he’s experienced and engaged with APM’s situation. Listen for yourself. I think you’ll find that, at least in this conversation, he supplies valuable insight. For concrete steps to achieve the high availability and performance we’re after, we have to turn elsewhere. More on that, next week.