“The right tool for the job” is a great slogan. It’s only a fraction of an analysis, though. Let’s see how it applies to monitoring products.
Nearly all engineers know this scheme, in one of several homely renderings: only an idiot would use a saw to hammer in a screw; you don’t deliver a load of concrete in a Ferrari; and so on.
The problem with the slogan also shows up in several ways, best summarized like this: ask five different engineers for the right tool for a particular situation, and you’re likely to get six answers. Too often “right tool” is a rhetorical club swung to close an argument, rather than the starting point for a thoughtful decision.
This particularly applies to organizations selecting tools to monitor software and end-user experience (EUE). Monitoring software must respond in real time, often in less than a second, while running unattended for months at a stretch. Monitoring software is a substantial investment whose long-term characteristics are related only weakly to anything that can be determined beforehand. Reliability and availability are paramount virtues for monitoring software, yet there is no independent measurement of these crucial indicators. Selection of a monitoring tool can feel more like a mail-order marriage than an engineering project.
What’s the solution? One step is to recognize that “right tool” has a lot to do with organizational environment. In most companies, a mediocre tool that is the responsibility of an enthusiastic advocate ends up providing more benefit than a top-end purchase that no one bothers to consult after installation. Another big predictor of success: persistence and a good attitude within the vendor’s support team. IBM once was famous for teaching this lesson: with enough commitment and personal attention to “customer satisfaction”, even the most underdone systems could be enhanced to usability.
Here are a few ideas, then, to help you locate the “right tool” for your monitoring jobs:
- Most of your use of the tool will be “in maintenance mode”: after you’ve configured and customized it to your specific requirements. The difference between taking ten minutes or ten hours to get the first result, therefore, isn’t even rounding error in the ultimate calculation of the system’s success.
- On the other hand, whether the out-of-the-box experience takes ten minutes or ten hours to understand might be a powerful indicator of the overall quality and approach of the product.
- Support and documentation matter. It also matters how the support and documentation fit your organization. Some companies only make progress with a person physically on-site to give individual guidance; others are perfectly comfortable conducting support through e-mail, or even prefer it. Be realistic about what works in your neighborhood.
- Work with the vendor and internal advocates to design meaningful milestones. An example: “within a week of installation, application AA or site SS needs to appear on the real-time dashboard. At that point, if someone experimentally pulls the network connection for server MM, the tool will diagnose the fault within five seconds.” The latency can just as well be five minutes; the essence is to construct an objective, useful measurement that fits your requirements.
- Plan ahead of time how to deal with misses. If your choice fails to meet its first milestones, do you replace it with a different product, stop payment to the original vendor, demand more effort from the product’s internal advocate, or have your own information technology (IT) team build a new tool? Any of these might be right for your situation; what’s sure, though, is that missing a milestone and taking no action doesn’t help.
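The kind of objective milestone described above can even be automated. Here is a minimal sketch in Python, assuming the monitoring tool exposes some way to ask whether a fault has been flagged (modeled here as a hypothetical `probe` callable, since the real interface depends entirely on the product you choose):

```python
import time

def fault_detected_within(probe, deadline_seconds, poll_interval=0.1):
    """Poll `probe` (a callable returning True once the monitoring tool
    reports the fault) until it succeeds or the deadline elapses.
    Returns the elapsed seconds on success, or None for a missed milestone."""
    start = time.monotonic()
    while True:
        if probe():
            return time.monotonic() - start
        if time.monotonic() - start >= deadline_seconds:
            return None
        time.sleep(poll_interval)

# Illustration only: simulate a tool that flags the fault on the third poll.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

elapsed = fault_detected_within(fake_probe, deadline_seconds=5.0)
print("milestone met" if elapsed is not None else "milestone missed")
```

In practice, `probe` would query the product’s API or dashboard after someone pulls server MM’s network cable; the point is simply that “diagnose the fault within five seconds” becomes a pass/fail check rather than an impression.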
EUE and other monitoring tools have enormous potential to improve your operations. Make the most of that potential by thinking through beforehand what is the “right tool” for you.