Too Much Information: Monitoring Becomes a Big Data Problem

As the amount of information increases, it will increasingly be up to our systems to help us monitor for significant events.

In a keynote address this week at EMC World, VMware CEO Paul Maritz talked about the growing monitoring challenge as system pools, and the sheer amount of information those systems generate, grow ever larger.

To put it another way, monitoring has become a big data problem.

He claimed that the number of devices connected to the Internet is going up by a factor of 10, and that the growth is being fueled not just by humans, but also by an increasing number of Internet-enabled devices feeding more and more data into the system.

This ‘Internet of Things’ will interact with the virtual world in a fundamentally different way than we do, and these devices will feed us vast quantities of information, which we in turn have to process to build a better understanding of our world. Such devices could even be baked into the very systems we need to monitor.

Yet there is such a thing as too much information. As your logs fill up with more and more data, it becomes impossible for a human to find an issue within such a large pool of data. The volume becomes a curse.

Maritz says the underlying architectures on which we base our systems have to change to meet the new information processing challenges. He said that if you look at how Facebook and Google have handled this, they did not use traditional architectures, because those architectures couldn’t handle the processing requirements. IT pros across a variety of companies, not just the biggest data generators, have to take a cue from these companies and begin to transform the data center to meet these new computing demands.

Maritz said that at a certain point, it becomes up to the system to help us find the real problems. It’s what he called a ‘real-time analytics at scale’ problem. He gave an example of a customer with a pool of 30,000 virtual machines, which generates an astonishing 500 million events per hour. As he pointed out, “there is no way a human can make heads or tails of that.”
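To put those numbers in perspective, a quick back-of-the-envelope calculation using the figures from the keynote shows the rates involved:

```python
# Back-of-the-envelope rates for the keynote example:
# 30,000 VMs generating 500 million events per hour.
TOTAL_EVENTS_PER_HOUR = 500_000_000
VM_COUNT = 30_000

events_per_second = TOTAL_EVENTS_PER_HOUR / 3600
events_per_vm_per_hour = TOTAL_EVENTS_PER_HOUR / VM_COUNT

print(f"{events_per_second:,.0f} events/second overall")    # ~138,889
print(f"{events_per_vm_per_hour:,.0f} events/hour per VM")  # ~16,667
```

Nearly 139,000 events every second is far beyond what any on-call engineer could ever scan by eye, which is exactly the point.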

He said at that point, you need system automation where the system finds issues and signals you when things go wrong. At the simplest level, that could be a red light indicating a problem instead of a green one that indicates things are fine.

But it needs to be more than a simple signal that lets you know something is wrong. It has to also point you in the right direction, or even give you specific information and suggest ways to fix the issue (and it has to know the difference between really major problems and minor ones that won’t bring down the system).

Maritz went so far as to suggest that systems need to monitor at a whole new level, looking at usage patterns and predicting trouble before it even happens. That could let you know a server pool needs tuning, or that you need to provision additional services.

The future of monitoring is about automation and prevention. Data can be a blessing or a curse. If our future systems can help us find and resolve issues, sometimes even before they happen, that will make the job of a monitoring professional a lot easier than it is today.

Photo by cheyennezj | iStock Photo

 
