Open Source Web Site Performance Tools

Louis St-Amour wrote a very helpful comment on my “The Complete List of End-User Experience Monitoring Tools” post – helpful enough that I thought it deserved a whole post.

OWA indeed commonly means Outlook, so it’s kind of an unfortunate naming clash, but in this case I’m referring to Open Web Analytics from http://www.openwebanalytics.com/, which has what it calls “domstream recording”. This lets you see exactly what a user sees in their browser and where their mouse goes, though I’m not sure how deeply it is instrumented for performance timing or error catching. As I understand it, a similar feature is available in Piwik as a plugin. This brings up an interesting point that I heard a number of times at this year’s Velocity conference – when will web analytics start providing better web performance data? Yes, I know that Google Analytics has a Site Speed Analytics Report – but that is just a start.

PageSpeed is not just a Firefox/Chrome extension – it’s a web service now too! Seriously though, what I was referring to was mod_pagespeed, the Apache module, which beyond all its other fun goodies has an “instrumentation” feature that – you guessed it – measures “the time the client spends loading and rendering the page, and report[s] that measurement back to the server” (http://code.google.com/speed/page-speed/docs/filter-instrumentation-add.html). Google helpfully mentions that their instrumentation feature is not needed if you already have one of your own, and cites Boomerang, Episodes and Jiffy:
– http://yahoo.github.com/boomerang/doc/
– http://stevesouders.com/episodes/paper.php
– http://code.google.com/p/jiffy-web/

It is interesting that I have not found enterprises leveraging these wonderful free tools (if you work for a large enterprise and use these tools – please comment!). I believe the main issue for enterprises is that there is a lot of work one has to do to build a complete solution from these free tools – much of it around displaying the data in a meaningful way.

Sampling is indeed the worst part about Google Analytics … sigh. Sampling is a big problem when it comes to identifying problems. I see a lot of new free end user experience monitors that only sample end user response times. Thing is – sampling gives their marketing department what they need, but it leaves their customers guessing when problems arise. The reason problems with IT systems are so hard to catch is that they are typically erratic and unpredictable. Without a complete data set it becomes difficult to prioritize and troubleshoot.

Also, I would suggest that “real end user experience” tracking is something you’d need to be Google Chrome to get. After all, as Google notes about its own filter: “Note that the data reported by this filter is only approximate and does not include time the client spends resolving your domain, opening a connection, and waiting for the first few bytes of HTML.” Of course, that’s where the highly technical Speed Tracer comes in: http://code.google.com/webtoolkit/speedtracer/

I agree that most vendors do not do “real end user experience tracking” – they typically only track the datacenter’s contribution in an accurate way. So that is definitely something to look out for – especially if the applications you are responsible for serve company employees. Oftentimes, just being able to identify that a slow desktop is the reason a user is unhappy can go a long way.

Are there any other Web Site Performance tools worth mentioning? Please leave me a comment and I will add it to the...
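To make the instrumentation idea concrete, here is a minimal sketch (in TypeScript) of the approach these tools share: stamp a time as early as possible in the page, then beacon the elapsed time back to the server on load. This is not the actual code of mod_pagespeed, Boomerang, Episodes or Jiffy, and the /beacon endpoint is a made-up collector:

```typescript
// Minimal client-side load-time instrumentation sketch.
// Assumption: this script runs as early as possible in the page <head>,
// and "/beacon" is a hypothetical server endpoint that records the timing.

// Capture a timestamp the moment the first script executes.
const navStart: number = Date.now();

window.addEventListener("load", () => {
  // Roughly the time the client spent loading and rendering the page.
  const loadTimeMs = Date.now() - navStart;

  // Fire-and-forget GET beacon carrying the measurement back to the server.
  const img = new Image();
  img.src =
    `/beacon?load_time_ms=${loadTimeMs}` +
    `&page=${encodeURIComponent(location.pathname)}`;
});
```

Note that, exactly as Google’s caveat says, this style of measurement starts only once the first script runs, so DNS resolution, connection setup and the first bytes of HTML are invisible to it.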

read more

Application Performance Management is a Big Business Getting Bigger

If you doubt that application performance monitoring is gaining traction in the IT world, a recent Software as a Service (SaaS) system management sector ranking should dispel that idea. According to IDC’s “Worldwide System Management Software-as-a-Service 2011-2015 Forecast and 2010 Vendor Shares” report, the application performance monitoring unit of Compuware ranked third behind HP and IBM as a SaaS vendor, with revenue of $49 million.

Application performance monitoring is becoming recognized more and more as an indispensable tool in the IT manager’s toolbelt, and is almost a $6 billion market on its own. Recent moves in the APM sector certainly reflect that. In July, Compuware paid $256 million to pick up APM developer DynaTrace Software, spending big on a company that pulled in $26 million over the prior 12 months. As the month drew to a close, CA Technologies snapped up subscription-based monitoring company WatchMouse for an undisclosed amount. The demand for end user experience monitoring and business transaction management appears to be growing, and big companies like CA and Compuware are trying to take advantage.

Application performance is tied directly into all the things that make a customer’s business transaction positive, affecting the bottom line through revenue, brand value, and overall customer loyalty. APM is becoming increasingly important as SaaS growth explodes, and along with it the use of cloud computing and mobile applications. That means the primary user of business software is no longer the internal employee – at least, not solely. Your most important user can be the visitor on your web site, and if a poor application experience is the difference between a sale and no sale, APM can help tip the scales in your favor. IT departments are starting to catch on, which is why we’re seeing all these big moves in the APM sector, and we are likely to see more similar moves in the...

read more

Moving Beyond Synthetic Monitoring

If you’ve been following the recent financial crisis in Europe, you may have read a headline or two this month about how some European banks failed stress tests. Without getting too deep into the mechanics of the world of high finance, a stress test for a bank is pretty much what it sounds like: an outside organization runs a series of “what if” simulations that test the institution’s ability to stand on its own should the worst happen.

Simulations like this aren’t found only in the financial sector. They are also used in application performance management, in a process known as synthetic monitoring. Synthetic monitoring isn’t about checking out the latest fashions in polyester or rayon; it’s the process of simulating user transactions in an application to get a picture of how that application will respond. Multiple use cases are tossed at the application to prove (or disprove) that it won’t break. It is typically done on web applications by programming a browser with scripts that simulate user interaction.

There’s a definite place for synthetic monitoring in APM: it can be used to establish a baseline performance level before an application goes live on a production website or other deployment infrastructure. Basically, synthetic monitoring is useful for determining whether the app’s metaphorical engine will start and at least drive down the road without blowing up.

But there’s a big gap between not blowing up and actually running smoothly. Synthetic monitoring only gets to the big highlights of application monitoring, not the fine details. For truly complete APM, actual end user experience monitoring is essential, because only a real person will be able to communicate the intangibles to developers and testers. How responsive did the application “feel”? Did the transaction work as expected in a timely manner? What happened when you clicked the wrong thing? These are all important questions, and they’re not something that synthetic monitoring can report well. End user monitoring can find all the subtle little bugaboos that can trip up an application’s performance because, frankly, human beings are great at breaking things. In this case, that’s a good thing: knowing how an application breaks or slows down when something is done incorrectly is just as important as knowing how it handles when things go right.

Synthetic monitoring has its place in APM, but like everything else in APM, it cannot be relied upon as the singular tool. Business transaction monitoring and end to end monitoring must also be in the mix, to get a complete picture of how an application...
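To make the distinction concrete, here is a minimal sketch of a synthetic probe in TypeScript (assuming Node 18+ for the built-in fetch). The target URL and the two-second budget are placeholders of mine; real synthetic monitors script entire browser transactions, not just a single request:

```typescript
// Minimal synthetic monitoring sketch: a script, not a real user,
// fetches a page on a schedule and records how long the round trip took.

const TARGET_URL = "https://example.com/"; // placeholder target
const BUDGET_MS = 2000;                    // placeholder response-time budget

async function probe(): Promise<void> {
  const start = Date.now();
  const res = await fetch(TARGET_URL);
  await res.text(); // drain the body so the timing covers the full download
  const elapsed = Date.now() - start;

  console.log(
    `${TARGET_URL} -> HTTP ${res.status} in ${elapsed} ms` +
    (elapsed > BUDGET_MS ? " (over budget)" : "")
  );
}

function run(): void {
  probe().catch((err) => console.error("probe failed:", err));
}

// Poll every 60 seconds, the way a simple availability monitor would.
setInterval(run, 60_000);
run();
```

A probe like this proves the metaphorical engine starts; it says nothing about how the page felt to a real user on a slow desktop or a congested network, which is exactly the gap end user monitoring fills.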

read more

Top Three Tips for Cloud Application Management – From EMA

Recently, EMA came out with a very informative webinar focused on end user experience monitoring for the Cloud. You can see a full recording here (you do not even need to provide an email address). The entire slide deck is available below via SlideShare. If I had to sum up the top three tips in three words, they would be:

Be paranoid about your end user experience monitoring
– Watch every transaction from every user
– Get the key user experience metrics
– Look across space

Be smart about your end user experience monitoring
– Create performance models
– Compare between good and bad transactions of the same type
– Compare between new and old production behavior

Be lazy about your end user experience monitoring
– Enforce Service Level Agreements (SLAs) on end user experience
– Automatically alert when response times reach thresholds (see the sketch below)
– Automate actions to improve performance

[Slides: EMA – Measuring the User Experience in the Cloud, via Correlsense on SlideShare]

If you are interested in more on this topic you may find tomorrow’s live “Confessions of a CIO” webinar...
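As a concrete illustration of the “be lazy” tips, here is a minimal sketch in TypeScript of alerting when a measured response time breaches an SLA threshold. The threshold, the transaction name and the notify() stand-in are my own illustrative placeholders, not part of the EMA material or any product:

```typescript
// Minimal SLA-threshold alerting sketch.

interface Measurement {
  transaction: string;    // e.g. "checkout" or "login" (illustrative names)
  responseTimeMs: number; // observed end user response time
}

const SLA_THRESHOLD_MS = 3000; // placeholder SLA: 3 seconds

function notify(message: string): void {
  // Stand-in for a real alerting channel (email, pager, ticketing system).
  console.error(`ALERT: ${message}`);
}

function checkSla(m: Measurement): void {
  if (m.responseTimeMs > SLA_THRESHOLD_MS) {
    notify(
      `${m.transaction} took ${m.responseTimeMs} ms, ` +
      `breaching the ${SLA_THRESHOLD_MS} ms SLA`
    );
  }
}

// A breach fires an alert automatically: no human watching required.
checkSla({ transaction: "checkout", responseTimeMs: 4200 });
```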

read more

The Complete List of End User Experience Monitoring Tools

I am attempting to put together a complete list of End User Experience Monitoring tools, since I could not find one anywhere on the web. I need your help in order to complete this list – there are surely tools that I will miss – so please leave me comments with tools you think I should add.

What Qualifies as an End User Experience Monitoring Tool?

To count as an End User Experience Monitoring tool, it must be able to track the response times that real users experience when visiting the site – not a robot that is synthetically pinging the site. Specifically, I am referring to tools that enable IT operations to ensure that the real end users of an application or website are experiencing good performance. As I alluded to in a previous post, “speed solves a lot of problems” – even if your usability is not perfect, if the site runs fast, people are less likely to...
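For illustration, here is a minimal sketch in TypeScript of what tracking real user response times from the browser can look like, using the Navigation Timing API (available in modern browsers) and a hypothetical /rum collector endpoint of my own invention. Unlike a synthetic robot’s pings, these numbers come from an actual visitor’s browser and include DNS and connection time:

```typescript
// Minimal real user monitoring sketch using the Navigation Timing API.
// Assumption: runs in a modern browser; "/rum" is a hypothetical collector.

window.addEventListener("load", () => {
  const t = performance.timing;
  const metrics = {
    dnsMs:      t.domainLookupEnd - t.domainLookupStart,
    connectMs:  t.connectEnd - t.connectStart,
    ttfbMs:     t.responseStart - t.navigationStart,  // time to first byte
    fullLoadMs: t.loadEventStart - t.navigationStart, // what the user waited
  };

  // sendBeacon delivers the payload without delaying navigation away.
  navigator.sendBeacon("/rum", JSON.stringify(metrics));
});
```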

read more