
Monitoring SaaS Performance behind a Login Screen? No Sweat.
If you provide a SaaS app that requires users to log in, you know how challenging it can be to monitor the performance of pages behind that barrier. This is especially problematic because SaaS apps tend to be heavy, complex creations with a host of performance challenges, which means the web destinations most at risk for performance problems are also the most difficult to monitor.
Faced with this difficulty, SaaS providers have historically had a tough choice: pay thousands of dollars a month for an enterprise monitoring solution, or rely on a combination of local-environment testing and subjective user feedback (and hope for the best). A new feature in Yottaa Monitor breaks this mold by offering an affordable path to enterprise-grade monitoring behind a login.
How It Works
Yottaa Monitor can now send requests using any of the standard HTTP methods (GET, HEAD, POST, PUT, DELETE), with options for specifying custom HTTP header and body values. This flexibility lets you test specific pieces of a web application, such as an API endpoint or the functionality of a login screen, and, more importantly, set up ongoing monitoring of pages beyond a login screen.
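To make this concrete, here’s a rough Python sketch (using the requests library, with a hypothetical endpoint and credentials) of the kind of custom-method, custom-body request such a monitor fires off on a schedule:

import requests

# Hypothetical endpoint and credentials, for illustration only.
LOGIN_URL = "https://app.example.com/api/login"

# A POST check with a custom header and body, mirroring the kind of
# request Yottaa Monitor can now issue against an API or login form.
response = requests.post(
    LOGIN_URL,
    headers={"Content-Type": "application/json"},
    json={"username": "monitor-user", "password": "secret"},
    timeout=10,
)

# A monitor would alert when the status code or response time
# falls outside the bounds you expect.
print(response.status_code, response.elapsed.total_seconds())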
To set up a monitor for pages behind the login page of a SaaS app, first log in to the app and navigate to a page you’d like to monitor. Grab the URL and paste it into the URL field on Yottaa Monitor’s configuration screen, just as you would to set up a monitor for a public page. Then, once you configure your locations and other options, click the “Advanced” tab on the configuration panel. Before doing anything here, go back to the page in your app that you’re setting up to monitor. Using Firebug or a similar tool, locate the request header information for the page, where you will find the cookie that the app uses to track a logged-in session. Copy this cookie, return to Yottaa Monitor, check the box next to “Custom HTTP Header,” and paste the cookie into the Value field. Type the word “Cookie” in the Name field above.
Now you’re set: the cookie allows Yottaa Monitor to simulate a logged-in session, monitoring the performance of that page as if a real user were loading it.
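If you want to sanity-check the cookie before saving the monitor, a quick script like this one reproduces what Yottaa Monitor will do (the URL and cookie string below are placeholders; substitute the values you copied from Firebug):

import requests

# Placeholder values for illustration; use your own page URL and the
# cookie string copied from the request headers of a logged-in session.
PAGE_URL = "https://app.example.com/dashboard/reports"
SESSION_COOKIE = "session_id=abc123; remember_token=xyz789"

# Sending the copied cookie as a custom "Cookie" header reproduces
# the logged-in session, just as the monitor will on each run.
response = requests.get(PAGE_URL, headers={"Cookie": SESSION_COOKIE}, timeout=10)

# A 200 response that lands on the expected page (rather than a
# redirect back to the login screen) confirms the cookie is valid.
print(response.status_code)
print("login" in response.url)  # True would suggest a redirect to the login page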
Note: the cookie in a request header often contains sensitive information. Be very careful about using Yottaa’s Sharing feature on monitors configured with a cookie copied from a logged-in session, and only share with people you trust.
What Can Monitoring Show Us?
Here’s a real-world example of app monitoring in action, drawn from a popular SaaS product. For this example, we chose a page containing a simple form that had loaded slowly for us in the past. The poor performance was surprising because the page’s content is very light: only 325 KB total, well below average. We set up a monitor for this page following the steps outlined above.
What we found matched our expectations. The page reported consistently low Yottaa Scores, averaging a lowly 4 (out of 100) over the course of a few days. Time To Interact (a measure of total page load time from the user’s perspective) ranged widely, from about 5 seconds to well over 20 seconds for sustained periods.
“Time to Last Byte,” an indication of total backend delivery time, averaged 13 seconds across these samples, while average Time To Interact was around 15 seconds. Backend time is typically a small fraction of total page load time, with most of the load occurring on the front end; here the backend accounted for nearly all of it, which told us the problem was likely on the backend. We decided to dig into the trending graph, displaying different backend metrics in an attempt to isolate the problem. In particular, the big jump between average Connection time and average Waiting time was a red flag (see above).
Here’s a trending graph plotting raw samples of Time To Interact (red), Waiting Time (green), and Connection Time (blue) for part of the sample period.
We can see that the red flag around Waiting time was justified. Changes in Time To Interact correlate almost perfectly with changes in Waiting time, with only a couple of hours where the two diverged. The correlation held whether page loads took over 20 seconds or around 5 seconds. This points to a serious problem with Waiting time (a measure of how quickly the servers process the page’s requests). Connection time, on the other hand, was consistent.
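If you’d rather confirm this correlation numerically than eyeball the graph, you can export the raw samples and run a quick check like the sketch below (the values are made up for illustration, not our actual data):

# A minimal correlation check over exported monitor samples (seconds).
# Sample values below are invented for illustration.
from statistics import correlation  # requires Python 3.10+

time_to_interact = [5.2, 6.1, 21.4, 22.8, 5.9, 19.7, 5.4, 23.1]
waiting_time     = [3.8, 4.5, 19.9, 21.2, 4.3, 18.1, 4.0, 21.6]
connection_time  = [0.3, 0.4, 0.3, 0.4, 0.3, 0.3, 0.4, 0.3]

# A coefficient near 1.0 for Waiting time, alongside a flat Connection
# time, points to server-side processing as the bottleneck.
print(round(correlation(time_to_interact, waiting_time), 3))
print(round(correlation(time_to_interact, connection_time), 3))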
Needless to say, inconsistent performance and occasionally very long page load times are not good. Any SaaS provider would be horrified to learn that one of its pages took more than 20 seconds to load for hours at a time. But even more frustrating is that this is most likely a problem an experienced team could diagnose and fix, if only they knew about it!
See What Your Users See
Sporadic reports of performance problems are little help when they can’t be replicated locally. To be sure you’re doing everything you can to deliver a satisfying experience for your users, rigorous performance monitoring and testing inside the app are crucial. That means being able to isolate the root causes of performance issues, leaving nothing to chance.