Why DIY Anomaly Detection Isn’t Working
When it comes to identifying and remediating eCommerce site issues, there's no question about the priority level, or the fact that it always needs to be done faster and better. There is, however, an age-old question for many tech departments: "Is this something we can do ourselves?"
Performance anomaly detection is an important piece of site optimization for eCommerce brands, and one that can save retailers from a ton of potential site issues — not to mention lost revenue — when done correctly.
Was your eCommerce site hit by any performance anomalies last year? Chances are good it was. Use this report to learn how you can navigate eCommerce performance anomalies moving forward.
Note that an anomaly is not necessarily bad; it simply signals a deviation from your site's normal performance. However, even a small anomaly can have a BIG negative impact on site performance, and one can affect any site, at any time. That means web pages, content, features, and other digital elements on your site may load slowly or not at all. Some anomalies can even cause complete site outages; many eCommerce brands experience exactly that during the Cyber 5.
Now, more than ever, it's vital to identify and remediate negative performance anomalies in real time, because the consequences retailers face for poor site performance are harsh (and expensive). For example, 90% of consumers say they have left an eCommerce site because it did not load in the time they expected.
The DIY anomaly detection challenge
So, is anomaly detection something that online brands can handle themselves? Finding an anomaly isn’t actually the hard part since they often stand out, as you can see in the example below:
Negative spikes in traffic are glaring and fairly easy to discover on your own, provided you are watching your reports constantly. In reality, it's uncommon for web teams to monitor their site averages on a daily basis. For the example above, online brands can use their firewall tool to see the blocked traffic; but in a typical environment, if those spikes didn't impact your servers, you would most likely never see an alert.
With manual anomaly detection, especially in the eCommerce industry, it’s important to do the job right. Only catching some of your site’s issues, some of the time, can still lead to major negative performance results, not to mention poor shopper experience and lost conversions.
Unfortunately, eCommerce web teams are already spread too thin with other initiatives, like working with 3rd party vendors and maintaining a systems backup solution for data and system recovery in the event of a failure, among millions of other tasks. And in the case of monitoring for anomalies, if it isn’t something that’s causing problems, then you most likely aren’t looking.
Diversity in anomalies can deter DIY detection
For example, the average eCommerce site has 40-60 3rd party technologies. While these elements greatly enhance the online experience for shoppers, they have also been shown to significantly slow down page speed, accounting for 70% of load time. But did you know that any one of those 40-60 3rd party technologies could cause an issue at any time?
That's because if the average site has 50 3rd party technologies, and each 3rd party can require between 4 and 20 calls to servers, that's anywhere from 200 to 1,000 potential points of failure.
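As a sanity check, the points-of-failure arithmetic above is easy to reproduce. This short Python snippet uses only the figures quoted in the text (50 technologies, 4-20 calls each):

```python
# Points-of-failure arithmetic from the text: an average site with
# 50 third-party technologies, each requiring 4-20 calls to servers.
third_parties = 50
calls_min, calls_max = 4, 20

# Every one of those calls is an independent opportunity for failure.
points_of_failure = (third_parties * calls_min, third_parties * calls_max)
# points_of_failure == (200, 1000)
```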
Another example is that maybe your web team is monitoring around 4 pages at a time manually, including the homepage, yet you have over 10,000 product pages. All marketing has to do is upload a giant, high-res image on one of the other 9,996 pages to cause an issue. If your team isn’t monitoring that specific page, how would you know?
Without an advanced web performance anomaly detection system in place, it can be almost impossible for retailers to catch these anomalies, but many brands try to do so by using in-house anomaly detection solutions.
In-house anomaly detection is costly to develop and maintain, and its capabilities are often basic. Too many things can go wrong when detecting anomalies manually; it can feel like finding a needle in a haystack. Maybe you didn't set up the right thresholds in the first place. Maybe the thresholds that were set up correctly are static and need to be updated as the business changes. And then there are the resulting false negatives: real anomalies you should have identified, but missed.
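To make the static-threshold problem concrete, here is a minimal, illustrative Python sketch (the load times and threshold values are hypothetical, not from any real site). A fixed threshold tuned once never fires on a spike that stays just under it, while a rolling baseline that adapts to recent history catches the same deviation:

```python
from statistics import mean, stdev

# Hypothetical page-load times in seconds, sampled hourly.
# Index 5 holds a 2.9s spike against a ~1.2s baseline.
load_times = [1.1, 1.2, 1.0, 1.3, 1.1, 2.9, 1.2, 1.4, 1.3, 1.2]

STATIC_THRESHOLD = 3.0  # set once during launch, never revisited

def static_alerts(samples, threshold=STATIC_THRESHOLD):
    """Flag samples above a fixed threshold. The 2.9s spike slips
    under the 3.0s bar, so nothing fires."""
    return [i for i, t in enumerate(samples) if t > threshold]

def rolling_alerts(samples, window=5, k=3.0):
    """Flag samples more than k standard deviations above the
    rolling mean of the previous `window` samples."""
    alerts = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if samples[i] > mu + k * sigma:
            alerts.append(i)
    return alerts
```

Here `static_alerts(load_times)` returns no alerts at all, while `rolling_alerts(load_times)` flags the spike at index 5, because it is judged against what the page was doing over the previous five samples rather than against a number chosen months earlier.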
The answer is automation … or is it?
There can be a lot of noise with manual web performance monitoring. But with automation, retailers can filter to the most critical alerts, and track many more metrics in real-time. Simply add in automated anomaly detection and you’re done, right? Not necessarily.
Tools like Google Analytics offer automation that can be set up to monitor web performance metrics and fire alerts when traffic spikes or page-load time abnormalities are detected. But the system is not yet as advanced as users would like. For example, in his analysis of the feature, Nikolaj Bomann Mertz from the Data Dynasty notes that "Being able to select what metrics/dimensions to monitor (as a minimum), reviewing the forecasted bounds for all metrics among many other things are on my wish list."
Not only do you need the right information at the right time in order to streamline remediation, but you need the entire story of what happened and when.
The real solution
In other words, the answer is actually optimization. The ideal solution for eCommerce performance anomaly detection comes with optimizing your site and all digital elements in order to understand the full picture.
YOTTAA has full visibility of all web pages and digital elements across entire eCommerce sites, at all times. Working together with its Anomaly AI tool, YOTTAA automatically assigns performance thresholds to the behavior of an eCommerce page, and machine learning algorithms adjust those thresholds based on historical trends and variations. Once performance thresholds are exceeded, Anomaly AI sends an alert that optimization is needed.
The best anomaly detection tools use automation, optimization, and machine learning to help online brands detect delays from individual page elements more quickly and precisely. This way, you can make your pages load even faster and more consistently, and avoid costly site outages.