Manual Optimization Is The Root Of All Evil
“Premature optimization is the root of all evil.”
Donald Knuth, renowned computer scientist
“Manual optimization is premature optimization.”
Chris Weekly, web performance guy
Ergo, manual optimization is…
THE ROOT OF ALL EVIL
The more time I spend working on automating web performance optimization, the more I appreciate just how unnecessarily painful and arduous manual optimization really is. Before joining the team here at Yottaa, I spent years in a “web architect” role at a larger company where I was responsible for defining presentation-tier standards and performance guidelines. I always tried to be a champion for the end user, and constantly evangelized WPO and other UX best practices. I met with some success in educating business stakeholders as to the importance of speed, but translating their general agreement into prioritized optimization projects was a real challenge. And even when time and resources were allocated to performance-related work, we were faced with numerous obstacles.
IT’S MADE OF PEOPLE
For starters, there’s the structure of the typical mid- to larger-sized team. Segregation of responsibilities can make implementing and maintaining a robust web performance optimization strategy incredibly difficult. Picture a team that includes any number of DBAs; core backend engineers; web application developers; presentation-tier web developers; content managers; creative designers; operations and release engineers. Each of these roles bears some responsibility for the overall performance picture, and communicating, implementing and maintaining an approach to WPO across these concerns is not an easy task. For smaller companies, there may be fewer people to coordinate, but that leaves each individual less time to master and apply best practices.
Another challenge is the complexity inherent in most web applications of any real size or age. If you started a web-facing enterprise more than a couple years ago, you are likely managing a heterogeneous ecosystem of technologies, including one or more web app frameworks; layout engines; content management systems; personalization engines; A/B or multivariate testing frameworks; reporting and analytics packages; and any number of third-party web-tier integrations across various web properties. It is difficult for individuals to obtain a holistic view into what the website is actually delivering to end users and precisely how each component fits in. Choices that seem reasonable in one narrow context may be a mistake in aggregate, when the final context of a page is taken into account. Competing business goals, complex and disparate systems, separation of concerns and distributed authorship all add up to the extreme balkanization of many sites’ pages and a serious challenge for consistent and effective optimization efforts.
Then there are the maintainability trade-offs. For example, image spriting is an excellent and important technique for reducing HTTP requests, but done manually it introduces potentially serious maintenance problems. Seemingly minor change requests can require a web developer to analyze markup and CSS for appropriate dimensions and positioning, edit any number of HTML and CSS source files, and possibly go all the way back to the designer for .psd layers and the regeneration of the sprite image itself. Tools like SpriteMe (http://spriteme.org) can help take some of the pain out of the actual sprite creation, but much of the overhead remains.
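To make the bookkeeping concrete, here’s a minimal sketch (names and layout are illustrative, not any particular tool’s API) of the offset math a sprite sheet forces you to maintain by hand: every icon’s class needs a background-position derived from where it sits in the combined image, and every change to one icon can shift the offsets of the rest.

```python
# Hypothetical sketch: given (name, width, height) icons stacked vertically
# in one sprite sheet, emit the CSS rules a developer would otherwise
# maintain by hand. The sheet URL and class naming are assumptions.

def sprite_css(icons, sheet_url="/img/sprite.png"):
    """Generate CSS classes with background-position offsets for a
    vertically stacked sprite sheet."""
    rules = []
    y = 0  # running vertical offset into the sheet
    for name, width, height in icons:
        offset = "0" if y == 0 else "-%dpx" % y
        rules.append(
            ".icon-%s { background: url(%s) 0 %s no-repeat; "
            "width: %dpx; height: %dpx; }" % (name, sheet_url, offset, width, height)
        )
        y += height  # the next icon starts where this one ends
    return "\n".join(rules)

print(sprite_css([("home", 16, 16), ("cart", 16, 24)]))
```

Insert a new 20px icon between “home” and “cart” and the cart rule’s offset changes too — which is exactly why hand-maintained sprites turn minor change requests into multi-file edits.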
HTTP caching headers are another potentially high-maintenance aspect of WPO. Setting far-future cache headers on all your resources practically requires automation of filename revving and path-rewriting mechanics, as you really don’t want to have to (and may not even be able to) manually touch every reference point every time you edit a cacheable resource. File suturing / concatenation and minification similarly need to be automated to avoid maintenance headaches.
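The revving mechanics above can be sketched in a few lines — this is a toy illustration under assumed names, not a production asset pipeline: embed a content hash in each resource’s filename so far-future cache headers are safe, and rewrite references automatically rather than touching every page by hand.

```python
# Hypothetical sketch of filename "revving": key each resource's name on its
# content so a changed file gets a new URL, making far-future Cache-Control
# headers safe to set. Function names here are illustrative assumptions.
import hashlib

def revved_name(path, content):
    """app.css -> app.<8-hex-digest>.css, keyed on the file's bytes."""
    digest = hashlib.md5(content).hexdigest()[:8]
    base, _, ext = path.rpartition(".")
    return "%s.%s.%s" % (base, digest, ext)

def rewrite_refs(html, mapping):
    """Swap every original resource path for its revved equivalent."""
    for original, revved in mapping.items():
        html = html.replace(original, revved)
    return html

name = revved_name("app.css", b"body { color: #333; }")
page = rewrite_refs('<link href="app.css">', {"app.css": name})
```

Edit the stylesheet and the digest — and therefore the URL — changes, so cached copies are never stale; doing that bookkeeping manually across every reference point is precisely the maintenance headache automation removes.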
Experienced entrepreneurs, smart startups and agile teams at all kinds of successful companies know you have to ship it. Try things quickly, iterate and move on. “Fail cheaply”, right? But when you spend time performance-optimizing pages by hand, you reduce the number of things you can try, and increase the cost of failure. And developer time is always a scarce resource that shouldn’t be squandered. On the other hand, if you ignore performance considerations, your users will be unhappy (and will behave differently), which in turn will make you, your boss and your investors unhappy. So, how are you supposed to optimize for developer time while making your users happy?
AUTOMATION IS THE ANSWER… BUT CURRENT TOOLS ARE INADEQUATE
Build-time tools like Maven’s “minify” plugin, Ruby Sprockets and the like can help keep source code maintainable and are a step in the right direction for automating web performance optimization. But they only get you partway there.
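In the spirit of those build-time tools, here’s a deliberately naive sketch of concatenation plus minification (a real minifier tokenizes, renames and does far more — this only strips comments and collapses whitespace, and the function name is my own invention):

```python
# Minimal sketch of build-time concatenation plus naive minification,
# illustrating what tools like the Maven minify plugin or Ruby Sprockets
# automate. Not production-grade: real minifiers parse, not regex.
import re

def concat_and_minify(sources):
    """Join several source strings, strip comments, collapse whitespace."""
    combined = "\n".join(sources)
    combined = re.sub(r"/\*.*?\*/", "", combined, flags=re.S)   # block comments
    combined = re.sub(r"^\s*//.*$", "", combined, flags=re.M)   # line comments
    return re.sub(r"\s+", " ", combined).strip()

out = concat_and_minify([
    "/* reset */ body { margin: 0; }",
    "h1 { font-size: 2em; }",
])
```

Concatenation cuts HTTP requests and minification cuts bytes, and because it runs at build time the checked-in source stays readable — which is exactly the maintainability win these tools provide.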
Newer run-time tools like Apache’s mod_pagespeed are positioned to help as well, but they don’t account for all aspects of optimization, and don’t work across all server technologies. None of the available build-time or server-side tools can optimize for all the relevant techniques, including DNS lookups; optimal CDN and cloud provider selection; HTTP caching headers; the many dozens of in-page YSlow and PageSpeed techniques such as compression, spriting, file suturing, minification and script deferral; and browser-specific domain sharding and protocol selection.
THE CLOUD IS COMING
Ultimately, I think the cloud is the only place where WPO can really come into its own across this entire broad spectrum of techniques. Automated WPO in the cloud is the only way to obtain all these benefits without exorbitant cost. It leaves developers free to write more maintainable code faster and to focus on new features. Now if only someone would build a robust WPO service in the cloud….
What is your WPO process? What tools do you use for automation? What role do you envision for cloud services in your WPO strategy? Leave a comment below, or drop me a line via email or twitter.