The Future of Web Performance - Part 1
Web performance is critical to a successful online business. Keeping on top of the latest technologies, techniques and practices allows us to stay ahead of the curve and gain an edge over our competition. This competitive edge comes through improved user experience and reduced operational costs (and the pride of achieving a sub-1,000 Speed Index!).
- Very Soon...
- Third-Party Functionality
- Emerging Markets
- ECMAScript 6
- Low-Powered Devices
- Rich Content
- Web 3.0 / Semantic Web
Flash has been around forever (well, 10 years is forever in web-years). It allowed web designers to create rich experiences and port them directly to the web in a format compatible with most operating systems and browsers.
This. Was. Great.
So you want to embed a game on a page? Use Flash. So you want to have an interactive museum exhibit? Flash. The uses were nearly endless, from poorly designed website introduction animations right through to online music albums.
The reason Flash was so universal is that the core code was installed on your machine; the website element simply told the player where to download the Flash file from and how to render it. This opens the client computer up to quite a few security risks. Letting files from the web run directly in native code installed on your machine is generally a bad thing.
Losing Flash is a positive change. Flash was horrible for web performance and the modern replacements are more than up to the challenge. The HTML5 and interactive client-side applications that replace Flash have their own performance challenges, however... (A story for another blog post or ten).
The Electronic Frontier Foundation created a browser plugin called HTTPS Everywhere in 2011, and it has been in active development since 2012. The plugin always tries to redirect to a secure version of a site and its third-party resources. Its intention is to increase the security of your connections and to keep your data private across the network.

Google Chrome is planning to mark non-HTTPS pages as affirmatively non-secure in the future. This means that, as a site owner, you should expect to serve all traffic securely over HTTPS (oh, and you must use TLS, not SSL).

This paradigm shift has significant performance implications, as secure connections take time to establish. To optimise secure connections, ensure that HTTP keep-alive is enabled (the default for HTTP/1.1), that session ticketing is enabled and that your negotiating server is optimised for rapid negotiation. Ilya Grigorik runs the site isTLSfastyet.com, which has much more information on the topic (hint: the answer is yes, TLS is fast when done right!).
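As a rough sketch, the keep-alive and session-resumption settings mentioned above look something like this in nginx (values are illustrative, not a tuned recommendation):

```nginx
# Keep-alive is on by default for HTTP/1.1, but make the timeout explicit
keepalive_timeout 65;

# Reuse TLS sessions so repeat visitors skip the full handshake
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets on;

# Prefer TLS, never SSL
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```

Session caching and tickets both let a returning client resume a previous TLS session, trading a small amount of server memory for one fewer round trip on reconnection.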
Delivering content to multiple device-types well is hard.
Delivering content to multiple device-types fast is really hard.
We all know that desktop is dying, giving way to mobile and tablet consumers (especially while commuting). As images make up over 50% of the average site, and bandwidth is limited on mobile connections, delivering a fast responsive site to all users means sending the smallest images possible whilst maintaining visual quality.
This is by no means an easy task, otherwise companies would not be selling services specifically to make it easier. Fortunately for us, new HTML specifications are [being drawn up](http://responsiveimages.org/ "Responsive Image Working Group") and implemented in browsers which will help in our quest. These new specifications allow us to define images with breakpoints, just like our CSS, so that smaller devices get the correct size images and retina displays get their high resolution versions. See the srcset definition for a good place to start, and a useful explanation on CSS-Tricks.
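A minimal srcset example looks like this (the file names, widths and breakpoint are placeholders):

```html
<img src="photo-640.jpg"
     srcset="photo-320.jpg 320w,
             photo-640.jpg 640w,
             photo-1280.jpg 1280w"
     sizes="(max-width: 480px) 100vw, 640px"
     alt="A responsive photo">
```

The browser picks the smallest candidate that satisfies the layout width in `sizes`, automatically factoring in the device pixel ratio, so retina screens get the larger files without any extra markup.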
Unfortunately for us, this creates more work in creating the multiple image versions on top of the additional markup required in our HTML.
Hey, I didn't say we were getting a free ride.
Some build and automation tools will make this process easier for you, however. See the Grunt Responsive Images plugin (thanks to Andi Smith) as an example of responsive image automation.
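A Gruntfile using that plugin looks roughly like the sketch below (the sizes and paths are illustrative; check the plugin's README for the exact options it supports):

```js
// Gruntfile.js - a sketch of generating multiple image sizes
// with the grunt-responsive-images plugin.
module.exports = function (grunt) {
  grunt.initConfig({
    responsive_images: {
      site: {
        options: {
          // One output image per listed width
          sizes: [{ width: 320 }, { width: 640 }, { width: 1280 }]
        },
        files: [{
          expand: true,
          cwd: 'src/img/',
          src: ['**/*.{jpg,png}'],
          dest: 'dist/img/'
        }]
      }
    }
  });

  grunt.loadNpmTasks('grunt-responsive-images');
  grunt.registerTask('default', ['responsive_images']);
};
```

Running `grunt` then produces the resized variants for every source image, ready to be referenced from your srcset markup.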
If this all seems a bit much, your CDN or content delivery provider may be able to do it for you with little work on your end. Whatever happens, we need to keep marketing happy by delivering the best quality images, keep development happy by automating everything and keep IT happy by minimising cost of delivery.
This is not a small task!
I've mentioned earlier in this post that consumers are moving towards mobile devices. This shift in consumer behaviour is also causing a shift in the way we build our websites.
Mobile-first means developing your site to work on mobile devices, then scaling the design up to larger screens. This is as opposed to traditional responsive design which tends to wrangle desktop sites into smaller form-factors.
Being mobile-first means opening up your potential audience to the next billion online consumers. This point was made extremely well by Bruce Lawson in his keynote at Velocity Santa Clara 2015. Potential consumers in under-privileged areas will access your web site or application using a mobile device. Period.
Beyond the obvious advantages in being mobile-friendly, all types of online business will have customers following links from social media on their mobile phones. Unfortunately, these clicks generally use the webview built into the social application rather than a 'proper' web browser. This means that there are even more platform / device / browser combinations to worry about and further considerations on web performance (which cache will I use? What resources will I get?).
Designing for mobile-first means layouts that start small and grow with the screen, minimal network interactions and use of key HTML5 features such as Open Graph.
Designing for fast mobile-first means:
- Heavy use of caching
- Optimal use of network time
- Removing all unnecessary third-party assets
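The first point in the list above, heavy use of caching, can start at the HTTP layer. As a sketch (an nginx config fragment with illustrative paths and lifetimes, not a tuned recommendation):

```nginx
# Far-future caching for static assets; safe when file names
# are fingerprinted so new releases change the URL
location ~* \.(js|css|png|jpg|woff)$ {
    expires 1y;
    add_header Cache-Control "public";
}
```

With headers like these, repeat visits on a mobile connection can skip most network requests entirely, which is where the biggest wins on slow links come from.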
Microservices and Containers
Containers and Microservices are real buzz-words at the moment. If you are unfamiliar with the terms, have a quick read of this Medium article on the topic.
Breaking a monolithic application into atomic components - each with their own function and service definition - is a very logical move. Microservices have many benefits:
- Rapid development cycles - rapidly iterate services on fixed service definitions
- Easy replacement of functionality - changing payment provider? Replace the payment service
- Simple testing during development - stubbing of services means that each can be tested for performance and functionality in isolation
- Logical scalability - scale each component as necessary rather than the whole application
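The testing benefit above is easy to demonstrate. Here is a minimal sketch of service stubbing in JavaScript; the `checkout` function and `charge` service definition are hypothetical names for illustration:

```javascript
// A hypothetical checkout function that depends on a payment service.
// In production this would call the real provider over the network.
function checkout(cart, paymentService) {
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  const receipt = paymentService.charge(total);
  return { total: total, paid: receipt.ok };
}

// A stub standing in for the payment microservice. It honours the
// same service definition (a charge(amount) method) without any I/O,
// so checkout can be tested in complete isolation.
const paymentStub = {
  charge: function (amount) {
    return { ok: amount > 0 };
  }
};

const result = checkout([{ price: 10 }, { price: 15 }], paymentStub);
console.log(result.total, result.paid); // 25 true
```

Because the stub respects the fixed service definition, the same test still passes when the real payment provider is swapped out behind that interface.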
There's no escaping microservices. The concept offers many benefits and will apparently make web application development much simpler.
There are various performance and security issues that we need to be aware of, though. I won't try to enumerate all of the potential issues that microservices create, because Gareth Rushgrove did it better than I ever could at the most recent London Web Performance Meetup. Please find his fantastic slides here.
Wow, that's quite a list of things we need to care about, and I haven't even started to consider the way we measure and monitor site performance. Using WebPageTest, sitespeed.io, SpeedCurve or similar to test front-end performance before any release is critical. We also need to ensure that we include mobile and responsive breakpoint testing.