By Harry Roberts
Harry Roberts is an independent consultant web performance engineer. He helps companies of all shapes and sizes find and fix site speed issues.
Written by Harry Roberts on CSS Wizardry.
N.B. All code can now be licensed under the permissive MIT license. Read more about licensing CSS Wizardry code samples…
This is the second in a two-part post. Read Part 1.
Hopefully you made it here after reading part 1 of this post. If not, I’d encourage you to start there for some context.
After writing a somewhat insistent piece on the pitfalls of using Base64 encoding to inline assets (predominantly images) into stylesheets, I decided to actually gather some data. I set up a simple test in which I would measure some milestone and runtime timings across ‘traditionally’ loaded assets, and again across assets that had been inlined using Base64.
I started out by creating two simple HTML files that have a full-cover background image. The first was loaded normally, the second with Base64:
The source image was taken by my friend Ashley. I resized it down to 1440×900px, saved it out as a progressive JPEG, ran it through JPEGMini and ImageOptim, and only then did I take a Base64 encoded version:
harryroberts in ~/Sites/csswizardry.net/demos/base64 on (gh-pages)
» base64 -i masthead.jpg -o masthead.txt
This was so that the image was appropriately optimised, and so that the Base64 version was as faithful a copy of it as possible.
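For anyone following along, the data URI that ends up in the stylesheet can be assembled directly on the command line. This is just a sketch: the four-byte stand-in file below exists purely so the snippet runs anywhere; in practice you would point it at the real, optimised masthead.jpg:

```shell
# Build the data URI that gets inlined into the Base64 stylesheet.
# The stand-in bytes below (a JPEG magic-number fragment) are only
# there to make the sketch self-contained; use your real JPEG.
printf '\377\330\377\340' > masthead.jpg
uri="data:image/jpeg;base64,$(base64 < masthead.jpg | tr -d '\n')"
echo "background-image: url(\"$uri\");"
```

The echoed declaration is what gets dropped into the Base64 variant of the stylesheet in place of the regular `url("masthead.jpg")` reference.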
I then created two stylesheets:
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
.masthead {
height: 100vh;
background-image: url("[masthead.jpg|<data URI>]");
background-size: cover;
}
Once I had the demo files ready, I hosted them on a live URL so that we’d be getting realistic latency and bandwidth to measure.
I opened a performance-testing-specific profile in Chrome, closed every single other tab I had open, and I was ready to begin.
I then fired open Chrome’s Timeline and began taking measurements. The process was a little like:
Point 4 was an important one: any connection activity would have skewed the results, and in an inconsistent way, so I only kept results if there was absolutely zero connection overhead.
I then emulated a mid-range mobile device by throttling my CPU 3× and the network to Regular 2G, and ran the whole lot again for Mobile.
You can see all the data that I collected on Google Sheets (all numbers are in milliseconds). One thing that struck me was the quality and consistency of the data: very few statistical outliers.
Ignore the Preloaded Image data for now (we’ll come back to that later). Desktop and Mobile data are in different sheets (see the tabs toward the bottom of the screen).
The data was very easy to make sense of, and it confirmed a lot of my suspicions. Feel free to look through it in more detail yourself, but I’ve extracted the most pertinent and meaningful information below:
It’s clear that across all of these metrics we have an outright winner: nearly everything, on both platforms, is faster if we stay away from Base64. We need to pay particular attention to lower-powered devices with higher latency and restricted processing power and bandwidth, because the penalties here are substantially worse: 32× slower stylesheet parsing and 10.27× slower first paint.
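Part of that penalty is baked into the encoding itself: Base64 represents every 3 bytes of the original file as 4 ASCII characters, so the inlined image is at least a third bigger before the CSS parser even touches it. A quick sanity check of that overhead:

```shell
# Base64 maps every 3 input bytes to 4 output characters, so payloads
# grow by roughly a third (plus padding on the final group).
enc=$(printf 'xxx' | base64 | tr -d '\n')
echo "${#enc}"    # 3 bytes in, 4 characters out
```

And unlike a separate image request, those extra bytes sit on the critical path of the stylesheet, blocking render until they have all arrived and been parsed.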
One problem with loading images the regular way is the waterfall effect it has on downloads: we have to download the HTML, which then asks for the CSS, which then asks for the image, which is a very synchronous process. Base64 has a theoretical advantage in that it loads the CSS and the image at the same time (in practice there is no advantage because, although they both show up together, they both arrive late), which gives us a more concurrent approach to downloading assets.
Luckily, there is a way we can achieve this parallelisation without having to cram all of our images into our stylesheets. Instead of leaving the image to be a late-requested resource, we can preload it, like so:
<link rel="preload" href="masthead.jpg" as="image" />
I made another demo page:
By placing this tag in the head of our HTML, we can tell the browser to download the image up front instead of leaving the CSS to ask for it later. This means that instead of having a request chain like this:
|-- HTML --|
           |- CSS -|
                   |---------- IMAGE ----------|
We have one like this:
|-- HTML --|
           |---------- IMAGE ----------|
           |- CSS -|
Notice a) how much quicker we get everything completed, and b) how the image now starts downloading before the CSS file. Preloading allows us to manually promote requests for assets that normally wouldn’t get requested until some time later in the rendering of our page.
I decided to make a page that utilised a regular image, but instead of the CSS requesting it, I was going to preload it:
<link rel="preload" href="masthead.jpg" as="image" />
<title>Preloaded Image</title>
<link rel="stylesheet" href="image.css" />
I didn’t notice any drastic improvements on this reduced test case because preload isn’t really useful here: the request chain is already so short that we don’t get any real gains from reordering it. However, on a page with many assets, preload can certainly begin to give us some great boosts. I actually use it on my homepage to preload the masthead: it’s above-the-fold content that is normally requested quite late, so promoting it this way does yield a significant improvement in perceived performance.
One very interesting thing I did notice, however, was the decode time. On Mobile, the image decoded in 25ms as opposed to Desktop’s 36.57ms.
I’m not sure why this is happening, but if I were to make a wild guess: I would imagine images don’t get decoded until they’re actually needed, so maybe if we already have a bunch of its bytes on the device before we actually have to decode it, the process works more quickly…? Anyone reading who knows the answer to this, please tell me!
Although I did make sure my tests were as fair and uninfluenced as possible, there are a few things I could do even better given the time (it’s the weekend, come on…):
And that concludes my two-part post on the performance impact of using Base64. It kinda just feels like confirming what we already know, but it’s good to have some numbers to look at, and it’s especially important to take note of lower powered connections and devices. Base64 still feels like a huge anti-pattern.
Excuse the semantics here: I’m basically testing on my laptop and an emulated mobile device, but I’m not talking about screen sizes. ↩