Explore My Notes

Optimising LCP | Harry Roberts

Harry has created an absolutely phenomenal talk here that provides an immense amount of depth whilst still being completely accessible to someone like me who largely doesn't deal with the technical side of web performance.

If you're trying to wrap your head around LCP, this is the talk you need to watch. There are also slides available here and what appears to be a blog post of additional content on Harry's site here, both of which supplement the video well. My own notes are below.

What is LCP?

It's the time it takes for the largest piece of content on a page to finish rendering on screen. Not all elements are counted, though. Those that are include text blocks, headings, and images (including background images). Videos are partially included (more details below). Critically, this means that the LCP element may not be the focal point, the most useful, or even the most relevant piece of content on your page; context is irrelevant, and all that matters is size. (This feels very gameable. I wonder whether hiding a large white background image or something similar would trick it into always using the wrong element.)

It's a proxy for "how fast was the site, visually?"

What does "good" look like?

Google's LCP measurements regard 2.5s as "good". This is apparently ambitious (only around 10% of sites achieve this). It's also a moveable threshold; Google's own definition is clear that the threshold can change and Harry reckons it is likely to do so.

Why bother with it at all?

Google uses it to rank pages, but SEO is far from the only valid reason.

I was working with a client two months ago, and we worked out that a 500ms improvement in LCP would be worth an extra €11.5 million a year to them. This is staggering!

The better your overall site speed, the better the customer experience, and the more likely you will get meaningful business engagement from the customer as a result.

What elements are used?

  • <img>;
  • <image> within <svg>;
  • block-level elements that contain text nodes (e.g. paragraphs and headings);
  • <video> elements that have a poster image (only the loading of the poster image is actually counted);
  • any element with a background image loaded via url() (as opposed to background gradients, for instance).
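A rough sketch of what those candidates look like in markup (all filenames invented for illustration):

```html
<!-- Counted: a standard image -->
<img src="/hero.jpg" alt="Hero image">

<!-- Counted: an <image> inside an <svg> -->
<svg viewBox="0 0 100 100">
  <image href="/hero.jpg" width="100" height="100" />
</svg>

<!-- Counted: block-level text nodes -->
<h1>A large heading can be the LCP element</h1>

<!-- Partially counted: only the poster image is measured -->
<video src="/clip.mp4" poster="/poster.jpg"></video>

<!-- Counted: a background image loaded via url() -->
<div style="background-image: url('/hero.jpg')"></div>
```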

How to improve LCP?

You can't really optimise LCP in its own right; it's a culmination of several other preceding steps.

Your best bet is to optimise the steps that happen first. Optimise these steps and LCP will naturally improve.

The biggest red flag is a bad Time To First Byte. If your first byte takes several seconds, then you cannot have a good LCP.

The time delta between TTFB and LCP is normally your critical path... very crude, but it's a good proxy.

Note: If there's a gap between First Paint and First Contentful Paint then that's likely a web fonts issue, and this can also really hurt LCP.

The biggest thing is to focus on the element used.

Which is faster? Well, testing shows that text nodes are inherently the fastest option. SVGs appear to be faster than images or videos (which are about the same), and background images are the worst. But SVG appearing faster is down to a bug in Chrome's reporting; it should actually sit behind standard images and videos. So:

text > image > video > svg > background image

Also, all of the above assumes raw HTML.

We love HTML! HTML is super fast. Giving this to a browser is a good idea.

Why are some elements faster?

The big issue is that <image> elements in <svg> are hidden from the browser's preload scanner, which means they cannot be requested until most scripts have been run (Harry provides a whole technical reason for how browser engines work, but this is the tl;dr version).

Video is the inverse; the poster attribute is available to the preload scanner and therefore downloads in parallel with other resources, allowing the LCP to be much faster.

Note: there are rumours that <video> element handling may change in the near future for LCP calculations. Currently, a video without a poster attribute is just ignored, so you can have a fullscreen video playing and LCP will likely pick some floated text or a navigation element instead (fast). If the suggested changes occur, this will switch to be the first frame of the video, which will be slow. Hopefully, they keep the poster attribute caveat alongside any changes, at which point using poster will be massively beneficial for LCP (you will always download an image quicker than a video).
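As a sketch, the poster pattern looks like this (filenames invented); the poster image is what currently counts towards LCP, while the video file itself is ignored:

```html
<!-- The poster attribute is visible to the preload scanner, so the
     image downloads in parallel with other resources -->
<video src="/hero.mp4" poster="/hero-poster.jpg" autoplay muted loop playsinline>
</video>
```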

Background images will always be slow. Avoid them as much as possible!

External stylesheets, CSS-in-JS, even inline styles are all partially blocked. All CSS is hidden from the preload scanner, so any images will be downloaded towards the end of the waterfall.

This is a necessary part of CSS. You can have thousands of background images in a stylesheet, but the browser only wants to download the ones it needs, so it waits until the DOM is fully calculated to parse those styles (even if they are inline).

Overall, this massively helps with web performance, but it hurts LCP significantly, because it shunts your final paint to the very end of the rendering process.

Avoid background images above the fold, try to use simple <img> and text nodes instead. (I'm interested that <picture> isn't mentioned.)
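A minimal before/after sketch of that swap (class and file names assumed):

```html
<!-- Slower: the image URL is hidden inside CSS, so the preload
     scanner can't see it and it downloads late in the waterfall -->
<div class="hero" style="background-image: url('/hero.jpg')"></div>

<!-- Faster: the URL sits right there in the raw HTML,
     ready for the preload scanner to discover -->
<img class="hero" src="/hero.jpg" alt="Hero image">
```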

Common mistakes to avoid

DOM and browser APIs

Never lazy load anything above the fold! It slows the page load down significantly. And if you do need to do this (though you really don't), please don't use JavaScript-based lazy loading 🤦‍♂️

In particular, lazy loading hides the image from the preload scanner. That's super beneficial when used correctly, so long as you aren't lazy loading things you shouldn't be. It also means that even a natively lazy loaded image will still be much slower, because the browser technically "ignores" that part of the HTML and none of the resources are preloaded.
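In other words (filenames invented for illustration):

```html
<!-- Fine: a below-the-fold image benefits from native lazy loading -->
<img src="/footer-photo.jpg" loading="lazy" alt="Footer photo">

<!-- Avoid: lazy loading a likely LCP candidate above the fold hides
     it from the preload scanner and delays the largest paint -->
<img src="/hero.jpg" loading="lazy" alt="Hero image">
```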

Similarly, avoid overusing preload on linked resources. Try to avoid using it for anything that is already directly discoverable in the HTML (e.g. via a src attribute), but you can (and should) use it for things you want to pull in earlier than they otherwise would be, such as background images that you know you will need for LCP.
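A hedged sketch of that guidance (filenames invented):

```html
<!-- Worth preloading: a CSS background image the preload scanner
     can't otherwise see until the styles are parsed -->
<link rel="preload" as="image" href="/hero-bg.jpg">

<!-- Redundant: this URL is already discoverable via the src attribute -->
<link rel="preload" as="image" href="/logo.png">
<img src="/logo.png" alt="Site logo">
```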

The more you preload, though, the more bandwidth is split between multiple other resources, so the slower this benefit will get. There be dragons with preload in general too, so read up on it first (and rely on it last). And don't overuse it:

If you make everything important, then you have made nothing important.

Same goes for fetch priority levels/hints. All images are requested with low priority, but once the browser knows an image is rendered in the viewport, it upgrades it to high priority. Adding fetchpriority="high" to certain images can therefore help to tell the browser "hey, this will always be in the viewport on load" and skip that internal logic.

More recently, the decoding attribute can be used to make images decode synchronously, which is faster because it further prioritises things in the rendering order (needs testing per website though).
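A sketch combining both hints on a likely LCP image (filename assumed; as noted above, test the decoding change per site):

```html
<!-- fetchpriority="high" tells the browser this image will be in the
     viewport on load, skipping the low-to-high upgrade logic;
     decoding="sync" makes the image decode synchronously with paint -->
<img src="/hero.jpg" alt="Hero image" fetchpriority="high" decoding="sync">
```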

Content woes

Avoid using JavaScript wherever possible, but in particular for rendering your LCP. If the LCP element is only being built and rendered on the client, it will be blocked by the execution time of that script and then all of the normal DOM rendering.

Make sure your LCP candidate is right there, in HTML, ready for discovery. HTML is fast!

Don't host your LCP on a third-party site. Even hyper-optimised CDNs will massively hurt (even with significant compression enabled), purely because the round trip to another origin will always be slower than self-hosting.

As an example, Harry moved his homepage image (LCP element) onto Cloudinary, and saw a 2.6x increase in LCP, even though the file size was halved.

Always self host your static assets. Never host them on someone else's origin. There is never any performance benefit to doing that.

Be very careful about dynamic content, and late-loaded JavaScript elements such as cookie banners.

Dynamic content can fundamentally change the LCP element. If you have a large title on the page, and then change this to have fewer characters or words, your LCP might jump to a different, less optimal element, such as an image.

Or even worse, if that new element is now a cookie banner or something later in the waterfall, you can hammer your LCP.

📆 31 Jan 2023  |  🔗

  • Frontend, 
  • web performance, 
  • LCP, 
  • TTFB, 
  • preload, 
  • lazy loading, 
  • JavaScript, 
  • browser, 
  • preload scanner 

Container queries & typography | Robin Rendle

I've been saying for a couple of years that we are on the brink of a "fluid design" revolution in front-end development, similar to what happened around the late 2000s with "responsive design". Container queries are a key part of that puzzle, and Robin's example here of using a combination of fluid type (via clamp()) and container queries is precisely the kind of pattern I've been thinking about. Plus, I love seeing a solid example for the new cqw unit. (Though the standard accessibility disclaimer applies around fluid type and how it interacts with page and text zoom.)

It's so cool to see these patterns slowly becoming a possibility 🙌

Example code:

p {
  font-size: clamp(1rem, 2.5cqw, 2rem);
  line-height: clamp(1.35rem, 3.5cqw, 1.9rem);
}
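Worth noting: for cqw to resolve against a component rather than the viewport, an ancestor has to be registered as a container. A minimal sketch (selector names assumed):

```css
/* Register the component as a container;
   1cqw then equals 1% of its inline (width) size */
.card {
  container-type: inline-size;
}

.card p {
  font-size: clamp(1rem, 2.5cqw, 2rem);
  line-height: clamp(1.35rem, 3.5cqw, 1.9rem);
}
```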

On why container sizes (and viewport sizes) can have an outsized impact on typography:

This is because in typography the font-size, the line-height, and the measure (the width of the text) are all linked together. If you change one of these variables, you likely have to change the others in response.

📆 31 Jan 2023  |  🔗

  • HTML & CSS, 
  • clamp, 
  • cqw, 
  • container queries, 
  • fluid design, 
  • fluid typography, 
  • web design 

Stop using JavaScript objects | Theo

Theo has some really interesting videos, but the more I dig into the archive the more I find little gems like this. It's the definition of a quick tip, and it helps explain Maps and Sets in JavaScript (and their advantages) way better than anything I've seen before. The video ends with a hopeful statement that the viewer can maybe think of a few times they've used Objects or Arrays where a Map or Set would have been better and oh boy, yes I can 😂

Key takeaways:

  • Use a Map for any kind of "object-like" data that you need to edit, particularly if those edits include adding or removing items. The example given is a collection of Users that are themselves data objects, but a Map gives you a much quicker way to reference specific keys and modify them, or get/set users from that list.
  • Use a Set for data arrays that need to be unique (Sets automatically remove duplicates) or where you need to quickly add or delete values, as there are native functions for both of those operations that are more performant than looping through an array to find/replace/add data.
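A quick sketch of both takeaways (data invented for illustration):

```javascript
// Map: "object-like" keyed data with fast get/set/delete
const users = new Map([
  [1, { name: "Ada" }],
  [2, { name: "Grace" }],
]);
users.set(3, { name: "Alan" }); // add a user
users.delete(2);                // remove a user
console.log(users.get(1).name); // "Ada"
console.log(users.size);        // 2

// Set: uniqueness for free, plus constant-time add/has/delete
const tags = new Set(["css", "js", "css"]); // duplicate "css" is dropped
tags.add("html");
console.log(tags.has("js")); // true
console.log([...tags]);      // ["css", "js", "html"]
```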

📆 27 Jan 2023  |  🔗

  • JavaScript, 
  • data store, 
  • data object, 
  • type, 
  • Map (type), 
  • Set (type), 
  • JSON, 
  • array, 
  • TypeScript 

The origin of the lady code troll | Jenn Schiffer

I've followed Jenn for some time, but somehow missed this absolutely perfect talk they gave in 2016 at XOXO Conf. The humour is fantastic; the overview of the satire Jenn has put out is super interesting; and (as with all good comedy) the messages interwoven within the presentation are much-needed (sadly). 🦎

Understanding blogs | Tracy Durnell

I am a big fan of categorisation debates, so the concept of trying to define what a "blog" is (or isn't) piqued my interest. I'm glad it did, because Tracy has written a wonderfully well-thought-through post with some interesting insights. For the most part, I think it aligns with my own gut feeling on what makes a blog a blog, but I particularly liked the point about the style of a blog post: the fact that blogs take the form of a building argument, not necessarily voicing their intent or conclusion immediately, but instead guiding the reader through the narrative to naturally arrive at that conclusion. I agree wholeheartedly with this take, but I'm not sure that this is the essence of "blog-ness". I think that's just how people actually talk when given a platform.

It strikes me as the same style as newspaper opinion pieces, and very similar to the style that many cultures evolved for public speaking (the kind of public speaking where someone stands on a literal soapbox and espouses some ideal or idea). And I think that makes sense. Blogs tend to be personal spaces (or places attempting to make themselves appear personal, as with brand/business blogs) that give a person or persons a platform, but one which they want others to consider. A conversational tone is appealing at both ends of that transaction: it makes writing the post feel less like work, and it makes reading the post more natural and friendly. I dunno, I feel like there's something there, perhaps worth mulling over further 😊

On the reality that books and blogs are not merely the medium they inhabit:

Printing off a long blog and binding it together does not necessarily a book make; for one, books are weighted towards linear reading — start to finish — while blog posts do not have to be read in the order they were originally published.

On the irritation of graphic novels being crassly categorised in public library systems (and, indeed, bookstores):

I’m a fan of graphic novels, and consider them a different medium than prose books; it pisses me off that graphic novels and graphic non-fiction are shelved with the comic strips at my library under 741.5.

On the impact that the technology of the web has on how blogs work:

hypertextual capabilities encourage authors to supplement their text with links to their own work, forming networks of connected thought, and to references on other websites and online resources

On the nature of a blog and the impact it had/has on online culture:

As a self-published work, a blog reflects the author(s)’s or editor(s)’s premise unfiltered. This direct, decentralized form of publication democratizes writing and the sharing of ideas. As more people of all backgrounds participate in the blogosphere, the culture of blogging accepts less formal, more conversational writing styles.

On the difference with social media, and particularly why comments on social timelines tend to devolve whilst blogs at least have a chance of interesting discussion:

The immediacy of the feed encourages replies in the moment, while a blog post can be saved and mulled over for later engagement.

On Tracy's conclusion about what blogging is; I particularly like the emphasis on the "body of work":

And if you zoom out from the individual blog post level, in a sense this also describes what blogs are: a contemplation on a particular theme in depth (even if that theme is “the author’s life” or “stuff I like”). A blog is a body of work.

The great divide was indeed divisive | Chris Coyier

Chris reviews their thoughts on the infamous Great Divide article, with some useful additional nuance. Also, isn't it fun to see a blog post response to a blog post 😊

On the ultimate point of the OG Great Divide article:

Since there is too much for any web developer to know, what is the most graceful and professionally acceptable way of not knowing things?

Whatever the answer is, it’s definitely not “ignore, shit on, and downplay the things you don’t know and gatekeep the things you do.”

JavaScript, community | Zach Leatherman

There's been a growing backlash in certain circles to surveys like the State of JavaScript. I don't fully agree with the underlying rhetoric, and I do think that these surveys are both well-meaning and genuinely useful, if taken in context. Could they be more representative of the web? Sure, absolutely; but there will always be a reach issue, and some data is better than none.

Zach's (stealthy) entry on the topic feels like a much more valid critique. Rather than focusing on whether surveys like SoJS do enough to broaden their demographics, perhaps a better question is how useful they are for determining talking points about web culture more broadly. I often see stats from places like SoJS used to validate business decisions (the typical "we're using React because it's the most popular, see 👇") but Zach's points are more nuanced: by focusing on "JavaScript developers", these results ignore the vast majority of actual web work. In an industry still grappling with the Great Divide, is the divide a necessary evil, or something that is almost self-prophesied (an ouroboros-style for loop, perhaps 😂)? I'm not sure, but Zach's words have definitely given me pause to think 🤔

On the falsehoods of considering the web (as a whole) through the lens of the State of JS survey:

This JavaScript community (if judged by the demographics of this survey) seems to be comprised mostly of folks that are largely building with React, webpack, and Jest. With React on 3.2% of web sites and jQuery at 77.7% (as of January 2023), that’s a pretty small slice of a much larger community.

On the Great Divide:

The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?

This version of myself | Ana Rodrigues

A wonderful look into the practicalities and complexities of having an online presence. Should you have multiple domains for different purposes? How do you context switch online? And should you have to, or want to, in the first place? I like where Ana falls on these questions, and have come to similar conclusions myself. You can be wholly represented by a single domain, a single profile, a single purpose. But that sounds a little dull to me, so I choose otherwise 😉

On the central concern around having multiple domains, particularly for "web professionals":

There were times where I thought I regretted [having multiple domains/websites]. It crossed my mind that having only one domain to represent me would be the best marketing (this whole sentence is a can of worms).
If I have a professional domain to apply for jobs, does this mean that this is my unprofessional domain?

On whether you are "authentic" if you are divided (answer: yes):

But this, and I suppose this blog and my current social media activity, is indeed a true version of myself. Without any quote marks. But it is one of the many versions I have and they all have many things in common between them.

Written in stone | Ashur Cabrera

Both the photography and idea behind Written in Stone are great, but what really stood out to me here was the simple-yet-elegant design of the page. It works beautifully together, and I wanted to capture its simplicity:

I love the asymmetric symmetry of the whole page.

Disbanding the POSSE | Colin Devroe

I'm a big fan of the IndieWeb community, yet I've long struggled with using many of their protocols or guidelines. POSSE is one of those. I do POSSE content to a couple of platforms (though, so far, I haven't made the original "source" posts on this site public) but I do so manually, and I likely post more on those platforms natively than I engage with this system. That isn't due to any specific friction or issue, but more because I'm happier having my content distributed across multiple places. That said, since moving to Mastodon I have been toying with a "stream" domain – we'll see if I ever actually make it, though!

On the rare logic of why automation is not the answer:

I've decided I'm going to discontinue using automation in favor of manually writing posts for each of the platforms I want to post to. [...] I'd like to syndicate to more platforms and each of those have their own look, feel, and community driven norms.

On the loss of native functionality (this is something I've been thinking about a lot, and one of the key reasons I don't think I'd ever want to clone a note to multiple note platforms, e.g. Mastodon and Twitter):

Micro.blog does not support hashtags whereas on Mastodon and Twitter they are first class citizens. By POSSE-ing via this method I lose out on all of that.

Literature clock | Johannes Enevoldsen

Here's a fun idea: a website that tells the time, by showing you a paragraph or sentence from a piece of literature that contains it 😁 Simple, effective, and extremely fun!

A paragraph from "Extremely Loud and Incredibly Close", by Jonathan Safran Foer. It reads: "At 11.50pm, I got up extremely quietly, took my things from under the bed, and opened the door one millimetre at a time, so it wouldn't make any noise." The time (11:50pm) is in bold.
Yes, it is probably a little late to be noodling around the weird corners of the internet 😉

Is Web3 bullshit? | Molly White

Yes, it is 😉 Of course, Molly does a much better job of outlining why the Web3 experiment appears to be failing so spectacularly, and politely calls out the rest of the industry for allowing the existence of all the grifters, scammers, and criminals that have thrived within the bubble that Web3 has created.

The whole talk is an excellent summation of the work they have been doing over on Web3IsGoingGreat.

On the similarities between Web3 and the current web:

So many problems in today's web are driven by capitalistic forces, driving ruthlessly towards the enrichment of monopolistic tech companies, rather than the betterment of society. You'll have to excuse me for doubting that our utopian web dreams will be achieved through the introduction of a hyper-capitalist technology that aims to financialise everything on the web even further, and exposes user data on public ledgers where it can be scraped by even more tech companies than are profiting off our data today.