Explore My Notes

Josh's custom CSS reset | Josh W. Comeau

Josh has added some additional thoughts to Andy's CSS reset. Personally, I like a combination of the two (with a dash of Stephanie's best practices thrown in for good measure), but wanted to capture both for posterity.

On the grandfather of CSS resets by Eric Meyer (which has been my go-to for a while now as well):

For a long time, I used Eric Meyer's famous CSS Reset. It's a solid chunk of CSS, but it's a bit long in the tooth at this point; it hasn't been updated in more than a decade, and a lot has changed since then!

On why images should be block-level rather than inline:

Typically, I treat images the same way I treat paragraphs or headers or sidebars; they're layout elements.
By setting display: block on all images by default, we sidestep a whole category of funky issues.

On a more sensible default for interactive element font styles. Also TIL about the font shorthand property – very clever:

If we want to avoid this auto-zoom behavior, the inputs need to have a font-size of at least 1rem / 16px.
This fixes the auto-zoom issue, but it's a band-aid. Let's address the root cause instead: form inputs shouldn't have their own typographical styles!
font is a rarely-used shorthand that sets a bunch of font-related properties, like font-size, font-weight, and font-family.
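As a rough illustration of why that shorthand is so handy (the element and values here are my own, not from Josh's article), these two rules set the same properties:

```css
/* The `font` shorthand in one line... */
button {
  font: bold 1rem/1.5 Georgia, serif;
}

/* ...is equivalent to these longhands. One caveat: the shorthand also
   resets any font properties it doesn't mention (font-variant,
   font-stretch, etc.) back to their initial values. */
button {
  font-weight: bold;
  font-size: 1rem;
  line-height: 1.5;
  font-family: Georgia, serif;
}
```

Which is what makes the one-liner `font: inherit` in the reset below work: it pulls the whole typographic stack down from the parent in a single declaration.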

And the reset itself:

/*
  Josh's Custom CSS Reset
  https://www.joshwcomeau.com/css/custom-css-reset/
*/

*, *::before, *::after {
  box-sizing: border-box;
}

* {
  margin: 0;
}

body {
  line-height: 1.5;
  -webkit-font-smoothing: antialiased;
}

img, picture, video, canvas, svg {
  display: block;
  max-width: 100%;
}

input, button, textarea, select {
  font: inherit;
}

p, h1, h2, h3, h4, h5, h6 {
  overflow-wrap: break-word;
}

#root, #__next {
  isolation: isolate;
}

A modern CSS reset | Andy Bell

Andy always has some interesting thoughts about CSS, and this reset is no exception. Lots of interesting things here that fit very nicely with both my own experience and other resets that I've seen.

On the evolution of CSS resets:

In this modern era of web development, we don’t really need a heavy-handed reset, or even a reset at all, because CSS browser compatibility issues are much less likely than they were in the old IE 6 days.

On line-heights and optimising text rendering:

I only set two text styles. I set the line-height to be 1.5 because the default 1.2 just isn’t big enough to have accessible, readable text. I also set text-rendering to optimizeSpeed.

Using optimizeLegibility makes your text look nicer, but can have serious performance issues such as 30 second loading delays, so I try to avoid that now. I do sometimes add it to sections of microcopy though.

And the reset itself:

/* Box sizing rules */
*,
*::before,
*::after {
  box-sizing: border-box;
}

/* Remove default margin */
body,
h1,
h2,
h3,
h4,
p,
figure,
blockquote,
dl,
dd {
  margin: 0;
}

/* Remove list styles on ul, ol elements with a list role, which suggests default styling will be removed */
ul[role='list'],
ol[role='list'] {
  list-style: none;
}

/* Set core root defaults */
html:focus-within {
  scroll-behavior: smooth;
}

/* Set core body defaults */
body {
  min-height: 100vh;
  text-rendering: optimizeSpeed;
  line-height: 1.5;
}

/* A elements that don't have a class get default styles */
a:not([class]) {
  text-decoration-skip-ink: auto;
}

/* Make images easier to work with */
img,
picture {
  max-width: 100%;
  display: block;
}

/* Inherit fonts for inputs and buttons */
input,
button,
textarea,
select {
  font: inherit;
}

/* Remove all animations, transitions and smooth scroll for people that prefer not to see them */
@media (prefers-reduced-motion: reduce) {
  html:focus-within {
   scroll-behavior: auto;
  }
  
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}

Andy's evergreen notes | Andy Matuschak

Andy has built a career considering the impact of note-taking and ways to maximise it as a way to gain deeper insights and develop general knowledge. This notes microsite is one of their related experiments: a fluid UI that maps out navigation through interlinked notes. It's clever, with some nice touches, but I do find it a little clunky. Still, plenty here to take note of, if you'll excuse the pun.

A four-column, text-based UI. Each column has a vertical scrollbar, independent of the others, so that each one is at a different scroll position. Each column is a note, with distinct sections that include references and backlinks (at the bottom). When hovered, a backlink shows a preview of the page it links to.
I do like the distinction between references, content, and back links, as well as the hover cards.

Note-taking apps don't make us smarter | Casey Newton

A surface-level look at the world of note-taking apps, what the impact of "AI" may be, and why the much-lauded benefits of these tools never really seem to materialise. Ironically, I found the article as it's currently the most-read recent piece on Readwise, a tool explicitly designed as a note-taking app, which may not be the best indication of the benefits its users are feeling (or not feeling, as this would indicate 😅).

I agree with Casey's experience around note-taking, in that simply building a database of neatly back-linked and tagged information does not make for improved recall or understanding, but I find it funny that the single most successful tool for that very endeavour is rarely discussed in these circles: wikis. Wikipedia has managed to curate and interlink a vast amount of humanity's knowledge, and it is used by researchers and the general public all over the world for learning purposes. Personal wikis have similar capabilities (yes I am aware of the obvious bias as I write this on my personal wiki). Yet they rarely get a mention alongside tools such as Obsidian or Roam which, whilst incredible in their own way, maybe shouldn't be the final resting place for that very reason. A wiki encourages a certain level of self-curation and expansion. Though, I suppose, there is also nothing stopping people from using Obsidian like that, either.

Still, I definitely think the points here about AI are valid: if there is one area AI will come to dominate it is information retrieval and summarisation.

On the pros and cons of having all of human history and creative output at your fingertips:

Collectively, this material offers me an abundance of riches — far more to work with than any beat reporter had such easy access to even 15 years ago.

And yet most days I find myself with the same problem as the farmer: I have so much information at hand that I feel paralyzed.

On one of the few real benefits I feel AI will serve – recall:

An AI-powered link database has a perfect memory; all it’s missing is a usable chat interface. If it had one, it might be a perfect research assistant.

On the power of AI as a tool for bubbling ideas up from an archive:

But if I could chat in natural language with a massive archive, built from hand-picked trustworthy sources? That seems powerful to me, at least in the abstract.

On the problem with hallucinations and lack of citations:

A significant problem with using AI tools to summarize things is that you can’t trust the summary unless you read all the relevant documents yourself — defeating the point of asking for a summary in the first place.

On the issue with most note-taking software, how it abstracts away the central purpose (questioning your own knowledge); I also just really like the "transient strings of text" bit:

I’ll admit to having forgotten those questions over the past couple years as I kept filling up documents with transient strings of text inside expensive software.

Jamie's bookshelf | Jamie Adams

I'm a sucker for a personal collection displayed on the web, and I really love the simplicity of Jamie's design for his digital bookshelf. An easy rating system; simple (and fast) filters; and a very refined UI that is made visually appealing by the use of the actual book covers and some nice whitespace application.

A webpage titled "bookshelf" displaying a reverse-chronological timeline, where each point is a specific book review. Each review details the book's author, title, cover, and rating (provided in thumbsup emojis on a 1-5 scale), as well as a review, many of which are only a few words long. Filters for "all books", "fiction", and "non-fiction" are provided via radio buttons at the top, and the page has a blurb explaining that Jamie has been recording the books they read since 2012, that this list starts in 2020, and how the rating system works.
Another thing I love: the way the design encourages brevity in the reviews. That's something I could happily pinch 😉

💀 NOTE: The original site no longer appears to be online; the source has been replaced with an Internet Archive link.

Automating the deployment of your static site on your own server | Chris Ferdinandi

Need to self-host your front end away from the "modern" services like Netlify or Vercel? As both continue to get a little sketchier with time, it's definitely something I'm having to consider. Deploying a static site to an old-school host (like my favourite: Krystal) is easy enough, but you lose that wonderful "rebuild on Git push" that we've all become accustomed to. Or do you?

Whenever I push an update in my website's master branch in GitHub, it notifies my server through something called a webhook. My server pulls the latest version of my site from GitHub, runs a Hugo build, and moves the built files into my live site directory.

Chris has written up how to achieve similar functionality on DigitalOcean. I'm not sure how well this would translate to other services, but the toolchain seems generic enough. You need:

  • A GitHub account;
  • Some form of PHP hosting with control over file and folder placement and SSH/CLI access;

And that seems to be about it.

The steps look like this:

  1. Set up your repo on GitHub;
  2. Install whatever build tool you want to use on your server (Chris uses Hugo, but any front-end framework should work the same way);
  3. Create an SSH key for the server;
  4. Clone your GitHub repository to the server via SSH: git clone git@github.com:<YOUR_USERNAME>/<YOUR_REPOSITORY>.git
  5. Create an automation script within a PHP file; Chris uses deploy.php and the code below;
  6. Add a deploy secret as an environment variable (likely in the .htaccess file);
  7. On GitHub, set up a webhook with the "payload URL" as your domain plus the name of the PHP script, e.g. theadhocracy.co.uk/deploy.php;
  8. Test it out.
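For step 8, one way to smoke-test the endpoint before wiring up GitHub is to sign a dummy payload yourself. GitHub signs the raw request body with HMAC-SHA1 and sends the digest in the X-Hub-Signature header as sha1=<hex>. This is just a sketch: the secret, payload, and domain below are all placeholders, not values from Chris's post.

```shell
# Placeholder values: swap in your real secret and domain
SECRET='my-deploy-secret'                        # must match GH_DEPLOY_SECRET
BODY='payload={"commits":[{"message":"test"}]}'  # raw form-encoded body
# HMAC-SHA1 the body and format it like GitHub's header value
SIG=$(printf '%s' "$BODY" | openssl dgst -sha1 -hmac "$SECRET" | sed 's/^.* /sha1=/')
echo "X-Hub-Signature: $SIG"

# Then POST it to the deploy script, e.g.:
# curl -X POST "https://example.com/deploy.php" \
#   -H 'Content-Type: application/x-www-form-urlencoded' \
#   -H "X-Hub-Signature: $SIG" \
#   --data "$BODY"
```

If the secret matches, the script should run the build; if not, you should see "Bad secret" in the response and deploy.log.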

Here's the code (note Chris' version may be more up to date, just saving here in case that post disappears):

<?php

    /**
     * Automated deploy from GitHub
     *
     * https://developer.github.com/webhooks/
     * Template from ServerPilot (https://serverpilot.io/community/articles/how-to-automatically-deploy-a-git-repo-from-bitbucket.html)
     * Hash validation from Craig Blanchette (http://isometriks.com/verify-github-webhooks-with-php)
     */

    // Variables
    $secret = getenv('GH_DEPLOY_SECRET');
    $repo_dir = '/srv/users/serverpilot/apps/<YOUR_APP>/build';
    $web_root_dir = '/srv/users/serverpilot/apps/<YOUR_APP>/public';
    $rendered_dir = '/public';
    $hugo_path = '/usr/local/bin/hugo';

    // Validate hook secret
    if ($secret !== NULL) {

        // Get signature
        $hub_signature = $_SERVER['HTTP_X_HUB_SIGNATURE'];

        // Make sure signature is provided
        if (!isset($hub_signature)) {
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: HTTP header "X-Hub-Signature" is missing.' . "\n", FILE_APPEND);
            die('HTTP header "X-Hub-Signature" is missing.');
        } elseif (!extension_loaded('hash')) {
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Missing "hash" extension to check the secret code validity.' . "\n", FILE_APPEND);
            die('Missing "hash" extension to check the secret code validity.');
        }

        // Split signature into algorithm and hash
        list($algo, $hash) = explode('=', $hub_signature, 2);

        // Get payload
        $payload = file_get_contents('php://input');

        // Calculate hash based on payload and the secret
        $payload_hash = hash_hmac($algo, $payload, $secret);

        // Check if hashes are equivalent
        if (!hash_equals($hash, $payload_hash)) {
            // Kill the script or do something else here.
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Bad Secret' . "\n", FILE_APPEND);
            die('Bad secret');
        }

    }

    // Parse data from GitHub hook payload
    $data = json_decode($_POST['payload']);

    // The pushed ref (e.g. "refs/heads/master"); only used for logging
    $branch = isset($data->ref) ? $data->ref : 'unknown';

    // Initialise as an empty string so the concatenations below don't
    // throw an undefined-variable notice
    $commit_message = '';
    if (empty($data->commits)){
        // When merging and pushing to GitHub, the commits array will be empty.
        // In this case there is no way to know what branch was pushed to, so we will do an update.
        $commit_message .= 'true';
    } else {
        foreach ($data->commits as $commit) {
            $commit_message .= $commit->message;
        }
    }

    if (!empty($commit_message)) {

        // Do a git checkout, run Hugo, and copy files to public directory
        exec('cd ' . $repo_dir . ' && git fetch --all && git reset --hard origin/master');
        exec('cd ' . $repo_dir . ' && ' . $hugo_path);
        exec('cd ' . $repo_dir . ' && cp -r ' . $repo_dir . $rendered_dir . '/. ' . $web_root_dir);

        // Log the deployment
        file_put_contents('deploy.log', date('m/d/Y h:i:s a') . " Deployed branch: " .  $branch . " Commit: " . $commit_message . "\n", FILE_APPEND);

    }    

EAA: what you need to know | Craig Abbott

An excellent overview of the European Accessibility Act, how it overlaps with existing regulations, and the impact it might have. Doesn't get too technical, but does do a good job of explaining the high-level concepts and addressing a few tangential questions, like the result of Brexit on the UK's position.

(Can you tell I'm currently researching the EAA for work?)

On the WAD and where it applies:

It only applies to you if you're a public sector body in Europe, including the UK, as it was implemented in 2016 before Brexit.

On the UK equivalent of the WAD:

The Public Sector Bodies Accessibility Regulations 2018 are, as the name suggests, aimed at public sector bodies also. But, the important difference here is that this is UK specific law.
Because they are harmonised, if you meet the Public Sector Bodies Accessibility Regulations, you will also meet the EU Accessibility Directive. Cool huh?

On the standard behind these laws, EN 301 549:

However, in short, EN 301 549 is a technical standard that lays down the accessibility requirements for ICT (information and communications technology) products and services.
It is essentially just a set of guidelines which tells organisations how to make products accessible, rather than all the legalities which explains what happens to them if they do not!

On the EAA in general:

The European Accessibility Act, on the other hand, is focused on private sector organisations selling products or services to customers that live in EU member states.
And, it's the first time that, whether you provide public services to people in the EU, or you sell products or services to customers in the EU, you will all need to meet the same standard of accessibility.
It's a significant step towards making the digital world more inclusive for everyone and I couldn't be more excited for it to arrive!
Simply put, if you're in the private sector and you operate within the EU, you'll have to make sure your products and services meet the EAA's accessibility requirements. That means more than just making your website accessible.

On the "increased market reach" argument (I hate these kinds of business-focussed accessibility arguments, but the stat is a good one to have up your sleeve):

With more than 80 million people in the EU living with disabilities, making your digital offerings accessible can significantly expand your customer base.

On how Craig feels the same way 😅:

Again, I realise I’m making this about money, but we are talking about the private sector here, and unfortunately in private sector, money makes the world go round!

On the impact of the EAA in a post-Brexit Britain:

As we just mentioned, the EAA affects organisations selling to customers in EU member states. So if that's your organisation, then put simply, you need to comply!
In the UK, private sector companies are also still bound to the Equality Act 2010, which makes it illegal to discriminate against people with protected characteristics, of which one is disability. This means private sector organisations have been expected to make reasonable adjustments for people with disabilities for over a decade anyway.

On the way UK companies could find themselves outcompeted by European, accessible ones:

Even if you only ever plan on selling to people in the UK, your product is going to appear far inferior to your EU competitors if you don’t adopt the same standards. Trust me, there is no competitive advantage in having an ableist product or service in 2023!
Accessibility is not just about ticking boxes; it's about inclusivity. Implementing these standards shows that your business values all customers, regardless of their physical abilities.

Article 32, and why it sucks! | Craig Abbott

The EAA's Article 32 is the one that people get a bit angry about, particularly in accessibility circles. In brief, it gives companies with existing contracts or customers an additional five years to make their stuff accessible (i.e. until June 2030). Bizarrely, that caveat is even longer for physical self-service machines, which get a whopping twenty years additional grace period. As Craig points out, that feels excessive, particularly given how slow to upgrade those services are in the first place (i.e. they probably need to be upgraded sooner than most digital services).

Still, whilst it's an annoying caveat that will certainly be abused by some companies to kick the can down the line – particularly with internal tooling – it doesn't really offer that large of a loophole. Any new sales, customers, or contracts must be accessible from 2025, so if you're updating your software and hardware anyway, why wouldn't you push that out to all customers?

I guess it gives legacy codebases a pass (the kind where one stubborn client refuses to upgrade for a decade, because they prefer one small feature or have been grandfathered into a sweetheart payment scheme or something), so companies have a five-year runway to sunset them, but considering we've already had a five-year runway it does feel a little unnecessary. Still, as Craig says, they might get some extra leeway, but...

But, come 2030, we'll all be here to nail them to the floor!

Tick-tock.
Tick-tock.

On what Article 32 actually says:

The first part of Article 32 sets up a transitional period that ends on 28 June 2030.
It's a grace period for service providers to continue using products they were already using before the 28 June 2025 cut-off date.
Any new stuff purchased will need to be accessible immediately once that 28 June 2025 deadline passes.
Service contracts that were agreed before 28 June 2025 can keep going without any changes until they expire, but only for a maximum of five years from that date.
Article 32 basically says, if they were lawfully used before 28 June 2025, they can still be used until they're no longer 'economically useful', or up to a maximum of 20 years.

On the annoying 20-year grace period for self-service machines:

This bit is probably the part that sucks the most, because most self-service terminals are already about 20 years out of date! Let's be honest, ticket machines for subway trains and supermarket self-checkouts aren't exactly what we think of when we give examples of 'cutting edge technology'!

Tailwind and the death of web craftsmanship | Jeff Sandberg

I have used Tailwind on various projects. I think for prototyping and quick proof of concepts, for one-off projects that never need to be updated, it has some advantages. But for code that you want to last; code that needs to be maintained; code that should have thought applied to it, Tailwind has issues. Jeff has done a great job of outlining what those issues are, pointing out that CSS is currently undergoing a CSS3-level reinvention that solves almost all of the issues Tailwind purportedly solves, but better, and offering some additional thoughts around the edges of why Tailwind seems to have been adopted so rapidly.

On how Tailwind causes long-term debugging headaches, particularly for those of us who build/debug in the browser (read: front-end specialists): 

Yeah, you might have auto-completion in your editor, but the browser inspector is utterly neutered. You can't use the applied styles checkbox, because you'll wind up disabling a style for every usage of that tailwind class.
Due to the tailwind compiler, which was a solution to the "shipping massive amounts of CSS" problem, you don't have a guarantee that the class you're trying to poke into the inspector view will actually exist.

On the needless repetition of writing CSS via Tailwind (or any utility-class-only system):

You can't chain selectors. If you want your hover, focus, and active classes to be the same, you have to write them all.

On the issues with declaring "specificity is the devil" or "the cascade is too complex to scale" and refusing to use the language as designed:

Tailwind, and other utility frameworks, throw out the best and most often misunderstood aspects of CSS. CSS was explicitly designed with certain features, that, while complicated, are immensely useful once you can master them. The cascade, selector chaining (see above), and specificity are all very powerful tools. If you misunderstand them, yeah you can cut yourself. Just like if you don't use a saw properly you can cut yourself.
Selector chaining, particularly with the advent of new CSS selectors like is and where, allows very clean reuse of common bits of CSS. You get to separate the properties that affect how something looks from what is being affected. It's obvious, but in tailwind, you don't get that.
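A quick sketch of the kind of chaining Jeff is describing (the `.button` class and custom property here are my own hypothetical, not from the article):

```css
/* One rule covers all three interactive states... */
.button:is(:hover, :focus-visible, :active) {
  background: var(--button-highlight);
}

/* ...whereas utility classes must repeat the value per state on every
   element, e.g. hover:bg-blue-600 focus:bg-blue-600 active:bg-blue-600 */
```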

And on how Tailwind still has specificity even if proponents like to argue it doesn't (a breed of specificity with no clear method to work around, no less, which often causes bigger headaches than the native cascade would have in the first place):

And it's a lie to say you don't have to worry about specificity in tailwind; tailwind has its own specificity. Classes are read left-to-right by the browser, and so if you do b-1 bg-blue b-2, guess what? Your element gets both b-1 and b-2, and the browser will probably choose b-2 as your border, and throw away b-1.

On @apply:

@apply is a gigantic code-smell, it goes against everything tailwind supposedly stands for, and doesn't make writing your CSS any easier.

On who advocates the loudest for Tailwind:

The people I've seen who are most excited over tailwind are generally those that would view frontend as something they have to do, not something they want to do.

On how Tailwind's time (if it ever truly existed) may be coming to an end:

I've seen other engineers, of all levels, stuck in a mire of bad CSS, and so to them maybe Tailwind seems like a lifesaver. But CSS is better now. It's not perfect, but it's better than it's ever been, and it's better than tailwind. Give it another try.

LibreWolf

An open-source fork of the Firefox web browser, with a strong(er) focus on privacy and security. Basically disables all telemetry and most of the slightly questionable decisions that Mozilla have made over the years, either by stripping them out completely or relegating them to user-controlled settings (such as session restore). Also comes bundled with an ad-blocker and a few privacy settings enabled by default. Claims to keep up to date with the latest Firefox release, with a 1-3 day lag to ensure nothing slips through.

Not worth the switch right now (it obviously also disables features like Firefox Sync) but useful to remember in case Mozilla goes off the deep end 😉

The ideal viewport doesn’t exist | Set Studio

A useful piece of research diving into the continued fragmentation of viewport and screen sizes across the web. I was actually one of the data points, as I saw the original call to arms (a clever microsite that did nothing but log out the current viewport dimensions, which were saved to a back end somewhere), and had been wondering what would come of the experiment, so it's nice to see it written up in such an interesting and creative way.

The results are not overly surprising, but I particularly like the comparison between common design tool "breakpoint templates" and how often those values were actually recorded. Spoiler: it isn't very often 😉

On the size of the dataset:

We gathered over 120,000 datapoints with over 2,300 unique viewport sizes.

On how fragmented even a supposedly locked-down web experience is, where an iOS browsing session may take place in Safari, an in-app browser, or even the new 3D preview:

Even on one iOS device, there's a minimum of 3 environments a website could find itself in, based on operating system states.

On how breakpoint-based design results in a worse overall UX:

If however, you tend to build with very specific breakpoints and hard values for typography, sizing and spacing, you might find that even with the best intentions, you’re not providing the optimal user experience.
Instead of making design decisions on strict, limited breakpoints, keep in mind the sheer amount of fragmentation there is in viewports.

Vision possible | MartianCraft

There's been a lot of hyperbole – in both directions – around Apple's much-anticipated first steps into VR. I feel this write-up from MartianCraft does a fairly good job of weighing up some of the immediate pros and cons, but largely I just really liked this sentiment:

No one should be writing the Vision Pro’s epitaph — no one knows how the market will respond to a new category. (For the same reason, no one should be proclaiming it as the next iPhone.) But many seem to be seeing the Apple Vision Pro and visionOS with blinders on (see what I did there?).

There definitely does seem to be a hint of "haven't we been here before?" around the doomerism to do with the new product. And as someone who is distinctly not an Apple fanboy, I feel like the Vision Pro may well be the next big hit. But then I just really want it to be. XR is so obviously a solution to many computing problems that I've believed in it since the Google Glass days.

(I'll caveat this by saying the rest of the article is good but I don't agree with many of the takeaways. I don't think people will ultimately care that much about the isolationism of the device, nor do I think that will be much of a talking point outside of certain tech circles. And I also don't think many of the much-lauded applications will ever come to fruition. People seem to forget that most of us already have capable XR devices in our pockets, but you don't see fancy apps letting plumbers X-ray building schematics onto walls or helping chefs get real-time updates on customer allergies. Those applications are already possible, they just rely on infrastructure which would be incredibly costly and far too restrictive to ever make sense, and a fancy new VR headset won't change that. But immersive gaming, home cinemas, and Zoom calls? Yeah, this'll do those things pretty darn well, hopefully as well or better than existing solutions. Because that's where I see the real benefit of these devices: the ability to replace dozens of existing hardware decisions around your home. No more TVs, no more multi-monitors, no more iPad, no more home automation hub. That's the future I want.)