Explore My Notes

Making sense of React Server Components | Josh W. Comeau

A superb breakdown of the changes being made in React 18+ around the new React Server Components paradigm. Josh has a knack for explaining complex problems in simpler ways, and this is no exception; the little graphs showing data flows in the various React paradigms are particularly useful.

On the difference between CSR and SSR, with a bonus explanation of hydration that is clearer than I've ever seen before:

Server Side Rendering was designed to improve this experience. Instead of sending an empty HTML file, the server will render our application to generate the actual HTML. The user receives a fully-formed HTML document.

That HTML file will still include the <script> tag, since we still need React to run on the client, to handle any interactivity. But we configure React to work a little bit differently in-browser: instead of conjuring all of the DOM nodes from scratch, it instead adopts the existing HTML. This process is known as hydration.

On SSR and hydration:

A server generates the initial HTML so that users don't have to stare at an empty white page while the JS bundles are downloaded and parsed. Client-side React then picks up where server-side React left off, adopting the DOM and sprinkling in the interactivity.
The way I see it, “Server Side Rendering” is an umbrella term that includes several different rendering strategies. They all have one thing in common: the initial render happens in a server runtime like Node.js, using the ReactDOMServer APIs. It doesn't actually matter when this happens, whether it's on-demand or at compile-time. Either way, it's Server Side Rendering.
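
To make that concrete, here's a minimal sketch of the two halves. This is not Josh's code: it assumes a Node server, a bundled client entry point, and a stand-in App root component.

// server.tsx — render the app to an HTML string on the server
import { renderToString } from "react-dom/server";
import App from "./App";

const html = renderToString(<App />);
// ...embed `html` in the HTML document sent to the browser, along
// with the <script> tag that loads the client bundle.

// client.tsx — adopt ("hydrate") the server-rendered HTML instead
// of building the DOM from scratch
import { hydrateRoot } from "react-dom/client";
import App from "./App";

hydrateRoot(document.getElementById("root")!, <App />);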

On how SSR feels like it is helping (by presenting a loading state faster) but ultimately doesn't always improve the actual UX (and, arguably, can degrade it in certain scenarios e.g. where loading state is not well conveyed):

[SSR] is an improvement — a shell is better than a blank white page — but ultimately, it doesn't really move the needle in a significant way. The user isn't visiting our app to see a loading screen, they're visiting to see the content (restaurants, hotel listings, search results, messages, whatever).

On how SSR feels a little illogical once you start to graph out the data flows involved:

But doesn't this flow feel a bit silly? When I look at the SSR graph, I can't help but notice that the request starts on the server. Instead of requiring a second round-trip network request, why don't we do the database work during that initial request?

On how Server Components can't have side effects or mutations – they only render once:

The key thing to understand is this: Server Components never re-render. They run once on the server to generate the UI. The rendered value is sent to the client and locked in place. As far as React is concerned, this output is immutable, and will never change.

On the impact of moving logic to a single-render paradigm:

This means that a big chunk of React's API is incompatible with Server Components. For example, we can't use state, because state can change, but Server Components can't re-render. And we can't use effects because effects only run after the render, on the client, and Server Components never make it to the client.
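
As a hedged sketch of what that single render buys you (the db import is hypothetical, standing in for whatever data layer you use):

// A Server Component can be an async function that awaits data
// directly; there is no re-render, so hooks like useState and
// useEffect would be errors here.
import { db } from "./database"; // hypothetical data-access helper

export async function RestaurantList() {
  const restaurants: { id: number; name: string }[] =
    await db.query("SELECT * FROM restaurants");

  return (
    <ul>
      {restaurants.map((r) => (
        <li key={r.id}>{r.name}</li>
      ))}
    </ul>
  );
}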

On why the term "Client Component" is a bit confusing:

The name “Client Component” implies that these components only render on the client, but that's not actually true. Client Components render on both the client and the server.

On the fact that "client" is no longer the default:

In this new “React Server Components” paradigm, all components are assumed to be Server Components by default. We have to “opt in” for Client Components.
That standalone string at the top, "use client", is how we signal to React that the component(s) in this file are Client Components.
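
For instance, a minimal (illustrative) Client Component file looks like this:

"use client";

import { useState } from "react";

// The directive above opts this whole file, and anything it imports,
// into being Client Components, so state and event handlers are fine.
export function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount((c) => c + 1)}>
      Clicked {count} times
    </button>
  );
}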

On when to use Client vs. Server Components:

As a general rule, if a component can be a Server Component, it should be a Server Component.
Some of our components will need to run on the client, because they use state variables or effects.

On how state works now, particularly in high-level components like page layouts:

In order to prevent this impossible situation, the React team added a rule: Client Components can only render other Client Components. When we convert a component to a Client Component, it automatically converts its descendants.

On how the React DOM now repopulates itself with the content of Server Components (this feels, well, quite clunky in some ways, and does mean that you still have to ship some JS even though you're fully rendering on the server):

Typically, when React hydrates on the client, it speed-renders all of the components, building up a virtual representation of the application. It can't do that for Server Components, because the code isn't included in the JS bundle.

And so, we send along the rendered value, the virtual representation that was generated on the server. When React loads on the client, it re-uses that description instead of re-generating it.

On some of the core benefits of React Server Components (and the benefits of tools like Bright built to work on the server only):

A proper syntax-highlighting library, with support for all popular programming languages, would be several megabytes, far too large to stick in a JS bundle. As a result, we have to make compromises, trimming out languages and features that aren't mission-critical.

But, suppose we do the syntax highlighting *in a Server Component.* In that case, none of the library code would actually be included in our JS bundles. As a result, we wouldn't have to make any compromises, we could use all of the bells and whistles.
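
As a sketch of that pattern (using shiki's codeToHtml as a stand-in here; Josh's actual example uses Bright, whose API differs):

// Because this component only ever runs on the server, the heavy
// highlighter dependency never ends up in the client JS bundle.
import { codeToHtml } from "shiki";

export async function CodeBlock({ code }: { code: string }) {
  const html = await codeToHtml(code, { lang: "tsx", theme: "github-dark" });
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}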

📆 07 Sep 2023  | 🔗

React server components tips | Alex Anderson

A quick overview of React Server Components and some of the mental models that are useful when thinking about how they might be applied in context.

On how components now need to be much more granular in terms of what goes into each file, and how splitting things out is the new paradigm:

Don’t try to build everything in one component, or even in one file. As soon as you get an inkling that some markup or logic should be in its own component, don’t hesitate - just split it out

On when to use client components rather than server ones:

If you need interactivity, like event handlers, useState, useRef, or useEffect, don’t hesitate to break that component into its own file, and add use client to the top.

On the new hierarchy of server > client components:

Once a client component is rendered on the client, how does it execute that server code? Is the client magically able to make calls to the database? No, that’s silly. One of the immutable rules of RSC is that client components can’t render server components.

On how you can get around that hierarchy (sort of):

But there is a literal loophole in this: Using component composition, you can render a server component on the server, but as a child of a client component. Just like you can pass server-fetched data as props from a server component to a client component, you can pass rendered server components as props to client components too.
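
In code, the loophole looks something like this (component names are illustrative, not from the article):

// ClientShell.tsx
"use client";

import { type ReactNode, useState } from "react";

export function ClientShell({ children }: { children: ReactNode }) {
  const [open, setOpen] = useState(true);
  return (
    <section>
      <button onClick={() => setOpen(!open)}>Toggle</button>
      {open && children}
    </section>
  );
}

// page.tsx — a Server Component
import { ClientShell } from "./ClientShell";
import { ServerContent } from "./ServerContent"; // another Server Component

export default function Page() {
  return (
    <ClientShell>
      {/* Rendered on the server, then passed through as a prop */}
      <ServerContent />
    </ClientShell>
  );
}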

On how static components shouldn't automatically be regarded as client components, but rather client status should be a last resort only used where absolutely necessary:

So even if a component isn’t fetching any data, there’s no need to slap a use client at the top of the file. In fact, you should only do that if the component is explicitly using client-only features

Intro to design token schema | James Nash, Louis Chenais

An interesting look at the early draft proposal for an official design token specification and file format. The pitch is a strong one: standardise design tokens so that every tool can understand them, and display them in the most appropriate way. An IDE could show the underlying code; a design tool like Figma could show visual widgets like colour swatches and type hierarchies. Better still, you have one file that all systems ingest, so a change in one place should propagate throughout the system: a designer changes a colour token in Figma, and the live code updates.

Josh's custom CSS reset | Josh W. Comeau

Josh has added some additional thoughts to Andy's CSS reset. Personally, I like a combination of the two (with a dash of Stephanie's best practices thrown in for good measure), but wanted to capture both for posterity.

On the grandfather of CSS resets by Eric Meyer (which has been my go-to for a while now as well):

For a long time, I used Eric Meyer's famous CSS Reset. It's a solid chunk of CSS, but it's a bit long in the tooth at this point; it hasn't been updated in more than a decade, and a lot has changed since then!

On why images should be block-level rather than inline:

Typically, I treat images the same way I treat paragraphs or headers or sidebars; they're layout elements.
By setting display: block on all images by default, we sidestep a whole category of funky issues.

On a more sensible default for interactive element font styles. Also TIL about the font shorthand property – very clever:

If we want to avoid this auto-zoom behavior, the inputs need to have a font-size of at least 1rem / 16px.
This fixes the auto-zoom issue, but it's a band-aid. Let's address the root cause instead: form inputs shouldn't have their own typographical styles!
font is a rarely-used shorthand that sets a bunch of font-related properties, like font-size, font-weight, and font-family.

And the reset itself:

/*
  Josh's Custom CSS Reset
  https://www.joshwcomeau.com/css/custom-css-reset/
*/

*, *::before, *::after {
  box-sizing: border-box;
}

* {
  margin: 0;
}

body {
  line-height: 1.5;
  -webkit-font-smoothing: antialiased;
}

img, picture, video, canvas, svg {
  display: block;
  max-width: 100%;
}

input, button, textarea, select {
  font: inherit;
}

p, h1, h2, h3, h4, h5, h6 {
  overflow-wrap: break-word;
}

#root, #__next {
  isolation: isolate;
}

📆 05 Sep 2023  | 🔗

  • HTML & CSS
  • CSS resets
  • form
  • iOS
  • font
  • responsive design 

A modern CSS reset | Andy Bell

Andy always has some interesting thoughts about CSS, and this reset is no exception. Lots of interesting things here that fit very nicely with both my own experience and other resets that I've seen.

On the evolution of CSS resets:

In this modern era of web development, we don’t really need a heavy-handed reset, or even a reset at all, because CSS browser compatibility issues are much less likely than they were in the old IE 6 days.

On line-heights and optimising text rendering:

I only set two text styles. I set the line-height to be 1.5 because the default 1.2 just isn’t big enough to have accessible, readable text. I also set text-rendering to optimizeSpeed.

Using optimizeLegibility makes your text look nicer, but can have serious performance issues such as 30 second loading delays, so I try to avoid that now. I do sometimes add it to sections of microcopy though.

And the reset itself:

/* Box sizing rules */
*,
*::before,
*::after {
  box-sizing: border-box;
}

/* Remove default margin */
body,
h1,
h2,
h3,
h4,
p,
figure,
blockquote,
dl,
dd {
  margin: 0;
}

/* Remove list styles on ul, ol elements with a list role, which suggests default styling will be removed */
ul[role='list'],
ol[role='list'] {
  list-style: none;
}

/* Set core root defaults */
html:focus-within {
  scroll-behavior: smooth;
}

/* Set core body defaults */
body {
  min-height: 100vh;
  text-rendering: optimizeSpeed;
  line-height: 1.5;
}

/* A elements that don't have a class get default styles */
a:not([class]) {
  text-decoration-skip-ink: auto;
}

/* Make images easier to work with */
img,
picture {
  max-width: 100%;
  display: block;
}

/* Inherit fonts for inputs and buttons */
input,
button,
textarea,
select {
  font: inherit;
}

/* Remove all animations, transitions and smooth scroll for people that prefer not to see them */
@media (prefers-reduced-motion: reduce) {
  html:focus-within {
    scroll-behavior: auto;
  }
  
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}

Andy's evergreen notes | Andy Matuschak

Andy has built a career studying note-taking and how to maximise its impact as a way to gain deeper insights and develop general knowledge. This notes microsite is one of their related experiments: a fluid UI that maps out navigation through interlinked notes. It's clever, with some nice touches, but I do find it a little clunky. Still, plenty here to take note of, if you'll excuse the pun.

A four-column, text-based UI. Each column has a vertical scrollbar, independent of the others, so that each one is at a different scroll position. Each column is a note, with distinct sections that include references and backlinks (at the bottom). When hovered, a backlink shows a preview of the page it links to.
I do like the distinction between references, content, and back links, as well as the hover cards.

Note-taking apps don't make us smarter | Casey Newton

A surface-level look at the world of note-taking apps, what the impact of "AI" may be, and why the much-lauded benefits of these tools never really seem to materialise. Ironically, I found the article because it's currently the most-read recent piece on Readwise, a tool explicitly designed as a note-taking app, which may not be the best indication of the benefits its users are feeling (or not feeling, as this would indicate 😅).

I agree with Casey's experience around note-taking, in that simply building a database of neatly back-linked and tagged information does not make for improved recall or understanding, but I find it funny that the single most successful tool for that very endeavour is rarely discussed in these circles: wikis. Wikipedia has managed to curate and interlink a vast amount of humanity's knowledge, and it is used by researchers and the general public all over the world for learning purposes. Personal wikis have similar capabilities (yes I am aware of the obvious bias as I write this on my personal wiki). Yet they rarely get a mention alongside tools such as Obsidian or Roam which, whilst incredible in their own way, maybe shouldn't be the final resting place for that very reason. A wiki encourages a certain level of self-curation and expansion. Though, I suppose, there is also nothing stopping people from using Obsidian like that, either.

Still, I definitely think the points here about AI are valid: if there is one area AI will come to dominate it is information retrieval and summarisation.

On the pros and cons of having all of human history and creative output at your fingertips:

Collectively, this material offers me an abundance of riches — far more to work with than any beat reporter had such easy access to even 15 years ago.

And yet most days I find myself with the same problem as the farmer: I have so much information at hand that I feel paralyzed.

On one of the few real benefits I feel AI will serve – recall:

An AI-powered link database has a perfect memory; all it’s missing is a usable chat interface. If it had one, it might be a perfect research assistant.

On the power of AI as a tool for bubbling ideas up from an archive:

But if I could chat in natural language with a massive archive, built from hand-picked trustworthy sources? That seems powerful to me, at least in the abstract.

On the problem with hallucinations and lack of citations:

A significant problem with using AI tools to summarize things is that you can’t trust the summary unless you read all the relevant documents yourself — defeating the point of asking for a summary in the first place.

On the issue with most note-taking software, how it abstracts away the central purpose (questioning your own knowledge); and also I just really like the "transient strings of text" bit:

I’ll admit to having forgotten those questions over the past couple years as I kept filling up documents with transient strings of text inside expensive software.

Jamie's bookshelf | Jamie Adams

I'm a sucker for a personal collection displayed on the web, and I really love the simplicity of Jamie's design for his digital bookshelf. An easy rating system; simple (and fast) filters; and a very refined UI that is made visually appealing by the use of the actual book covers and some nice use of whitespace.

A webpage titled "bookshelf" displaying a reverse-chronological timeline, where each point is a specific book review. Each review details the book's author, title, cover, and rating (provided in thumbs-up emojis on a 1-5 scale), as well as a review, many of which are only a few words long. Filters for "all books", "fiction", and "non-fiction" are provided via radio buttons at the top, and the page has a blurb explaining that Jamie has been recording the books they read since 2012, that this list starts in 2020, and how the rating system works.
Another thing I love: the way the design encourages brevity in the reviews. That's something I could happily pinch 😉

💀 NOTE: The original site no longer appears to be online; the source has been replaced with an Internet Archive link.

Automating the deployment of your static site on your own server | Chris Ferdinandi

Need to self-host your front end away from the "modern" services like Netlify or Vercel? As both continue to get a little sketchier with time, it's definitely something I'm having to consider. Deploying a static site to an old-school host (like my favourite: Krystal) is easy enough, but you lose that wonderful "rebuild on Git push" that we've all become accustomed to. Or do you?

Whenever I push an update to my website's master branch in GitHub, it notifies my server through something called a webhook. My server pulls the latest version of my site from GitHub, runs a Hugo build, and moves the built files into my live site directory.

Chris has written up how to achieve similar functionality on Digital Ocean. I'm not sure how well this would translate to other services, but the toolchain seems generic enough. You need:

  • A GitHub account;
  • Some form of PHP hosting with control over file and folder placement and SSH/CLI access;

And that seems to be about it.

The steps look like this:

  1. Set up your repo on GitHub;
  2. Install whatever build tool you want to use on your server (Chris uses Hugo, but any front-end framework should work the same way);
  3. Create an SSH key for the server;
  4. Clone your GitHub repository to the server via SSH: git clone git@github.com:<YOUR_USERNAME>/<YOUR_REPOSITORY>.git
  5. Create an automation script within a PHP file; Chris uses deploy.php and the code below;
  6. Add a deploy secret as an environment variable (likely in the .htaccess file; see the sketch after this list);
  7. On GitHub, set up a webhook with the "payload URL" as your domain plus the name of the PHP script, e.g. theadhocracy.co.uk/deploy.php;
  8. Test it out.
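
For step 6, something like the following in the .htaccess file should work on Apache hosts with mod_env enabled; the variable name matches the getenv() call in the script below, and the value is a placeholder:

# .htaccess — expose the webhook secret to PHP via getenv()
SetEnv GH_DEPLOY_SECRET "your-long-random-secret"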

Here's the code (note Chris' version may be more up to date, just saving here in case that post disappears):

<?php

    /**
     * Automated deploy from GitHub
     *
     * https://developer.github.com/webhooks/
     * Template from ServerPilot (https://serverpilot.io/community/articles/how-to-automatically-deploy-a-git-repo-from-bitbucket.html)
     * Hash validation from Craig Blanchette (http://isometriks.com/verify-github-webhooks-with-php)
     */

    // Variables
    $secret = getenv('GH_DEPLOY_SECRET');
    $repo_dir = '/srv/users/serverpilot/apps/<YOUR_APP>/build';
    $web_root_dir = '/srv/users/serverpilot/apps/<YOUR_APP>/public';
    $rendered_dir = '/public';
    $hugo_path = '/usr/local/bin/hugo';

    // Validate hook secret
    // Note: getenv() returns false, not NULL, when the variable is unset
    if ($secret !== false) {

        // Make sure the signature header is provided before reading it
        if (!isset($_SERVER['HTTP_X_HUB_SIGNATURE'])) {
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: HTTP header "X-Hub-Signature" is missing.' . "\n", FILE_APPEND);
            die('HTTP header "X-Hub-Signature" is missing.');
        } elseif (!extension_loaded('hash')) {
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Missing "hash" extension to check the secret code validity.' . "\n", FILE_APPEND);
            die('Missing "hash" extension to check the secret code validity.');
        }

        // Get signature
        $hub_signature = $_SERVER['HTTP_X_HUB_SIGNATURE'];

        // Split signature into algorithm and hash
        list($algo, $hash) = explode('=', $hub_signature, 2);

        // Get payload
        $payload = file_get_contents('php://input');

        // Calculate hash based on payload and the secret
        $payload_hash = hash_hmac($algo, $payload, $secret);

        // Check if hashes are equivalent
        if (!hash_equals($hash, $payload_hash)) {
            // Kill the script or do something else here.
            file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Bad Secret' . "\n", FILE_APPEND);
            die('Bad secret');
        }

    }

    // Parse data from GitHub hook payload
    $data = json_decode($_POST['payload']);

    // The push payload's "ref" property identifies the branch (e.g. "refs/heads/master")
    $branch = isset($data->ref) ? $data->ref : '(unknown)';

    // Initialise so the .= concatenations below don't touch an undefined variable
    $commit_message = '';
    if (empty($data->commits)){
        // When merging and pushing to GitHub, the commits array will be empty.
        // In this case there is no way to know what branch was pushed to, so we will do an update.
        $commit_message .= 'true';
    } else {
        foreach ($data->commits as $commit) {
            $commit_message .= $commit->message;
        }
    }

    if (!empty($commit_message)) {

        // Do a git checkout, run Hugo, and copy files to public directory
        exec('cd ' . $repo_dir . ' && git fetch --all && git reset --hard origin/master');
        exec('cd ' . $repo_dir . ' && ' . $hugo_path);
        exec('cd ' . $repo_dir . ' && cp -r ' . $repo_dir . $rendered_dir . '/. ' . $web_root_dir);

        // Log the deployment
        file_put_contents('deploy.log', date('m/d/Y h:i:s a') . " Deployed branch: " .  $branch . " Commit: " . $commit_message . "\n", FILE_APPEND);

    }    

📆 01 Sep 2023  | 🔗

EAA: what you need to know | Craig Abbott

An excellent overview of the European Accessibility Act, how it overlaps with existing regulations, and the impact it might have. Doesn't get too technical, but does do a good job of explaining the high-level concepts and addressing a few tangential questions, like the result of Brexit on the UK's position.

(Can you tell I'm currently researching the EAA for work?)

On the WAD (the EU's Web Accessibility Directive) and where it applies:

It only applies to you if you're a public sector body in Europe, including the UK, as it was implemented in 2016 before Brexit.

On the UK equivalent of the WAD:

The Public Sector Bodies Accessibility Regulations 2018 are, as the name suggests, aimed at public sector bodies also. But, the important difference here is that this is UK specific law.
Because they are harmonised, if you meet the Public Sector Bodies Accessibility Regulations, you will also meet the EU Accessibility Directive. Cool huh?

On the standard behind these laws, EN 301 549:

However, in short, EN 301 549 is a technical standard that lays down the accessibility requirements for ICT (information and communications technology) products and services.
It is essentially just a set of guidelines which tells organisations how to make products accessible, rather than all the legalities which explains what happens to them if they do not!

On the EAA in general:

The European Accessibility Act, on the other hand, is focused on private sector organisations selling products or services to customers that live in EU member states.
And, it's the first time that, whether you provide public services to people in the EU, or you sell products or services to customers in the EU, you will all need to meet the same standard of accessibility.
It's a significant step towards making the digital world more inclusive for everyone and I couldn't be more excited for it to arrive!
Simply put, if you're in the private sector and you operate within the EU, you'll have to make sure your products and services meet the EAA's accessibility requirements. That means more than just making your website accessible.

On the "increased market reach"(I hate these kinds of business-focussed accessibility arguments, but the stat is a good one to have up your sleeve) :

With more than 80 million people in the EU living with disabilities, making your digital offerings accessible can significantly expand your customer base.

On how Craig feels the same way 😅:

Again, I realise I’m making this about money, but we are talking about the private sector here, and unfortunately in private sector, money makes the world go round!

On the impact of the EAA in a post-Brexit Britain:

As we just mentioned, the EAA affects organisations selling to customers in EU member states. So if that's your organisation, then put simply, you need to comply!
In the UK, private sector companies are also still bound to the Equality Act 2010, which makes it illegal to discriminate against people with protected characteristics, of which one is disability. This means private sector organisations have been expected to make reasonable adjustments for people with disabilities for over a decade anyway.

On the way UK companies could find themselves outcompeted by European, accessible ones:

Even if you only ever plan on selling to people in the UK, your product is going to appear far inferior to your EU competitors if you don’t adopt the same standards. Trust me, there is no competitive advantage in having an ableist product or service in 2023!
Accessibility is not just about ticking boxes; it's about inclusivity. Implementing these standards shows that your business values all customers, regardless of their physical abilities.

Article 32, and why it sucks! | Craig Abbott

The EAA's Article 32 is the one that people get a bit angry about, particularly in accessibility circles. In brief, it gives companies with existing contracts or customers an additional five years to make their stuff accessible (i.e. until June 2030). Bizarrely, that caveat is even longer for physical self-service machines, which get a whopping twenty years' additional grace period. As Craig points out, that feels excessive, particularly given how slow to upgrade those services are in the first place (i.e. they probably need to be upgraded sooner than most digital services).

Still, whilst it's an annoying caveat that will certainly be abused by some companies to kick the can down the road – particularly with internal tooling – it doesn't really offer that large of a loophole. Any new sales, customers, or contracts must be accessible from 2025, so if you're updating your software and hardware anyway, why wouldn't you push that out to all customers?

I guess it gives legacy codebases a pass (the kind where one stubborn client refuses to upgrade for a decade, because they prefer one small feature or have been grandfathered into a sweetheart payment scheme or something), so companies have a five-year runway to sunset them, but considering we've already had a five-year runway it does feel a little unnecessary. Still, as Craig says, they might get some extra leeway, but...

But, come 2030, we'll all be here to nail them to the floor!

Tick-tock.
Tick-tock.

On what Article 32 actually says:

The first part of Article 32 sets up a transitional period that ends on 28 June 2030.
It's a grace period for service providers to continue using products they were already using before the 28 June 2025 cut-off date.
Any new stuff purchased will need to be accessible immediately once that 28 June 2025 deadline passes.
Service contracts that were agreed before 28 June 2025 can keep going without any changes until they expire, but only for a maximum of five years from that date.
Article 32 basically says, if they were lawfully used before 28 June 2025, they can still be used until they're no longer 'economically useful', or up to a maximum of 20 years.

On the annoying 20-year grace period for self-service machines:

This bit is probably the part that sucks the most, because most self-service terminals are already about 20 years out of date! Let's be honest, ticket machines for subway trains and supermarket self-checkouts aren't exactly what we think of when we give examples of 'cutting edge technology'!

Tailwind and the death of web craftsmanship | Jeff Sandberg

I have used Tailwind on various projects. For prototyping, quick proofs of concept, and one-off projects that never need to be updated, I think it has some advantages. But for code that you want to last; code that needs to be maintained; code that should have thought applied to it, Tailwind has issues. Jeff has done a great job of outlining what those issues are, pointing out that CSS is currently undergoing a CSS3-level reinvention which solves almost all of the problems Tailwind purports to solve (only better), and offering some additional thoughts on why Tailwind seems to have been adopted so rapidly.

On how Tailwind causes long-term debugging headaches, particularly for those of us who build/debug in the browser (read: front-end specialists): 

Yeah, you might have auto-completion in your editor, but the browser inspector is utterly neutered. You can't use the applied styles checkbox, because you'll wind up disabling a style for every usage of that tailwind class.
Due to the tailwind compiler, which was a solution to the "shipping massive amounts of CSS" problem, you don't have a guarantee that the class you're trying to poke into the inspector view will actually exist.

On the needless repetition of writing CSS via Tailwind (or any utility-class-only system):

You can't chain selectors. If you want your hover, focus, and active classes to be the same, you have to write them all.

On the issues with declaring "specificity is the devil" or "the cascade is too complex to scale" and refusing to use the language as designed:

Tailwind, and other utility frameworks, throw out the best and most often misunderstood aspects of CSS. CSS was explicitly designed with certain features, that, while complicated, are immensely useful once you can master them. The cascade, selector chaining (see above), and specificity are all very powerful tools. If you misunderstand them, yeah you can cut yourself. Just like if you don't use a saw properly you can cut yourself.
Selector chaining, particularly with the advent of new CSS selectors like is and where, allows very clean reuse of common bits of CSS. You get to separate the properties that affect how something looks from what is being affected. It's obvious, but in tailwind, you don't get that.
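
For example, one chained rule in plain CSS covers what would otherwise be three repeated sets of utility classes (the class name and custom property are illustrative):

/* One rule, three interaction states */
.button:is(:hover, :focus-visible, :active) {
  background: var(--button-highlight);
}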

And on how Tailwind still has specificity even if proponents like to argue it doesn't (a breed of specificity with no clear method to work around, no less, which often causes bigger headaches than the native cascade would have in the first place):

And it's a lie to say you don't have to worry about specificity in tailwind; tailwind has its own specificity. Classes are read left-to-right by the browser, and so if you do b-1 bg-blue b-2, guess what? Your element gets both b-1 and b-2, and the browser will probably choose b-2 as your border, and throw away b-1.

On @apply:

@apply is a gigantic code-smell, it goes against everything tailwind supposedly stands for, and doesn't make writing your CSS any easier.

On who advocates the loudest for Tailwind:

The people I've seen who are most excited over tailwind are generally those that would view frontend as something they have to do, not something they want to do.

On how Tailwind's time (if it ever truly existed) may be coming to an end:

I've seen other engineers, of all levels, stuck in a mire of bad CSS, and so to them maybe Tailwind seems like a lifesaver. But CSS is better now. It's not perfect, but it's better than it's ever been, and it's better than tailwind. Give it another try.
