React Summit 2020 Notes

I've been wanting to get to a React Summit for a few years, but they take place in other countries and can be expensive. "Luckily", in the time of social distancing, this year's conference has gone fully remote so that wasn't an issue (plus the streams were free, so double yay!). I'm not necessarily interested in every single talk, but there are a few I definitely want to catch, and I'm going to try and blog them (as live as possible) as I go, whilst also potentially testing out Noter a bit. Here we go!

Guillermo Rauch: React at the Edge

Guillermo is focusing on why it makes sense to put your websites on the "edge" and how this helps with both global distribution and user reach.

[woops, we lost the stream. Missed a bit as a result]

The pandemic has really highlighted how designing apps and websites to work globally from day one can pay off. The "classic" markets, such as the US and EU, are no longer growing that quickly, but Asia and Africa are coming online at incredible rates. Someone there could just as easily be your first or next user, so it makes sense to do everything you can to maximise their access.

From that perspective, the developer's core job is to "transmit information as fast as possible". But it's more than just raw information; we also need to achieve that end in ways that are pleasant, respect privacy, are culturally sensitive, and are a joy to use. #sparksjoy

Traditionally, software development has been "confined to a black box". Our local development environments are "illusions"; they aren't an "accurate metaphor for the global network". Localhost tends to be slower, it's localised by definition, it runs unoptimised code and assets, you won't have SSL, etc. This concept led to "shipping the black box", i.e. Docker: it works on my machine, so if we ship the machine then we're all good. But we're still making false assumptions, and we're ultimately still serving that black box from a single location (the server), which doesn't overly help the end-user.

Out of this muddle came the Jamstack. This flips things and makes sure the shipped code is just markup, so it's as universal as possible, but then it's made dynamic by JavaScript interacting with APIs on the client-side. The actual dynamism, therefore, happens on the edge, on the final location, not the server. It's also very similar to how native apps work: you download the core logic and the app then becomes dynamic via login or fetch requests.

The Jamstack also helps speed up the information fetching process. By using server-side builds, the information is hard-baked into your site, making it instantly accessible to the user. It begins to remove the "boxing" step (e.g. Docker) from the software process.

If you take a look back at NextJS from a few years ago, it followed the "black box" model. You build and then you start; you create your little box and you ship it to a location (a server). CDNs enabled NextJS to push those boxes closer to the edge by duplicating assets around the world. But that relies on caching and, as we all know, caches can become problematic. Ultimately, the fallback is therefore still a single location (server) with that black box waiting.

Guillermo started collecting instances of websites that should be fully static but where he was getting 502 errors and other server errors. Why? These sites shouldn't need to access a server, so what was happening? The black box model was failing. Jamstack helps with that: at the core, any part of the site that can be static should be static.

Since NextJS 9 their team have embraced this idea. With their latest release, they feel they have "opened the black box". A major benefit of that is your site and app are effectively built and then pushed directly to the end-user/edge. When paired with something like Zeit, you end up being able to do this via Git and that means you can spin up individual branch-level sites, making it far easier to test end-to-end in the browser, outside of the local environment[1]. To learn more: NextJS Learn.
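
To make that concrete, here's a minimal sketch (my own, not code from the talk) of the static-generation model NextJS introduced around version 9.3: the data fetching runs at build time, so what gets pushed to the edge is plain markup with the information already baked in. getStaticProps and getStaticPaths are the real NextJS APIs; the in-memory posts array is a hypothetical stand-in for a real data source.

```jsx
// pages/posts/[slug].js — a minimal static-generation sketch.

// Hypothetical stand-in for a real data source (a CMS, the filesystem...).
const posts = [
  { slug: 'hello-world', title: 'Hello World', body: 'First post!' },
];

// Runs at build time: enumerate every page to pre-render.
export async function getStaticPaths() {
  return {
    paths: posts.map((post) => ({ params: { slug: post.slug } })),
    fallback: false, // unknown slugs become 404s; no server round-trip
  };
}

// Also runs at build time: the data is "hard-baked" into the markup.
export async function getStaticProps({ params }) {
  const post = posts.find((p) => p.slug === params.slug);
  return { props: { post } };
}

export default function Post({ post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
    </article>
  );
}
```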

Tweet Thread

Q&A

How does this compare to a PWA, which is one step beyond the edge and literally on the client? Basically, NextJS allows you to do more "ahead of time" computation, such as building out dynamic routing and embedding that into the static site. NextJS and other SSGs effectively give you an additional performance boost. But, critically, you can use (and probably should use) both in tandem. SSGs work well with PWAs and NextJS is specifically looking into how to create deeper PWA integration over the next 6-12 months.

Panel: Jamstack + Serverless

Panel with Guillermo Rauch, Jason Lengstorf, and Max Stoiber. [This was way too short! Too much interesting info for such a tight timeframe, especially when you've got both Gatsby and NextJS in the room!]

  • Blocks sounds pretty interesting. A GUI page builder for React components, using an underlying component library (by the sounds of things) so you have control over what's available.
  • Max Stoiber: Gatsby are working on "incremental builds" as a way to help make static sites even more powerful.
  • Guillermo Rauch: Jamstack helps enforce privacy, encryption etc. by pushing that computation to the client, meaning that even edges or single points (servers) can't access it, which feels more right.
  • Max Stoiber: Blocks and visual editing will be the biggest evolution of the Jamstack.
  • Guillermo Rauch: Tina and Type (?) are new headless CMS solutions built with the Jamstack in mind. They let anyone edit a webpage as easily as (or more easily than) developers can now with browser dev tools: just click, edit, save, and it auto-builds in the background.

Kent C. Dodds: AHA Programming

Forget DRY and WET, we're looking at AHA: Avoid Hasty Abstractions. Kent is live coding an admittedly contrived example of an abstraction to show his point. He's using VSCode and Quokka. [<-- looks very interesting for console logging directly into the file]

Abstractions are useful. They stop us having to fix the same bug in multiple places. But abstractions grow complex. We naturally feel drawn to extending an abstraction as new use cases crop up. Doing so feels right because the enhancement reaches every instance of the abstraction and it lets us build on existing code (don't reinvent the wheel, right?).

But extending an abstraction means we need to provide fallbacks. By their nature, abstractions are used in multiple places within a codebase (otherwise why abstract it in the first place?), so if you extend an abstraction you don't want to break existing usage. So you start adding if statements and parameter toggles (e.g. includeUsername = true).

As the abstraction grows and further enhancements are added, we start to get overlapping parameters. For example, you've added an "initials" option to a username abstraction and then add honorifics. Well now, you have three base states: plain username, username as initials, and username with an honorific. But you also have cross-states, such as initials combined with an honorific, which the abstraction technically supports. So now we need to add tests for use cases we may not even be using (or ever need).
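
To illustrate, here's my own contrived sketch in the spirit of Kent's example (not his actual code) of how those toggles stack up:

```js
// Each "enhancement" adds a parameter toggle, and the toggles multiply.
function getDisplayName(user, { withHonorific = false, asInitials = false } = {}) {
  let name = asInitials
    ? user.name.split(' ').map((part) => `${part[0]}.`).join('')
    : user.name;
  if (withHonorific) {
    name = `${user.honorific} ${name}`;
  }
  return name;
}

const user = { name: 'Ada Lovelace', honorific: 'Countess' };
getDisplayName(user);                                            // 'Ada Lovelace'
getDisplayName(user, { asInitials: true });                      // 'A.L.'
getDisplayName(user, { withHonorific: true, asInitials: true }); // 'Countess A.L.'
// Two toggles already mean four states to test — including combinations
// nothing in the app may ever actually use.
```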

You can quickly get to a point where you have multiple uses of an abstraction within a codebase, but as that abstraction evolves beyond the original use case, those uses diverge fairly significantly. Sometimes they look completely different from one another. On top of which, we're now supporting tests for potentially unnecessary functionality.

Worse still, when our underlying use cases change (say we no longer need initials anywhere), we get worried about removing test cases or amending the abstraction. After all, this is an abstraction – who knows where else it's being used – so we leave the code in... but now we have functionality which only exists to let test cases pass.

You've arrived at a situation where your abstraction is a tangled mess and can be pretty scary to work with or support. But wasn't the point of this to be useful and simpler to maintain at the start? Kent's example is contrived, sure, but it also highlights how a simple abstraction can quickly get jumbled up.

A lot of Kent's thinking around AHA is based on the work of Sandi Metz: "the fastest way forward is back". [see further reading below for her blog post and talk on the subject that Kent linked to.]

Sandi has a great method for refactoring tangled abstractions into simpler code: inline the abstraction back into each call site and keep only what that call site needs. Once you start working like this, you become much better at determining when an abstraction is genuinely useful, and it enables you to remove the jumbled abstractions your codebase already has.
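
As a rough sketch of that inlining move (my example, continuing the hypothetical getDisplayName code above, not Sandi's or Kent's code):

```js
const user = { name: 'Ada Lovelace', honorific: 'Countess' };

// Before: every caller funnels through the all-purpose abstraction.
// const label = getDisplayName(user, { asInitials: true });

// After inlining: only the branch this call site needs survives, and the
// unused honorific logic disappears from this code path entirely.
const label = user.name.split(' ').map((part) => `${part[0]}.`).join('');
```

Once every caller has been inlined and trimmed like this, the original abstraction can simply be deleted.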

How does this work with DRY? Don't repeat yourself is a good principle, as it helps you avoid duplication across a codebase, and duplication can lead to bugs propagating all over the place. But, overall, it's better to have some duplication than the wrong abstraction (paraphrasing Sandi again).

Tweet Thread

Q&A

How did the name AHA come about? AHA was born out of frustration with both WET and DRY programming; they were too dogmatic. Better to be mindful about the abstractions we make, and the code should end up better overall. Initially that led to the ridiculous name "MOIST" programming (😂), as it sits in between DRY and WET, but it was too easily ridiculed, so we've landed on AHA.

How to roll this out into a project? You don't need to architect everything from the beginning. Take an iterative approach to building a codebase. If you're working with bad abstractions, then try to inline them as you come across them, until they're gone. But don't architect an application of 3,000 lines the same way you would architect one for 3 million lines.

Why not abstract first and then inline/destroy the abstraction when it gets complicated? No, fight that urge, because you'll waste time defining variables and building test cases for an abstraction that you may destroy. It's better to stick with some duplication until you can prove that an abstraction will work. For example, take two nearly identical buttons (login and register). These could be a single component which takes props to define text, colour etc. But that abstraction is likely to be more lines of code than just having two buttons, and would need dozens of props. They may look visually the same but they aren't, so a common "button" component ends up a ridiculous jumbled mess.[2]
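
A quick sketch of that button example (hypothetical code, not from the talk):

```jsx
// Two nearly identical buttons, kept deliberately separate. The cost is a
// little duplication; the benefit is that each can diverge freely later.
function LoginButton({ onLogin }) {
  return (
    <button type="button" className="btn btn--primary" onClick={onLogin}>
      Log in
    </button>
  );
}

function RegisterButton({ onRegister }) {
  return (
    <button type="button" className="btn btn--secondary" onClick={onRegister}>
      Register
    </button>
  );
}

// The premature alternative — one shared component sprouting a prop for
// every future difference — soon dwarfs the two buttons it replaced:
// <AppButton variant="primary" label="Log in" fullWidth loading={...} ... />
```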

Jason Jean: Scalable React Development with Nx

Jason is looking at the tradeoffs that come with how you structure your code repositories. The big issue is that, as your codebase scales, you need to start sharing code. But if you split your code into multiple repos to make that sharing easier, you end up with duplicated tooling (different npm requirements, dependencies etc.), as well as potentially divergent workflows depending on which teams are working on which repositories. It also becomes much harder to map your internal dependencies, i.e. how those repos connect together, which can result in repos falling out of sync and your teams having to support different versions of one repository for other teams across the business.

Instead, just put all your code into one monorepo. You have one version, one CI flow, one method of working, one set of dependencies. You can also use tools to analyse which parts of that repo are more or less tightly coupled, giving you an easier test flow and a better understanding of your own organisational needs.

Ultimately, people are organised within a single company. You may have different teams, but you have a singular organisational goal (hopefully). So a secondary benefit is now your codebase works the same way as your business: unified.

Nx helps your team work with a monorepo and solves a lot of the historic issues of monolithic codebases. It does this by quickly generating shared components and Redux state, all from the CLI with single-line commands. Better yet, Nx has customisable "schematics" that allow you to enforce internal consistency and best practices across an entire organisation.

It can actually go further, by allowing you to tag and categorise parts of a codebase and then run testing logic using those tags. So, for example, if you tag one function as "state" and another component as "UI" (yes, you can tag different "levels" of code too), but then try to drive state from the UI component (which you shouldn't), you can set up a linting error to appear.
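
As a hedged sketch of what that looks like in practice: the @nrwl/nx/enforce-module-boundaries ESLint rule is real, though the tag names here are hypothetical examples and your Nx version's exact config may differ.

```js
// .eslintrc.js — enforce that UI code never drives state directly.
module.exports = {
  plugins: ['@nrwl/nx'],
  rules: {
    '@nrwl/nx/enforce-module-boundaries': [
      'error',
      {
        depConstraints: [
          // Projects tagged "type:ui" may only depend on other UI libraries,
          // so importing a "type:state" library from a component fails lint.
          { sourceTag: 'type:ui', onlyDependOnLibsWithTags: ['type:ui'] },
          // State libraries may depend on state and plain utilities.
          { sourceTag: 'type:state', onlyDependOnLibsWithTags: ['type:state', 'type:util'] },
        ],
      },
    ],
  },
};
```

The tags themselves are assigned per-project (in nx.json, at the time of this talk).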

Once you have those tags in place (and even without them), Nx can dynamically infer connections within the repo between sections of code. That means when you change one function or component, Nx can intelligently test only those bits that have been impacted. Clever, as that means you're not stuck testing/building everything with each minor change; better yet, it potentially makes tests in a monorepo faster than the same setup in separated, decoupled repos. Combine that with intelligent behind-the-scenes shared caching (made possible by being monolithic) and the ability to distribute compute across various workers (ditto), and Nx claims to be faster than traditional models for testing entire codebases by a fairly sizeable chunk.

Nx comes with TypeScript, Jest, Enzyme and Storybook configured out of the box. That's pretty clever and (personally) a pretty ideal tech stack. It's an interesting model and I can definitely see the benefits it brings.

Tweet Thread

Q&A

How do you convince leadership that a monorepo isn't problematic? Point to the friction you already have: with separate repos, collaboration between teams and cross-repo dependencies cause constant overhead. A monorepo changes that and massively reduces the friction.

How dependent does a codebase become on Nx? Realistically, most of the underlying functionality of Nx (Storybook etc.) is something you could set up yourself. It doesn't change anything about the way you write your code, it just works around it.

Can you pair micro-frontends with monorepos? They work perfectly. In some ways a monorepo makes it easier to work with microservices, micro-frontends etc. because you can track changes and testing across all parts simultaneously. The example given is a common header style that you use for visual consistency across all front-ends, but which you decide to change. If each micro-frontend has its own repo then you have a long list of changes and tests that need to be run. With a monorepo, you can make all those changes at once and then flood the tests across the system, particularly if you're using intelligent testing like with Nx. It feels like you're still using a decoupled structure with your code, it's just all in one place; the same as a website would normally be with stylesheets and src folders. Makes sense.

Elizabet Oliveira: Designing with Code in Mind

[First up, this is a beautifully well-designed slide deck and I'm jealous 😁]

Elizabet is looking at helping to bridge the gap between designers and coders, from the perspective of a designer.

Collaboration through images in a modern workflow just feels wrong and it leads to discontent between the two camps. It's hard for designers to think of all the possible ways their design needs to work. Design libraries and design systems help, but they still leave gaps, so they aren't a complete solution.

That said, working with a design system makes a lot of issues simpler, so they are a good starting point. But the big takeaway is that you need to get rid of "pixel-perfect"; design an idea, but leave the precise implementation to the developer. That means you can design with concepts like statefulness in mind without worrying about every possible nuance.

Modern tools like Sketch and Figma make this type of ideation much simpler and provide a natural bridge between design teams and development teams straight out of the box. They also make real-time editing possible, so as a designer (if you understand at least a little code) you can jump into their projects and provide guidance directly.

Being able to develop just enough to get a working prototype makes the whole process a lot more efficient. As a designer, Elizabet is able to draft out the ideas as simple image files and then jump into the codebase. Rather than having to develop dozens of image files showing animations, state changes, etc. she can show that interactivity directly and hand that rough draft off to a development team to refine, refactor, and make into useable code. Much like pixels not needing to be perfect, her code doesn't need to be production-ready; it just needs to be good enough to communicate the design's intent.

Also, she stresses to avoid having a design team that are all the same. Have a balance of designers that are better at coding, or better at CSS, or better at UX etc. That includes having some who maybe can't code at all; that's not a problem, so long as the entire team isn't the same.

Tweet Thread

Q&A

How do you help your designers get started with coding? GitHub can be a useful entry point for designers who don't know much code. It gives them an easy environment in which to look at how the code is written, and provides a useful medium for feedback between the two teams. As designers get more used to leaving comments, or even making pull requests, they become contributors to the source code, and that can lead them to want to get more involved. It creates a sense of communal ownership.

Tanner Linsley: It's Time to Break up your Global State

No matter how you manage global state, you almost always end up with a mix of genuinely global state and other, slightly murkier "server state". Server state is asynchronous by nature and can be updated and modified from multiple points across a React codebase.

[okay, so this was a talk I was looking forward to, but it went so incredibly fast and contained so much information that I just had to stop even trying to live blog and just desperately try to keep up with trying to understand what Tanner was saying 😂 Not sure I really managed that, but it's a talk I'd happily come back to in the future next time I need some help avoiding global state.]

Basically, React Query looks very clever in terms of managing state and cleaning up global state. As someone who largely prefers to only glance at state management from the corner of my eye, this actually makes me want to jump in with it... at least a little bit 😁
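
For reference, here's the rough shape of it as I understand it — a minimal sketch using React Query's v1-era useQuery API, with a hypothetical fetchTodos function:

```jsx
import { useQuery } from 'react-query';

// Hypothetical fetcher: any promise-returning function works.
async function fetchTodos() {
  const response = await fetch('/api/todos');
  return response.json();
}

function Todos() {
  // Server state lives in React Query's cache under the 'todos' key —
  // no global store, and any component can subscribe to the same key.
  const { status, data, error } = useQuery('todos', fetchTodos);

  if (status === 'loading') return <p>Loading…</p>;
  if (status === 'error') return <p>Error: {error.message}</p>;

  return (
    <ul>
      {data.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}
```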

Tweet

Lightning Talks

[I didn't quite manage to listen to everything during these rapid-fire micro-talks; interesting format though to round out the day, I enjoyed it a lot.]

Matt McClure: Broadcasting live from a browser

Live broadcast and live chat are very different beasts. Live chat can easily function peer-to-peer, even with a few different people all sharing video at the same time. Live broadcast is a one-to-many system which needs to be able to scale almost indefinitely. That means the same tech that works with chat struggles with broadcast.

For example, broadcast typically ingests via RTMP and delivers via HLS, encoding in real-time so it can be shared with as many people as possible. Chat uses WebRTC, which doesn't interface with RTMP. In other words, to broadcast you need a server. You can use server-side WebRTC but it's a bit of a mess right now; the easiest route is via headless Chrome. But that means you're now running a Chrome instance for everyone who wants to livestream, which is complicated.

What about WebSockets? Well, we can request access to the camera and audio, clone that stream into an instance of the MediaRecorder API, and then pass the output to a server via a WebSocket. It's not the neatest solution, but it does work. This way, every time the server receives a WebSocket packet it can spin up encoding or anything else. That's what they're doing over on Glitch with Streamr.
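
Here's a hedged sketch of that flow — my own reconstruction using the standard browser APIs, not Mux's actual code, and with a hypothetical ingest URL:

```js
// Capture camera + mic, feed it to MediaRecorder, and ship each recorded
// chunk to a server over a WebSocket for server-side encoding (RTMP/HLS).
async function startBroadcast() {
  const socket = new WebSocket('wss://example.com/ingest'); // hypothetical endpoint

  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

  // Every chunk becomes one WebSocket message; server-side, each packet
  // can be piped straight into an encoder as it arrives.
  recorder.addEventListener('dataavailable', (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  });

  recorder.start(1000); // emit a chunk roughly every second
}
```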

Jen Luker: Button vs Div

[A talk right up my own street!]

There's a solid reason that accessibility experts recommend to "use the platform" rather than trying to roll it yourself. To make that point, Jen is converting a <button> into a <div> and vice versa. After all, if you can "make an accessible div, then what's the big deal?". [Spoiler: it's super inefficient, that's why]

Interesting: browsers assume a <button> is a submit button by default (and the first submit button in a form is what gets triggered when you hit Enter). As a result, if you don't want that element to submit, you need to set type="button". It feels a little redundant, but it's still required in most cases.

In terms of getting your button to be, well, just a button... that's it. Set the type and you're done.

Now, doing things the other way around... yeah, it's tricky. You need to define your onClick event, add tab functionality, style it to look like a button etc. Cool trick here: color: buttontext is a legitimate CSS system colour and gives you the platform's default text colour for native buttons.

Also, fun fact: iOS doesn't treat elements that lack cursor: pointer as clickable... so yeah, you still need that.
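
Pulling all of that together, here's a sketch (my own code, not Jen's) of everything a <div> needs before it behaves like the <button> you get for free:

```jsx
function DivButton({ onActivate, children }) {
  return (
    <div
      role="button" // announce it as a button to assistive tech
      tabIndex={0} // make it keyboard-focusable
      style={{ color: 'buttontext', cursor: 'pointer' }} // pointer also keeps iOS happy
      onClick={onActivate}
      onKeyDown={(event) => {
        // Native buttons activate on Enter and Space; divs don't.
        if (event.key === 'Enter' || event.key === ' ') {
          event.preventDefault();
          onActivate(event);
        }
      }}
    >
      {children}
    </div>
  );
}

// Versus the native equivalent of all of the above:
// <button type="button" onClick={onActivate}>{children}</button>
```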

Tweet

Uri Goldshtein: GraphQL Mesh

I missed the first part of Uri's talk (my fault) but it sounds fascinating and hyper-relevant to the IndieWeb movement. He seems to be working on GraphQL tools that would enable people to more easily open up data sources: GraphQL Mesh. It lets you query non-GraphQL APIs using GraphQL. Cool!

Mike Hartington: Ionic & React

Ionic lets you compile native apps, websites, and PWAs all from the same core codebase of HTML, CSS, and JavaScript. Very clever. Historically it's been based on Angular, but now they're porting it to React.

Q&A

Are there situations where an accessible div is better than a button? Yes, it's not semantically valid to use a <button> as a wrapper for other elements. So if, for example, you want to make a whole card a clickable element/button you'd want to make that an accessible <div>.

---

Well, that's the lot. A super busy, incredibly informative day! There were a few teething issues with their streams (particularly on their own website, which switched to private mode partway through the first talk and was muted at the very start of the day), but overall it ran smoothly, even with two tracks. From my perspective, I picked up a few new tools and tricks I want to try out, plus Noter worked well. It took me a little while to get used to, and I'd definitely like to see the UI spruced up a bit, but as far as creating Twitter threads goes it works way better than the native solution! On that note, I've added Twitter thread links where possible above, or just check out my account. I ended up with a few of the speakers interacting via there and saw myself flash up on the livestream, which was pretty cool and definitely made the dual logging worthwhile 😁

[Image: screenshot of the ReactSummit livestream with my tweet visible at the bottom, below speaker Guillermo Rauch. The tweet reads "Let's go! Time for ReactSummit remote edition!" with a party emoji.]
It was kinda fun to see one of my tweets pop up on the livestream 🥳


Further Reading & Sources

  • Sandi Metz: "The Wrong Abstraction" (blog post)
  • Sandi Metz: "All the Little Things" (RailsConf 2014 talk)


