I don't think I've come across a clearer explanation of the full cascade before, nor one so beautifully crafted. An excellent resource and inspiration.
Critically, rehydration is not the same thing as a render. In a typical render, when props or state change, React is prepared to reconcile any differences and update the DOM. In a rehydration, React assumes that the DOM won't change. It's just trying to adopt the existing DOM.
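As a toy illustration of that distinction (this is not React's real implementation; the DOM model and function shapes here are entirely made up):

```javascript
// Toy model of render vs hydrate -- NOT how React actually works.
// render() reconciles: it patches the DOM wherever it differs from the vdom.
// hydrate() adopts: it attaches behaviour and trusts the DOM already matches.
function render(dom, vdom) {
  if (dom.text !== vdom.text) {
    dom.text = vdom.text; // reconcile the difference
  }
  dom.onClick = vdom.onClick;
  return dom;
}

function hydrate(dom, vdom) {
  dom.onClick = vdom.onClick; // adopt the existing DOM as-is
  if (dom.text !== vdom.text) {
    // React warns about mismatches like this rather than fixing them
    console.warn('Text content did not match.');
  }
  return dom;
}
```

The practical upshot: server-rendered markup that differs from the first client render won't be "fixed" by hydration, which is why mismatches cause such confusing bugs.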
An excellent overview of (most of) the unit options you can use in CSS Grid columns and rows, with examples. (No RSS)
A neat little project that highlights potentially insensitive language used within the provided copy (for English, at least).
Oh boy, there's a lot worth pulling out of this overview of upcoming web technologies:
[Mid-range Android phones are the] median device that web users have, Apple’s market share in terms of 2019 Q1 sales in the US is 39%, with Samsung at 28%, while the rest is a range of Android devices which sell for between $500 and $50.
In other words, as Simon says, the single biggest UX impact is the size of JS shipped. Mid-range phones don't just struggle to download large JS packages, they also struggle to process them quickly.
The web is an open and democratic place; when people are forced to use only native apps they have a limited window on the web experience and the knowledge available therein.
Another potential solution is WebAssembly, which (apparently) compiles 20x faster than JS 😲 That could significantly decrease the JS load needed for, say, React by converting process-heavy parts to WASM, a massive gain to the entire web community from a single team's efforts:
In this example a single React release could raise the tide of web performance for all React-based websites.
In terms of Google's proposed "slow website" badge, Simon raises some interesting counterpoints:
Who knows what the workaround techniques will be: loading screens like 1999, screenshots rendered while the page loads in the background, UA sniffing to deliver a faster experience to googlebot.
He also takes aim at Google Analytics trailing behind on key UX features like real user measurements (things like Largest Contentful Paint and blocking time). I'm not sure I buy this quite as much, but any (non-invasive) improvements to the metrics that GA provides would be welcome.
Slow sites cost money, and observability helps us to allocate funds to improve performance.
I guess I'm reading up on React Context a lot today. Kent provides a useful step-by-step guide in his normal steady manner, which I found pretty easy to grasp. He also makes a very valid point:
Context does NOT have to be global to the whole app, but can be applied to one part of your tree
I've yet to need context very heavily, but this feels like a solid rule of thumb: if it can be localised, do it. It's also interesting hearing his thoughts on default states:
None of us likes runtime errors, so your knee-jerk reaction may be to add a default value to avoid the runtime error. However, what use would the context be if it didn't have an actual value? If it's just using the default value that's been provided, then it can't really do much good.
Finally, I'm a big fan of how he exposes custom hooks as the primary interface to his context API. It just feels a lot cleaner that way 👍
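Here's a minimal sketch of the no-default-value idea, using a hand-rolled stand-in for React's context so the example stays self-contained (all names here are illustrative, not Kent's actual code):

```javascript
// Hand-rolled stand-in for React context, purely to illustrate the pattern:
// no default value is supplied, and the exported hook throws a helpful error
// instead of silently handing back a useless default.
function createContext() {
  return { current: undefined };
}

const CountContext = createContext();

// Consumers call this hook rather than touching the context object directly.
function useCount() {
  const value = CountContext.current;
  if (value === undefined) {
    throw new Error('useCount must be used within a CountProvider');
  }
  return value;
}

// A provider supplies a real value for the duration of a "render".
function countProvider(value, renderFn) {
  CountContext.current = value;
  try {
    return renderFn();
  } finally {
    CountContext.current = undefined;
  }
}
```

In real React code the same shape falls out of `React.createContext()` plus a `useContext` call inside the custom hook; the important part is that only the hook and the provider are exported.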
The Rev. Martin Luther King Jr. quoted Theodore Parker: “The arc of the moral universe is long, but it bends towards justice.”
But it’s not bending itself. And it’s not waiting for someone from away to bend it either.
It’s on us. Even when it doesn’t work (yet). Even when it’s difficult. Even when it’s inconvenient.
Jonas has put together a useful overview of why the "new" Context API in React is probably a better option than Redux for many simple use cases, as well as step-by-step instructions on how to set up a Redux-like global store using it.
In combination with the `useReducer` hook we can create a global store that manages the entire app’s state and supports convenient ways to update the state throughout the app regardless of how deep the component tree goes.
The only downside to this approach appears to be scalability, but honestly very few sites are going to need to track more than 2-3 pieces of global information, which this should manage nicely.
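Here's a minimal sketch of the reducer half of that pattern (the state shape and action names are my own illustration, not from Jonas's article):

```javascript
// Sketch of a Redux-like reducer. The reducer is just a plain function,
// which is exactly what React's useReducer would be handed.
const initialState = { theme: 'light', loggedIn: false };

function appReducer(state, action) {
  switch (action.type) {
    case 'SET_THEME':
      return { ...state, theme: action.payload };
    case 'LOG_IN':
      return { ...state, loggedIn: true };
    default:
      return state;
  }
}

// In a React component this would become:
//   const [state, dispatch] = useReducer(appReducer, initialState);
// with { state, dispatch } then passed down via a context provider so any
// component, however deep, can read or update the global store.
```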
I have a tendency to prefer the kind of "inside-out" control that Kent is advocating in this piece, though I've never heard it called "inversion of control" before. The idea of giving your users the ability to manipulate the output of a given function/API is a great way to futureproof your work and something I think is generally worth doing earlier rather than later.
The problem comes, as he identifies, when you overdo it:
What if that's all we ever needed `filter` to do and we never ran into a situation where we needed to filter on anything but `undefined`? In that case, adding inversion of control for a single use case would just make the code more complicated and not provide much value.
So how do you walk that fine line? Like most things, there's a lot of grey in the decision. Personally, I like to do things with no inversion first, sticking to the principle of "doing one thing well". Then, if I (or others) find that they need to extend that functionality, I begin adding overrides. Overrides are great for maintaining existing code and allowing for an inversion of control. For example, in the pseudo-array-filter example Kent uses, I would cede control of the filter options to a prop/variable (as he does), but then add a line that populates that variable with the original `undefined` checks if it's blank. That way existing implementations still work, nothing breaks, and moving forward the code is much more flexible (and, crucially, sidesteps the multiple-extension spaghetti that we're trying to avoid).
Sure, occasionally someone might inadvertently duplicate that setup by re-specifying the default behaviour in the function call, but that's fine. If you didn't set the default behaviour, every call would have to do that anyway, so it still saves time in the long run, and you avoid pre-extending code that never needs the flexibility in the first place.
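As a sketch of that defaulting approach (this `filter` is a stand-in for the pseudo-array-filter example, not Kent's actual code):

```javascript
// Inversion of control with a safe default: callers may inject their own
// predicate, but if they don't, the original undefined/null behaviour is
// preserved, so existing call sites keep working unchanged.
function filter(array, predicate = item => item !== undefined && item !== null) {
  const result = [];
  for (const item of array) {
    if (predicate(item)) result.push(item);
  }
  return result;
}

filter([1, undefined, 2, null, 3]);     // original behaviour: [1, 2, 3]
filter([1, 2, 3, 4], n => n % 2 === 0); // inverted control:   [2, 4]
```

Passing `n => n % 2 === 0` is the "inversion": the caller, not the library, now decides what gets filtered, while callers that never needed that flexibility are unaffected.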
I don’t think we have any clue about how disruptive this shift is going to be.
There are people and organizations that are racing to break the fabric of community that we all depend on. Either to make a short-term profit or to atomize/vaporize widespread trust to hide from accountability and to slow change...
In the meantime, it’s worth confirming the source before you believe what you see.
The GOV.UK UX and design team are fascinating. Every time I've had to use the website I've found it a breeze, which is an enormous achievement on their behalf. Better yet, they're incredibly transparent and make a lot of their research and reasoning available.
Recently, they changed all numeric input fields (dates, phone numbers, age etc.) across the entire GOV.UK design system to use `<input type="text" inputmode="numeric" pattern="[0-9]*">` instead of `<input type="number">`. That almost sounds counterintuitive, given that their goal is to trigger the (very useful) numeric keypad on mobile devices, but their reasoning is pretty bulletproof.
- Number inputs don't have great support across various screen readers and other accessibility software (which does seem a little odd to me but facts are facts);
- Some browsers attempt to round large numbers, potentially into exponential notation (e.g. 1429327e+18) or just to the nearest 10;
- Old versions of Safari have, well, irritating traits like adding commas to numbers over a certain size; particularly unhelpful for credit card fields!
- Scrolling can accidentally change numbers, which is an issue (I've felt that one personally).
Their proposed solution of using a `text` version with the numeric keypad specified solves pretty much all of these issues, and the `pattern` attribute polyfills in for older iOS devices and some other old browsers which may not understand `inputmode` that well. Neat.
Original source: Reddit
Senongo Akpem’s Cross-Cultural Design has been on my radar a lot lately; I probably should pick it up. In the meantime, A List Apart have released a little subsection with some interesting insights:
- Stereotypography is the stereotyping of a culture, region, or other group via a specific font style or typeface. Example used is Neuland, which to me is the Jurassic Park font but apparently has a long historical association with Western ideas of the "Dark Continent" version of Africa i.e. racism. Advice is to avoid any type that attempts to "invoke" a culture, particularly if it has been designed apart from that culture. Makes sense, but worth being reminded of.
- Google Fonts is banned in China, so use of their free web fonts is non-global without workarounds.
- Different alphabets have different "visual density" expectations. Asian alphabets such as Japanese are information dense, so readers are used to less white-space but need larger glyphs than in traditional Western alphabets like Latin. I'd never considered that white-space might be a culturally defined construct, at least not at such a fundamental level.
I don't work with localisation at all, but it's clear that if you do, there's a lot more to consider.