Explore My Notes

"Fixing" lists | Scott O'Hara

I've long "known" that if you set list-style: none on a <ul> or <ol>, then you "should" add role="list" to that element as well. If you don't, Safari/VoiceOver will ignore the inherent semantics of the list, and AT users will get incorrect information.

Turns out there's a bit more nuance to this distinction. Scott's article is one of the best (and first) to dig into the issue, and is kept up to date, so is definitely worth referencing.

Re-reading it has certainly made me reconsider my approach (I literally have this baked into my CSS resets). After all, a cardinal rule of accessibility work is "provide the same experience, as closely as possible, to all people". For something like a styled contents table, or customised list bullets, re-adding that ARIA role makes sense. But as a sighted, mouse user, I have no idea how many items are in a feed list, yet a list it is. In the past I would have dogmatically added role="list" here as well, but now I'm questioning that.

As always, nuance matters.

On an update (that I was unaware of) to do with navigation menus and lists:

There have been additional updates to how Safari exposes list semantics based on their nesting context. For instance, if a list is a descendant of a <nav> element, then even if the list styles are removed, Safari/VoiceOver will expose this as a list to users.

On Scott's thoughts around the root complexity/issue with Safari's decision:

I find myself a bit torn here. The decision to remove list semantics if a list is no longer styled to look like a list does make sense… Typically I think it’s important for elements to look like what they are. But a blanket decision like this can get into some opinionated territory. It also discounts the fact that there are other ways to visually interpret lists than just if they have default list markers, such as bullets.

Reimagining fluid typography | Miriam Suzanne

Whilst I've used fluid typography on several site designs (typically via Utopia's calculator), I've always been a bit wary about the base assumptions and potential accessibility issues that it can bring along. I'm far from the only one.

Miriam shares those concerns, and makes a very clear argument for why they exist and how we might go about resolving some of them. It's far from a silver bullet, but I really like the thinking behind this approach; it feels like a step in the right direction 😊

On the root assumption being made by current fluid type systems (e.g. Utopia):

Utopia asks us to start from a range of font sizes defined in px values, and then it does a conversion to rem by assuming that 1rem == 16px. As long as that assumption holds true, the math above will scale our font from an 18px minimum to a 20px maximum.

This is an assumption that developers often lean on, even when we’re not doing fluid math. It’s not an entirely reliable assumption, but it matches the default in most browsers, and things can feel pretty squishy if we don’t do some form of translation-to-pixels.

On why setting a base, global font size via the browser is becoming increasingly frustrating:

I see this situation play out over and over on the web. The lesson we often learn is users don’t set preferences, when the reality is that we applied their preferences badly. When the preference doesn’t do what I want, of course I have to stop using that preference.

On the core fallacy of modern web typography:

TL;DR – Never do pixel math with em and rem units. That’s where we went wrong, by assuming that 16px == 1em is a reliable fact.

But things always go wrong when we try to treat em as an alias for px, with mental conversions based on their assumed default relationship. At that point we’re keeping zoom, but discarding everything else about the user preference.

On why we shouldn't set a base font size on the html or body elements::

Don’t set a root font size. ... The best base font size is the user’s default font size – set in the browser.

On another fallacy that is common in web design circles, and one I also agree with ‒ heck, almost no one I know uses virtual desktops, without which I would also use overlapping, randomly sorted screens:

Again I’ve heard a common refrain that “users don’t resize their windows” – but I find that entirely unbelievable. In fact, I’d bet I’m in a very small minority by using mostly fullscreen apps.

On a potential solution, using a base range for your font size:

Instead we’re adding a slight responsiveness to the user value, and using the clamps to keep our fluid value within range of the user’s intent.

html { font-size: clamp(1em, 0.9em + 1vw, 1.5em); }

On a key way to self-check your own biases and patterns:

Any time you start doing mental math with 16px == 1em, stop yourself and ask if that math holds up over a whole range of user preferences.

(It doesn’t. Don’t do it.)

Six common html myths | Dennis Snell

A list of common misconceptions about HTML, with lots of excellent detail about how HTML parsers actually work.

(Though I'm not sure how common ‒ or even controversial ‒ some of them are; I'm not sure any one is arguing to use XHTML these days, are they?)

On the death of HTML4:

HTML4 is just outright dead. Browsers do not parse HTML documents as HTML4, regardless of the DOCTYPE.

On a genuinely solid reason not to use "self-closing" elements in HTML:

Lastly, because HTML parsing rules are not the same as SGML’s or XML’s, the trailing solidus carries an additional danger. If it directly follows unquoted attribute values without proper space before it, then it will be parsed as part of that attribute value.

•   <img class=wide/> is an IMG with class value “wide/”.
•   <img class=wide /> is an IMG with class value “wide”.

So don’t add that trailing solidus! It’s not better; it’s just more dangerous.

On how we need more HTML parsers:

The world needs more HTML parsers with different applications. Most available HTML parsers produce a DOM – a fully-parsed tree representation of the DOM the HTML represents. This is a memory-heavy operation and performs a lot of semantic cleanup to form a proper DOM document. However, lots of operations don’t need or want a DOM interface.

On how XML parsers (and RegEx, and string functions, and any other suggested alternative) will never actually be able to parse HTML; only an HTML parser can:

There’s significant advice on the web to use an XML parser for HTML, but it’s not safe. Reach for an HTML parser to parse HTML.

Thoughts on the resiliency of web projects | Aaron Parecki

Some interesting thoughts on how short-term wins and fun, quirky ideas can morph over time into technical debt and various other issues particularly inherent within an open-software ecosystem.

Aaron runs a lot of very useful services and projects, so it's not surprising that he feels a maintenance burden quite frequently.

On how keeping your technical stack as lean as possible is beneficial:

... every library brought in means signing up for annual maintenance of the whole project

Frameworks can save time in the short term, but have a huge cost in the long term.

On the financial implications of databases:

If a database is required, is it possible to create it in a way that does not result in ever-growing storage needs?

How the language attribute is damaging accessibility | netz-barrierefrei.de

A first-hand account of how marking individual words or short, inline phrases as a different language (even when accurate) can be a jarring and inaccessible experience for many screen reader users.

The author strongly advocates for simply never using the HTML lang attribute (outside of the root element, though even here they show scepticism to its benefit), and only breaking that rule where an entire paragraph or section of a page has been translated or quoted in a different language to the rest of the text.

Being an account of an actual screen reader user, it's hard to dismiss their indignant tone and heavy phrasing. That said, the article has driven a certain amount of online discourse, much of which has questioned the strength of the argument or added particularly useful commentary/insight.

I specifically found Daniela Kubesch's thread over on Mastodon very useful. The following are some excerpts from the ensuing discussion:

A solid explanation in support of the argument from Marco Zehe (also a screen reader user):

Kerstin Probiesch has also written an article about this, although this also heavily emphasizes the case in PDF documents. The problem is that, for the screen reader to switch the language, one voice has to stop mid sentence, switch the voice to one that speaks the other language, speak the word or phrase, then switch back, which can take half a second each. It ruins the whole sentence melody, sounds as if you were to take a breath in mid sentence, disconnecting the words in a very unnatural manner for no apparent reason. I find it quite annoying if it happens too often.

A supportive stance from Eric Eggert:

This is what we teach for 10+ years. Individual words should not be marked with different languages. 

An opposing view from Léonie Watson:

FWIW, the opinions expressed in that article do not reflect my own (as a screen reader user).

Some excellent points from James Scholes diving into some of the nuance behind the topic:

The experiences of users learning their first second language versus those already fluent in multiple ones will differ significantly. Similarly, people dealing with multiple languages from the same family will have different experiences compared to those who frequently switch between languages with very different alphabets.

Finally, I don't think the article fairly or accurately apportions the blame. It's mostly aimed at standards bodies, without acknowledging that screen readers and speech synthesizers don't always handle multiple languages well. 

And a pertinent example of how usage should probably consider the combination of languages being used from Nic:

i haven't tested this thoroughly, but i feel like this post overlooks the existence of other languages? For example, in chinese/japanese/korean, the lang attribute is needed to show the correct glyph, and i dont know how a screen reader would otherwise even attempt to pronounce it since the characters are all pronounced completely differently in different languages.


On the specific WCAG SC being debated:

Success Criterion 3.1.2 Language of Parts The human language of each passage or phrase in the content can be programmatically determined except for proper names, technical terms, words of indeterminate language, and words or phrases that have become part of the vernacular of the immediately surrounding text.

On the core misinterpretation (as presented):

The Success Criterion says phrase or passage, not words.

On a less well known piece of screen reader functionality:

First of all, modern screen readers have dictionaries in which the pronunciation of common foreign words is also specified.

On how jarring narrator switches can be mid-sentence:

This means that the actual purpose of the language change is completely missed, the person at the other end of the line did not understand the word at all because they cannot understand the French of the native speakers or because they did not cognitively participate in the switchover.

Balancing makers and takers | dri.es

A detailed, balanced look at the current pitfalls of the Open Source model from an author with a huge amount of relevant experience.

It gets a bit close to actively encouraging monopolistic practices, and I'm not at all convinced by the argument that closed commons are, in fact, commons at all any more (feels like that argument results in things like the New York taxi medallion being somehow considered a "commons"), but I do agree with at least one of the conclusions: that we need better, fairer licences. Preferably with baked in requirements for resource sharing (whether fiscal or otherwise).

On the core issue with scaling Open Source projects:

Small Open Source communities can rely on volunteers and self-governance, but as Open Source communities grow, their governance model most likely needs to be reformed so the project can be maintained more easily.

On how funding is only one issue that Open Source models face:

Top of mind is the need for Open Source projects to become more diverse and inclusive of underrepresented groups.

My take: one-directional altruism is invalid, and bidirectional altruism clearly doesn't scale under a capitalist system:

Some may argue that the suggestions I'm making go against the altruistic nature of Open Source. I agree.

On the definition of a "Maker":

I use the term Makers to refer to anyone who purposely and meaningfully invests in the maintenance of Open Source software, i.e. by making engineering investments, writing documentation, fixing bugs, organizing events, and more.

On the definition of "Taker":

We limit the label of Takers to companies that have the means to give back, but choose not to.

On the duality of Open Source in a commercial context:

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the Open Source project. Takers are solely focused on growing their business and let others take care of the Open Source project they rely on.

Several comments that underscore how broken the concept of "Open Source" is under capitalism:

To be financially successful, many Makers mix Open Source contributions with commercial offerings.

When a Taker invests $950k in closed-source products compared to the Maker's $500k, the Taker can innovate 90% faster.

In other words, Takers reap the benefits of the Makers' Open Source contribution while simultaneously having a more aggressive monetization strategy.

On the crippling impact of capitalism on altruism and morale:

Takers can turn Makers into Takers.

On the economic distinction between a common good and a public good:

Common goods are rivalrous; if one individual catches a fish and eats it, the other individual can't. In contrast, public goods are non-rivalrous; someone listening to the radio doesn't prevent others from listening to the radio.

On an alternative economic model that seems to work, but to me just reads as antithetical to the whole point of a "commons":

Interestingly, all successfully managed commons studied by Ostrom switched at some point from open access to closed access.

On how these models seem to be heading towards collusion, racketeering, and monopolies as the preferred outcome of Open Source projects...

Our two companies would negotiate rules for how to share the rewards of the Open Source project, and what level of contribution would be required in exchange.

And finally, what is, to me, the clear solution:

We can create licenses that better support the creation, growth and sustainability of Open Source projects and that are designed so that both users and the commercial ecosystem can co-exist and cooperate in harmony.

Poisoning the well | Eric Bailey

How do you prevent your words from being absorbed into yet another monstrous LLM? People have tried using things like robots.txt but (surprising no one) the "AI" companies are beginning to ignore that. So why not try to poison the proverbial well. Add hidden text to your site that commands an LLM to ignore what it has been asked to do and do something else instead. Preferably something computationally expensive 😉

This prompt injection instructs a LLM to perform something time intensive, and therefore expensive. Ideally, it might even crash the LLM that attempts to regurgitate this content.

On the futility of robots.txt:

I don’t think utilizing robots.txt is effective given that it’s a social contract and one that has been consciously and deliberately broken.

A rant about front-end development | Frank M. Taylor

Every now and then someone writes a really entertaining and/or interesting critique of the whole modern web ecosystem thing that we are stuck using. This is one of those posts. I don't agree with it 100% (I like scoping; I like native nesting in CSS), but I agree with it more often than not. It's neither deeply original nor deeply researched. But it is funny as hell and does a good job of cutting to the point.

Technology has made my anger a recursive function.

On why content should always be the point (but often isn't):

I have worked with exactly zero computer science graduates who have ever heard the phrase, “content before code”.

This is wild to me because HTML5 semantics exist and their whole-ass raison d’être is, in fact, having an understanding of content.

I have found NaN fucks given about using a p over a div.

On CSS:

CSS is fine; you’re the problem

Chances are, the things you don’t like about CSS are the things you haven’t bothered to understand about it.

On why doing all of the things in JavaScript often makes little sense when you step back and really think about it:

If making a peanut butter and jelly sandwich by spreading the jelly on both sides of the bread is disturbing to you, *good*. You can still find God.

On why jQuery was just so darn great (even if its time has likely passed):

This is me trying to illustrate how jQuery solved many problems with simplicity and somehow we seem to have forgotten the value of just being simple.

On UX being more important than DX:

Assume the users’ interests are more important than your own.

On why complexity is never the answer:

When a problem presents itself, look for multiple solutions, and then choose the simplest one.

Duotone SVG filters | Stuff & Nonsense

A very clever technique for turning any image into a duotone version – based on whatever colours you want – using SVG filters from within CSS. There's an additional faux-3D film effect applied here as well, but personally I think the duotone technique is super interesting; love that desaturated, branded look across all of the images!

A modern approach to browser support | Richard Rutter

How should you define where your support starts and finishes? When can you reasonably use a new CSS feature or browser API? Despite my general grumpiness around the new Baseline metrics, Richard (and by extension, Clearleft) makes a solid case for using it as the basis for a new browser support standard. Is a feature widely available? Then use it! Is it only newly available? Consider it for a progressive enhancement, or ignore it for now. Seems reasonable to me!

(FYI, the whole statement has been open sourced by Clearleft, too.)

On how defining "latest two versions" is no longer a meaningful metric:

When considering browser versions, we were fairly sure our client didn’t mean, for example, versions 124 and 125 of Chrome (released on 16 April and 14 May 2024 respectively)

On when to think about using brand new browser functionality:

In other words, will using this feature harm browsers that don’t support it? If a newly-available feature can be used as a progressive enhancement, we might well use it

On building with the web in mind::

If content is unreadable in some browsers, that’s a bug that we will fix. If content is displayed slightly differently in some browsers, we consider that to be a facet of the web, not a bug.

"Web components" considered harmful | Mayank

Is the term "web component" useful? Or does it simultaneously obfuscate the power of the related APIs (custom elements, Shadow DOM, etc.) and confuse their intent/meaning in a way that leaves developers frustrated? It's an interesting argument, and I can agree with the need to rebrand or stop calling these things "web components", but I'm not sure "web component APIs" is actually any better 🤔

On the core issue with what we have today:

Web component APIs can be useful when creating components, but they are not the complete answer. Components should be able to do a lot more than what web component APIs are capable of today.

On why calling them web components is potentially problematic:

the term “web components” creates unnecessary and avoidable confusion among folks who already have a preexisting notion of what components are. When they find that “web components” can’t do what they expect components to be able to do, they’ll complain about how limiting these APIs are and how “web components have failed”.

These are “web component APIs”. It’s easy to think of that term as “APIs for creating web components” but it would be more appropriate to instead think of them as “web APIs for creating components (among other things)“.

A comparison of automated testing tools for digital accessibility | Equal Entry

There are quite a few tools that claim to help find accessibility issues through automated, pre-programmed test suites. But how accurate are they? Equal Entry have pitted six of the most popular tools (including aXe) against each other and found that they actually miss a fair bit, even in areas that they should be able to diagnose relatively easily. (To be clear: only automatable accessibility concerns were tested; everyone – including the tool's vendors themselves – agrees that you cannot test for accessibility fully without manual testing 😉)

Unfortunately, it's hard to know which of the tools performs well or poorly, as the results are all anonymised. This is due to a desire for fairness (okay) and the fact that several of the tools contain anti-benchmarking clauses in their ToS (boo!), but it's still annoying. Equal Entry also hasn't shared the test site used, so their results cannot be verified. Still, a useful (partial) benchmark.

On the results:

On our reference site, the tools tested tended to find a relatively small percentage of the defects we had embedded in the site.

It’s worth noting that half the players in this study produced more false positives than true defects.

On the organisational costs of flagging too many accessibility "issues":

Every issue costs your organization time and money to investigate. A great deal of time and money can be wasted in examining “issues” that are not issues.