Explore My Notes

How the language attribute is damaging accessibility | netz-barrierefrei.de

A first-hand account of how marking individual words or short, inline phrases as a different language (even when accurate) can be a jarring and inaccessible experience for many screen reader users.

The author strongly advocates for simply never using the HTML lang attribute (outside of the root element, though even here they show scepticism to its benefit), and only breaking that rule where an entire paragraph or section of a page has been translated or quoted in a different language to the rest of the text.

Being an account of an actual screen reader user, it's hard to dismiss their indignant tone and heavy phrasing. That said, the article has driven a certain amount of online discourse, much of which has questioned the strength of the argument or added particularly useful commentary/insight.

I specifically found Daniela Kubesch's thread over on Mastodon very useful. The following are some excerpts from the ensuing discussion:

A solid explanation in support of the argument from Marco Zehe (also a screen reader user):

Kerstin Probiesch has also written an article about this, although this also heavily emphasizes the case in PDF documents. The problem is that, for the screen reader to switch the language, one voice has to stop mid sentence, switch the voice to one that speaks the other language, speak the word or phrase, then switch back, which can take half a second each. It ruins the whole sentence melody, sounds as if you were to take a breath in mid sentence, disconnecting the words in a very unnatural manner for no apparent reason. I find it quite annoying if it happens too often.

A supportive stance from Eric Eggert:

This is what we teach for 10+ years. Individual words should not be marked with different languages. 

An opposing view from Léonie Watson:

FWIW, the opinions expressed in that article do not reflect my own (as a screen reader user).

Some excellent points from James Scholes diving into some of the nuance behind the topic:

The experiences of users learning their first second language versus those already fluent in multiple ones will differ significantly. Similarly, people dealing with multiple languages from the same family will have different experiences compared to those who frequently switch between languages with very different alphabets.

Finally, I don't think the article fairly or accurately apportions the blame. It's mostly aimed at standards bodies, without acknowledging that screen readers and speech synthesizers don't always handle multiple languages well. 

And a pertinent example of how usage should probably consider the combination of languages being used from Nic:

i haven't tested this thoroughly, but i feel like this post overlooks the existence of other languages? For example, in chinese/japanese/korean, the lang attribute is needed to show the correct glyph, and i dont know how a screen reader would otherwise even attempt to pronounce it since the characters are all pronounced completely differently in different languages.


On the specific WCAG SC being debated:

Success Criterion 3.1.2 Language of Parts The human language of each passage or phrase in the content can be programmatically determined except for proper names, technical terms, words of indeterminate language, and words or phrases that have become part of the vernacular of the immediately surrounding text.

On the core misinterpretation (as presented):

The Success Criterion says phrase or passage, not words.

On a less well known piece of screen reader functionality:

First of all, modern screen readers have dictionaries in which the pronunciation of common foreign words is also specified.

On how jarring narrator switches can be mid-sentence:

This means that the actual purpose of the language change is completely missed, the person at the other end of the line did not understand the word at all because they cannot understand the French of the native speakers or because they did not cognitively participate in the switchover.

Balancing makers and takers | dri.es

A detailed, balanced look at the current pitfalls of the Open Source model from an author with a huge amount of relevant experience.

It gets a bit close to actively encouraging monopolistic practices, and I'm not at all convinced by the argument that closed commons are, in fact, commons at all any more (feels like that argument results in things like the New York taxi medallion being somehow considered a "commons"), but I do agree with at least one of the conclusions: that we need better, fairer licences. Preferably with baked in requirements for resource sharing (whether fiscal or otherwise).

On the core issue with scaling Open Source projects:

Small Open Source communities can rely on volunteers and self-governance, but as Open Source communities grow, their governance model most likely needs to be reformed so the project can be maintained more easily.

On how funding is only one issue that Open Source models face:

Top of mind is the need for Open Source projects to become more diverse and inclusive of underrepresented groups.

My take: one-directional altruism is invalid, and bidirectional altruism clearly doesn't scale under a capitalist system:

Some may argue that the suggestions I'm making go against the altruistic nature of Open Source. I agree.

On the definition of a "Maker":

I use the term Makers to refer to anyone who purposely and meaningfully invests in the maintenance of Open Source software, i.e. by making engineering investments, writing documentation, fixing bugs, organizing events, and more.

On the definition of "Taker":

We limit the label of Takers to companies that have the means to give back, but choose not to.

On the duality of Open Source in a commercial context:

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the Open Source project. Takers are solely focused on growing their business and let others take care of the Open Source project they rely on.

Several comments that underscore how broken the concept of "Open Source" is under capitalism:

To be financially successful, many Makers mix Open Source contributions with commercial offerings.

When a Taker invests $950k in closed-source products compared to the Maker's $500k, the Taker can innovate 90% faster.

In other words, Takers reap the benefits of the Makers' Open Source contribution while simultaneously having a more aggressive monetization strategy.

On the crippling impact of capitalism on altruism and morale:

Takers can turn Makers into Takers.

On the economic distinction between a common good and a public good:

Common goods are rivalrous; if one individual catches a fish and eats it, the other individual can't. In contrast, public goods are non-rivalrous; someone listening to the radio doesn't prevent others from listening to the radio.

On an alternative economic model that seems to work, but to me just reads as antithetical to the whole point of a "commons":

Interestingly, all successfully managed commons studied by Ostrom switched at some point from open access to closed access.

On how these models seem to be heading towards collusion, racketeering, and monopolies as the preferred outcome of Open Source projects...

Our two companies would negotiate rules for how to share the rewards of the Open Source project, and what level of contribution would be required in exchange.

And finally, what is, to me, the clear solution:

We can create licenses that better support the creation, growth and sustainability of Open Source projects and that are designed so that both users and the commercial ecosystem can co-exist and cooperate in harmony.

Poisoning the well | Eric Bailey

How do you prevent your words from being absorbed into yet another monstrous LLM? People have tried using things like robots.txt but (surprising no one) the "AI" companies are beginning to ignore that. So why not try to poison the proverbial well. Add hidden text to your site that commands an LLM to ignore what it has been asked to do and do something else instead. Preferably something computationally expensive 😉

This prompt injection instructs a LLM to perform something time intensive, and therefore expensive. Ideally, it might even crash the LLM that attempts to regurgitate this content.

On the futility of robots.txt:

I don’t think utilizing robots.txt is effective given that it’s a social contract and one that has been consciously and deliberately broken.

A rant about front-end development | Frank M. Taylor

Every now and then someone writes a really entertaining and/or interesting critique of the whole modern web ecosystem thing that we are stuck using. This is one of those posts. I don't agree with it 100% (I like scoping; I like native nesting in CSS), but I agree with it more often than not. It's neither deeply original nor deeply researched. But it is funny as hell and does a good job of cutting to the point.

Technology has made my anger a recursive function.

On why content should always be the point (but often isn't):

I have worked with exactly zero computer science graduates who have ever heard the phrase, “content before code”.

This is wild to me because HTML5 semantics exist and their whole-ass raison d’être is, in fact, having an understanding of content.

I have found NaN fucks given about using a p over a div.

On CSS:

CSS is fine; you’re the problem

Chances are, the things you don’t like about CSS are the things you haven’t bothered to understand about it.

On why doing all of the things in JavaScript often makes little sense when you step back and really think about it:

If making a peanut butter and jelly sandwich by spreading the jelly on both sides of the bread is disturbing to you, *good*. You can still find God.

On why jQuery was just so darn great (even if its time has likely passed):

This is me trying to illustrate how jQuery solved many problems with simplicity and somehow we seem to have forgotten the value of just being simple.

On UX being more important than DX:

Assume the users’ interests are more important than your own.

On why complexity is never the answer:

When a problem presents itself, look for multiple solutions, and then choose the simplest one.

Duotone SVG filters | Stuff & Nonsense

A very clever technique for turning any image into a duotone version – based on whatever colours you want – using SVG filters from within CSS. There's an additional faux-3D film effect applied here as well, but personally I think the duotone technique is super interesting; love that desaturated, branded look across all of the images!

A modern approach to browser support | Richard Rutter

How should you define where your support starts and finishes? When can you reasonably use a new CSS feature or browser API? Despite my general grumpiness around the new Baseline metrics, Richard (and by extension, Clearleft) makes a solid case for using it as the basis for a new browser support standard. Is a feature widely available? Then use it! Is it only newly available? Consider it for a progressive enhancement, or ignore it for now. Seems reasonable to me!

(FYI, the whole statement has been open sourced by Clearleft, too.)

On how defining "latest two versions" is no longer a meaningful metric:

When considering browser versions, we were fairly sure our client didn’t mean, for example, versions 124 and 125 of Chrome (released on 16 April and 14 May 2024 respectively)

On when to think about using brand new browser functionality:

In other words, will using this feature harm browsers that don’t support it? If a newly-available feature can be used as a progressive enhancement, we might well use it

On building with the web in mind::

If content is unreadable in some browsers, that’s a bug that we will fix. If content is displayed slightly differently in some browsers, we consider that to be a facet of the web, not a bug.

"Web components" considered harmful | Mayank

Is the term "web component" useful? Or does it simultaneously obfuscate the power of the related APIs (custom elements, Shadow DOM, etc.) and confuse their intent/meaning in a way that leaves developers frustrated? It's an interesting argument, and I can agree with the need to rebrand or stop calling these things "web components", but I'm not sure "web component APIs" is actually any better 🤔

On the core issue with what we have today:

Web component APIs can be useful when creating components, but they are not the complete answer. Components should be able to do a lot more than what web component APIs are capable of today.

On why calling them web components is potentially problematic:

the term “web components” creates unnecessary and avoidable confusion among folks who already have a preexisting notion of what components are. When they find that “web components” can’t do what they expect components to be able to do, they’ll complain about how limiting these APIs are and how “web components have failed”.

These are “web component APIs”. It’s easy to think of that term as “APIs for creating web components” but it would be more appropriate to instead think of them as “web APIs for creating components (among other things)“.

A comparison of automated testing tools for digital accessibility | Equal Entry

There are quite a few tools that claim to help find accessibility issues through automated, pre-programmed test suites. But how accurate are they? Equal Entry have pitted six of the most popular tools (including aXe) against each other and found that they actually miss a fair bit, even in areas that they should be able to diagnose relatively easily. (To be clear: only automatable accessibility concerns were tested; everyone – including the tool's vendors themselves – agrees that you cannot test for accessibility fully without manual testing 😉)

Unfortunately, it's hard to know which of the tools performs well or poorly, as the results are all anonymised. This is due to a desire for fairness (okay) and the fact that several of the tools contain anti-benchmarking clauses in their ToS (boo!), but it's still annoying. Equal Entry also hasn't shared the test site used, so their results cannot be verified. Still, a useful (partial) benchmark.

On the results:

On our reference site, the tools tested tended to find a relatively small percentage of the defects we had embedded in the site.

It’s worth noting that half the players in this study produced more false positives than true defects.

On the organisational costs of flagging too many accessibility "issues":

Every issue costs your organization time and money to investigate. A great deal of time and money can be wasted in examining “issues” that are not issues.

The machine stops | Jeremy Keith

For many folks writing or sharing art on the open web in 2024, the rise of corporate theft under the guise of "AI" has become a real sticking point. I share these sentiments, though have yet to start taking any real action against them. Still, when I do consider the actions I could take, I find my thoughts echoing those that Jeremy has shared. Beyond merely requesting to be kept out of "training datasets", can we somehow punish or poison them for daring to steal our words, images, and videos without our consent?

I certainly like the idea 😏

On the rational response to LLMs – fight back:

If my words are going to be snatched away, I want them to be poison pills.

On techniques people are using to make their content dangerous for LLMs to "consume":

Smarter people than me are coming up with ways to protect content through sabotage: hidden pixels in images; hidden words on web pages.

If enough people do this we’ll probably end up in an arms race with the bots. It’ll be like reverse SEO. Instead of trying to trick crawlers into liking us, let’s collectively kill ’em.

Digital litter picking | Terence Eden

The idea here is far from revolutionary, but I really like the naming and overall approach. Digital litter picking: a small, scalable, civic good that you can just do. Nice!

I want to live in a world where every voter can quickly and easily find out who they can vote for - and where they can vote. So I engage in digital litter picking.

It isn't glamorous or sophisticated work. It doesn't require much training, or a huge time commitment. But it's the sort of thing that I think can make a real difference to the civic environment.

Why I'm over GraphQL | Matt Bessey

An interesting dive into the long-term complications and issues that Matt has come across whilst using GraphQL. From self-professed "hype train member" for the technology to now considering it a niche tool only beneficial in specific areas, it's an interesting read, though largely boils down to "GraphQL is probably too complex to manage yourself", which means if you're outsourcing that to a third party already, few of the criticisms should matter to you. Still, it's nice to see a more nuanced take and it provides some excellent points around security and performance that are absolutely worth being aware of.

Stacking grids without media queries | Kevin Powell

A very clever technique of using combinations of flex-basis, asymmetrical flex-grow, or (for Grid) some quirky minmax() magic to generate flexible layout shifts that, in practice, are controlled by their container's size, but do not use media or container queries at all. Useful for sidebars that you want to fall into a stacking context if there isn't enough room, or (and this is very clever) for making grids with a fixed number of columns e.g. a stack unless you can fit exactly three columns of a minimum width, and then it snaps into a grid layout. Super useful!