Explore My Notes

The layers of the web | Jeremy Keith

Vague but exciting...

The three words that spawned the world wide web as we know it: the response from Tim Berners-Lee's supervisor to his initial proposal. Jeremy's talk dives into the history of how the web came to be, including insight into his trip to CERN for the web's 30th birthday, before taking a look at why it has achieved longevity and how you can think about the web in layers to understand that success. Assorted notes:

  • Turing didn't actually know about Babbage and Lovelace's work;
  • Without Turing and the other codebreakers, who knows what would have happened, but the war would likely have gone on a bit longer. That's important for many reasons, but for computing it meant the 1945 publication of Vannevar Bush's As We May Think, which describes a hypothetical device – known as the memex – that could be used to connect pieces of information together;
  • In the same year, Douglas Engelbart is drafted into the US Navy, though because the war is over he isn't put on active duty, which is how he comes to read As We May Think. That starts his own thinking about computers, resulting in his creation of both hypertext and the concept of the computer mouse:
I don't know why we call it a mouse. Sometimes I apologise for it, but it started that way and we never did change it.
  • The Whole Earth Catalog by Stewart Brand is a form of pre-Internet Wikipedia, a collection of all the things you might want to know in order to rebuild society in a commune or for off-grid living. He also said:
Computers are the new LSD.
  • Brand also created the Long Now Foundation, an organisation that focuses on long-term thinking. For example, they are building a clock inside a mountain in Texas that will tell the time accurately for 10,000 years;
  • His work with that foundation led him to begin abstracting concepts into "pace layers": the idea that most systems can be broken down into fairly clean layers that build on each other and change at different rates, with the outermost layers being the least stationary:
Fast gets all of the attention, but slow gets all of the power.
  • If you map the web into pace layers, the structure becomes roughly: TCP/IP ➡ HTTP ➡ URLs ➡ HTML ➡ CSS ➡ JavaScript ecosystem (not the language specifically, but everything else). When you map the web like this, the rapid iteration of JS doesn't just make sense, it's a benefit. It gets to experiment, and the stuff that sticks – the stuff that becomes permanent – moves down the stack and is absorbed into lower layers. Good examples of this are image rollovers (now :hover) or form validation (now required in HTML);
I now don't feel so bad about feeling overwhelmed by all the constant change in the JavaScript ecosystem because I feel like it's kind of its job to be overwhelming, it's where change and iteration happens quickly.
  • Of course, these layers also map well to the considerations of the web: URLs are most important, HTML is next, etc. That means you can build in layers too (though notably without mentioning progressive enhancement by name);
  • When thinking in layers, it is perhaps better to ask "how well does it fail?" instead of "how well does it work?" when considering how to go about building things; the web fails well, but not if it's all JS;
  • Service workers slightly mess with this layered model by allowing a website to work without the foundational layers of, well, the internet;
  • A really simple example of a service worker is providing a customised offline page, like a custom 404 page – that can be really useful – but of course you can do much more, like having "save for offline" buttons (a rough sketch of the offline page idea follows these notes);
If you took a word processing file made today and tried to open it in a 30-year old word processor? Good luck with that. And yet the web has this unbroken line.
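As an aside of my own (not from the talk): here's roughly what that custom offline page idea looks like in a service worker. A minimal sketch, assuming the page registers a script with navigator.serviceWorker.register("/sw.js") and that an /offline.html page exists – both file names are placeholders of mine:

// sw.js – cache an offline fallback page at install time, then serve it
// whenever a page navigation fails because the network is unavailable.
const CACHE_NAME = "offline-fallback-v1";
const OFFLINE_URL = "/offline.html";

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.add(OFFLINE_URL))
  );
});

self.addEventListener("fetch", (event) => {
  // Only intervene for page navigations; try the network first and fall
  // back to the cached offline page if the request fails.
  if (event.request.mode === "navigate") {
    event.respondWith(
      fetch(event.request).catch(() => caches.match(OFFLINE_URL))
    );
  }
});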

What a great talk!

📆 29 May 2020  | 🔗

  • Frontend
  • The World Wide Web
  • web
  • history
  • computing
  • Charles Babbage
  • Ada Lovelace
  • Alan Turing
  • Tim Berners-Lee
  • CERN
  • memex
  • Vannevar Bush
  • Douglas Engelbart
  • mouse
  • Stewart Brand
  • Long Now Foundation
  • Wikipedia
  • long-term
  • pace layers
  • quote
  • service worker
  • HTTP
  • HTML
  • CSS
  • JavaScript
  • URL
  • offline 

How Edge is becoming the best browser for PWAs | Samuele Dassatti

I still have my reservations about the whole Edge-Chromium combination, but it has enabled the Edge team to begin taking a leading stance on certain topics, and it looks like PWAs are one area they're looking at in detail. It makes sense to me, given how lacklustre use of the Windows Store still is, but regardless of motivation it's a real positive for Windows users and the web. New features include:

  • Customisable title bars (think Home, View, etc.);
  • Native file system access (that's huge!);
  • App badges (those little notification number icons we're all used to from phones; nice but hardly a hard requirement, though it could end up being a neater API than push notifications are currently – see the sketch after this list);
  • Quick entry to the Windows Store (unsurprising and not where I hope PWAs will ultimately flourish, but it's still nice to see).
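On the badges point, the Badging API currently exposed in Chromium-based browsers is pleasingly small. A rough sketch of how I'd expect to use it, feature-detecting first since support is far from universal (the helper name and the unread-count scenario are my own illustration):

// Show or clear a numeric badge on the installed PWA's icon.
function updateUnreadBadge(unreadCount) {
  if (!("setAppBadge" in navigator)) return; // unsupported browser

  if (unreadCount > 0) {
    // Badges are a nice-to-have, so ignore any failure.
    navigator.setAppBadge(unreadCount).catch(() => {});
  } else {
    navigator.clearAppBadge().catch(() => {});
  }
}

// e.g. after fetching new messages:
updateUnreadBadge(3);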

I'm particularly hopeful that these efforts are baked back into Chromium (evidence suggests they will be) so that all related browsers benefit as well.

📆 29 May 2020  | 🔗

  • Frontend
  • PWA
  • Chromium
  • Edge
  • feature
  • app
  • API
  • badges
  • title bar
  • file system
  • app store 

Micro-interactions for powerful design | Cloud Four

A useful and well-written overview of micro-interactions and how they can take a design from good to great. Includes some excellent examples of animated buttons, swipe interactions, and more from CodePen.

There is no HTTP code for censorship (but perhaps there should be) | edent

Should there be an HTTP error code for censorship? Quite probably, and I agree with Terence that 403 (forbidden) is a misleading response. I really like his various proposals, and the format he uses is actually a decent quick overview of the HTTP status code categorisation (1xx, 2xx, etc.) as well. Personally, if the 450 response really is widely accepted as a de facto standard for parental controls, then I really, really love 451. It's a great reference and a useful status code. However, I do agree with a lot of the comments that it doesn't feel like censorship should be bracketed in with the 4xx category. It also looks like the proposal has been rejected or is possibly just stalled.
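For reference, 451 (Unavailable For Legal Reasons) can already be served today. A minimal sketch of what that might look like in an Express app – the route and the "blocked-by" URL are made up for illustration, but including a Link header with rel="blocked-by" is what the RFC recommends:

// Serve 451 for content removed due to a legal demand.
const express = require("express");
const app = express();

app.get("/some-blocked-page", (req, res) => {
  res
    .status(451) // Unavailable For Legal Reasons
    .set("Link", '<https://example.com/legal-demand>; rel="blocked-by"')
    .send("This content is unavailable for legal reasons.");
});

app.listen(3000);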

New Zealand bird identification | New Zealand Birds Online

A frankly excellent website for identifying bird species, particularly useful if you're like me and trying to do so years after having seen them 😁 It's a really neat interface that makes narrowing down species pretty simple, though the filtering sometimes goes a bit haywire and it would be great to have a few more photos for some species (there is a bias towards more colourful/dynamic poses, rather than simple identification patterns, juveniles, or instances where one sex has dull colouration).

Bottlenose dolphin adopts common dolphin | NZ Herald

Stashing for personal reference, as my original BBC source appears to have vanished. A 2014 news story from Paihia, New Zealand, about a female bottlenose dolphin that adopted a stray common dolphin calf (given the nickname Pee-Wee). It made international news as an incredibly rare documented instance of cross-species adoption, but it hit my radar because we were there when the discovery was made (or, technically, just after). We were just days into our time in New Zealand and went out on a boat trip to see the famous Hole in the Rock with Fullers GreatSights, which included some dolphin spotting. The first pod we found was a group of half a dozen bottlenoses with a calf, which was cool, but our guide kept scanning back and forth over the calf until she confirmed that it was Pee-Wee. The same company had been the one to realise what had happened a couple of days earlier, and it was on our trip that they felt the adoption was confirmed, with the calf still alongside its adoptive mother after a few days. Pretty cool stuff and, yes, I do have a photo of Pee-Wee – or at least its dorsal fin!

Extraordinary echinoderms of New Zealand | NIWA

The National Institute of Water and Atmospheric Research (NIWA) in New Zealand has a whole host of exceptional identification guides for the creatures that live in, on, and around the country's coastline. I've been going back over photos I took when we were living the Kiwi life in 2014 and they've helped with a number of identifications, as well as just generally being an interesting resource to read through.

📆 29 May 2020  | 🔗

  • Natural World
  • NIWA
  • New Zealand
  • ocean
  • identification
  • guide
  • starfish
  • sea star
  • echinoderm
  • wildlife
  • coast 

Today's JavaScript, from an outsider's perspective | Lea Verou

I can relate to Lea's frustrations (or, more specifically, those of her friend, who doesn't often stray into JavaScript territory).

I still remember the first time I tried to use ReactJS. The hours spent battling a completely foreign command line, the inconsistencies in results when using npm versus Yarn, the utterly unhelpful packages. Above all else, the fact that I was doing so on a Windows machine and kept running into a debugging wall where other devs would just write this off as the underlying reason, shrug, and walk away. It turned out that I was installing my Yarn packages into the wrong folder, having accidentally set things up incorrectly during the initial install (so no, nothing to do with Windows) – although I wouldn't even work that out for a few months, when I found the other folder whilst looking for something else. At the time, we literally wiped everything dev-related from my machine and started from a blank slate 🤦‍♂️

John gives up. Concludes never to touch Node, npm, or ES6 modules with a barge pole.

I had a similar thought...

A guide to responsive images | CSS Tricks

A pretty exhaustive overview of the HTML and CSS options that we now have for responsive image layouts. Here are some key takeaways:

  • Prioritise srcset for the best performance gains, <picture> for the greatest level of editorial control (think different images for dark mode, or completely different layouts on smaller/larger screens);
  • Setting the src value to the lowest-resolution image within the srcset is normally best;
  • The RespImageLint tool is very useful for determining srcset sizes attributes;
  • There's also the LazySizes library for lazy-loading images;
  • Despite clever tooling, the sizes attribute remains tricky to get right and often benefits from templating abstractions (as does the whole responsive image malarkey in general) – see the sketch after this list;
  • If you need to be able to zoom in on an image and serve a higher-resolution version as that happens, srcset lets you achieve it by specifying a sizes value wider than the viewport, e.g. 300vw;
  • Art direction using the <picture> element can be used to serve static images instead of GIFs for users who prefer reduced motion (there are many interesting niche use cases, with links out to full resources);
  • The <picture> element also provides a chance to use more performant image formats such as WebP, whilst providing a graceful fallback for older browsers;
  • You can combine <picture> and srcset if needed, which may net some further performance savings;
  • Netlify (and a bunch of other CDN services) offer automatic image resizing based on URL parameters;
  • Don't forget the value of using object-fit and object-position to handle the bits in-between breakpoints as well;
  • There's also a huge list of further reading and sources at the end of the article.
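On the templating point above, here's the kind of abstraction I have in mind: a hypothetical helper (not from the article) that builds a srcset string from a list of widths, assuming an image CDN that resizes via a width URL parameter – the parameter name varies between services:

// Hypothetical helper: generate a srcset string from a list of widths so
// templates don't have to hand-write every candidate.
function buildSrcset(imagePath, widths) {
  return widths
    .map((width) => `${imagePath}?width=${width} ${width}w`)
    .join(", ");
}

// Usage in a template:
const srcset = buildSrcset("/images/hero.jpg", [480, 800, 1200, 1600]);
// => "/images/hero.jpg?width=480 480w, /images/hero.jpg?width=800 800w, ..."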

📆 27 May 2020  | 🔗

  • HTML & CSS
  • Frontend
  • Web Design
  • web performance
  • image
  • responsive design
  • picture
  • srcset
  • src
  • object fit
  • object position
  • viewport
  • HTML
  • CSS
  • linter
  • tool
  • lazy loading
  • guide 

Latex.css | Vincent Dörig

A bespoke stylesheet that automatically styles semantic HTML to look like LaTeX.

Example of text styled like LaTeX, including author details, an abstract, and a table of contents.
There's something super appealing to me about how clean LaTeX looks. I really don't think it gets enough credit for producing such nice-looking content.

Pitfalls of card UIs | Dave Rupert

In some ways this outcome is the opposite of what you were intending. You wanted a Card UI where everything was simple and uniform, but what you end up with is a CSS gallery website filled with baby websites.

A great breakdown from Dave as to why the typical card UI pattern has some inherent issues. Some elements – like making cards all the same height and dealing with responsively collapsing card lists – are now irrelevant thanks to flexbox and CSS grid, but others are inherent to the format. Issues with block links are one potential headache, but you also present information in a slightly bizarre reading format and create a default UI hierarchy that demands a lot of borders. I feel like that last point subtly hits the nail on the head regarding an issue I have with this site's design. Interesting.

The perfect block link solution | CSS Tricks

An interesting look at "block links", a.k.a. "card links": when you want a large section of HTML to be one big clickable link. It's a very common pattern and something I've done a lot, but it's also deeply problematic. It creates a double-standard UX: it's now a pattern that I expect, and I therefore get annoyed when it doesn't work – if I see a blog post presented as a card, for example, I expect to be able to click anywhere on it to read the article. But at the same time, when I'm scrolling down a page on my phone it's incredibly frustrating when a seemingly blank area suddenly redirects me to a new page.

It's also a terrible pattern from an accessibility perspective, because the usual implementation – wrapping the whole section in an <a> tag – screws up navigation for screen readers. It can impact keyboard users too.

So, overall, a common pattern that is now practically an expected user interaction but which has deep flaws and should likely be stamped out. That paradox is an issue. Vikas has done a good job of outlining the current solutions to that problem alongside their respective pros/cons and comes to an interesting conclusion. He argues for semantically marked up HTML segments (good ✔) with linked headlines and "read more" text (though with better copy than "read more"; also good ✔) which is then progressively enhanced with JavaScript to listen to click events within the parent element (interesting). That allows the whole section to be clickable for JS users, whilst providing semantically relevant links as a fallback, and can be further improved using the window.getSelection() method so that users can still select or highlight text without inadvertently "clicking" the card itself (excellent ✔✔). Clever little trick 👍

// Make the whole card clickable, but only trigger the link when the user
// isn't selecting text within it.
const card = document.querySelector(".card");
const mainLink = document.querySelector(".main-link");

card.addEventListener("click", handleClick);

function handleClick(event) {
  // If any text is highlighted, the user was selecting rather than clicking.
  const isTextSelected = window.getSelection().toString();
  if (!isTextSelected) {
    mainLink.click();
  }
}
