
The failed promise of Web Components


Web Components had so much potential to empower HTML to do more, and make web development more accessible to non-programmers and easier for programmers. Remember how exciting it was every time we got new shiny HTML elements that actually do stuff? Remember how exciting it was to be able to do sliders, color pickers, dialogs, disclosure widgets straight in the HTML, without having to include any widget libraries?

The promise of Web Components was that we’d get this convenience, but for a much wider range of HTML elements, developed much faster, as nobody needs to wait for the full spec + implementation process. We’d just include a script, and boom, we have more elements at our disposal!

Or, that was the idea. Somewhere along the way, the space got flooded by JS framework aficionados, who revel in complex APIs, overengineered build processes, and dependency graphs that look like the roots of a banyan tree.

This is what the roots of a Banyan tree look like. Photo by David Stanley on Flickr (CC-BY).

Perusing the components on webcomponents.org fills me with anxiety, and I’m perfectly comfortable writing JS — I write JS for a living! What hope do those who can’t write JS have? Using a custom element from the directory often needs to be preceded by a ritual of npm flugelhorn, import clownshoes, build quux, all completely unapologetically because “here is my truckload of dependencies, yeah, what”. Many steps are even omitted, likely because they are “obvious”. Often, you wade through the maze only to find the component doesn’t work anymore, or is not fit for your purpose.

Besides setup, the main problem is that HTML is not treated with the appropriate respect in the design of these components. They are not designed as closely as possible to standard HTML elements, but expect JS to be written for them to do anything. HTML is simply treated as a shorthand, or worse, as merely a marker to indicate where the element goes in the DOM, with all parameters passed in via JS. I recall a wonderful talk by Jeremy Keith a few years ago about this very phenomenon, where he discussed this e-shop Web components demo by Google, which is the poster child of this practice. These are the entire contents of its <body> element:

<body>
	<shop-app unresolved="">SHOP</shop-app>
	<script src="node_assets/@webcomponents/webcomponentsjs/webcomponents-loader.js"></script>
	<script type="module" src="src/shop-app.js"></script>
	<script>window.performance&&performance.mark&&performance.mark("index.html");</script>
</body>

If this is how Google is leading the way, how can we hope for contributors to design components that follow established HTML conventions?

Jeremy criticized this practice from the aspect of backwards compatibility: when JS is broken or not enabled, or the browser doesn’t support Web Components, the entire website is blank. While this is indeed a serious concern, my primary concern is one of usability: HTML is a lower barrier to entry language. Far more people can write HTML than JS. Even for those who do eventually write JS, it often comes after spending years writing HTML & CSS.

If components are designed in a way that requires JS, this excludes thousands of people from using them. And even for those who can write JS, HTML is often easier: you don’t see many people rolling their own sliders or using JS-based ones once <input type="range"> became widely supported, right?

Even when JS is unavoidable, it’s not black and white. A well designed HTML element can reduce the amount and complexity of JS needed to a minimum. Think of the <dialog> element: it usually does require *some* JS, but it’s usually rather simple JS. Similarly, the <video> element is perfectly usable just by writing HTML, and has a comprehensive JS API for anyone who wants to do fancy custom things.
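For instance, a minimal sketch of what that JS usually amounts to (the element ids here are made up for illustration):

let dialog = document.querySelector("#settings"); // assumes a <dialog id="settings"> in the HTML

document.querySelector("#open").addEventListener("click", () => {
	dialog.showModal(); // modal behavior, focus handling and a ::backdrop, for free
});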

The other day I was looking for a simple, dependency free, tabs component. You know, the canonical example of something that is easy to do with Web Components, the example 50% of tutorials mention. I didn’t even care what it looked like, it was for a testing interface. I just wanted something that is small and works like a normal HTML element. Yet, it proved so hard I ended up writing my own!

Can we fix this?

I’m not sure if this is a design issue, or a documentation issue. Perhaps for many of these web components, there are easier ways to use them. Perhaps there are vanilla web components out there that I just can’t find. Perhaps I’m looking in the wrong place and there is another directory somewhere with different goals and a different target audience.

But if not, and if I’m not alone in feeling this way, we need a directory of web components with strict inclusion criteria:

  • Plug and play. No dependencies, no setup beyond including one <script> tag. If a dependency is absolutely needed (e.g. in a map component it doesn’t make sense to draw your own maps), the component loads it automatically if it’s not already loaded.
  • Syntax and API follow conventions established by built-in HTML elements, and anything that can be done without the component user writing JS is doable without JS, per the W3C principle of least power.
  • Accessible by default via sensible ARIA defaults, just like normal HTML elements.
  • Themeable via ::part(), selective inheritance and custom properties. Very minimal style by default. Normal CSS properties should just “work” to the extent possible.
  • Only one component of a given type in the directory, one that is flexible and extensible, and that is continuously iterated on and improved by the community. Not 30 different sliders and 15 different tabs that users have to wade through. No branding, no silos of “component libraries”. Only elements that are designed as closely as possible to what a browser would implement, in every way the current technology allows.

I would be up for working on this if others feel the same way, since that is not a project for one person to tackle. Who’s with me?

UPDATE: Wow this post blew up! Thank you all for your interest in participating in a potential future effort. I’m currently talking to stakeholders of some of the existing efforts to see if there are any potential collaborations before I go off and create a new one. Follow me on Twitter to hear about the outcome!


Developer priorities throughout their career


I made this chart in the amazing Excalidraw about two weeks ago:

It only took me 10 minutes! Shortly after, my laptop broke down into repeated kernel panics, and it spent about 10 days in service (I was in a remote place when it broke, so it took some time to get it to service). Yesterday, I was finally reunited with it, turned it on, launched Chrome, and saw it again. It gave me a smile, and I realized I never got to post it, so I tweeted this:

The tweet kinda blew up! It seems many, many developers identify with it. A few also disagreed with it, especially with the “Does it actually work?” line. So I figured I should write a bit about the rationale behind it. I originally wrote it in a tweet, but then I realized I should probably post it in a less transient medium, one better suited to longer text.

When somebody starts coding, getting the code to work is already difficult enough, so there is no space for other priorities. Learning to formalize one’s thought to the degree a computer demands, and then serialize this thinking with an unforgiving syntax, is hard. Writing code that works is THE priority, and whether it’s good code is not even a consideration.

For more experienced programmers, whether it works is ephemeral: today it works, tomorrow a commit causes a regression, the day after another commit fixes it (yes, even with TDD. No testsuite gets close to 100% coverage). Whereas readability & maintainability do not fluctuate much. If they are not prioritized from the beginning, they are much harder to accomplish when you already have a large codebase full of technical debt.

Code written by experienced programmers that doesn’t work, can often be fixed with hours or days of debugging. A nontrivial codebase that is not readable can take months or years to rewrite. So one tends to gravitate towards prioritizing what is easier to fix.

The “peak of drought” and other over-abstractions

Many developers identified with the “peak of drought”. Indeed, like other aspects of maintainability, DRY is not even a concern at first. At some point, a programmer learns about the importance of DRY and gradually begins abstracting away duplication. However, you can have too much of a good thing: soon the need to abstract away any duplication becomes all-consuming and leads to absurd, awkward abstractions which actually get in the way and produce needless couplings, often to avoid duplicating very little code, once. In my own “peak of drought” (which lasted far longer than the graph above suggests), I’ve written many useless functions, with parameters that make no sense, just to avoid duplicating a few lines of code once.

Many articles have been written about this phenomenon, so I’m not going to repeat their arguments here. As a programmer accumulates even more experience, they start seeing the downsides of over-abstraction and over-normalization and start favoring a more moderate approach which prioritizes readability over DRY when they are at odds.

A similar thing happens with design patterns too. At some point, a few years in, a developer reads a book or takes a course about design patterns. Soon thereafter, their code becomes so littered with design patterns that it is practically incomprehensible. “When all you have is a hammer, everything looks like a nail”. I have a feeling that Java and Java-like languages are particularly accommodating to this ailment, so this phenomenon tends to proliferate in their codebases. At some point, the developer has to go back to their past code, and they realize themselves that it is unreadable. Eventually, they learn to use design patterns when they are actually useful, and favor readability over design patterns when the two are at odds.

What aspects of your coding practice have changed over the years? How has your perspective shifted? What mistakes of the past did you eventually realize?


Parsel: A tiny, permissive CSS selector parser


I’ve posted before about my work for the Web Almanac this year. To make it easier to calculate the stats about CSS selectors, we looked to use an existing selector parser, but most were too big and/or had dependencies or didn’t account for all selectors we wanted to parse, and we’d need to write our own walk and specificity methods anyway. So I did what I usually do in these cases: I wrote my own!

You can find it here: https://projects.verou.me/parsel/

It not only parses CSS selectors, but also includes methods to walk the AST produced, as well as calculate specificity as an array and convert it to a number for easy comparison.
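To give you a taste, usage looks roughly like this (a sketch from memory; check the docs for the exact import URL and API details):

import * as parsel from "https://projects.verou.me/parsel/dist/parsel.js";

let ast = parsel.parse("#foo > .bar:hover");
parsel.walk(ast, node => console.log(node.type));
console.log(parsel.specificity("#foo > .bar:hover")); // [1, 2, 0]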

It is one of my first libraries released as an ES module, and there are instructions for using it both as a module and as a global, for those who would rather not deal with ES modules yet; convenient as ESM is, I wouldn’t want to exclude those less familiar with modern JS.

Please try it out and report any bugs! We plan to use it for Almanac stats in the next few days, so if you can spot bugs sooner rather than later, you can help that volunteer effort. I’m primarily interested in (realistic) valid selectors that are parsed incorrectly. I’m aware there are many invalid selectors that are parsed weirdly, but that’s not a focus (hence the “permissive” aspect, there are many invalid selectors it won’t throw on, and that’s by design to keep the code small, the logic simple, and the functionality future-proof).

How it works

If you’re just interested in using this selector parser, read no further. This section is about how the parser works, for those interested in this kind of thing. :)

I first started by writing a typical parser, with character-by-character gobbling and different modes, with code somewhat inspired by my familiarity with jsep. I quickly realized that was a more fragile approach for what I wanted to do, and would result in a much larger module. I also missed the ease and flexibility of doing things with regexes.

However, since CSS selectors include strings and parens that can be nested, parsing them with regexes is a fool’s errand. Nested structures are not regular languages, as my CS friends know. You cannot use a regex to find the closing parenthesis that corresponds to an opening parenthesis, since you can have other nested parens inside it. And it gets even more complex when there are other tokens that can nest, such as strings or comments. What if you have an opening paren that contains a string with a closing paren, e.g. ("foo)")? A regex would match the closing paren inside the string. In fact, parsing the language of nested parens (strings like (()(()))) with regexes is one of the typical (futile) exercises in a compilers course. Students struggle to do it because it’s an impossible task, and learn the hard way that not everything can be parsed with regexes.

Unlike a typical programming language with lots of nested structures, however, the language of CSS selectors is more limited. There are only two nested structures: strings and parens, and they only appear in specific types of selectors (namely attribute selectors, pseudo-classes and pseudo-elements). Once we get those out of the way, everything else can be easily parsed by regexes. So I decided to go with a hybrid approach: the selector is first scanned character-by-character, to extract strings and parens. We only extract top-level parens, since anything inside them can be parsed separately (when it’s a selector), or not at all. Each string is replaced by a run of a single placeholder character of the same length, so that character offsets do not change, and the strings themselves are stored in a stack. Same with parens.
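Here is a rough sketch of that extraction step (not Parsel’s actual code; the string regex is simplified):

// Mask each string with a same-length run of a placeholder character,
// so that all other offsets stay valid, and remember the original in a stack
let selector = `[title="a)b"] > .foo`;
let strings = [];
let masked = selector.replace(/(['"])((?:\\.|[^\\])*?)\1/g, (str, quote, content, offset) => {
	strings.push({ str, offset });
	return quote + "§".repeat(content.length) + quote;
});
// masked is now `[title="§§§"] > .foo`; top-level parens get the same treatment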

After that point, this modified selector language is a regular language that can be parsed with regexes. To do so, I follow an approach inspired by the early days of Prism: An object literal of tokens in the order they should be matched in, and a function that tokenizes a string by iteratively matching tokens from an object literal. In fact, this function was taken from an early version of Prism and modified.
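In spirit, the tokenizer looks something like this (a simplified sketch with made-up token regexes, not Prism’s or Parsel’s actual code):

const TOKENS = {
	id: /^#(?<name>[-\w]+)/,
	class: /^\.(?<name>[-\w]+)/,
	combinator: /^\s*(?<symbol>[>+~])\s*|^\s+/,
	type: /^(?<name>[-\w]+|\*)/
};

function tokenize(selector) {
	let tokens = [];

	outer: while (selector) {
		for (let type in TOKENS) {
			let match = selector.match(TOKENS[type]);

			if (match) {
				tokens.push({ type, content: match[0], ...match.groups });
				selector = selector.slice(match[0].length);
				continue outer;
			}
		}

		throw new Error(`Cannot parse near "${selector}"`);
	}

	return tokens;
}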

After we have the list of tokens as a flat array, we can restore strings and parens, and then nest them appropriately to create an AST.

Also note that the token regexes use the new-ish named capture groups feature in ES2018, since it’s now supported pretty widely in terms of market share. For wider support, you can transpile :)


Introspecting CSS via the CSS OM: Get supported properties, shorthands, longhands


For some of the statistics we are going to study for this year’s Web Almanac we may end up needing a list of CSS shorthands and their longhands. Now this is typically done by maintaining a data structure by hand or guessing based on property name structure. But I knew that if we were going to do it by hand, it’s very easy to miss a few of the less popular ones, and the naming rule where shorthands are a prefix of their longhands has failed to get standardized and now has even more exceptions than it used to. And even if we do an incredibly thorough job, next year the data structure will be inaccurate, because CSS and its implementations evolve fast. The browser knows what the shorthands are, surely we should be able to get the information from it …right? Then we could use it directly if this is a client-side library, or in the case of the Almanac, where code needs to be fast because it will run on millions of websites, paste the precomputed result into whatever script we run.

There are essentially two steps for this:

  1. Get a list of all CSS properties
  2. Figure out how to test if a given property is a shorthand and how to get its longhands if so.

I decided to tell this story in the inverse order. In my exploration, I first focused on figuring out shorthands (2), because I had coded getting a list of properties many times before, but since (1) is useful in its own right (and probably in more use cases), I felt it makes more sense to examine that first.

Note: I’m using document.body instead of a dummy element in these examples, because I like to experiment in about:blank, and it’s just there and because this way you can just copy stuff to the console and try it wherever, even right here while reading this post. However, if you use this as part of code that runs on a real website, it goes without saying that you should create and test things on a dummy element instead!

Getting a list of all CSS properties from the browser

In Chrome and Safari, this is as simple as Object.getOwnPropertyNames(document.body.style). However, in Firefox, this doesn’t work. Why is that? To understand this (and how to work around it), we need to dig a bit deeper.

In Chrome and Safari, element.style is a CSSStyleDeclaration instance. In Firefox however, it is a CSS2Properties instance, which inherits from CSSStyleDeclaration. CSS2Properties is an older interface, defined in the DOM 2 Specification, which is now obsolete. In the current relevant specification, CSS2Properties is gone, and has been merged with CSSStyleDeclaration. However, Firefox hasn’t caught up yet.

Firefox on the left, Safari on the right. Chrome behaves like Safari.

Since the properties are on CSSStyleDeclaration, they are not own properties of element.style, so Object.getOwnPropertyNames() fails to return them. However, we can extract the CSSStyleDeclaration instance by using __proto__ or Object.getPrototypeOf(), and then Object.getOwnPropertyNames(Object.getPrototypeOf(document.body.style)) gives us what we want!

So we can combine the two to get a list of properties regardless of browser:

let style = document.body.style;

let properties = Object.getOwnPropertyNames(
	style.hasOwnProperty("background")?
	style : style.__proto__
);

And then, we just drop non-properties, and de-camelCase:

properties = properties
	.filter(p => style[p] === "") // drop functions etc
	.map(prop => { // de-camelCase
		prop = prop.replace(/[A-Z]/g, function($0) {
			return '-' + $0.toLowerCase();
		});

		if (prop.indexOf("webkit-") > -1) {
			prop = "-" + prop;
		}

		return prop;
	});

You can see a codepen with the result here:

https://codepen.io/leaverou/pen/eYJodjb?editors=0010

Testing if a property is a shorthand and getting a list of longhands

The main things to note are:

  • When you set a shorthand on an element’s inline style, you are essentially setting all its longhands.
  • element.style is actually array-like, with numerical properties and .length that gives you the number of properties set on it. This means you can use the spread operator on it:
> document.body.style.background = "red";
> [...document.body.style]
< [
	"background-image",
	"background-position-x",
	"background-position-y",
	"background-size",
	"background-repeat-x",
	"background-repeat-y",
	"background-attachment",
	"background-origin",
	"background-clip",
	"background-color"
]

Interestingly, document.body.style.cssText serializes to background: red and not all the longhands.

There is one exception: The all property. In Chrome, it does not quite behave as a shorthand:

> document.body.style.all = "inherit";
> [...document.body.style]
< ["all"]

Whereas in Safari and Firefox, it actually returns every single property that is not a shorthand!

Firefox and Safari expand all to literally all non-shorthand properties.

While this is interesting from a trivia point of view, it doesn’t actually matter for our use case, since we don’t typically care about all when constructing a list of shorthands, and if we do we can always add or remove it manually.

So, to recap, we can easily get the longhands of a given shorthand:

function getLonghands(property) {
	let style = document.body.style;
	style[property] = "inherit"; // a value that works in every property
	let ret = [...style];
	style.cssText = ""; // clean up
	return ret;
}

Putting the pieces together

You can see how all the pieces fit together (and the output!) in this codepen:

https://codepen.io/leaverou/pen/gOPEJxz?editors=0010
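If you’d rather read it as code, the combined logic boils down to something like this (a sketch, assuming the properties array and the getLonghands() function from above):

let shorthands = {};

for (let property of properties) {
	let longhands = getLonghands(property);

	// Setting a shorthand sets multiple longhands; setting a longhand sets just itself
	if (longhands.length > 1) {
		shorthands[property] = longhands;
	}
}

console.log(shorthands);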

How many of these shorthands did you already know?


Import non-ESM libraries in ES Modules, with client-side vanilla JS


In case you haven’t heard, ECMAScript modules (ESM) are now supported everywhere!

While I do have some gripes with them, it’s too late for any of these things to change, so I’m embracing the good parts and have cautiously started using them in new projects. I do quite like that I can just use import statements and dynamic import() for dependencies with URLs right from my JS, without module loaders, extra <script> tags in my HTML, or hacks with dynamic <script> tags and load events (in fact, Bliss has had a helper for this very thing that I’ve used extensively in older projects). I love that I don’t need any libraries for this, and I can use it client-side, anywhere, even in my codepens.

Once you start using ESM, you realize that most libraries out there are not written in ESM, nor do they include ESM builds. Many are still using globals, and those that target Node.js use CommonJS (CJS). What can we do in that case? Unfortunately, ES Modules are not really designed with any import (pun intended) mechanism for these syntaxes, but, there are some strategies we could employ.

Libraries using globals

Technically, a JS file can be parsed as a module even with no imports or exports. Therefore, almost any library that uses globals is fair game: it can just be imported as a module with no exports! How do we do that?

While you may not see this syntax a lot, you don’t actually need to name anything in the import statement. There is a syntax to import a module entirely for its side effects:

import "url/to/library.js";

This syntax works fine for libraries that use globals, since declaring a global is essentially a side effect, and all modules share the same global scope. For this to work, the imported library needs to satisfy the following conditions:

  • It should declare the global as a property on window (or self), not via var Foo or this. In modules, top-level variables are local to the module scope, and this is undefined, so the last two ways would not work.
  • Its code should not violate strict mode
  • The URL is either same-origin or CORS-enabled. While <script> can run cross-origin resources, import sadly cannot.

Basically, you are running a library as a module that was never written with the intention to be run as a module. Many are written in a way that also works in a module context, but not all. ExploringJS has an excellent summary of the differences between the two. For example, here is a trivial codepen loading jQuery via this method.
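For example, this is all it takes to get a global library like jQuery into a module context (any CORS-enabled URL would do; jsDelivr is just an example):

import "https://cdn.jsdelivr.net/npm/jquery@3/dist/jquery.min.js";

// jQuery declares itself as a property on window, so the global is now available
console.log(window.jQuery.fn.jquery); // logs the version, e.g. "3.5.1"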

Libraries using CJS without dependencies

I dealt with this today, and it’s what prompted this post. I was trying to play around with Rework CSS, a CSS parser used by the HTTPArchive for analyzing CSS in the wild. However, all its code and documentation assumes Node.js. If I could avoid it, I’d really rather not have to make a Node.js app to try this out, or have to dive into module loaders to be able to require CJS modules in the browser. Was there anything I could do to just run this in a codepen, no strings attached?

After a little googling, I found this issue. So there was a JS file I could import and get all the parser functionality. Except …there was one little problem. When you look at the source, it uses module.exports. If you just import that file, you predictably get an error that module is not defined, not to mention there are no ESM exports.

My first thought was to stub module as a global variable, import this as a module, and then read module.exports and give it a proper name:

window.module = {};
import "https://cdn.jsdelivr.net/gh/reworkcss/css@latest/lib/parse/index.js";
console.log(module.exports);

However, I was still getting the error that module was not defined. How was that possible?! They all share the same global context!! *pulls hair out* After some debugging, it dawned on me: static import statements are hoisted; the “module” was getting executed before the code that imports it and stubs module.

Dynamic imports to the rescue! import() is executed exactly where it’s called, and returns a promise. So this actually works:

window.module = {};
import("https://cdn.jsdelivr.net/gh/reworkcss/css@latest/lib/parse/index.js").then(_ => {
	console.log(module.exports);
});

We could even turn it into a wee function, which I cheekily called require():

async function require(path) {
	let _module = window.module;
	window.module = {};
	await import(path);
	let exports = module.exports;
	window.module = _module; // restore global
	return exports;
}

(async () => { // top-level await cannot come soon enough…
	let parse = await require("https://cdn.jsdelivr.net/gh/reworkcss/css@latest/lib/parse/index.js");
	console.log(parse("body { color: red }"));
})();

You can fiddle with this code in a live pen here.

Do note that this technique will only work if the module you’re importing doesn’t import other CJS modules. If it does, you’d need a more elaborate require() function, which is left as an exercise for the reader. Also, just like the previous technique, the code needs to comply with strict mode and not be cross-origin.

A similar technique can be used to load AMD modules via import(), just stub define() and you’re good to go.
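A sketch of what that might look like, for AMD modules without dependencies (requireAMD() is a made-up name):

async function requireAMD(path) {
	let result;

	// AMD modules call define(factory) or define(dependencies, factory)
	window.define = (deps, factory = deps) => result = factory();
	window.define.amd = true; // so UMD bundles take the AMD branch

	await import(path);
	delete window.define; // clean up
	return result;
}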

So, with the require() technique above, I was able to quickly whip up a ReworkCSS playground. You just edit the CSS in CodePen and see the resulting AST, and you can even fork it to share a specific AST with others! :)

https://codepen.io/leaverou/pen/qBbQdGG

Update: CJS with static imports

After this article was posted, a clever hack was pointed out to me on Twitter:

While this works great if you can have multiple separate files, it doesn’t work when you’re e.g. quickly trying out a pen. Data URIs to the rescue! Turns out you can import a module from a data URI!

So let’s adapt our Rework example to use this:

https://codepen.io/leaverou/pen/xxZmWvx
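In case the pen is not accessible, the core idea is something like the following (my reconstruction of the hack, not necessarily the exact code from the tweet or the pen): since static imports execute in declaration order, a tiny data URI module can stub module before the CJS file runs.

// Executes first, because static imports run in order
import "data:text/javascript,window.module={}";
import "https://cdn.jsdelivr.net/gh/reworkcss/css@latest/lib/parse/index.js";

// By the time this module’s own code runs, both imports have executed
let parse = module.exports;
console.log(parse("body { color: red }"));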

Addendum: ESM gripes

Since I was bound to get questions about what my gripes are with ESM, I figured I should mention them pre-emptively.

First off, a little context. Nearly all of the JS I write is for libraries. I write libraries as a hobby, I write libraries as my job, and sometimes I write libraries to help me do my job. My job is usability (HCI) research (and specifically making programming easier), so I’m very sensitive to developer experience issues. I want my libraries to be usable not just by seasoned developers, but by novices too.

ESM has not been designed with novices in mind. It evolved from the CJS/UMD/AMD ecosystem, in which most voices are seasoned developers.

My main gripe with them is how they expect full adoption, and settle for nothing less. There is no way to create a bundle of a library that can be used either traditionally, with a global, or as an ES module. There is also no standard way to import older libraries, or libraries using other module patterns (yes, this very post is about doing that, but essentially these are hacks, and there should be a better way). I understand the benefits of static analysis for imports and exports, but I wish there was a dynamic alternative to export, analogous to the dynamic import().

In terms of migrating to ESM, I also dislike how opinionated they are: strict mode is great, but forcing it doesn’t help people trying to migrate older codebases. Restricting them to same-origin (or CORS-enabled) URLs is also a pain: using <script>s from other domains made it possible to quickly experiment with various libraries, and I would love for that to be true for modules too.

But overall, I’m excited that JS now natively supports a module mechanism, and I expect any library I release in the future to utilize it.


Releasing MaVoice: A free app to vote on repo issues


First off, some news: I agreed to be this year’s CSS content lead for the Web Almanac! One of the first things to do is to flesh out what statistics we should study to answer the question “What is the state of CSS in 2020?”. You can see last year’s chapter to get an idea of what kind of statistics could help answer that question.

Of course, my first thought was “We should involve the community! People might have great ideas of statistics we could study!”. But what should we use to vote on ideas and make them rise to the top?

I wanted to use a repo to manage all this, since I like all the conveniences for managing issues. However, there is not much on Github for voting. You can add 👍 reactions, but not sort by them, and voting itself is tedious: you need to open the comment, click on the reaction, then go back to the list of issues, rinse and repeat. Ideally, I wanted something like UserVoice™️, which lets you vote with one click, and sorts proposals by votes.

And then it dawned on me: I’ll just build a Mavo app on top of the repo issues, that displays them as proposals to be voted on and sorts by 👍 reactions, UserVoice™️-style but without the UserVoice™️ price tag. 😎 In fact, I had started such a Mavo app a couple years ago, and never finished or released it. So, I just dug it up and resurrected it from its ashes! It’s — quite fittingly I think — called MaVoice.

You can set it to any repo via the repo URL parameter, and any label via the labels URL param (defaults to enhancement) to create a customized URL for any repo you want in seconds! For example, here’s the URL for the css-almanac repo, which only displays issues with the label “proposed stat”: https://projects.verou.me/mavoice/?repo=leaverou/css-almanac&labels=proposed%20stat

While this did need some custom JS, unlike other Mavo apps which need none, I’m still pretty happy I could spin up this kind of app with < 100 lines of JS :)

Yes, it’s still rough around the edges, and I’m sure you can find many things that could be improved, but it does the job for now, and PRs are always welcome 🤷🏽‍♀️

The main caveat if you decide to use this for your own repo: because (to my knowledge) the Github API still does not provide a way to sort issues by 👍 reactions, or even reactions in general (in either the v3 REST API or the GraphQL API), issues are instead requested sorted by comment count, and are sorted by 👍 reactions client-side, right before render. Due to API limitations, this API call can only fetch the top 100 results. This means that if you have more than 100 issues to display (i.e. more than 100 open issues with the given label), the sorting could potentially be inaccurate, especially if you have issues with many reactions and few comments.
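The client-side sorting itself is the trivial part; something like this (a sketch, assuming each fetched issue carries the REST API’s reactions summary):

// Sort the fetched issues by 👍 reactions, most upvoted first
issues.sort((a, b) => b.reactions["+1"] - a.reactions["+1"]);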

Another caveat is that because this is basically reactions on Github issues, there is no limit on how many issues someone can vote on. In theory, if they’re a bad actor (or just overexcited), they can just vote on everything. But I suppose that’s an intrinsic problem with using reactions to vote for things, having a UI for it just reveals the existing issue, it doesn’t create it.

Hope you enjoy, and don’t forget to vote on which CSS stats we should study!


The Cicada Principle, revisited with CSS variables


Many of today’s web crafters were not writing CSS at the time Alex Walker’s landmark article The Cicada Principle and Why it Matters to Web Designers was published in 2011. Last I heard of it was in 2016, when it was used in conjunction with blend modes to pseudo-randomize backgrounds even further.

So what is the Cicada Principle and how does it relate to web design in a nutshell? It boils down to: when using repeating elements (tiled backgrounds, different effects on multiple elements etc), using prime numbers for the size of the repeating unit maximizes the appearance of organic randomness. Note that this only works when the parameters you set are independent.

When I recently redesigned my blog, I ended up using a variation of the Cicada principle to pseudo-randomize the angles of code snippets. I didn’t think much of it until I saw this tweet:

This made me think: hey, maybe I should actually write a blog post about the technique. After all, the technique itself is useful for way more than angles on code snippets.

The main idea is simple: You write your main rule using CSS variables, and then use :nth-of-*() rules to set these variables to something different every N items. If you use enough variables, and choose your Ns for them to be prime numbers, you reach a good appearance of pseudo-randomness with relatively small Ns.

In the case of code samples, I only have two different top cuts (going up or going down) and two different bottom cuts (same), which produce 2*2 = 4 different shapes. Since I only had four shapes, I wanted to maximize the pseudo-randomness of their order. A first attempt looks like this:

pre {
	clip-path: polygon(var(--clip-top), var(--clip-bottom));
	--clip-top: 0 0, 100% 2em;
	--clip-bottom: 100% calc(100% - 1.5em), 0 100%;
}

pre:nth-of-type(odd) { --clip-top: 0 2em, 100% 0; }

pre:nth-of-type(3n + 1) { --clip-bottom: 100% 100%, 0 calc(100% - 1.5em); }

This way, the exact sequence of shapes repeats every 2 * 3 = 6 code snippets. Also, the alternative --clip-bottom doesn’t really get the same visibility as the others, being present only 33.333% of the time. However, if we just add one more selector:

pre {
	clip-path: polygon(var(--clip-top), var(--clip-bottom));
	--clip-top: 0 0, 100% 2em;
	--clip-bottom: 100% calc(100% - 1.5em), 0 100%;
}

pre:nth-of-type(odd) { --clip-top: 0 2em, 100% 0; }

pre:nth-of-type(3n + 1), pre:nth-of-type(5n + 1) { --clip-bottom: 100% 100%, 0 calc(100% - 1.5em); }

Now the exact same sequence of shapes repeats every 2 * 3 * 5 = 30 code snippets, probably way more than I will have in any article. And it’s more fair to the alternate --clip-bottom, which now gets 1/3 + 1/5 - 1/15 = 46.67%, which is almost as much as the alternate --clip-top gets!

You can explore this effect in this codepen:

https://codepen.io/leaverou/pen/8541bfd3a42551f8845d668f29596ef9?editors=1100

Or, to better explore how different CSS creates different pseudo-randomness, you can use this content-less version with three variations:

https://codepen.io/leaverou/pen/NWxaPVx

Of course, the illusion of randomness is much better with more shapes, e.g. if we introduce a third type of edge we get 3 * 3 = 9 possible shapes:

https://codepen.io/leaverou/pen/dyGmbJJ?editors=1100

I also used primes 7 and 11, so that the sequence repeats every 77 items. In general, the larger the primes you use, the better the illusion of randomness, but you need to include more selectors, which can get tedious.

Other examples

So this got me thinking: What else would this technique be cool on? Especially if we include more values as well, we can pseudo-randomize the result itself better, and not just the order of only 4 different results.

So I did a few experiments.

Pseudo-randomized color swatches

https://codepen.io/leaverou/pen/NWxXQKX

Pseudo-randomized color swatches, with variables for hue, saturation, and lightness.

And an alternative version:

https://codepen.io/leaverou/pen/RwrLPer

Which one looks more random? Why do you think that is?

Pseudo-randomized border-radius

Admittedly, this one can be done with just longhands, but since I realized this after I had already made it, I figured eh, I may as well include it 🤷🏽‍♀️

https://codepen.io/leaverou/pen/ZEQXOrd

It is also really cool when combined with pseudo-random colors (just hue this time):

https://codepen.io/leaverou/pen/oNbGzeE

Pseudo-randomized snowfall

Lots of things here:

  • Using translate and transform together to animate them separately without resorting to CSS.registerProperty()
  • Pseudo-randomized horizontal offset, animation-delay, font-size
  • Technically we don’t need CSS variables to pseudo-randomize font-size, we can just set the property itself. However, variables enable us to pseudo-randomize it via a multiplier, in order to decouple the base font size from the pseudo-randomness, so we can edit them independently. And then we can use the same multiplier in animation-duration to make smaller snowflakes fall slower!

https://codepen.io/leaverou/pen/YzwrWvV?editors=1100

Conclusions

In general, the larger the primes you use, the better the illusion of randomness. With smaller primes, you will get more variation, but less appearance of randomness.

There are two main ways to use primes to create the illusion of randomness with :nth-child() selectors:

The first way is to set each trait on :nth-child(pn + b) where p is a prime that increases with each value and b is constant for each trait, like so:

:nth-child(3n + 1)  { property1: value11; }
:nth-child(5n + 1)  { property1: value12; }
:nth-child(7n + 1)  { property1: value13; }
:nth-child(11n + 1) { property1: value14; }
...
:nth-child(3n + 2)  { property2: value21; }
:nth-child(5n + 2)  { property2: value22; }
:nth-child(7n + 2)  { property2: value23; }
:nth-child(11n + 2) { property2: value24; }
...

The benefit of this approach is that you can have as few or as many values as you like. The drawback is that because primes are sparse, and become sparser as we go, you will have a lot of “holes” where your base value is applied.

The second way (which is more on par with the original Cicada principle) is to set each trait on :nth-child(pn + b) where p is constant per trait, and b increases with each value:

:nth-child(5n + 1) { property1: value11; }
:nth-child(5n + 2) { property1: value12; }
:nth-child(5n + 3) { property1: value13; }
:nth-child(5n + 4) { property1: value14; }
...
:nth-child(7n + 1) { property2: value21; }
:nth-child(7n + 2) { property2: value22; }
:nth-child(7n + 3) { property2: value23; }
:nth-child(7n + 4) { property2: value24; }
...

This creates a better overall impression of randomness (especially if you order the values in a pseudo-random way too) without “holes”, but is more tedious, as you need as many values as the prime you’re using.

What other cool examples can you think of?


Refactoring optional chaining into a large codebase: lessons learned


Chinese translation by Coink Wang

Now that optional chaining is supported across the board, I decided to finally refactor Mavo to use it (yes, yes, we do provide a transpiled version as well for older browsers, settle down). This is a moment I have long been waiting for, as I think optional chaining is the single most substantial JS syntax improvement since arrow functions and template strings. Yes, I think it’s more significant than async/await, just because of the sheer frequency of code it improves. Property access is literally everywhere.

First off, what is optional chaining, in case you haven’t heard of it before?

You know how you can’t just do foo.bar.baz() without checking if foo exists, and then if foo.bar exists, and then if foo.bar.baz exists because you’ll get an error? So you have to do something awkward like:

if (foo && foo.bar && foo.bar.baz) {
	foo.bar.baz();
}

Or even:

foo && foo.bar && foo.bar.baz && foo.bar.baz();

Some even contort object destructuring to help with this. With optional chaining, you can just do this:

foo?.bar?.baz?.()

It supports normal property access, brackets (foo?.[bar]), and even function invocation (foo?.()). Sweet, right??

Yes, mostly. Indeed, there is SO MUCH code that can be simplified with it, it’s incredible. But there are a few caveats.

Patterns to search for

Suppose you decided to go ahead and refactor your code as well. What to look for?

There is of course the obvious foo && foo.bar that becomes foo?.bar.

There is also the conditional version of it, that we described in the beginning of this article, which uses if() for some or all of the checks in the chain.

There are also a few more patterns.

Ternary

foo? foo.bar : defaultValue

Which can now be written as:

foo?.bar || defaultValue

or, using the other awesome new operator, the nullish coalescing operator:

foo?.bar ?? defaultValue

Array checking

if (foo.length > 3) {
	foo[2]
}

which now becomes:

foo?.[2]

Note that this is no substitute for a real array check, like the one done by Array.isArray(foo). Do not go about replacing proper array checking with duck typing because it’s shorter. We stopped doing that over a decade ago.

Regex match

Forget about things like this:

let match = "#C0FFEE".match(/#([A-Z]+)/i);
let hex = match && match[1];

Or even things like that:

let hex = ("#C0FFEE".match(/#([A-Z]+)/i) || [,])[1];

Now it’s just:

let hex = "#C0FFEE".match(/#([A-Z]+)/i)?.[1];

In our case, I was able to even remove two utility functions and replace their invocations with this.

Feature detection

In simple cases, feature detection can be replaced by ?.. For example:

if (element.prepend) element.prepend(otherElement);

becomes:

element.prepend?.(otherElement);

Don’t overdo it

While it may be tempting to convert code like this:

if (foo) {
	something(foo.bar);
	somethingElse(foo.baz);
	andOneLastThing(foo.yolo);
}

to this:

something(foo?.bar);
somethingElse(foo?.baz);
andOneLastThing(foo?.yolo);

Don’t. You’re essentially having the JS runtime check foo three times instead of one. You may argue these things don’t matter much anymore performance-wise, but it’s the same repetition for the human reading your code: they have to mentally process the check for foo three times instead of one. And if they need to add another statement using property access on foo, they need to add yet another check, instead of just using the conditional that’s already there.

Caveats

You still need to check before assignment

You may be tempted to convert things like:

if (foo && foo.bar) {
	foo.bar.baz = someValue;
}

to:

foo?.bar?.baz = someValue;

Unfortunately, that’s not possible and will error. This was an actual snippet from our codebase:

if (this.bar && this.bar.edit) {
	this.bar.edit.textContent = this._("edit");
}

Which I happily refactored to:

if (this.bar?.edit) {
	this.bar.edit.textContent = this._("edit");
}

All good so far, this works nicely. But then I thought, wait a second… do I need the conditional at all? Maybe I can just do this:

this.bar?.edit?.textContent = this._("edit");

Nope. Uncaught SyntaxError: Invalid left-hand side in assignment. Can’t do that. You still need the conditional. I literally kept doing this, and I’m glad I had ESLint in my editor to warn me about it without having to actually run the code.

It’s very easy to put the ?. in the wrong place or forget some ?.

Note that if you’re refactoring a long chain with optional chaining, you often need to insert multiple ?. after the first one, for every member access that may or may not exist, otherwise you will get errors once the optional chaining returns undefined.

Or, sometimes you may think you do, because you put the ?. in the wrong place.

Take the following real example. I originally refactored this:

this.children[index]? this.children[index].element : this.marker

into this:

this.children?.[index].element ?? this.marker

then got a TypeError: Cannot read property 'element' of undefined. Oops! Then I fixed it by adding an additional ?.:

this.children?.[index]?.element ?? this.marker

This works, but is superfluous, as pointed out in the comments. I just needed to move the ?.:

this.children[index]?.element ?? this.marker

Note that, as pointed out in the comments, you should be careful about replacing array length checks with optional access to the index. This might be bad for performance, because out-of-bounds access on an array de-optimizes the code in V8 (as it has to check the prototype chain for such a property too, not only decide that there is no such index in the array).

It can introduce bugs if you’re not careful

If, like me, you go on a refactoring spree, it’s easy after a certain point to just introduce optional chaining in places where it actually ends up changing what your code does and introducing subtle bugs.

null vs undefined

Possibly the most common pattern is replacing foo && foo.bar with foo?.bar. While in most cases these work equivalently, this is not true for every case. When foo is null, the former returns null, whereas the latter returns undefined. This can cause bugs to creep up in cases where the distinction matters and is probably the most common way to introduce bugs with this type of refactoring.
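A quick illustration of the difference:

let foo = null;

console.log(foo && foo.bar); // null
console.log(foo?.bar);       // undefined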

Equality checks

Be careful about converting code like this:

if (foo && bar && foo.prop1 === bar.prop2) { /* ... */ }

to code like this:

if (foo?.prop1 === bar?.prop2) { /* ... */ }

In the first case, the condition will not be true, unless both foo and bar are truthy. However, in the second case, if both foo and bar are nullish, the conditional will be true, because both operands will return undefined!

The same bug can creep in even if the second operand doesn’t include any optional chaining, as long as it could be undefined you can get unintended matches.

Operator precedence slips

One thing to look out for is that optional chaining has higher precedence than &&. This becomes particularly significant when you replace an expression using && that also involves equality checks, since the (in)equality operators are sandwiched between ?. and &&, having lower precedence than the former and higher than the latter.

if (foo && foo.bar === baz) { /* ... */ }

What is compared with baz here? foo.bar or foo && foo.bar? Since && has lower precedence than ===, it’s as if we had written:

if (foo && (foo.bar === baz)) { /* ... */ }

Note that the conditional cannot ever be executed if foo is falsy. However, once we refactor it to use optional chaining, it is now as if we were comparing (foo && foo.bar) to baz:

if (foo?.bar === baz) { /* ... */ }

An obvious case where the different semantics affect execution is when baz is undefined. In that case, we can enter the conditional when foo is nullish, since then optional chaining will return undefined, which is basically the case we described above. In most other cases this doesn’t make a big difference. It can however be pretty bad when instead of an equality operator, you have an inequality operator, which still has the same precedence. Compare this:

if (foo && foo.bar !== baz) { /* ... */ }

with this:

if (foo?.bar !== baz) { /* ... */ }

Now, we are going to enter the conditional every time foo is nullish, as long as baz is not undefined! The difference is not noticeable in an edge case anymore, but in the average case! 😱

Return statements

Rather obvious after you think about it, but it’s easy to forget return statements in the heat of the moment. You cannot replace things like this:

if (foo && foo.bar) {
	return foo.bar();
}

with:

return foo?.bar?.();

In the first case, you return conditionally, whereas in the second case you return always. This will not introduce any issues if the conditional is the last statement in your function, but it will change the control flow if it’s not.

Sometimes, it can fix bugs too!

Take a look at this code I encountered during my refactoring:

/**
 * Get the current value of a CSS property on an element
 */
getStyle: (element, property) => {
	if (element) {
		var value = getComputedStyle(element).getPropertyValue(property);

		if (value) {
			return value.trim();
		}
	}
},

Can you spot the bug? If value is an empty string (and given the context, it could very well be), the function will return undefined, because an empty string is falsy! Rewriting it to use optional chaining fixes this:

if (element) {
	var value = getComputedStyle(element).getPropertyValue(property);

	return value?.trim();
}

Now, if value is the empty string, it will still return an empty string and it will only return undefined when value is nullish.

Finding usages becomes trickier

This was pointed out by Razvan Caliman on Twitter:

Bottom line

In the end, this refactor made Mavo about 2KB lighter and saved 37 lines of code. It did however make the transpiled version 79 lines and 9KB (!) heavier.

Here is the relevant commit, for your perusal. I tried my best to exercise restraint and not introduce any unrelated refactoring in this commit, so that the diff is chock-full of optional chaining examples. It has 104 additions and 141 deletions, so I’d wager it has about 100 examples of optional chaining in practice. Hope it’s helpful!


Hybrid positioning with CSS variables and max()


Notice how the navigation on the left behaves with respect to scrolling: it’s as if it’s absolutely positioned at first, then becomes fixed once the header scrolls out of the viewport.

One of my side projects these days is a color space agnostic color conversion & manipulation library, which I’m developing together with my husband, Chris Lilley (you can see a sneak peek of its docs above). He brings his color science expertise to the table, and I bring my JS & API design experience, so it’s a great match and I’m really excited about it! (If you’re serious about color and you’re building a tool or demo that would benefit from it, contact me; we need as much early feedback on the API as we can get!)

For the documentation, I wanted to have the page navigation on the side (when there is enough space), right under the header when scrolled all the way to the top, but I wanted it to scroll with the page (as if it was absolutely positioned) until the header is out of view, and then stay at the top for the rest of the scrolling (as if it used fixed positioning).

It sounds very much like a case for position: sticky, doesn’t it? However, an element with position: sticky behaves like it’s relatively positioned when it’s in view and like it’s using position: fixed when it’s scrolled out of view but its container is still in view. What I wanted here was different. I basically wanted position: absolute while the header was in view and position: fixed after. Yes, there are ways I could have contorted position: sticky to do what I wanted, but was there another solution?

In the past, we’d just go straight to JS, slap position: absolute on our element, calculate the offset in a scroll event listener and set a top CSS property on our element. However, this is flimsy and violates separation of concerns, as we now need to modify Javascript to change styling. Pass!

What if instead we had access to the scroll offset in CSS? Would that be sufficient to solve our use case? Let’s find out!

As I pointed out in my Increment article about CSS Variables last month, and in my CSS Variables series of talks a few years ago, we can use JS to set & update CSS variables on the root that describe pure data (mouse position, input values, scroll offset etc), and then use them as-needed throughout our CSS, reaching near-perfect separation of concerns for many common cases. In this case, we write 3 lines of JS to set a --scrolltop variable:

let root = document.documentElement;
document.addEventListener("scroll", evt => {
	root.style.setProperty("--scrolltop", root.scrollTop);
});

Then, we can position our navigation absolutely, and subtract var(--scrolltop) to offset any scroll (11rem is our header height):

#toc {
	position: fixed;
	top: calc(11rem - var(--scrolltop) * 1px);
}

This works up to a certain point, but once scrolltop exceeds the height of the header, top becomes negative and our navigation starts drifting off screen:

Just subtracting --scrolltop essentially implements absolute positioning with position: fixed.

We’ve basically re-implemented absolute positioning with position: fixed, which is not very useful! What we really want is to cap the result of the calculation to 0 so that our navigation always remains visible. Wouldn’t it be great if there was a max-top property, just like max-width, so that we could do this?

One thought might be to change the JS and use Math.max() to cap --scrolltop to a specific number that corresponds to our header height. However, while this would work for this particular case, it means that --scrolltop cannot be used generically anymore, because it’s tailored to our specific use case and does not correspond to the actual scroll offset. Also, this encodes more about styling in the JS than is ideal, since the clamping we need is presentation-related — if our style was different, we may not need it anymore. But how can we do this without resorting to JS?

Thankfully, we recently got implementations for probably the one feature I was pining for the most in CSS, for years: min(), max() and clamp() functions, which bring the power of min/max constraints to any CSS property! And even for width and height, they are strictly more powerful than min/max-* because you can have any number of minimums and maximums, whereas the min/max-* properties limit you to only one.

While browser compatibility is actually pretty good, we can’t just use it with no fallback, since this is one of the features where lack of support can be destructive. We will provide a fallback in our base style and use @supports to conditionally override it:

#toc {
	position: fixed;
	top: 11em;
}

@supports (top: max(1em, 1px)) {
	#toc {
		top: max(0em, 11rem - var(--scrolltop) * 1px);
	}
}

Aaand that was it, this gives us the result we wanted!

And because --scrolltop is sufficiently generic, we can re-use it anywhere in our CSS where we need access to the scroll offset. I’ve actually used exactly the same --scrolltop-setting JS code in my blog, to keep the gradient centerpoint on my logo while maintaining a fixed background attachment, so that various elements can use the same background and have it appear continuous, i.e. not affected by their own background positioning area:

The website header and the post header are actually different elements. The background appears continuous because it uses background-attachment: fixed, and the --scrolltop variable is used to emulate background-attachment: scroll while still using the viewport as the background positioning area for both backgrounds.

Appendix: Why didn’t we just use the cascade?

You might wonder, why do we even need @supports? Why not use the cascade, like we’ve always done to provide fallbacks for values without sufficiently universal support? I.e., why not just do this:

#toc {
	position: fixed;
	top: 11em;
	top: max(0em, 11rem - var(--scrolltop) * 1px);
}

The reason is that when you use CSS variables, this does not work as expected. The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

So, what would happen if we went this route and max() was not supported? Once the browser realizes that the second value is invalid due to using an unknown function, it will make the property invalid at computed value time, which essentially equates to the initial keyword, and for the top property, the initial value is 0. This would mean your navigation would overlap the header when scrolled close to the top, which is terrible!


New decade, new theme


It has been almost a decade since this blog last saw a redesign.

This blog’s theme 2011 - 2020. RIP!

In these 9 years, my life changed dramatically. I joined and left W3C, joined the CSS WG, went to MIT for a PhD, published a book, got married, had a baby, among other things. I designed dozens of websites for dozens of projects, but this theme remained constant, with probably a hasty tweak here and there but nothing more than that. Even its mobile version was a few quick media queries to make it palatable on mobile.

To put this into perspective, when I designed that theme:

  • CSS gradients were still cutting edge
  • We were still using browser prefixes all over the place
  • RSS was still a thing that websites advertised
  • Skeuomorphism was all the rage
  • Websites were desktop first, and often desktop-only.
  • Opera was a browser we tested in.
  • IE8 was the latest IE version. It didn’t support SVG, gradients, border-radius, shadows, web fonts (except .eot), transforms, <video>, <audio>, <canvas>
  • We were still hacking layout with floats, clearfix and overflow: hidden

Over the course of these years, I kept saying “I need to update my website’s theme”, but never got around to it; there was always something higher priority.

The straw that broke the camel’s back came this Monday. I came up with a nice CSS tip on another website I was working on, and realized I was hesitating to blog about it because I was embarrassed by how my website looked. This is it, I thought. If it has gotten so bad that I avoid blogging because I don’t want people to be reminded of how old my website looks, I need to get my shit together and fix this.

My plan was to design something entirely from scratch, like I had done the previous time (the previous theme used a blank HTML5 starter theme as its only starting point). However, when I previewed the new WordPress default theme (Twenty Twenty), I fell in love, especially with its typography: it used a very Helvetica-esque variable font as its heading typeface, and Hoefler Text for body text. 😍

It would surely be very convenient to be able to adapt an existing theme, but on the other hand, isn’t it embarrassing to be known for CSS and use the default theme or something close to it?

In the end, I kept the things I liked about it and it certainly still looks a lot like Twenty Twenty, but I think I’ve made enough tweaks that it’s also very Lea. And of course there are animated conic gradients in it, because duh. 😂

Do keep in mind that this is just a day’s work, so it will be rough around the edges and still very much a work in progress. Let me know about any issues you find in the comments!

PS: Yes, yes, I will eventually get around to enforcing https://!


Today's Javascript, from an outsider's perspective

3 min read 0 comments Report broken page

Today I tried to help a friend, a great computer scientist but not a JS person, use a JS module he found on Github. Since my day job for the past 6 years has been doing usability research & teaching at MIT, I couldn’t help but cringe at what a slog this was. Lo and behold: a pile of unnecessary error conditions, cryptic errors, and lack of proper feedback. And I don’t feel I did a good job communicating the frustration he went through in the hour or so before he gave up.

It went a bit like this…

Note: Names of packages and people have been changed to protect their identity. I’ve also omitted a few issues he faced that were too specific to the package at hand. Some of the errors are reconstructed from memory, so let me know if I got anything wrong!

John: Hey, I want to try out this algorithm I found on Github, it says to use import functionName from packageName and then call functionName(arguments). Seems simple enough! I don’t really need a UI, so I’m gonna use Node!

Lea: Sure, Node seems appropriate for this!

John runs npm install packageName --save, as recommended by the package’s README.
John runs node index.js.

Node:

Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
SyntaxError: Cannot use import statement outside a module

John: But I don’t have a package.json…
Lea: Run npm init, it will generate it for you!

John runs npm init, goes through the wizard, and manually adds "type": "module" to the generated package.json.
John runs node index.js.

Node:

SyntaxError: Cannot use import statement outside a module

Oddly, the error was thrown from an internal module of the project this time. WAT?!

Lea: Ok, screw this, just run it in a browser, it’s an ES6 module and it’s just a pure JS algorithm that doesn’t use any Node APIs, it should work.

John makes a simple index.html with a <script type="module" src="index.js">.
John loads index.html in a browser.

Nothing in the console. Nada. Crickets. 🦗

Lea: Oh, you need to adjust your module path to import packageName. Node does special resolution based on node_modules; now that you’re in a browser, you need to specify an explicit path yourself.

John looks at his filesystem, but there is no node_modules directory.

Lea: Oh, you ran npm install before you had a package.json, that’s probably it! Try it again!

John runs npm install packageName --save again

John: Oh yeah, there is a node_modules now!

John desperately looks in node_modules to find the entry point.
John edits his index.js accordingly and reloads index.html.

Firefox:

Incorrect MIME type: text/html

Lea: Oh, you’re in file://! Dude, what are you doing these days without a localhost? Javascript is severely restricted in file:// today.

John: But why do I… ok fine, I’m going to start a localhost.

John starts localhost, visits his index.html under http://localhost:80

Firefox:

Incorrect MIME type: text/html

John: Sigh. Do I need to configure my localhost to serve JS files with a text/javascript MIME type?
Lea: What? No! It knows this. Um… look at the Network tab. I suspect it can’t find your module, so it’s returning an HTML page for the 404, and then it complains because the MIME type of the error page is not text/javascript.

John looks at node_modules again and corrects the path. Turns out VS Code collapses folders with only one subfolder, which is why we hadn’t noticed.

FWIW I do think this is a good usability improvement on VS Code’s part, as it improves efficiency, but they need to make it more visible that this is what has happened.

Firefox:

SyntaxError: missing ) after formal parameters

Lea: What? That’s coming from the package source, it’s not your fault. I don’t understand… can we look at this line?

John clicks at line throwing the error

Lea: Oh my goodness. This is not JavaScript, it’s TypeScript!! With a .js extension!!
John: I just wanted to run one line of code to test this algorithm… 😭😭😭

John gives up. Concludes never to touch Node, npm, or ES6 modules with a barge pole.

The End.

Note that John is a computer scientist who knows a fair bit about the Web: he had Node & npm installed, he knew what MIME types are, he could start a localhost when needed. What hope do actual novices have?


LCH colors in CSS: what, why, and how?

7 min read 0 comments Report broken page

I was always interested in color science. In 2014, I gave a talk about CSS Color 4, called “The Chroma Zone”, at various conferences around the world. Even before that, in 2009, I wrote a color picker that used a hidden Java applet to support ICC color profiles and do CMYK properly, a first on the Web at the time (to my knowledge). I never released it, but it sparked this angry rant.

Color is also how I originally met my now husband, Chris Lilley: In my first CSS WG meeting in 2012, he approached me to ask a question about CSS and Greek, and once he introduced himself I said “You’re Chris Lilley, the color expert?!? I have questions for you!”. I later discovered that he had done even more cool things (he was a co-author of PNG and started SVG 🤯), but at the time, I only knew of him as “the W3C color expert”; that’s how much into color I was (I got my color questions answered much later, in 2015, when we actually got together).

My interest in color science was renewed in 2019, after I became co-editor of CSS Color 5, with the goal of fleshing out my color modification proposal, which aims to allow arbitrary tweaking of color channels to create color variations, and combine it with Una’s color modification proposal. LCH colors in CSS is something I’m very excited about, and I strongly believe designers would be outraged we don’t have them yet if they knew more about them.

What is LCH?

CSS Color 4 defines lch() colors, among other things, and as of recently, all major browsers have started implementing them or are seriously considering it:

LCH is a color space that has several advantages over the RGB/HSL colors we’re familiar with in CSS. In fact, I’d go as far as to call it a game-changer, and here’s why.

1. We actually get access to about 50% more colors.

This is huge. Currently, every CSS color we can specify, is defined to be in the sRGB color space. This was more than sufficient a few years ago, since all but professional monitors had gamuts smaller than sRGB. However, that’s not true any more. Today, the gamut (range of possible colors displayed) of most monitors is closer to P3, which has a 50% larger volume than sRGB. CSS right now cannot access these colors at all. Let me repeat: We have no access to one third of the colors in most modern monitors. And these are not just any colors, but the most vivid colors the screen can display. Our websites are washed out because monitor hardware evolved faster than CSS specs and browser implementations.

Gamut volume of sRGB vs P3

2. LCH (and Lab) is perceptually uniform

In LCH, the same numerical change in coordinates produces the same perceptual color difference. This property of a color space is called “perceptual uniformity”. RGB or HSL are not perceptually uniform. A very illustrative example is the following [example source]:

Both the colors in the first row, as well as the colors in the second row, only differ by 20 degrees in hue. Is the perceptual difference between them equal?

3. LCH lightness actually means something

In HSL, lightness is meaningless. Colors can have the same lightness value, with wildly different perceptual lightness. My favorite examples are yellow and blue. Believe it or not, both have the same HSL lightness!

Both of these colors have a lightness of 50%, but they are most certainly not equally light. What does HSL lightness actually mean then?
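For concreteness, here is that pair in CSS (the exact values are my illustration, not necessarily the swatches pictured):

/* Identical HSL lightness, wildly different perceived lightness */
.yellow { background: hsl(60, 100%, 50%); }  /* #ff0 */
.blue   { background: hsl(240, 100%, 50%); } /* #00f */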

You might argue that at least lightness means something for constant hue and saturation, i.e. for adjustments within the same color. It is true that we do get a lighter color if we increase the HSL lightness and a darker one if we decrease it, but it’s not necessarily the same color:

Both of these have the same hue and saturation, but do they really look like darker and lighter variants of the same color?

With LCH, any colors with the same lightness are equally perceptually light, and any colors with the same chroma are equally perceptually saturated.

How does LCH work?

LCH stands for “Lightness Chroma Hue”. The parameters loosely correspond to HSL’s; however, there are a few crucial differences:

The hue angles don’t fully correspond to HSL’s hues. E.g. 0 is not red but more of a magenta, and 180 is not turquoise but more of a bluish green, and the two are exactly complementary.

Note how these colors, while wildly different in hue, perceptually have the same lightness.

In HSL, saturation is a neat 0-100 percentage, since it’s a simple transformation of RGB into polar coordinates. In LCH however, Chroma is theoretically unbounded. LCH (like Lab) is designed to be able to represent the entire spectrum of human vision, and not all of these colors can be displayed by a screen, even a P3 screen. Not only is the maximum chroma different depending on screen gamut, it’s actually different per color.

This may be better understood with an example. For simplicity, assume you have a screen whose gamut exactly matches the sRGB color space (for comparison, the screen of a 2013 MacBook Air was about 60% of sRGB, although most modern screens are about 150% of sRGB, as discussed above). For L=50 H=180 (the cyan above), the maximum Chroma is only 35! For L=50 H=0 (the magenta above), Chroma can go up to 77 without exceeding the boundaries of sRGB. For L=50 H=320 (the purple above), it can go up to 108!

While the lack of boundaries can be somewhat unsettling (in people and in color spaces), don’t worry: if you specify a color that is not displayable in a given monitor, it will be scaled down so that it becomes visible while preserving its essence. After all, that’s not new: before monitors got gamuts wider than sRGB, this is what was happening with regular CSS colors when they were displayed in monitors with gamuts smaller than sRGB.
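Incidentally, once lch() starts shipping, the usual cascade-fallback pattern will just work with it, because a declaration using an unknown function is discarded at parse time, leaving the earlier one intact. A sketch (the selector is mine; the two values specify the same color, per the FAQ below):

.error {
	color: rgb(255 0 0); /* sRGB fallback for browsers without lch() */
	color: lch(54.292% 106.839 40.853); /* the same color, device-independent */
}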

An LCH color picker

Hopefully, you are now somewhat excited about LCH, but how to visualize it?

I actually made this a while ago, primarily to help me, Chris, Adam, and Una wrap our heads around LCH sufficiently to edit CSS Color 5. Knowing the theory is one thing; playing with sliders and seeing the result is quite another. I even bought a domain, css.land, to host similar demos eventually. We used it a fair bit, and Chris got me to add a few features too, but I never really posted about it, so it was only accessible to us and anybody who noticed its Github repo.

Why not just use an existing LCH color picker?

  • The conversion code for this is written by Chris, and he was confident the math is at least intended to be correct (i.e. if it’s wrong it’s a bug in the code, not a gap in understanding)
  • The Chroma is not 0-100 like in some color pickers we found
  • We wanted to allow inputting arbitrary CSS colors (the “Import…” button above)
  • We wanted to allow inputting decimals (the sliders only do integers, but the black number inputs allow any number)
  • I wanted to be able to store colors, and see how they interpolate.
  • We wanted to be able to see whether the LCH color was within sRGB, P3, (or Rec.2020, an even larger color space).
  • We wanted alpha
  • And lastly, because it’s fun! Especially since it’s implemented with Mavo (and a little bit of JS, this is not a pure Mavo HTML demo).

Recently, Chris posted it in a whatwg/html issue thread and many people discovered it, which nudged me to finally post about it. So, here it is: css.land/lch

FAQ

Based on the questions I got after I posted this article, I should clarify a few common misconceptions.

“You said that these colors are not implemented yet, but I see them in your article”

All of the colors displayed in this article are within the sRGB gamut, exactly because we can’t display those outside it yet. sRGB is a color space, not a syntax. E.g. rgb(255 0 0) and lch(54.292% 106.839 40.853) specify the same color.

“How does the LCH picker display colors outside sRGB?”

It doesn’t. Neither does any other on the Web (to my knowledge). The color picker is implemented with web technologies, and therefore suffers from the same issues. It has to scale them down to display something similar, that is within sRGB (it used to just clip the RGB components to 0-100%, but thanks to this PR from Tab it now uses a far superior algorithm: it just reduces the Chroma until the color is within sRGB). This is why increasing the Chroma doesn’t produce a brighter color beyond a certain point: because that color cannot be displayed with CSS right now.
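The gamut-mapping approach it uses is simple enough to sketch: keep Lightness and Hue fixed, and search for the largest displayable Chroma. A rough illustration, assuming hypothetical helpers lchToRGB() and inSRGB() (not real APIs):

// Reduce Chroma until the color fits in sRGB, keeping L and H intact
function forceIntoSRGB(l, c, h) {
	let lo = 0, hi = c;

	// Binary search on Chroma
	while (hi - lo > .01) {
		let mid = (lo + hi) / 2;

		if (inSRGB(lchToRGB(l, mid, h))) {
			lo = mid; // still displayable, try more chroma
		}
		else {
			hi = mid; // out of gamut, back off
		}
	}

	return lchToRGB(l, lo, h);
}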

“I’ve noticed that Firefox displays more vivid colors than Chrome and Safari, is that related?”

Firefox does not implement the spec that restricts CSS colors to sRGB. Instead, it just throws the raw RGB coordinates on the screen, so e.g. rgb(100% 0% 0%) is the brightest red your screen can display. While this may seem like a superior solution, it’s incredibly inconsistent: specifying a color is approximate at best, since every screen displays it differently. By restricting CSS colors to a known color space (sRGB) we gained device independence. LCH and Lab are also device independent as they are based on actual measured color.

What about color(display-p3 r g b)? Safari supports that since 2017!

I was notified of this after I posted this article. I was aware Safari was implementing this syntax a while ago, but somehow missed that they shipped it. In fact, WebKit published an article about this syntax last month! How exciting!

color(colorspaceid params) is another syntax added by CSS Color 4 and is the swiss army knife of color management in CSS: in its full glory it allows specifying an ICC color profile and colors from it (e.g. you want real CMYK colors on a webpage? You want Pantone? With color profiles, you can do that too!). It also supports some predefined color spaces, of which display-p3 is one. So, for example, color(display-p3 0 1 0) gives us the brightest green in the P3 color space. You can use this test case to test support: you’ll see red if color() is not supported and bright green if it is.
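Here is what progressive enhancement with it looks like (a sketch; the selector is mine, and the P3 value is the bright green example from above):

.vivid {
	background: rgb(0 255 0); /* brightest sRGB green, the fallback */
	background: color(display-p3 0 1 0); /* brightest P3 green, where supported */
}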

Exciting as it may be (and I should tweak the color picker to use it when available!), do note that it only addresses the first issue I mentioned: getting to all gamut colors. However, since it’s RGB-based, it still suffers from the other issues of RGB. It is not perceptually uniform, and is difficult to create variants (lighter or darker, more or less vivid etc) by tweaking its parameters.

Furthermore, it’s a short-term solution. It works now, because screens that can display a wider gamut than P3 are rare. Once hardware advances again, color(display-p3 ...) will have the same problem as sRGB colors have today. LCH and Lab are device independent, and can represent the entire gamut of human vision so they will work regardless of how hardware advances.

How does LCH relate to the Lab color space that I know from Photoshop and other applications?

LCH is the same color space as Lab, just viewed differently! Take a look at the following diagram that I made for my students:

The L in Lab and LCH is exactly the same (perceptual Lightness). For a given lightness L, a color has cartesian coordinates (L, a, b) in Lab and polar coordinates (L, C, H) in LCH. Chroma is just the length of the line from 0 to the point (a, b), and Hue is the angle of that ray. Therefore, the formulae to convert Lab to LCH are trivial one-liners: C is sqrt(a² + b²) and H is atan(b/a) (in practice atan2(b, a), which also handles a = 0 and picks the right quadrant). atan() is just the inverse of tan(), i.e. tan(H) = b/a.
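In code, the conversion really is just those one-liners. A JS sketch (function names are mine):

// Lab → LCH: polar coordinates of the point (a, b), hue in degrees
function labToLCH([l, a, b]) {
	let c = Math.sqrt(a ** 2 + b ** 2);
	let h = (Math.atan2(b, a) * 180 / Math.PI + 360) % 360;
	return [l, c, h];
}

// LCH → Lab: back to cartesian coordinates
function lchToLab([l, c, h]) {
	let rad = h * Math.PI / 180;
	return [l, c * Math.cos(rad), c * Math.sin(rad)];
}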


Issue closing stats for any repo

6 min read 0 comments Report broken page

tl;dr: If you just want to quickly get stats for a repo, you can find the app here. The rest of this post explains how it’s built with Mavo HTML, CSS, and 0 lines of JS. Or, if you’d prefer, you can just View Source — it’s all there!

The finished app we’re going to make, find it at https://projects.verou.me/issue-closing

One of the cool things about Mavo is how it enables one to quickly build apps that utilize the Github API. At some point I wanted to compute stats about how quickly (or rather, slowly…) Github issues are closed in the Mavo repo. And what better way to build this than a Mavo app? It was fairly easy to build a prototype for that.

Displaying a list of the last 100 closed issues and the time it took to close them

To render the last 100 closed issues in the Mavo app, I first looked up the appropriate API call in Github’s API documentation, then used it in the mv-source attribute on the Mavo root, i.e. the element with mv-app that encompasses everything in my app:

<div mv-app="issueClosing"
     mv-source="https://api.github.com/repos/mavoweb/mavo/issues?state=closed&sort=updated&per_page=100"
     mv-mode="read">
	<!-- app here -->
</div>

Then, I displayed a list of these issues with:

<div mv-multiple property="issue">
	<a class="issue-number" href="https://github.com/mavoweb/mavo/issues/[number]" title="[title]" target="_blank">#[number]</a>
	took [closed_at - created_at] ms
</div>

See the Pen Step 1 - Issue Closing App Tutorial by Lea Verou (@leaverou) on CodePen.

This would work, but the way it displays results is not very user friendly (e.g. “#542 took 149627000 ms”). We need to display the result in a more readable way.

We can use the duration() function to display a readable duration such as “1 day”:

<div mv-multiple property="issue">
	<a class="issue-number" href="https://github.com/mavoweb/mavo/issues/[number]" title="[title]" target="_blank">#[number]</a>
	took [duration(closed_at - created_at)]
</div>

See the Pen Step 2 - Issue Closing App Tutorial by Lea Verou (@leaverou) on CodePen.

Displaying aggregate statistics

However, a list of issues is not very easy to process. What’s the overall picture? Does this repo close issues fast or not? Time for some statistics! We want to calculate average, median, minimum and maximum issue closing time. To calculate these statistics, we need to use the times we have displayed in the previous step.

First, we need to give our calculation a name, so we can refer to its value in expressions:

<span property="timeToClose">[duration(closed_at - created_at)]</span>

However, as it currently stands, the value of this property is text (e.g. “1 day”, “2 months” etc). We cannot compute averages and medians on text! We need the property value to be a number. We can hide the actual raw value in an attribute and use the nicely formatted value as the visible content of the element, like so (we use the content attribute here but you can use any, e.g. a data-* attribute would work just as well):

<span property="timeToClose" mv-attribute="content" content="[closed_at - created_at]">[duration(timeToClose)]</span>

Note: There is a data formatting feature in the works which would simplify this kind of thing by allowing you to separate the raw value and its presentation without having to use separate attributes for them.

We can also add a class to color it red, green, or black depending on whether the time is longer than a month, shorter than a day, or in-between respectively:

<span property="timeToClose" mv-attribute="content" content="[closed_at - created_at]" class="[if(timeToClose > month(), 'long', if (timeToClose < day(), 'short'))]">[duration(timeToClose)]</span>

Now, on to calculating our statistics! We take advantage of the fact that timeToClose outside the issue collection gives us all the times, so we can compute aggregates on them. Therefore, the stats we want to calculate are simply average(timeToClose), median(timeToClose), min(timeToClose), and max(timeToClose). We put all these in a definition list:

<dl>
	<dt>Median</dt>
	<dd>[duration(median(timeToClose))]</dd>
	<dt>Average</dt>
	<dd>[duration(average(timeToClose))]</dd>
	<dt>Slowest</dt>
	<dd>[duration(max(timeToClose))]</dd>
	<dt>Fastest</dt>
	<dd>[duration(min(timeToClose))]</dd>
</dl>

See the Pen Step 3 - Issue Closing App Tutorial by Lea Verou (@leaverou) on CodePen.

Making repo a variable

Now that all the functionality of my app was in place, I realized this could be useful for more repos as well. Why not make the repo a property that can be changed? So I added an input for specifying the repo: <input property="repo" mv-default="mavoweb/mavo"> and then replaced mavoweb/mavo with [repo] everywhere else, i.e. mv-source became https://api.github.com/repos/[repo]/issues?state=closed&sort=updated&per_page=100.

Avoid reload on every keystroke

This worked, but since Mavo properties are reactive, it kept trying to reload data with every single keystroke, which was annoying and wasteful. Therefore, I needed to do a bit more work so that there is a definite action that submits the change. Enter Mavo Actions!

I created two properties: repo for the actual repo and repoInput for the input. repoInput still changes on every keystroke, but it’s repo that is actually being used in the app. I wrapped the input with a <form> and added an action on the form that does this (mv-action="set(repo, repoInput)"). I also added a submit button. Since Mavo actions on forms are triggered when the form is submitted, it doesn’t matter if I press Enter on the input, or click the Submit button, both work.
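Put together, the relevant markup looks something like this (a sketch reconstructed from the description above, not copied verbatim from the app):

<!-- repoInput changes on every keystroke; repo only changes on submit -->
<form mv-action="set(repo, repoInput)">
	<input property="repoInput" mv-default="mavoweb/mavo">
	<button>Show stats</button>
</form>
<meta property="repo" mv-default="mavoweb/mavo">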

Setting the repo via a URL parameter

Eventually I also wanted to be able to set the repo from the URL, so I also added a hidden repoDefault property: <meta property="repoDefault" content="[url('repo') or 'mavoweb/mavo']">, and then changed the hardcoded mv-default="mavoweb/mavo" to mv-default="[repoDefault]" on both the repo and the repoInput properties. That way one can link to stats for a specific repo, e.g. https://projects.verou.me/issue-closing/?repo=prismjs/prism

Why a repoDefault property and not just mv-default="[url('repo') or 'mavoweb/mavo']"? Just keeping things DRY and avoiding having to repeat the same expression twice.

See the Pen Step 5 - Issue Closing App Tutorial by Lea Verou (@leaverou) on CodePen.

Filtering by label

At some point I wondered: What would the issue closing times be if we only counted bugs? What if we only counted enhancements? Surely these would be different: When looking at issue closing times for a repo, one primarily cares about how fast bugs are fixed, not how quickly every random feature suggestion is implemented. Wouldn’t it be cool to also have a label filter?

For that, I added a series of radio buttons:

Show:
<label><input type="radio" property="labels" name="labels" checked value=""> All</label>
<label><input type="radio" name="labels" value="bug"> Bugs only</label>
<label><input type="radio" name="labels" value="enhancement"> Enhancements only</label>

Then, I modified mv-source to also use this value in its API call: mv-source="https://api.github.com/repos/[repo]/issues?state=closed&sort=updated&labels=[labels]&per_page=100".

Note that when turning radio buttons into a Mavo property you only use the property attribute on the first one. This is important because Mavo has special handling when you use the property attribute with the same name multiple times in the same group, which we don’t want here. You can add the property attribute on any of the radio buttons, it doesn’t have to be the first. Just make sure it’s only one of them.

Then I became greedy: Why not allow filtering by custom labels too? So I added another radio with an input:

Show:
<label><input type="radio" property="labels" name="labels" checked value=""> All</label>
<label><input type="radio" name="labels" value="bug"> Bugs only</label>
<label><input type="radio" name="labels" value="enhancement"> Enhancements only</label>
<label><input type="radio" name="labels" value="[customLabel]"> Label <input property="customLabel"></label>

Note that since this is a text field, when the last value is selected, we’d have the same problem as we did with the repo input: Every keystroke would fire a new request. We can solve this in the same way as we solved it for the repo property, by having an intermediate property and only setting labels when the form is actually submitted:

Show:
<label><input type="radio" property="labelFilter" name="labels" checked value=""> All</label>
<label><input type="radio" name="labels" value="bug"> Bugs only</label>
<label><input type="radio" name="labels" value="enhancement"> Enhancements only</label>
<label><input type="radio" name="labels" value="[customLabel]"> Label <input property="customLabel"></label>
<meta property="labels" content="">

Adding label autocomplete

Since we now allow filtering by a custom label, wouldn’t it be cool to allow autocomplete too? HTML allows us to offer autocomplete in our forms via <datalist> and we can use Mavo to populate the contents!

First, we add a <datalist> and link it with our custom label input, like so:

<label><input type="radio" name="labels" value="[customLabel]"> Label <input property="customLabel" list="label-suggestions"></label>
<datalist id="label-suggestions">
</datalist>

Currently, our suggestion list is empty. How do we populate it with the labels that have actually been used in this repo? Looking at the API documentation, we see that each returned issue has a labels field containing an array of label objects, each of which has a name field with the textual label. This means that if we use issue.labels.name in Mavo outside of the issues collection, we get a list with all of these values, which we can then use to populate our <datalist> by passing it on to mv-value, which allows us to create dynamic collections:

<label><input type="radio" name="labels" value="[customLabel]"> Label <input property="customLabel" list="label-suggestions"></label>
<datalist id="label-suggestions">
	<option mv-multiple mv-value="unique(issue.labels.name)"></option>
</datalist>

Note that we also used unique() to eliminate duplicates, since otherwise each label would appear as many times as it is used.

See the Pen Issue Closing App - Tutorial Step 6 by Lea Verou (@leaverou) on CodePen.

Adding a visual summary graphic

Now that we got the functionality down, we can be a little playful and add some visual flourish. How about a bar chart that summarizes the proportion of long vs short vs normal closing times? We start by setting the CSS variables we are going to need for our graphic, i.e. the number of issues in each category:

<summary style="--short: [count(timeToClose < day())]; --long: [count(timeToClose > month())]; --total: [count(issue)];">
	Based on [count(issue)] most recently updated issues
</summary>

Then, we draw our graphic:

summary::before {
	content: "";
	position: fixed;
	bottom: 0;
	left: 0;
	right: 0;
	z-index: 1;
	height: 5px;
	background: linear-gradient(to right, var(--short-color) calc(var(--short, 0) / var(--total) * 100%), hsl(220, 10%, 75%) 0, hsl(220, 10%, 75%) calc(100% - var(--long, 0) / var(--total) * 100%), var(--long-color) 0) bottom / auto 100% no-repeat border-box;
}

Now, wouldn’t it be cool to also show a small pie chart next to the heading, when conic gradients are supported and we can draw one? The color stops would be the same, so we define a --summary-stops variable on summary, to reuse them across both gradients:

summary {
	--summary-stops: var(--short-color) calc(var(--short, 0) / var(--total) * 100%), hsl(220, 10%, 75%) 0, hsl(220, 10%, 75%) calc(100% - var(--long, 0) / var(--total) * 100%), var(--long-color) 0;
}

summary::before {
	content: "";
	position: fixed;
	bottom: 0;
	left: 0;
	right: 0;
	z-index: 1;
	height: 5px;
	background: linear-gradient(to right, var(--summary-stops)) bottom / auto 100% no-repeat border-box;
}

@supports (background: conic-gradient(red, red)) {
	summary::after {
		content: "";
		display: inline-block;
		vertical-align: middle;
		width: 1.2em;
		height: 1.2em;
		margin-left: .3em;
		border-radius: 50%;
		background: conic-gradient(var(--summary-stops));
	}
}

See the Pen Issue Closing App - Tutorial Step 7 by Lea Verou (@leaverou) on CodePen.


Utility: Convert SVG path to all-relative or all-absolute commands

2 min read 0 comments Report broken page

I like hand-editing my SVGs. Often I will create an initial version in Illustrator, and then export and continue with hand editing. Not only is it a bit of a meditative experience that satisfies my obsessive-compulsive tendencies to clean up the code, it also has actual practical benefits when you need to make certain changes or introduce animation. Some things are easier to do in a GUI, and others are easier to do in code, and I like having the flexibility to pick which one fits my use case best.

However, there was always a thing that was a PITA: modifying paths. Usually if I need anything more complicated than just moving them, I’d do it in Illustrator, but even moving them can be painful if they are not all relative (and no, I don’t like introducing pointless transforms for things that should really be in the d attribute).

For example, this was today’s result of trying to move an exported “a” glyph from Raleway Bold by modifying its first M command:

Trying to move a path by changing its first M command when not all of its commands are relative.

This happened because even though most commands were exported as relative, several were not and I had not noticed. I have no idea why some commands were exported as absolute, it seems kind of random.

When all commands are relative, moving a path is as simple as manipulating its initial M command and the rest just adapts, because that’s the whole point of relative commands. Same with manipulating every other part of the path, the rest of it just adapts. It’s beautiful. I honestly have no idea why anybody would favor absolute commands. And yet, googling “convert SVG path to relative” yields one result, whereas there are plenty of results about converting paths to absolute. No idea why that’s even desirable, ever (?).
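To make the difference concrete, here are two equivalent versions of a tiny triangle (illustrative paths, not from the glyph above). The first can be moved anywhere by editing just its initial M; to move the second, every single coordinate must be updated:

M 30 40 l 10 0 l 0 10 z
M 30 40 L 40 40 L 40 50 Z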

I remembered I had come across that result before. Thankfully, there’s also a fiddle to go with it, which I had used in the past to convert my path. I love it, it uses this library called Snap.svg which supports converting paths to relative as a just-add-water utility method. However, that fiddle is a quick demo to answer a StackOverflow question, so the UI is not super pleasant to use (there is no UI: you just manipulate the path in the SVG and wait for the fiddle to run). This time around, I needed to convert multiple paths, so I needed a more efficient UI.

So I created this demo which is also based on Snap.svg, but has a slightly more efficient UI. You just paste your path in a textarea and it both displays it and instantly converts it to all-relative and all-absolute paths (also using Snap.svg). It also displays both your original path and the two converted ones, so you can make sure they still look the same. It even follows a pending-delete pattern so you can just focus on the output textarea and hit Cmd-C in one fell swoop.
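The core of the conversion is essentially a couple of lines (a sketch, assuming Snap.svg is loaded; the sample path is mine):

// Snap.path.toRelative() / toAbsolute() return segment arrays
// whose toString() serializes them back into path data
let d = "M 30 40 L 40 40 L 40 50 Z";
let relative = String(Snap.path.toRelative(d)); // all-relative version
let absolute = String(Snap.path.toAbsolute(d)); // all-absolute version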

I wasn’t sure about posting this or just tweeting it (it literally took less than 30 minutes — including this blog post — and I tend to only post small things like that on my twitter), but I thought it might be useful to others googling the same thing, so I may as well post it here for posterity. Enjoy!


ReferenceError: x is not defined?

2 min read 0 comments Report broken page

Today for a bit of code I was writing, I needed to be able to distinguish “x is not defined” ReferenceErrors from any other error within a try...catch block and handle them differently.

Now I know what you’re thinking. Trying to figure out exactly what kind of error you have programmatically is a well-known fool’s errand. If you express a desire to engage in such a risky endeavor, any JS veteran in sight will shake their head in remembrance of their early days, but have the wisdom to refrain from trying to convince you otherwise; they know that failing will teach you what it taught them when they were young and foolish enough to attempt such a thing.

Despite writing JS for 13 years, today I was feeling adventurous. “But what if, just this once, I could get it to work? It’s a pretty standard error message! What if I tested in so many browsers that I would be confident I’ve covered all cases?”

I made a simple page on my server that just prints out the error message written in a way that would maximize older browser coverage. Armed with that, I started visiting every browser in my BrowserStack account. Here are my findings for anyone interested:

  • Chrome (all versions, including mobile): x is not defined
  • Firefox (all versions, including mobile): x is not defined
  • Safari 4-12: Can't find variable: x
  • Edge (16 - 18): 'x' is not defined
  • Edge 15: 'x' is undefined
  • IE6-11 and Windows Phone IE: 'x' is undefined
  • UC Browser (all versions): x is not defined
  • Samsung browser (all versions): x is not defined
  • Opera Mini and Pre-Chromium Opera: Undefined variable: x

Even if you, dear reader, are wise enough to never try and detect this error, I thought you may find the variety (or lack thereof) above interesting.

I also did a little bit of testing with a different UI language (I picked Greek), but it didn’t seem to localize the error messages. If you’re using a different UI language, please open the page above and if the message is not in English, let me know!

In the end, I decided to go ahead with it, and time will tell if it was foolish to do so. For anyone wishing to also dabble in such dangerous waters, this was my checking code:

if (e instanceof ReferenceError
    && /is (not |un)defined$|^(Can't find|Undefined) variable/.test(e.message)) {
    // do stuff
}

Found any cases I missed? Or perhaps you found a different ReferenceError that would erroneously match the regex above? Let me know in the comments!

One thing that’s important to note is that even if the code above is bulletproof for today’s browser landscape, the more developers do things like this, the harder it is for browser makers to improve these error messages. However, until there’s a better way to do this, pointing fingers at developers for wanting to do perfectly reasonable things is not the solution. This is why HTTP has status codes, so we don’t have to string match on the text. Imagine having to string match “Not Found” to figure out if a request was found or not! Similarly, many other technologies have error codes, so that different types of errors can be distinguished without resorting to flimsy string matching. I’m hoping that one day JS will also have a way to distinguish errors more precisely than the general error categories of today, and we’ll look back on posts like this with a nostalgic smile, glad we don’t have to do crap like this ever again.