Blog


Refresh CSS Bookmarklet v2


Almost 11 years ago, Paul Irish posted this brilliant bookmarklet to refresh all stylesheets on the current page. Despite the number of tools, plugins, and live-reload servers that have been released over the years, I’ve always kept coming back to it. It’s incredibly elegant in its simplicity. It works everywhere: locally or remotely, on any domain and protocol. No need to set up anything, no need to alter my process in any way, no need to use a specific local server or tool. It quietly just accepts your preferences and workflow instead of trying to change them. Sure, it doesn’t automatically detect changes and reload, but in most cases, I don’t want it to.

I’ve been using this almost daily for a decade and there’s always been one thing that bothered me: It doesn’t work with iframes. If the stylesheet you’re editing is inside an iframe, tough luck. If you can open the frame in a new tab, that works, but often that’s nontrivial (e.g. the frame is dynamically generated). After dealing with this issue today once more, I thought “this is just a few lines of JS, why not fix it?”.

The first step was to get Paul’s code in a readable format, since the bookmarklet is heavily minified:

(function() {
	var links = document.getElementsByTagName('link');
	for (var i = 0; i < links.length; i++) {
		var link = links[i];
		if (link.rel.toLowerCase().match(/stylesheet/) && link.href) {
			var href = link.href.replace(/(&|%5C?)forceReload=\d+/, '');
			link.href = href + (href.match(/\?/) ? '&' : '?') + 'forceReload=' + (new Date().valueOf())
		}
	}
})()

Once I did that, it became obvious to me that this could be shortened a lot; the last 10 years have been wonderful for JS evolution!

(()=>{
	for (let link of Array.from(document.querySelectorAll("link[rel=stylesheet][href]"))) {
		var href = new URL(link.href, location);
		href.searchParams.set("forceReload", Date.now());
		link.href = href;
	}
})()

Sure, this reduces browser support a bit (most notably it excludes IE11), but since this is a local development tool, that’s not such a big problem.

Now, let’s extend this to support iframes as well:

{
	let $$ = (selector, root = document) => Array.from(root.querySelectorAll(selector));

	let refresh = (document) => {
		for (let link of $$("link[rel=stylesheet][href]", document)) {
			let href = new URL(link.href);
			href.searchParams.set("forceReload", Date.now());
			link.href = href;
		}

		for (let iframe of $$("iframe", document)) {
			iframe.contentDocument && refresh(iframe.contentDocument);
		}
	};

	refresh();
}

That’s it! Do keep in mind that this will not work with cross-origin iframes (their contentDocument is null, so they are simply skipped), but then again, you probably don’t expect it to in that case.

Now all we need to do to turn it into a bookmarklet is to prepend it with javascript: and minify the code. Here you go:

[Refresh CSS](javascript:{let e=(e,t=document)=>Array.from(t.querySelectorAll(e)),t=r=>{for(let t of e('link[rel=stylesheet][href]',r)){let e=new URL(t.href);e.searchParams.set('forceReload',Date.now()),t.href=e}for(let o of e('iframe',r))o.contentDocument&&t(o.contentDocument)};t()})

Hope this is useful to someone else as well :) Any improvements are always welcome!

Credits

  • Paul Irish, for the original bookmarklet
  • Maurício Kishi, for making the iframe traversal recursive (comment)

Easy Dynamic Regular Expressions with Tagged Template Literals and Proxies


If you use regular expressions a lot, you probably also create them from existing strings that you first need to escape in case they contain special characters that need to be matched literally, like $ or +. Usually, a helper function is defined (hopefully this will soon change as RegExp.escape() is coming!) that basically looks like this:

var escapeRegExp = s => s.replace(/[-\/\\^$*+?.()|[\]{}]/g, "\\$&");

and then regexps are created by escaping the static strings and concatenating them with the rest of the regex like this:

var regex = RegExp(escapeRegExp(start) + '([\\S\\s]+?)' + escapeRegExp(end), "gi")

or, with ES6 template literals, like this:

var regex = RegExp(`${escapeRegExp(start)}([\\S\\s]+?)${escapeRegExp(end)}`, "gi")

(In case you were wondering, this regex is taken directly from the Mavo source code)

Isn’t this horribly verbose? What if we could define a regex with just a template literal (`${start}([\\S\\s]+?)${end}` for the regex above) and it just worked? Well, it turns out we can! If you haven’t seen tagged template literals before, I suggest you click that MDN link and read up. Basically, you can prepend an ES6 template literal with a reference to a function and the function accepts the static parts of the string and the dynamic parts separately, allowing you to operate on them!
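
To make this concrete, here is a tiny sketch of what a tag function receives (the tag name is made up):

// A hypothetical tag that just exposes its arguments:
let inspect = (strings, ...values) => ({strings, values});

inspect`Hello ${40 + 2} world`;
// → {strings: ["Hello ", " world"], values: [42]}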

So, what if we defined such a function that returns a RegExp object and escapes the dynamic parts? Let’s try to do that:

var regexp = (strings, ...values) => {
	return RegExp(strings[0] + values.map((v, i) => escapeRegExp(v) + strings[i+1]).join(""))
};

And now we can try it in the console:

> regexp`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//

Won’t somebody, please, think of the flags?!

This is all fine and dandy, but how do we specify flags? Note that the original regexp had flags (“gi”). The tagged template syntax doesn’t really allow us to pass in any additional parameters. However, thanks to functions being first-class objects in JS, we can have a function that takes the flags in as parameters and returns a function that generates regexps with the right flags:

var regexp = flags => {
	return (strings, ...values) => {
		var pattern = strings[0] + values.map((v, i) => escapeRegExp(v) + strings[i+1]).join("")
		return RegExp(pattern, flags);
	}
};

And now we can try it in the console:

> regexp("gi")`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//gi

This works nicely, but now even if we don’t want any flags, we can’t use the nice simple syntax we had earlier; we need to include a pair of empty parens:

> regexp()`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//

Can we have our cake and eat it too? Can we have the short parenthesis-less syntax when we have no flags, and still be able to specify flags? Of course! We can check the arguments we have and either return a function, or call the function. If our function is used as a tag, the first argument will be an array (thanks Roman!). If we’re expecting it to return a function, the first argument would be a string: the flags. So, let’s try this approach!

var regexp = (...args) => {
	var ret = (flags, strings, ...values) => {
		var pattern = strings[0] + values.map((v, i) => escapeRegExp(v) + strings[i+1]).join("");
		return RegExp(pattern, flags);
	};

	if (Array.isArray(args[0])) {
		// Used as a template tag
		return ret("", ...args);
	}

	return ret.bind(undefined, args[0]);
};

And now we can try it in the console and verify that both syntaxes work:

> regexp("gi")`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//gi
> regexp`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//

Even nicer syntax, with proxies!

Is there a better way? If this is not super critical for performance, we could use proxies to return the right function with a template tag like regexp.gi, no parentheses or quotes needed and the code is actually shorter too:

var _regexp = (flags, strings, ...values) => {
	var pattern = strings[0] + values.map((v, i) => escapeRegExp(v) + strings[i+1]).join("");
	return RegExp(pattern, flags);
};
var regexp = new Proxy(_regexp.bind(undefined, ""), {
	get: (t, property) => _regexp.bind(undefined, property)
});

And now we can try it in the console, both with and without flags!

> regexp.gi`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//gi
> regexp`^${'/*'}([\\S\\s]+?)${'*/'}`;
< /^\/\*([\S\s]+?)\*\//

That’s some beauty right there!

PS: If you liked this, take a look at this mini-library by Dr. Axel Rauschmayer that uses a similar idea and does more than just escape strings (different syntax for flags though: they become part of the template string, like in PHP).


Never forget type="button" on generated buttons!


I just dealt with one of the weirdest bugs and thought you may find it amusing too.

In one of my slides for my upcoming talk “Even More CSS Secrets”, I had a Mavo app on a <form>, and the app included a collection to quickly create a UI to manage pairs of values for something I wanted to calculate in one of my live demos. A Mavo collection is a repeatable HTML element with affordances to add items, delete items, move items etc. Many of these affordances are implemented via <button> elements generated by Mavo.

Normally, hitting Enter inside a text field within a collection adds a new item, as one would expect. However, I noticed that when I hit Enter inside any item, not only was no item added, but an existing item was deleted, with the usual “Item deleted [Undo]” UI and everything!

At first I thought it was a bug with the part of Mavo code that adds items on Enter and deletes empty items on backspace, so I commented that out. Nope, still happening. I was already very puzzled, since I couldn’t remember any other part of the codebase that deletes items in response to keyboard events.

So, I added breakpoints on the delete(item) method of Mavo.Collection to inspect the call stack and see how execution got there. Turned out, it got there via a normal …click event on the actual delete button! What fresh hell was this? I never clicked any delete button!

And then it dawned on me: <button> elements with no type attribute set are submit buttons by default! Quote from the spec: “The missing value default and invalid value default are the Submit Button state.” This makes no difference in most cases, UNLESS you’re inside a form. The delete button of the first item had been turned into the de facto default submit button just because it was the first button in that form and it had no type!

I also remembered that regardless of how you submit a form (e.g. by hitting Enter on a single-line text field) it also fires a click event on the default submit button, because people often listen to that instead of the form’s submit event. Ironically, I was cancelling the form’s submit event in my code, but it still generated that fake click event, making it even harder to track down as no form submission was actually happening.

The solution was of course to go through every part of the Mavo code that generates buttons and add type="button" to them. I would recommend this to everyone who is writing libraries that will operate in unfamiliar HTML code. Most of the time a type-less <button> will work just fine, but when it doesn’t, things get really weird.
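
In code, the fix boils down to something like this (a simplified sketch, not Mavo’s actual code; item is a stand-in for whatever element the button is appended to):

// Always set an explicit type on generated buttons,
// so they can never become a form's default submit button.
let deleteButton = document.createElement("button");
deleteButton.type = "button"; // without this, the default type is "submit"
deleteButton.textContent = "Delete";
item.append(deleteButton);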


Responsive tables, revisited


Screenshot showing a table with 3 rows turning into 3 sets of key-value pairs

Many people have explored responsive tables. The usual idea is turning the table into key-value pairs so that cells become rows and there are only 2 columns total, which fit in any screen. However, this means table headers need to now be repeated for every row. The current ways to do that are:

  • Duplicating content in CSS or via a data-* attribute, using generated content to insert it before every row.
  • Using a definition list which naturally has duplicated <dt>s, displaying it as a table in larger screens.

A few techniques that go in an entirely different direction are:

  • Hiding non-essential columns in smaller screens
  • Showing a thumbnail of the table instead, and display the full table on click
  • Displaying a graph in smaller screens (e.g. a pie chart)

I think the key-value display is probably best because it works for any kind of table, and provides the same information. So I wondered, is there any way to create it without duplicating content either in the markup or in the CSS? After a bit of thinking, I came up with two ways, each with their own pros and cons.

Both techniques are very similar: They set table elements to display: block; so that they behave like normal elements and duplicate the <thead> contents in two different ways:

  1. Using text-shadow and creating one shadow for each row
  2. Using the element() function to duplicate the entire thead, styles and all.

Each method has its own pros and cons, but the following pros and cons apply to both:

  • Pros: Works with normal table markup
  • Cons:
    • All but the first set of headers are unselectable (since neither shadows nor element()-generated images are real text). However, keep in mind that the techniques based on generated content also have this problem — and for all rows. Also, the markup that screen readers see is the same as for a normal table. However, it’s still a pretty serious flaw and makes this a hack. I’m looking forward to seeing more viable solutions.
    • Only works if none of the table cells wrap, since it depends on table cells being aligned with their headers.

Using text-shadow to copy text to other rows

  • Additional Pros: Works in every browser
  • Additional Cons: The maximum number of rows needs to be hardcoded in the CSS, since each row needs another text shadow on the <thead>. However, you can specify more shadows than needed, since overflow: hidden on the table prevents extra ones from showing up. Also, the number of columns needs to be specified in the CSS (the --cols variable).

Demo

Using element() to copy the entire <thead> to other rows

  • Additional Cons: element() is currently only supported in Firefox :(

Demo
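
For the curious, the gist of this variant looks roughly like the following. This is a rough sketch rather than the actual demo code, and the id and sizes are assumptions:

/* Firefox-only: linearize the table and paint a live copy of the <thead>
   (assumed to have id="thead") behind every row */
table, thead, tbody, tr, th, td { display: block; }

tbody tr {
	padding-top: 2em; /* leave room for the duplicated header */
	background: -moz-element(#thead) no-repeat top left;
}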


Quicker Storify export


If you’ve used Storify, you probably know by now it’s closing down soon. They have an FAQ up to help people with the transition which explains that to export your content you need to…

  1. Log in to Storify at www.storify.com.
  2. Mouse over the story that contains content you would like to export and select “View.”
  3. Click on the ellipses icon and select “Export.”
  4. Choose your preferred format for download.
  5. To save your content and linked assets in HTML, select File > Save as > Web Page, Complete. To export your content to PDF, select Export to HTML > File > Print > Save as PDF.
  6. Repeat the process for each story whose content you would like to preserve.

So I started doing that. I wasn’t sure if JSON or HTML would be more useful to me, so I was exporting both. It was painful. Each export required 3 page loads, and they were slow. After 5 stories, I started wondering if there’s a quicker way. I’m a programmer after all, my job is to automate things. However, I also didn’t want to spend too long on that, since I only had 40 stories, so the effort should definitely not take longer than it would have taken to manually export the remaining 35 stories.

I noticed that the HTML and JSON URLs for each story could actually be recreated by using the slug of the Story URL:

https://storify.com/LeaVerou/**css-variables-var-subtitle-cssconf-asia**.html
https://api.storify.com/v1/stories/LeaVerou/**css-variables-var-subtitle-cssconf-asia**

The bold part is the only thing that changes. I tried that with a different slug and it worked just fine. Bingo! So I could write a quick console script to get all these URLs and open them in separate tabs and then all I have to do is go through each tab and hit Cmd + S to save. It’s not perfect, but it took minutes to write and saved A LOT of time.

Following is the script I wrote. Go to your profile page, click “Show more” and scroll until all your stories are visible, then paste it into the console. You will probably need to do it twice: once to disable popup blocking because the browser rightfully freaks out when you try to open this many tabs from script, and once to actually open all of them.

var slugs = [... new Set($$(".story-tile").map(e => e.dataset.path))]
slugs.forEach(s => {
	open(`https://api.storify.com/v1/stories/${s}`);
	open(`https://storify.com/${s}.html`);
});

This gets a list of all unique (hence the [...new Set(array)]) slugs and opens both the JSON and HTML export URLs in new tabs. Then you can go through each tab and save.

You will notice that the browser becomes REALLY SLOW when you open this many tabs (in my case 41 stories × 2 tabs each = 82 tabs!), so you may want to do it in steps, by using array.slice(). Also, if you don’t want to save the HTML version, the whole process becomes much faster: the HTML pages took AGES to load and kept freezing the browser.
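
For example, something along these lines opens the tabs in batches of 10 (the batch size is arbitrary; adjust the indices for each run):

// First batch: stories 0–9. Re-run with slice(10, 20), slice(20, 30), … for the rest.
slugs.slice(0, 10).forEach(s => {
	open(`https://api.storify.com/v1/stories/${s}`);
	open(`https://storify.com/${s}.html`);
});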

Hope this helps!

PS: If you’re content with your data being held hostage by a different company, you could also use this tool by Wakelet. I’ve done that too, but I also wanted to own my data as well.


Free Intro to Web Development slides (with demos)


This semester I’m teaching 6.813 User Interface Design and Implementation at MIT, as an instructor.

Many of the assignments of this course include Web development and the course included two 2-hour labs to introduce students to these technologies. Since I’m involved this year, I decided to make new labs from scratch and increase the number of labs from 2 to 3. Even so, trying to decide what to include and what not to from the entirety of web development in only 6 hours was really hard, and I still feel I failed to include important bits.

Since many people asked me for the slides on Twitter, I decided to share them. You will find my slides here and an outline of what is covered is here. These slides were also the supporting material the students had on their own laptops and often they had to do exercises in them.

The audience for these slides is beginners in Web development but technical otherwise — people who understand OOP, trees, data structures and have experience in at least one C-like programming language.

Some demos will not make sense as they were live coded, but I included notes (top right or bottom left corner) about what was explained in each part.

Use the arrow keys to navigate. It is also quite big, so do not open this on a phone or on a data plan.

If the “Open in new Tab” button opens a tab which then closes immediately, disable Adblock.

From some quick testing, they seem to work in Firefox and Safari, but in class we were using an updated version of Chrome (since we were talking about developer tools, we needed to all have the same UI), so that’s the browser I’d recommend since they were tested much more there.

I’m sharing them as-is in case someone else finds them useful. Please do not bug me if they don’t work in your setup, or if you do not find them useful or whatever. If they don’t tickle your fancy, move on. I cannot provide any support or fixes. If you want to help fix the issue, you can submit a pull request, but be warned: most of the code was written under extreme time pressure (I had to produce this 6 times as fast as I usually need to make talks), so it’s not my finest moment.

If you want to use them to teach other people that’s fine as long as it’s a non-profit event.



Different remote and local resource URLs, with Service Workers!


I often run into this issue where I want a different URL remotely and a different one locally so I can test my local changes to a library. Sure, relative URLs work a lot of the time, but are often not an option. Developing Mavo is yet another example of this: since Mavo is in a separate repo from mavo.io (its website) as well as test.mavo.io (the testsuite), I can’t just have relative URLs to it that also work remotely. I’ve been encountering this problem way too frequently pretty much since I started in web development. In this post, I will describe all the solutions and workarounds I’ve used over time for this, including the one I’m currently using for Mavo: Service Workers!

The manual one

Probably the first solution everyone tries is doing it manually: every time you need to test, you just change the URL to a relative, local one and try to remember to change it back before committing. I still use this in some cases, since us developers are a lazy bunch. Usually I have both and use my editor’s (un)commenting shortcut for enabling one or the other:

<script src="https://get.mavo.io/mavo.js"></script>
<!--<script src="../mavo/dist/mavo.js"></script>-->

However, as you might imagine, this approach has several problems, the worst of which is that more than once I forgot and committed with the active script being the local one, which resulted in the remote website totally breaking. Also, it’s clunky, especially when it’s two resources whose URLs you need to change.

The JS one

This idea uses a bit of JS to load the remote URL when the local one fails to load.

<script src="http://localhost:8000/mavo/dist/mavo.js" onerror="this.src='https://get.mavo.io/mavo.js'"></script>

This works, and doesn’t introduce any cognitive overhead for the developer, but the obvious drawback is that it slows the page down, since a request needs to be sent and fail before the real resource can be loaded. Slowing things down for the local case might be acceptable, even though undesirable, but slowing things down on the remote website for the sake of debugging is completely unacceptable. Furthermore, this exposes the debugging URLs in the HTML source, which gives me a bit of a knee-jerk reaction.

A variation of this approach that doesn’t have the performance problem is:

<script>
{
 let host = location.hostname == "localhost"? 'http://localhost:8000/dist' : 'https://get.mavo.io';
 document.write(`<script src="${host}/mavo.js"></scr` + `ipt>`);
}
</script>

This works fine, but it’s very clunky, especially if you have to do this multiple times (e.g. on multiple testing files or demos).

The build tools one

The solution I was following up to a few months ago was to use gulp to copy over the files needed, and then link to my local copies via a relative URL. I would also have a gulp.watch() that monitors changes to the original files and copies them over again:

gulp.task("copy", function() {
	gulp.src(["../mavo/dist/**/*"])
		.pipe(gulp.dest("mavo"));
});

gulp.task("watch", function() {
	gulp.watch(["../mavo/dist/*"], ["copy"]);
});

This worked but I had to remember to run gulp watch every time I started working on each project. Often I forgot, which was a huge source of confusion as to why my changes had no effect. Also, it meant I had copies of Mavo lying around on every repo that uses it and had to manually update them by running gulp, which was suboptimal.

The Service Worker one

In April, after being fed up with having to deal with this problem for over a decade, I posted a tweet:

@MylesBorins replied (though his tweet seems to have disappeared) and suggested that perhaps Service Workers could help. In case you’ve been hiding under a rock for the past couple of years, Service Workers are a new(ish) API that allows you to intercept requests from your website to the network and do whatever you want with them. They are mostly promoted for creating good offline experiences, though they can do a lot more.

I was looking for an excuse to dabble in Service Workers for a while, and this was a great one. Furthermore, browser support doesn’t really matter in this case because the Service Worker is only used locally.

The code I ended up with looks like this in a small script called sitewide.js, which, as you may imagine, is used sitewide:

(function() {

if (location.hostname !== "localhost") { return; }

if (!self.document) {
	// We're in a service worker! Oh man, we're living in the future!
	self.addEventListener("fetch", function(evt) {
		var url = evt.request.url;

		if (url.indexOf("get.mavo.io/mavo.") > -1 || url.indexOf("dev.mavo.io/dist/mavo.") > -1) {
			var newURL = url.replace(/.+?(get|dev)\.mavo\.io\/(dist\/)?/, "http://localhost:8000/dist/") + "?" + Date.now();

			var response = fetch(new Request(newURL), evt.request)
				.then(r => r.status < 400? r : Promise.reject())
				// If that fails, return the original request
				.catch(err => fetch(evt.request));

			evt.respondWith(response);
		}
	});

	return;
}

if ("serviceWorker" in navigator) {
	// Register this script as a service worker
	addEventListener("load", function() {
		navigator.serviceWorker.register("sitewide.js");
	});
}

})();

So far, this has worked more nicely than any of the aforementioned solutions and allows me to just use the normal remote URLs in my HTML. However, it’s not without its own caveats:

  • Service Workers are only activated on a cached pageload, so the first one uses the remote URL. This is almost never a problem locally anyway though, so I’m not concerned about it much.
  • The same origin restriction that service workers have is fairly annoying. So, I have to copy the service worker script on every repo I want to use this on, I cannot just link to it.
  • It needs to be explained to new contributors since most aren’t familiar with Service Workers and how they work at all.

Currently the URLs for both local and remote are baked into the code, but it’s easy to imagine a mini-library that takes care of it as long as you include the local URL as a parameter (e.g. https://get.mavo.io/mavo.js?local=http://localhost:8000/dist/mavo.js).
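
The service worker part of such a mini-library could look roughly like this. This is only a sketch of the idea, not something that exists today:

// If a request's URL carries a ?local= parameter and we are running on localhost,
// try the local URL first and fall back to the original request if it fails.
self.addEventListener("fetch", evt => {
	var url = new URL(evt.request.url);
	var local = url.searchParams.get("local");

	if (local && location.hostname === "localhost") {
		evt.respondWith(fetch(local).catch(() => fetch(evt.request)));
	}
});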

Other solutions

Solutions I didn’t test (but you may want to) include:

  • .htaccess redirect based on domain, suggested by @codepo8. I don’t use Apache locally, so that’s of no use to me.
  • Symbolic links, suggested by @aleschmidx
  • User scripts (e.g. Greasemonkey), suggested by @WebManWlkg
  • Modifying the hosts file, suggested by @LukeBrowell (that works if you don’t need access to the remote URL at all)

Is there any other solution? What do you do?


Introducing Mavo: Create web apps entirely by writing HTML!


Today I finally released the project I’ve been working on for the last two years at MIT CSAIL: An HTML-based language for creating (many kinds of) web applications without programming or a server backend. It’s named Mavo after my late mother (Maria Verou), and is Open Source of course (yes, getting paid to work on open source is exactly as fun as it sounds).

It was the scariest release of my life, and I had been postponing it for months. I kept feeling Mavo was not quite there yet: maybe I should add this one feature first, oh and this other one, oh and we can’t release without this one, surely! Eventually I realized that what I was doing had more to do with postponing the anxiety and less to do with Mavo reaching a stage where it can be released. After all, “if you’re not at least a bit embarrassed by what you release, you waited too long”, right?

The infamous Ship It Squirrel

So, there it is, I hope you find it useful. Read the post on Smashing Magazine or just head straight to mavo.io, read the docs, and play with the demos!

And do let me know what you make with it, no matter how small and trivial you may think it is, I would love to see it!


HTML APIs: What they are and how to design a good one


I’m a strong believer in lowering the barrier of what it takes to create rich, interactive experiences and improving the user experience of programming. I wrote an article over at Smashing Magazine aimed at JavaScript library developers that want their libraries to be usable via HTML (i.e. without writing any JavaScript). Sounds interesting? Read it here.


Duoload: Simplest website load comparison tool, ever


Today I needed a quick tool to compare the loading progression (not just loading time, but also incremental rendering) of two websites, one remote and one in my localhost. Just have them side by side and see how they load relative to each other. Maybe even record the result on video and study it afterwards. That’s all. No special features, no analysis, no stats.

So I did what I always do when I need help finding a tool, I asked Twitter:

Most suggested complicated tools, some non-free and most unlikely to work on local URLs. I thought damn, what I need is a very simple thing! I could code this in 5 minutes! So I did and here it is, in case someone else finds it useful! The (minuscule amount of) code is of course on Github.

Duoload

Of course it goes without saying that this is probably a bit inaccurate. Do not use it for mission-critical performance comparisons.

Credits for the name Duoload to Chris Lilley who came up with it in the 1 minute deadline I gave him :P


Resolve Promises externally with this one weird trick


Those of us who use promises heavily, have often wished there was a Promise.prototype.resolve() method, that would force an existing Promise to resolve. However, for architectural reasons (throw safety), there is no such thing and probably never will be. Therefore, a Promise can only resolve or reject by calling the respective methods in its constructor:

var promise = new Promise((resolve, reject) => {
	if (something) {
		resolve();
	}
	else {
		reject();
	}
});

However, often it is not desirable to put your entire code inside a Promise constructor so you could resolve or reject it at any point. In my latest case today, I wanted a Promise that resolved when a tree was created, so that third-party components could defer code execution until the tree was ready. However, given that plugins could be running on any hook, that meant wrapping a ton of code with the Promise constructor, which was obviously a no-go. I had come across this problem before and usually gave up and created a Promise around all the necessary code. However, this time my aversion to what this would produce got me to think even harder. What could I do to call resolve() asynchronously from outside the Promise?

A custom event? Nah, too slow for my purposes, why involve the DOM when it’s not needed?

Another Promise? Nah, that just transfers the problem.

A setInterval to repeatedly check if the tree is created? OMG, I can’t believe you just thought that, Lea, ewwww, gross!

Getters and setters? Hmmm, maybe that could work! If the setter is inside the Promise constructor, then I can resolve the Promise by just setting a property!

My first iteration looked like this:

this.treeBuilt = new Promise((resolve, reject) => {
	Object.defineProperty(this, "_treeBuilt", {
		set: value => {
			if (value) {
				resolve();
			}
		}
	});
});

// Many, many lines below…

this._treeBuilt = true;

However, it really bothered me that I had to define 2 properties when I only needed one. I could of course do some cleanup and delete them after the promise is resolved, but the fact that at some point in time these useless properties existed will still haunt me, and I’m sure the more OCD-prone of you know exactly what I mean. Can I do it with just one property? Turns out I can!

The main idea is realizing that the getter and the setter could be doing completely unrelated tasks. In this case, setting the property would resolve the promise and reading its value would return the promise:

var setter;
var promise = new Promise((resolve, reject) => {
	setter = value => {
		if (value) {
			resolve();
		}
	};
});

Object.defineProperty(this, "treeBuilt", { set: setter, get: () => promise });

// Many, many lines below…

this.treeBuilt = true;

For better performance, once the promise is resolved you could even delete the dynamic property and replace it with a normal property that just points to the promise, but be careful because in that case, any future attempts to resolve the promise by setting the property will make you lose your reference to it!
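
That cleanup would look something like this, assuming the property was defined with configurable: true (otherwise it cannot be deleted or redefined):

promise.then(() => {
	delete this.treeBuilt;
	this.treeBuilt = promise; // plain data property pointing to the promise from now on
});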

I still think the code looks a bit ugly, so if you can think of a more elegant solution, I’m all ears (well, eyes really)!

Update: Joseph Silber gave an interesting solution on twitter:

function defer() {
	var deferred = {
		promise: null,
		resolve: null,
		reject: null
	};

	deferred.promise = new Promise((resolve, reject) => {
		deferred.resolve = resolve;
		deferred.reject = reject;
	});

	return deferred;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

I love that this is reusable, and calling resolve() makes a lot more sense than setting something to true. However, I didn’t like that it involved a separate object (deferred) and that people using the treeBuilt property would not be able to call .then() directly on it, so I simplified it a bit to only use one Promise object:

function defer() {
	var res, rej;

	var promise = new Promise((resolve, reject) => {
		res = resolve;
		rej = reject;
	});

	promise.resolve = res;
	promise.reject = rej;

	return promise;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

Finally, something I like!


URL rewriting with Github Pages


I adore Github Pages. I use them for everything I can, and try to avoid server-side code like the plague, exactly so that I can use them. The convenience of pushing to a repo and having the changes immediately reflected on the website, with no commit hooks or any additional setup, is awesome. The free price tag is even more awesome. So, when the time came to publish my book, naturally, I wanted the companion website to be on Github Pages.

There was only one small problem: I wanted nice URLs, like http://play.csssecrets.io/pie-animated, which would redirect to demos on dabblet.com. Any sane person would have likely bitten the bullet and used some kind of server-side language. However, I’m not a particularly sane person :D

Turns out Github uses some URL rewriting of its own on Github Pages: If you provide a 404.html, any URL that doesn’t exist will be handled by that. Wait a second, is that basically how we do nice URLs on the server anyway? We can do the same in Github Pages, by just running JS inside 404.html!

So, I created a JSON file with all demo ids and their dabblet URLs, a 404.html that shows either a redirection or an error (JS decides which one) and a tiny bit of Vanilla JS that reads the current URL, fetches the JSON file, and redirects to the right dabblet. Here it is, without the helpers:

(function(){

document.body.className = 'redirecting';

var slug = location.pathname.slice(1);

xhr({
	src: 'secrets.json',
	onsuccess: function () {
		var slugs = JSON.parse(this.responseText);

		var hash = slugs[slug];

		if (hash) {
			// Redirect
			var url = hash.indexOf('http') == 0? hash : 'http://dabblet.com/gist/' + hash;
			$('section.redirecting > p').innerHTML = 'Redirecting to <a href="' + url + '">' + url + '</a>…';
			location.href = url;
		}
		else {
			document.body.className = 'error not-found';
		}
	},
	onerror: function () {
		document.body.className = 'error json';
	}
});

})();

That’s all! You can imagine using the same trick to redirect to other HTML pages in the same Github Pages site, have proper URLs for a single page site, and all sorts of things! Is it a hack? Of course. But when did that ever stop us? :D
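
For instance, redirecting old URLs to new pages on the same site would only need a small map inside 404.html (the entries below are made up):

var redirects = {
	"old-post": "/blog/2016/new-post/",
	"about-me": "/about/"
};

var slug = location.pathname.slice(1);

if (slug in redirects) {
	location.replace(redirects[slug]);
}
else {
	document.body.className = "error not-found";
}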


Autoprefixing, with CSS variables!


Recently, when I was making the minisite for markapp.io, I realized a neat trick one can do with CSS variables, precisely due to their dynamic nature. Let’s say you want to use a property that has multiple versions: an unprefixed one and one or more prefixed ones. In this example we are going to use clip-path, which currently needs both an unprefixed version and a -webkit- prefixed one, however the technique works for any property and any number of prefixes or different property names, as long as the value is the same across all variations of the property name.

The first part is to define a --clip-path property on every element with a value of initial. This prevents the property from being inherited every time it’s used, and since the * has zero specificity, any declaration that uses --clip-path can override it. Then you define all variations of the property name with var(--clip-path) as their value:

* {
	--clip-path: initial;
	-webkit-clip-path: var(--clip-path);
	clip-path: var(--clip-path);
}

Then, every time we need clip-path, we use --clip-path instead and it just works:

header {
	--clip-path: polygon(0% 0%, 100% 0%, 100% calc(100% - 2.5em), 0% 100%);
}

Even !important should work, because it affects the cascading of CSS variables. Furthermore, if for some reason you want to explicitly set -webkit-clip-path, you can do that too, again because * has zero specificity. The main downside to this is that it limits browser support to the intersection of the support for the feature you are using and support for CSS Variables. However, all browsers except Edge support CSS variables, and Edge is working on it. I can’t see any other downsides to it (except having to use a different property name obvs), but if you do, let me know in the comments!
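
For example, a later or more specific rule can still set the prefixed property directly (the selector is made up):

.webkit-specific-tweak {
	-webkit-clip-path: circle(50%);
}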

I think there’s still a lot to be discovered about cool uses of CSS variables. I wonder if there exists a variation of this technique to produce custom longhands, e.g. breaking box-shadow into --box-shadow-x, --box-shadow-y etc, but I can’t think of anything yet. Can you? ;)


Markapp: A list of HTML libraries


I have often lamented how many JavaScript developers don’t realize that a large percentage of HTML & CSS authors are not comfortable writing JS, and struggle to use their libraries.

To encourage libraries with HTML APIs, i.e. libraries that can be used without writing a line of JS, I made a website to list and promote them: markapp.io. The list is currently quite short, so I’m counting on you to expand it. Seen any libraries with good HTML APIs? Add them!


Introducing Multirange: A tiny polyfill for HTML5.1 two-handle sliders


As part of my preparation for my talk at CSSDay HTML Special, I was perusing the most recent HTML specs (WHATWG Living Standard, W3C HTML 5.1) to see what undiscovered gems lay there. It turns out that HTML sliders have a lot of cool features specced that aren’t very well implemented:

  • Ticks that snap via the list attribute and the <datalist> element. This is fairly decently implemented, except for labelled ticks, which are not supported anywhere.
  • Vertical sliders when height > width, implemented nowhere (instead, browsers employ proprietary ways for making sliders vertical: An orient=vertical attribute in Gecko, -webkit-appearance: slider-vertical; in WebKit/Blink and writing-mode: bt-lr; in IE/Edge). Good ol’ rotate transforms work too, but have the usual problems, such as layout not being affected by the transform.
  • Two-handle sliders for ranges, via the multiple attribute.

I made a quick testcase for all three, and to my disappointment (but not to my surprise), support was extremely poor. I was most excited about the last one, since I’ve been wanting range sliders in HTML for a long time. Sadly, there are no implementations. But hey, what if I could create a polyfill by cleverly overlaying two sliders? Would it be possible? I started experimenting in JSBin last night, just for the lolz, then soon realized this could actually work and started a GitHub repo. Since CSS variables are now supported almost everywhere, I’ve had a lot of fun using them. Sure, I could get broader support without them, but the code is much simpler, more elegant and customizable now. I also originally started with a Bliss dependency, but realized it wasn’t worth it for such a tiny script.

So, enjoy, and contribute!

Multirange