Blog


Introducing Multirange: A tiny polyfill for HTML5.1 two-handle sliders


As part of my preparation for my talk at CSSDay HTML Special, I was perusing the most recent HTML specs (WHATWG Living Standard, W3C HTML 5.1) to see what undiscovered gems lay there. It turns out that HTML sliders have a lot of cool features specced that aren’t very well implemented:

  • Ticks that snap via the list attribute and the <datalist> element. This is fairly decently implemented, except labelled ticks, which are not supported anywhere.
  • Vertical sliders when height > width, implemented nowhere (instead, browsers employ proprietary ways for making sliders vertical: An orient=vertical attribute in Gecko, -webkit-appearance: slider-vertical; in WebKit/Blink and writing-mode: bt-lr; in IE/Edge). Good ol’ rotate transforms work too, but have the usual problems, such as layout not being affected by the transform.
  • Two-handle sliders for ranges, via the multiple attribute (all three features are sketched in markup right after this list).
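
Here’s roughly what all three look like in markup (the value syntax for multiple is my reading of the spec, so treat it as an educated guess):

<!-- Ticks (and labelled ticks) via the list attribute and <datalist> -->
<input type="range" min="0" max="100" step="10" list="ticks" />
<datalist id="ticks">
	<option value="0" label="Min"></option>
	<option value="50"></option>
	<option value="100" label="Max"></option>
</datalist>

<!-- Vertical slider, per the spec: just make it taller than it is wide -->
<input type="range" style="width: 1em; height: 10em;" />

<!-- Two-handle slider -->
<input type="range" multiple value="20,80" />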

I made a quick testcase for all three, and to my disappointment (but not to my surprise), support was extremely poor. I was most excited about the last one, since I’ve been wanting range sliders in HTML for a long time. Sadly, there are no implementations. But hey, what if I could create a polyfill by cleverly overlaying two sliders? Would it be possible? I started experimenting in JSBin last night, just for the lolz, then soon realized this could actually work and started a GitHub repo. Since CSS variables are now supported almost everywhere, I’ve had a lot of fun using them. Sure, I could get broader support without them, but the code is much simpler, more elegant and customizable now. I also originally started with a Bliss dependency, but realized it wasn’t worth it for such a tiny script.

So, enjoy, and contribute!

Multirange


My positive experience as a woman in tech


Women speaking up about the sexism they have experienced in tech is great for raising awareness about the issues. However, when no positive stories get out, the overall picture painted is bleak, which could scare even more women away.

Lucky for me, I fell in love with programming a decade before I even heard there is a sexism problem in tech. Had I read about it before, I might have decided to go for some other profession. Who wants to be fighting an uphill battle all her life?

Thankfully, my experience has been quite different. Being in this industry has brought me nothing but happiness. Yes, there are several women who have had terrible experiences, and I’m in no way discounting them. They may even be the majority, though I am not aware of any statistics. However, there is also the other side. Those of us who have had incredibly positive experiences, and have always been treated with nothing but respect. That side’s stories need to be heard too, not silenced out of fear that we will become complacent and stop trying for more equality. Stories like mine should become the norm, not the exception.

I’ve had a number of different roles in tech over the course of my life. I’ve been a student, a speaker & author, I’ve worked at W3C, I’ve started & maintain several successful open source projects and I’m currently dabbling in Computer Science research. In none of these roles did I ever feel I was unfairly treated due to my gender. That is not because I’m oblivious to sexism. I tend to be very sensitive to seeing it, and I often notice even the smallest acts of sexism (“death by a thousand paper cuts”). I see a lot of sexism in society overall. However, inside this industry, my gender never seemed to matter much, except perhaps in positive ways.

On my open source repos, I have several contributors, the overwhelming majority of whom are male. I’ve never felt less respected due to my gender. I’ve never felt that my work was taken less seriously than that of male OSS developers. I’ve never felt my contributors would not listen to me. I’ve never felt my work was unfairly scrutinized. Even when I didn’t know something, or introduced a horrible bug, I’ve never been insulted or berated. The community has been nothing but friendly, helpful and respectful. If anything, I’ve sometimes wondered if my gender is the reason I hardly ever get any shit!

On stage, I’ve never gotten any negative reactions. My talks always get excellent reviews, which have nothing to do with me being female. There is sometimes the odd complimentary tweet about my looks, but that’s not only exceedingly rare, but also always combined with a compliment about the actual talk content. My gender only affected my internal motivation: I often felt I had to be good, otherwise I would be painting all female tech speakers in a negative light. But other people are not at fault for my own stereotype threat.

My book, CSS Secrets, has been as successful as an advanced CSS book could possibly aspire to be and got to an average of 5 stars on Amazon only a few months after its release. It’s steadily the 5th bestseller on CSS and was No. 1 for a while shortly after publication. My gender did not seem to negatively affect any of that, even though there’s a picture of me in the French flap, so there are no doubts about me being female (as if the name Lea wasn’t enough of a hint).

As a student, I’ve never felt unfairly treated due to my gender by any of my professors, even the ones in Greece, a country that is not particularly famous for its gender equal society, to put it mildly.

As a new researcher, I have no experience with publishing papers yet, so I cannot share any experiences on that. However, I’ve been treated with nothing but respect by both my advisor and colleagues. My opinion is always heard and valued and even when people don’t agree, I can debate it as long and as intensely as I want, without being seen as aggressive or “bossy”.

I’ve worked at W3C and still participate as an Invited Expert in the CSS Working Group. In neither of these roles did my gender seem to matter in any way. I’ve always felt that my expertise and skillset were valued and my opinions heard. In fact, the most well-respected member of the CSS WG is the only other woman in it: fantasai.

Lastly, in all my years as a working professional, I’ve always negotiated any kind of remuneration, often hard. I’ve never lost an opportunity because of it, or been treated with negativity afterwards.

On the flip side, sexism today is rarely overt. Given that hardly anybody over ten will flat out admit they think women are inferior (even to themselves), it’s often hard to tell when a certain behavior stems from sexist beliefs. If someone is a douchebag to you, are they doing it because you’re a woman, or because they’re douchebags? If someone is criticizing your work, are they doing it because they genuinely found something to criticize or because they’re negatively predisposed due to your gender? It’s impossible to know, especially since they don’t know either! If you confront them on their sexism, they will deny all of it, and truly believe it. It takes a lot of introspection to see one’s internalized stereotypes. Therefore, a lot of the time, you cannot be sure if you have experienced sexist behavior, and there is no way to find out for sure, since the perpetrator doesn’t know either. There are many false positives and false negatives there.

Perhaps I don’t feel I have experienced much sexism because I prefer to err on the side of false negatives. Paraphrasing Blackstone, I would rather not call out sexist behavior ten times, than wrongly accuse someone of it once. It might also have to do with my personality: I’m generally confident and can be very assertive. When somebody is being a jerk to me, I will not curl in a ball and question my life choices, I will reply to them in the same tone. However, those two alone cannot make the difference between a pit rampant with sexism and an egalitarian paradise. I think a lot of it is that we have genuinely made progress, and we should celebrate it with more women coming out with their positive experiences (it cannot just be me, right?).

Ironically, one of the very few times I have experienced any sexism in the industry was when a dude was trying to be nice to me. I was in a speaker room at a conference in Las Vegas, frantically working on my slides, not participating in any of the conversations around me. At some point, one of the guys said “fuck” in a conversation, then turned and apologized to me. Irritated about the sudden interruption, I lifted my head and looked around. I noticed for the first time that day that I was the only woman in the room. His effort to be courteous made me feel that I was different, the odd one out, the one we must be careful around and treat like a fragile flower. To this day, I regret being too startled to reply “Eh, I don’t give a fuck”.


Introducing Bliss: A 3KB library for happier Vanilla JS


Anyone who follows this blog, my twitter, or my work probably is aware that I’m not a huge fan of big libraries. I think wrapper objects are messy, and big libraries are overkill for smaller projects. On large projects, one uses frameworks like React or Angular anyway, not libraries.

Anyone who writes Vanilla JS on a daily basis probably is aware that it can sometimes be, ahem, somewhat unpleasant to work with. Sure, the situation is orders of magnitude better than it was when I started. Back then, IE6 was the dominant browser and you needed a helper function to even add event listeners to an element (remember element.attachEvent?) or to get elements by a class!

[Screenshot: the jAsset datepicker UI]

Fun fact: I learned JavaScript back then by writing my own library, called jAsset. I had not heard of jQuery when I started it in 2007, so I had even coded my own selector engine! (Anyone remember slickspeed?) jAsset had plenty of nice helper functions, its own UI library and a cool logo. I had even started to make a website for its UI components, seen on the right.

Sadly, jAsset died the sad inevitable death of all unreleased projects: Without external feedback, I had nobody to hold me back from adding to its API every time I personally needed a helper function. And adding, and adding, and adding… Until it became 5000+ loc long and its benefit of being lightweight or comprehensible had completely vanished. It collapsed under its own weight before it even saw the light of day. I abandoned it and went through a few years of using jQuery as my preferred helper library. Eventually, my distaste for wrapper objects, the constantly improving browser support for new APIs that made Vanilla JS more palatable, and the decline of overly conspicuous browser bugs led me to give it up.

It was refreshing, and educational, but soon I came to realize that while Vanilla JS is orders of magnitude better than it was when I started, certain APIs are still quite unwieldy, which can be annoying if you use them often. For example, the Vanilla JS for creating an element, with other elements inside it, events and inline styles is so commonly needed, but also so verbose and WET, it can make one suicidal.

However, Vanilla JS does not mean “use no abstractions”. Programming is all about abstractions! The Vanilla JS movement is about favoring speed, smaller abstractions and understanding of the Web Platform, over big libraries that we treat as a black box. It’s about using libraries to save time, not to skip learning.

So, I used my own tiny helpers, on every project. They were small and easy to understand, instead of several KB of code aiming to fix browser bugs I will likely never encounter and let me create complex nested DOM structures with a single JSON-like object. Over time, their API solidified and improved. On larger projects it was a separate file which I had tentatively codenamed Utopia (due to the lack of browser bug fixes and optimistic use of modern APIs). On smaller ones just a few helper methods (I could not live without at least my tiny 2 sloc $() and $$() helpers!). Here is a sample from my open source repos:
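
For instance, the two tiny helpers just mentioned, in full:

function $(expr, container) {
	return typeof expr === "string"? (container || document).querySelector(expr) : expr || null;
}

function $$(expr, container) {
	return [].slice.call((container || document).querySelectorAll(expr));
}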

Notice any recurring themes there? :)

I never mentioned Utopia.js anywhere, besides silently including it in my projects, so it went largely unnoticed. Sometimes people would look at it, ask me to release it, I’d promise them I would and then nothing. A few years ago, someone noticed it, liked it and documented it a bit (site is down now it seems). However, it was largely my little secret, hidden in public view.

For the past half year, I’ve been working hard on my research project at MIT. It’s pretty awesome and is aimed at helping people who know HTML/CSS but not JS, achieve more with Web technologies (and that’s all I can say for now). It’s also written in JS, so I used Utopia as a helper library, naturally. Utopia evolved even more with this project, got renamed to Bliss and got chainability via my idea about extending DOM prototypes without collisions (can be disabled and the property name is customizable).

All this worked fine while I was the only person working on the project. Thankfully, I might get some help soon, and it might be rather inexperienced (the academia equivalent of interns). Help is very welcome, but it did raise the question: How will these people, who likely only know jQuery, work on the project? [1]

The answer was that the time has come to polish, document and release Bliss to the world. My plan was to spend a weekend documenting it, but it ended up being a little over a week on and off, when procrastinating from other tasks I had to do. However, I’m very proud of the resulting docs, so much that I gifted myself a domain for it. They are fairly extensive (though some functions still need work) and have two things I always missed in other API docs:

  • Recommendations about what Vanilla JS to use instead when appropriate, instead of guiding people into using library methods even when Vanilla JS would have been perfectly sufficient.
  • A “Show Implementation” button showing the implementation, so you can both learn, and judge whether it’s needed or not, instead of assuming that you should use it over Vanilla JS because it has magic pixie dust. This way, the docs also serve as a source viewer!

So, enjoy Bliss. The helper library for people who don’t like helper libraries. :) In a way, it feels like a journey of 8 years finally ends today. I hope the result makes you blissful too.

blissfuljs.com

Oh, and don’t forget to follow @blissfuljs on twitter!

[1]: Academia is often a little behind tech-wise, so everyone uses jQuery here — hardly any exceptions. Even though browser support doesn’t usually even matter to research projects!


Copying object properties, the robust way


If, like me, you try to avoid using heavy libraries when not needed, you must have definitely written a helper to copy properties from one object to another at some point. It’s needed so often that it’s just silly to write the same loops over and over again.

These days, most of my time is spent working on my research project at MIT, which I will hopefully reveal later this year. In that, I’m using a lightweight homegrown helper library, which I might release separately at some point as I think it has potential in its own right, for a number of reasons.

Of course, it needed to have a simple extend() method as well, to copy properties from one object to another. Let’s assume for the purposes of this article that we’re talking about shallow copying, that overwrites are allowed, and let’s omit hasOwnProperty() checks to make code easier to read.

It’s a simple task, right? Our first attempt might look like this:

$.extend = function (to, from) {
	for (var property in from) {
		to[property] = from[property];
	}

	return to;
};

This works fine, until you try it on objects with accessors or other types of properties defined via Object.defineProperty() or get/set keywords. What do you do then? Our next iteration could look like this:

$.extend = function (to, from) {
	for (var property in from) {
		Object.defineProperty(to, property, Object.getOwnPropertyDescriptor(from, property));
	}

	return to;
};

This works much better, until it fails, and it can fail pretty epically. Try this:

$.extend(document.body.style, {
	backgroundColor: "red"
});

Both in Chrome and Firefox, the results are super weird. Even though reading document.body.style.backgroundColor will return "red", no style will have actually been applied. In Firefox it even destroyed the native setter entirely and any future attempts to set document.body.style.backgroundColor in the console did absolutely nothing.

In contrast, the previous naïve approach worked fine for this. It’s clear that we need to somehow combine the two approaches, using Object.defineProperty() only when actually needed. But when is it actually not needed?

One obvious case is if the descriptor is undefined (such as with some native properties). Also, in simple properties, such as those in our object literal, the descriptor will be of the form {value: somevalue, writable: true, enumerable: true, configurable: true}. So, the next obvious step would be:

$.extend = function (to, from) {
	for (var property in from) {
		var descriptor = Object.getOwnPropertyDescriptor(from, property);

		if (descriptor && (!descriptor.writable || !descriptor.configurable || !descriptor.enumerable || descriptor.get || descriptor.set)) {
			Object.defineProperty(to, property, descriptor);
		}
		else {
			to[property] = from[property];
		}
	}

	return to;
};

This works perfectly, but is a little clumsy. I’ve currently left it at that, but any suggestions for making it more elegant are welcome :)

FWIW, I looked at jQuery’s implementation of jQuery.extend() after this, and it seems it doesn’t even handle accessors at all, unless I missed something. Time for a pull request, perhaps…

Edit: As MaxArt pointed out in the comments, there is a similar native method in ES6, Object.assign(). However, it does not deal with copying accessors, so does not deal with this problem either.
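
A quick console session makes this obvious:

var source = {
	get foo() { return 42; }
};

var copy = Object.assign({}, source);

// The getter was invoked once and its result copied over as a plain value:
Object.getOwnPropertyDescriptor(copy, "foo");
// → {value: 42, writable: true, enumerable: true, configurable: true}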


On the blindness of blind reviews


Over the last couple of years, blind reviews have been popularized as the ultimate method for fair talk selection in industry conferences. While I don’t really submit proposals myself, I have served several times on the other side of the process, doing speaker selection in conference committees, and the more data points I collect, the more convinced I become that the blind selection process is fundamentally flawed.

Blind reviews come from the world of academia. However, in academic conferences, you do not judge a talk by a 1-2 paragraph abstract, but by a 10+ page paper, so there’s way more to judge by. In addition, in academia the content of the research matters infinitely more than the quality of a talk. In industry conferences, selection committees in blind reviews have both way less data to use, and a much harder task, as they need to balance several factors (content, speaker skill, talk quality etc). It’s no surprise that the results end up being even more of a gamble.

Blind reviews result in conservative talk selection. More often than not, I remember my fellow committee members and myself saying “Damn, this talk could be great with the right presenter, but that’s rare” and giving it a poor or average score. Few topics can make good talks regardless of the presenters. Therefore, when there is little information on the speaker in the initial selection round, talk selection ends up being conservative, rejecting more challenging topics that need a skilled speaker to shine and sticking to safer choices.

One of my most successful talks ever was “The humble border-radius” which was shortlisted for a .net award for Conference Talk of The Year 2014. It would never have passed any blind review. There is no committee in their right mind that would have accepted a 45 minute talk about …border-radius. The conferences I presented it at invited me as a speaker, carte blanche, and trusted me to present on whatever I felt like. Judging by the reviews, they were not disappointed.

In addition, all too many times I’ve seen great speakers get poor scores in blind reviews, not because their talks were not good, but because writing good abstracts is an entirely separate skill. Blind reviews remove anything that could cause bias, but they do so by stripping all personality away from a proposal. In addition, a good abstract for a blind review is not necessarily a good abstract in general. For example, blind reviews penalize more mysterious/teasy abstracts and tend to be skewed towards overly detailed ones, since it’s the only data the committee gets for these talks (bonus points here for CfS that have a separate field for more details to conf organizers).

“But what about newcomers to the conference circuit? What about bias elimination?” one might ask. Both very valid concerns. I’m not saying any kind of anonymization is a bad idea. I’m saying that in their present form in industry conferences, blind reviews are flawed. For example, an initial round of blind reviews to pick good talks, without rejecting any at that stage, would probably solve these issues, without suffering from the flaws mentioned above.

Disclaimer: I do recognize that most people in these committees are doing their best to select fairly, and putting many hours of (usually volunteer) work in it. I’m not criticizing them, I’m criticizing the process. And yes, I recognize that it’s a process that has come out of very good intentions (eliminating bias). However, good intentions are not a guarantee for infallibility.


Stretchy: Form element autosizing, the way it should be


As you might be aware, these days a good chunk of my time is spent working on research, at MIT. Although it’s still too early to talk about my research project, I can say that it’s related to the Web and it will be open source, both of which are pretty awesome (getting paid to work on cool open source stuff is the dream, right?).

The one thing I can mention about my project is that it involves a lot of editing of Web content. And since contentEditable is a mess, as you all know, I decided to use form controls styled like the content being edited. This meant that I needed a good script for form control autosizing, one that worked on multiple types of form controls (inputs, textareas, even select menus). In addition, I needed the script to smoothly work for newly added controls, without me having to couple the rest of my code with it and call API methods or fire custom events every time new controls were added anywhere. A quick look at the existing options quickly made it obvious that I had to write my own.
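
The gist of that last requirement is a MutationObserver watching the whole document, roughly like this (simplified sketch, not Stretchy’s actual source; autosize() stands in for the real resizing logic):

var observer = new MutationObserver(function (mutations) {
	mutations.forEach(function (mutation) {
		[].forEach.call(mutation.addedNodes, function (node) {
			// Only element nodes; text nodes don’t have matches()
			if (node.nodeType === 1 && node.matches("input, textarea, select")) {
				autosize(node);
			}
		});
	});
});

observer.observe(document.documentElement, {childList: true, subtree: true});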

After writing it, I realized this could be released entirely separately as it was a standalone utility. So Stretchy was born :) I made a quick page for it, fixed a few cross-browser bugs that needed fixing anyway, put it up on GitHub and here it is!

Enjoy!

PS: You can also use it as a bookmarklet, to autosize form controls on an existing page, if a form is bothering you with its poor usability. You will find it in the footer.


Spot the unsubscribe (link)!


After getting fed up with too many “promotional” emails and newsletters with incredibly obscure unsubscribe links, I decided to make this tumblr to point out such examples of digital douchebaggery. This annoying dark pattern is so widespread that Google even added a feature to Gmail for making those unsubscribe links obvious!

Unsubscribe links are crucial to promotional emails. They are not just another menu item. They are not something that should be hidden in a blurb of tiny low contrast text. Unsubscribe links should be immediately obvious to anyone looking for them. You want people to be reading your email because they’re interested, not because they can’t find the way out. Otherwise you are the digital equivalent of those annoying door-to-door salesmen who just won’t go away.

— From my introductory post on Spot the unsubscribe!

On the spur of the moment, after yet another email newsletter with a hard to find Unsubscribe link, I decided to quickly put together a tumblog about this UX pet peeve of mine, called Spot the Unsubscribe!. In less than an hour, it was ready and had a few posts as well :)

Hopefully if this bothers others as well, there will be submissions. Otherwise, new posts will be rather infrequent.


Conical gradients, today!


It’s no secret that I like conical gradients. As early as 2011, I wrote a draft for conical-gradient() in CSS, which Tab later said helped him when he added them to CSS Image Values Level 4 in 2012. However, almost three years later, no progress has been made in implementing them. Sure, the spec is still relatively incomplete, but that’s not the reason conical gradients have gotten no traction. Far more underspecified features have gotten experimental implementations in the past. The reason conical gradients are still unimplemented is that very few developers know they exist, so browsers see no demand.

Another reason was that Cairo, the graphics library used in Chrome and Firefox, had no way of drawing a conical gradient. However, this changed a while ago, when it gained support for mesh gradients, of which conical gradients are a mere special case.

Recently, I was giving a talk on creating pie charts with CSS at a few conferences, and yet again, I was reminded of how useful conical gradients can be. While every CSS or SVG solution is several lines of code with varying levels of hackiness, conical gradients can give us a pie chart with a straightforward, DRY one-liner. For example, this is how to create a pie chart that shows 40% in gold and 60% in #f06:

padding: 5em; /* size */
background: conic-gradient(gold 40%, #f06 0);
border-radius: 50%; /* make it round */
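
The same trick scales to more slices, since a color stop position of 0 clamps to the previous stop. For example, a 30%/30%/40% three-slice pie:

background: conic-gradient(deeppink 30%, gold 0, gold 60%, yellowgreen 0);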

So, I decided to take matters into my own hands. I wrote a polyfill, which I also used in my talk to demonstrate how awesome conical gradients can be and what cool things they can do. Today, during my CSSConf talk, I released it publicly.

In addition, I mention to developers how important speaking up is for getting their favorite features implemented. Browsers prioritize which features to implement based on what developers ask for. It’s a pity that so few of us realize how much of a say we collectively have in this. This is more obvious with Microsoft and their Uservoice forum where developers can vote on which features they want to see worked on, but pretty much every major browser works in a similar way. They monitor what developers request and what other browsers implement, and decide accordingly. The squeaky wheel will get the feature, so if you really want to see something implemented, speak up.

Since “speaking up” can be a bit vague (“speak up where?” I can hear you asking), I also filed bug reports with all major browsers, that you can also find in the polyfill page, so that you can comment or vote on them. That doesn’t mean that speaking up on blogs or social media is not useful though: That’s why browsers have devrel teams. The more noise we collectively make about the features we want, the more likely it is to be heard. However, the odds are higher if we all channel our voices to the venues browser developers follow and our voice is stronger and louder if we concentrate it in the same places instead of having many separate voices all over the place.

Also, I’m using the term “noise” here a bit figuratively. While it’s valuable to make it clear that we are interested in a certain feature, it’s even more useful to say why. Providing use cases will not only grab browsers’ attention more, but it will also convince other developers as well.

So go ahead, play with conic gradients, and if you agree with me that they are fucking awesome and we need them natively on the Web, make noise.

conic-gradient() polyfill


Idea: Extending native DOM prototypes without collisions


As I pointed out in yesterday’s blog post, one of the reasons why I don’t like using jQuery is its wrapper objects. For jQuery, this was a wise decision: Back in 2006 when it was first developed, IE releases had a pretty icky memory leak bug that could be easily triggered when one added properties to elements. Oh, and we also didn’t have access to element prototypes on IE back then, so we had to add these properties manually on every element. Prototype.js attempted to go that route and the result was such a mess that they decided to change their decision in Prototype 2.0 and go with wrapper objects too. There were even long essays being written back then about how much of a monumentally bad idea it was to extend DOM elements.

The first IE release that exposed element prototypes was IE8: We got access to Node.prototype, Element.prototype and a few more. Some were mutable, some were not. On IE9, we got the full bunch, including HTMLElement.prototype and its descendants, such as HTMLParagraphElement. The memory leak bugs were mitigated in IE8 and fixed in IE9. However, we still don’t extend native DOM elements, and for good reason: collisions are still a very real risk. No library wants to add a bunch of methods on elements, it’s just bad form. It’s like being invited in someone’s house and defecating all over the floor.

But what if we could add methods to elements without the chance of collisions? (well, technically, by minimizing said chance). We could only add one property to Element.prototype, and then hang all our methods on that. E.g. if our library was called yolo and had two methods, foo() and bar(), calls to it would look like:

var element = document.querySelector(".someclass");
element.yolo.foo();
element.yolo.bar();
// or you can even chain, if you return the element in each of them!
element.yolo.foo().yolo.bar();

Sure, it’s more awkward than wrapper objects, but the benefit of using native DOM elements is worth it if you ask me. Of course, YMMV.

It’s basically exactly the same thing we do with globals: We all know that adding tons of global variables is bad practice, so every library adds one global and hangs everything off of that.

However, if we try to implement something like this in the naïve way, we will find that it’s kind of hard to reference the element used from our namespaced functions:

Element.prototype.yolo = {
	foo: function () {
		console.log(this);
	},

	bar: function () { /* … */ }
};

someElement.yolo.foo(); // Object {foo: function, bar: function}

What happened here? this inside any of these functions refers to the object that they are called on, not the element that object is hanging on! We need to be a bit more clever to get around this issue.

Keep in mind that if we could run code at the moment the yolo property is accessed, that code would have access to the element we’re trying to hang these methods off of. But a plain object property runs no code on access, so we’re not taking advantage of that. If only we could get a reference to that context! However, making yolo a function (e.g. element.yolo().foo()) would spoil our nice API.

Wait a second. We can run code on properties, via ES5 accessors! We could do something like this:

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		return {
			element: this,
			foo: function () {
				console.log(this.element);
			},

			bar: function () { /* … */ }
		};
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

This works, but there is a rather annoying issue here: We are generating this object and redefining our functions every single time this property is called. This is a rather bad idea for performance. Ideally, we want to generate this object once, and then return the generated object. We also don’t want every element to have its own completely separate instance of the functions we defined, we want to define these functions on a prototype, and use the wonderful JS inheritance for them, so that our library is also dynamically extensible. Luckily, there is a way to do all this too:

var Yolo = function (element) {
	this.element = element;
};

Yolo.prototype = {
	foo: function () {
		console.log(this.element);
	},

	bar: function () { /* … */ }
};

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		Object.defineProperty(this, "yolo", {
			value: new Yolo(this)
		});

		return this.yolo;
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

// And it’s dynamically extensible too!
Yolo.prototype.baz = function (color) {
	this.element.style.background = color;
};

someElement.yolo.baz("red"); // Our element gets a red background

Note that in the above, the getter is only executed once. After that, it overwrites the yolo property with a static value: An instance of the Yolo object. Since we’re using Object.defineProperty() we also don’t run into the issue of breaking enumeration (for..in loops), since these properties have enumerable: false by default.

There is still the wart that these methods need to use this.element instead of this. We could fix this by wrapping them:

for (let method in Yolo.prototype) {
	let callback = Yolo.prototype[method];

	Yolo.prototype[method] = function () {
		var ret = callback.apply(this.element, arguments);

		// Return the element, for chainability!
		return ret === undefined? this.element : ret;
	};
}

However, now you can’t dynamically add methods to Yolo.prototype and have them automatically work like the native Yolo methods in element.yolo, so it kinda hurts extensibility (of course you could still add methods that use this.element and they would work).

Thoughts?


jQuery considered harmful


Heh, I always wanted to do one of those “X considered harmful” posts*. :D

Before I start, let me say that I think jQuery has helped tremendously to move the Web forward. It gave developers power to do things that were previously unthinkable, and pushed the browser manufacturers to implement these things natively (without jQuery we probably wouldn’t have document.querySelectorAll now). And jQuery is still needed for those that cannot depend on the goodies we have today and have to support relics of the past like IE8 or worse.

However, as much as I feel for these poor souls, they are the minority. There are tons of developers that don’t need to support old browsers with a tiny market share. And let’s not forget those who aren’t even Web professionals: Students and researchers not only don’t need to support old browsers, but can often get by just supporting a single browser! You would expect that everyone in academia would be having tons of fun using all the modern goodies of the Open Web Platform, right? And yet, I haven’t seen jQuery being so prominent anywhere else as much as it is in academia. Why? Because this is what they know, and they really don’t have the time or interest to follow the news on the Open Web Platform. They don’t know what they need jQuery for, so they just use jQuery anyway. However, being able to do these things natively now is not the only reason I’d rather avoid jQuery.

Yes, you probably don’t really need it…

I’m certainly not the first one to point out how much of jQuery usage is about things you can do natively, so I won’t spend time repeating what others have written. Just visit the following and dive in:

I will also not spend time talking about file size or how much faster native methods are. These have been talked about before. Today, I want to make a point that is not frequently talked about…

…but that’s not even the biggest reason not to use it today

To avoid extending the native element prototypes, jQuery uses its own wrapper objects. Extending native objects in the past was a huge no-no, not only due to potential collisions, but also due to memory leaks in old IE. So, what is returned when you run $("div") is not a reference to an element, or a NodeList, it’s a jQuery object. This means that a jQuery object has completely different methods available to it than a reference to a DOM element, an array with elements or any type of NodeList. However, these native objects come up all the time in real code — as much as jQuery tries to abstract them away, you always have to deal with them, even if it’s just wrapping them in $(). For example, the context when a callback is called via jQuery’s .bind() method is a reference to an HTML element, not a jQuery collection. Not to mention that often you use code from multiple sources — some of them assume jQuery, some don’t. Therefore, you always end up with code that mixes jQuery objects, native elements and NodeLists. And this is where the hell begins.
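
For example, jQuery’s own event callbacks hand you a native element, not a jQuery object:

$("button").bind("click", function () {
	console.log(this);  // a plain HTMLButtonElement
	this.addClass;      // undefined: that method only exists on jQuery objects
	$(this).addClass("clicked"); // so back into jQuery-land we go
});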

If the developer has followed a naming convention for which variables contain jQuery objects (prepending the variable names with a dollar sign is the common one I believe) and which contain native elements, this is less of a problem (humans often end up forgetting to follow such conventions, but let’s assume a perfect world here). However, in most cases no such convention is followed, which results in the code being incredibly hard to understand by anyone unfamiliar with it. Every edit entails a lot of trial and error now (“Oh, it’s not a jQuery object, I have to wrap it with $()!” or “Oh, it’s not an element, I have to use [0] to get an element!”). To avoid such confusion, developers making edits often end up wrapping anything in $() defensively, so throughout the code, the same variable will have gone through $() multiple times. For the same reason, it also becomes especially hard to refactor jQuery out of said code. You are essentially locked in.

Even if naming conventions have been followed, you can’t just deal only with jQuery objects. You often need to use a native DOM method or call a function from a script that doesn’t depend on jQuery. Soon, conversions to and from jQuery objects are all over the place, cluttering your code.

In addition, when you add code to said codebase, you usually end up wrapping every element or nodelist reference with $() as well, because you don’t know what input you’re getting. So, not only you’re locked in, but all future code you write for the same codebase is also locked in.

Get any random script with a jQuery dependency that you didn’t write yourself and try to refactor it so that it doesn’t need jQuery. I dare you. You will see that your main issue will not be how to convert the functionality to use native APIs, but understanding what the hell is going on.

A pragmatic path to JS nudity

Sure, many libraries today require jQuery, and like I recently tweeted, avoiding it entirely can feel like you’re some sort of digital vegan. However, this doesn’t mean you have to use it yourself. Libraries can always be replaced in the future, when good non-jQuery alternatives become available.

Also, most libraries are written in such a way that they do not require the $ variable to be aliased to jQuery. Just call jQuery.noConflict() to reclaim the $ variable and be able to assign it to whatever you see fit. For example, I often define these helper functions, inspired from the Command Line API:

// Returns first element that matches CSS selector {expr}.
// Querying can optionally be restricted to {container}’s descendants
function $(expr, container) {
	return typeof expr === "string"? (container || document).querySelector(expr) : expr || null;
}

// Returns all elements that match CSS selector {expr} as an array.
// Querying can optionally be restricted to {container}’s descendants
function $$(expr, container) {
	return [].slice.call((container || document).querySelectorAll(expr));
}

In addition, I think that having to type jQuery instead of $ every time you use it somehow makes you think twice about superfluously using it without really needing to, but I could be wrong :)

Also, if you actually like the jQuery API, but want to avoid the bloat, consider using Zepto.

* I thought it was brutally obvious that the title was tongue-in-cheek, but hey, it’s the Internet, and nothing is obvious. So there: The title is tongue-in-cheek and I’m very well aware of Eric’s classic essay against such titles.


Awesomplete: 2KB autocomplete with zero dependencies


Sorry for the lack of posts for the past 7 (!) months; I’ve been super busy working on my book, which up to a certain point I couldn’t even imagine finishing, but I’m finally there! I’ve basically tried to cram all the CSS wisdom I’ve accumulated over the years into it :P (which is partly why it took so long: I kept remembering more things that just *had* to be in it. Its page count on the O’Reilly website had to be updated 3 times, from 250 to 300 to 350, and it looks like the final is gonna be closer to 400 pages), and it’s gonna be super awesome (preorder here!) :D I have been posting a few CSS tricks now and then on my twitter account, but haven’t found any time to write a proper blog post.

Anyhow, despite being super busy with MIT (which btw is amazing, challenging in a good way, and full of fantastic people. So glad to be here!) and the book, I recently needed an autocomplete widget for something. Surprisingly, I don’t think I had ever needed to choose one in the past. I’ve worked with apps that had it, but in those cases it was already there.

At first, I didn’t fret. Finally, a chance to use the HTML5 <datalist>, so exciting! However, the more I played with it, the more my excitement was dying a slow death, taking my open web standards dreams and hopes along with it. Not only is it incredibly inconsistent across browsers (e.g. Chrome matches only from the start, Firefox anywhere!), it’s also not hackable or customizable in any way. Even when I got my hands dirty and used proprietary CSS, I still couldn’t do anything as simple as changing how the matching happens, styling the dropdown or highlighting the matching text!
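
For reference, my testcase was along these lines:

<!-- Typing “chip” matched “Chocolate chip” in Firefox,
     but nothing in Chrome, which only matched from the start -->
<input list="flavors" />
<datalist id="flavors">
	<option>Vanilla</option>
	<option>Chocolate chip</option>
</datalist>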

So, with a heavy heart, I decided to use a script. However, when I looked into it, everything seemed super bloated for my needs and anything with half decent usability required jQuery, which results in even more bloat.

So, I did what every crazy person with a severe case of NIH Syndrome would: I wrote one. It was super fun, and I don’t regret it, although now I’m even more pressed for time to meet my real deadlines. I wrote it primarily for myself, so even if nobody else uses it, ho hum, it was more fun than alternative ways to take a break. However, it’s my duty to put it on GitHub, in case someone else wants it and in case the community wants to take it into its loving, caring hands and pull request the hell out of it.

To be honest, I think it’s both pretty and pretty useful and even though it won’t suit complex needs out of the box, it’s pretty hackable/extensible. I even wrote quite a bit of documentation at some point this week when I was too sleepy to work and not sufficiently sleepy to sleep — because apparently that’s what was missing from my life: even more technical writing.

I saved the best for last: It’s so lightweight you might end up chasing it around if there’s a lot of wind when you download it. It’s currently a little under 1.5KB minified & gzipped (the website says 2KB because it will probably grow with commits and I don’t want to have to remember to update it all the time), with zero dependencies! :D

And it’s even been verified to work in IE9 (sorta), IE10+, Chrome, Firefox, Safari 5+, Mobile Safari!

’Nuff said. Get it now!

PS: If you’re about to leave a comment on how it’s not called “autocomplete”, but “typeahead”, please go choke on a bucket of cocks instead. :P


An easy notation for grayscale colors


These days, there is a lengthy discussion in the CSS WG about how to name a function that produces shades of gray (from white to black) with varying degrees of transparency, and we need your feedback about which name is easier to use.

The current proposals are:

1. gray(lightness [, alpha])

In this proposal gray(0%) is black, gray(50%) is gray and gray(100%) is white. It also accepts numbers from 0-255 which correspond to rgb(x,x,x) values, so that gray(255) is white and gray(0) is black. It also accepts an optional second argument for alpha transparency, so that gray(0, .5) would be equivalent to rgba(0,0,0,.5).

This is the naming of the function in the current CSS Color Level 4 draft.

2. white(lightness [, alpha])

Its arguments work in the same way as gray(), but it’s consistent with the expectation that function names that accept percentages give the “full effect” at 100%. gray(100%) sounds like a shade of gray, when it’s actually white. white(100%) is white, which might be more consistent with author expectations. Of course, this also accepts alpha transparency, like all the proposals listed here.

3. black(lightness [, alpha])

black() would work in the opposite way: black(0%) would be white, black(100%) would be black and black(50%,.5) would be semi-transparent gray. The idea is that people are familiar thinking that way from grayscale printing.

4. rgb() with one argument and rgba() with two arguments

rgb(x) would be a shorthand to rgb(x, x, x) and rgba(x, y) would be a shorthand to rgba(x, x, x, y). So, rgb(0) would be black and rgb(100%) or rgb(255) would be white. The benefit is that authors are already accustomed to using rgb() for colors, and this would just be a shortcut. However, note how you will need to change the function name to get a semi-transparent version of the color. Also, if in the future one needs to change the color to not be a shade of gray, a function name change is not needed.

I’ve written some SCSS to emulate these functions so you can play with them in your stylesheets and figure out which one is more intuitive. Unfortunately rgb(x)/rgba(x,a) cannot be polyfilled in that way, as that would overwrite the native rgb()/rgba() functions. Which might be an argument against them, as being able to polyfill through a preprocessor is quite a benefit for a new color format IMO.
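
For a taste, the gray() emulation boils down to something like this (simplified):

@function gray($lightness, $alpha: 1) {
	// Accept both percentages (gray(50%)) and 0-255 numbers (gray(128))
	@if unit($lightness) == "%" {
		$lightness: round(255 * $lightness / 100%);
	}
	@return rgba($lightness, $lightness, $lightness, $alpha);
}

// Usage:
.overlay {
	background: gray(0, .5); // same as rgba(0, 0, 0, .5)
}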

You can vote here, but that’s mainly for easy vote counting. It’s strongly encouraged that you also leave a comment justifying your opinion, either here or in the list.

Vote now!

Also, tl;dr: if you can’t be bothered to read the post and understand the proposals well, please refrain from voting.


Image comparison slider with pure CSS


As a few of you know, I have been spending a good part of this year writing a book for O’Reilly called “CSS Secrets” (preorder here!). I wanted to include a “secret” about the various uses of the resize property, as it’s one of my favorite underdogs, since it rarely gets any love. However, just mentioning the typical use case of improving the UX of text fields didn’t feel like enough of a secret at all. The whole purpose of the book is to get authors to think outside the box about what’s possible with CSS, not to recite widely known applications of CSS features. So I started brainstorming: What else could we do with it?

Then I remembered Dudley’s awesome Before/After image slider from a while ago. While I loved the result, the markup isn’t great and it requires scripting. Also, both images are CSS backgrounds, so for a screen reader, there are no images there. And then it dawned on me: What if I overlaid a <div> on an image and made it horizontally resizable through the resize property? I tried it, and as you can see below, it worked!
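
Stripped down to its essentials, the idea is just this (simplified; image paths are placeholders):

<div class="image-slider">
	<div><img src="before.png" alt="Before" /></div>
	<img src="after.png" alt="After" />
</div>

.image-slider {
	position: relative;
	display: inline-block;
}

.image-slider > div {
	position: absolute;
	top: 0; bottom: 0; left: 0;
	width: 50%;          /* initial divider position */
	max-width: 100%;
	overflow: hidden;    /* resize does nothing without this */
	resize: horizontal;  /* the star of the show */
}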

The good parts:

  • More semantic markup (2 images & 2 divs). If object-fit was widely supported, it could even be just one div and two images.
  • No JS
  • Less CSS code

Of course, few things come with no drawbacks. In this case:

  • One big drawback is keyboard accessibility. Dudley’s demo uses a range input, so it’s keyboard accessible by design.
  • You can only drag from the bottom right corner. In Dudley’s demo, you can click at any point in the slider. And yes, I did try to style ::-webkit-resizer and increase its size so that at least it has smoother UX in WebKit. However, no matter what I tried, nothing seemed to work.

Also, neither of the two seems to work on mobile.

It might not be perfect, but I think it’s a pretty cool demo of what’s possible with the resize property: everybody seems to only use it on textareas and the like, but its potential is much bigger.

And now if you’ll excuse me, I have a chapter to write ;)

Edit: It looks like somebody figured out a similar solution a few months ago, which does manage to make the resizer full height, albeit with less semantic HTML and more flimsy CSS. The main idea is that you use a separate element for the resizing (in this case a textarea) with a height of 15px = the height of the resizer. Then, they apply a scaleY() transform to stretch that 15px to the height of the image. Pretty cool! Unfortunately, it requires hardcoding the image size in the CSS.


Dynamically generated SVG through SASS + A 3D animated RGB cube!


Today, I was giving the opening keynote at Codemania in Auckland, New Zealand. It was a talk about color from a math/dev perspective. It went quite well, despite my complete lack of sleep. I mean that quite literally: I hadn’t slept all night. No, it wasn’t the jetlag or the nervousness that kept me up. It was my last-minute decision to replace the static, low-res image of an RGB cube I was using until then with a 3D cube generated with CSS and animated with CSS animations. Next thing I knew, it was light outside and I had to start getting ready. However, I don’t regret literally losing sleep to make a slide that is only shown for 20 seconds at most. Not only was it super fun to develop, but it also yielded a few things that I thought were interesting enough to blog about.

The most challenging part wasn’t actually the 3D cube. This has been done tons of times before, it was probably the most common demo for CSS 3D transforms a couple of years ago. The only part of this that could be of interest is that mine only used 2 elements for the cube. This is a dabblet of the cube, without any RGB gradients on it:

The challenging part was creating the gradients for the 6 sides. These are not plain gradients, as you can see below:

These are basically two linear gradients from left to right, with the topmost one being masked with a gradient from top to bottom. You can use CSS Masking to achieve this (for Chrome/Safari) and SVG Masks for Firefox, but this masks the whole element, which would hide the pseudo-elements needed for the sides. What I needed was masks applied to backgrounds only, not the whole element.

It seemed obvious that the best idea would be to use SVG background images. For example, here is the SVG background needed for the top left one:

<svg xmlns="http://www.w3.org/2000/svg" width="200px" height="200px">
	<linearGradient id="yellow-white" x1="0" x2="0" y1="0" y2="1">
		<stop stop-color="yellow" />
		<stop offset="1" stop-color="white" />
	</linearGradient>
	<linearGradient id="magenta-red" x1="0" x2="0" y1="0" y2="1">
		<stop stop-color="red" />
		<stop offset="1" stop-color="magenta" />
	</linearGradient>
	<linearGradient id="gradient" x1="0" x2="1" y1="0" y2="0">
		<stop stop-color="white" />
		<stop offset="1" stop-color="black" />
	</linearGradient>
	<mask id="gradient-mask">
		<rect width="100%" height="100%" fill="url(#gradient)"/>
	</mask>

	<rect width="100%" height="100%" fill="url(#yellow-white)"/>
	<rect width="100%" height="100%" fill="url(#magenta-red)" mask="url(#gradient-mask)"/>
</svg>

However, I didn’t want to have 6 separate SVG files, especially with this kind of repetition (cross-linking to reuse gradients and masks across different files is still fairly buggy in certain browsers). I wanted to be able to edit this straight from my CSS. And then it hit me: I was using SASS already. I could code SASS functions that generate SVG data URIs!

Here’s the set of SVG generating SASS functions I ended up writing:

@function inline-svg($content, $width: $side, $height: $side) {
	@return url('data:image/svg+xml,<svg xmlns="http://www.w3.org/2000/svg" width="#{$width}" height="#{$height}">#{$content}</svg>');
}

@function svg-rect($fill, $width: '100%', $height: $width, $x: '0', $y: '0') {
	@return unquote('<rect x="#{$x}" y="#{$y}" width="#{$width}" height="#{$height}" fill="#{$fill}" />');
}

@function svg-gradient($id, $color1, $color2, $x1: 0, $x2: 0, $y1: 0, $y2: 1) {
	@return unquote('<linearGradient id="#{$id}" x1="#{$x1}" x2="#{$x2}" y1="#{$y1}" y2="#{$y2}">
		<stop stop-color="#{$color1}" />
		<stop offset="1" stop-color="#{$color2}" />
	</linearGradient>');
}

@function svg-mask($id, $content) {
	@return unquote('<mask id="#{$id}">#{$content}</mask>');
}

And then I was able to generate each RGB plane with another function that made use of them:

@function rgb-plane($c1, $c2, $c3, $c4) {
	@return inline-svg(
		svg-gradient('top', $c1, $c2) +
		svg-gradient('bottom', $c3, $c4) +
		svg-gradient('gradient', white, black, 0, 1, 0, 0) +
		svg-mask('gradient-mask', svg-rect('url(%23gradient)')) +
		svg-rect('url(%23bottom)') +
		svg-rect('url(%23top)" mask="url(%23gradient-mask)')
	);
}

/* … */

.cube {
	background: rgb-plane(blue, black, aqua, lime);

	&::before { background: rgb-plane(blue, fuchsia, aqua, white); }
	&::after { background: rgb-plane(fuchsia, red, blue, black); }
}

.cube .sides {
	background: rgb-plane(yellow, lime, red, black);

	&::before { background: rgb-plane(yellow, white, red, fuchsia); }
	&::after { background: rgb-plane(white, aqua, yellow, lime); }
}

However, the same functions can be used for all sorts of SVG backgrounds and it’s very easy to add a new one. E.g. to make polygons:

@function svg-polygon($fill, $points) {
	@return unquote('<polygon points="#{$points}" fill="#{$fill}" />');
}

@function svg-circle($fill, $r: '50%', $cx: '50%', $cy: '50%') {
	@return unquote('<circle cx="#{$cx}" cy="#{$cy}" r="#{$r}" fill="#{$fill}" />');
}

You can see the whole SCSS file here and its CSS output here.

Warning: Keep in mind that IE9 and some older versions of other browsers have issues with unencoded SVG data URIs. Also, you still need to escape hashes (%23 instead of #), otherwise Firefox fails.


I’m going to MIT!!


Last year, I did something crazy, that I’ve been wanting to do since I was little: I applied to MIT’s PhD program in Electrical Engineering and Computer Science.

One of the letters

It was not only crazy because I have been working for several years already, but also because I only applied to MIT, as I decided I did not want to go to any other university, both for pragmatic and emotional reasons. As any prospective grad student will tell you, applying to only one top university is a recipe for failure. I didn’t tell many people, but everyone who knew thought I’d get in — except me. You see, I wasn’t a typical candidate. Sure, I have done lots of things I’m proud of, but I didn’t have an amazing GPA or publications in prestigious academic conferences.

It felt like a very long shot, so you can imagine my excitement when I received the letters of acceptance, about a week ago. I will remember that moment forever. I was watching Breaking Bad, feeling miserable over a breakup that happened only a few hours earlier. About a minute into the episode (s05e09), I saw an email notification titled “Your application to MIT EECS”. My first thought was that there was some problem with my application. And then I read the first few lines:

Dear Michailia Verou:

If you have not already heard from them, you will shortly receive a letter from the EECS department at MIT, informing you that you have been admitted to the graduate program in Computer Science at MIT next fall. Congratulations!!

WHAAAA? Was it a scam? But then, how did they have all my details? Holy mother of the Flying Spaghetti Monster, I got in!!! Soon thereafter, a letter from CSAIL followed (where I said I wanted to work, specifically in the UID), and then even more letters. I started calling everyone who knew I applied to share the news, though it proved quite hard to form sentences instead of uncontrollably screaming in joy. I was (and am!) so excited about the future, that it completely overshadows any other life problems (at least for the time being).

Of course, my happiness is mixed with sheer terror. I keep worrying that I will be the dumbest person in the room, or that I don’t remember as much from my undergrad studies as the others will. I’m even terrified of meeting my future advisor(s) in case getting to know me better makes them wonder why I was accepted. But I try to remind myself about impostor syndrome, and from what I’ve read in forums & blogs, it seems that I’m not alone in having such fears.

I held off blogging about it until I felt I was able to write something coherent, but I can’t wait to share my excitement any longer.

To the future!

To real life plot twists!

To MIT!

Boy, I’m thrilled. :D