The CSS “Named Colors” section of the CSS Color Module Level 4—the latest specification for color values and properties within the Cascading Style Sheets language—defines 141 standard colors. Each has its own name, so beyond the essentials of “black” and “white” are shades like “papaya whip,” a warm orange pastel; “lemon chiffon,” a faint, milky yellow; and “burlywood,” which has likely made an appearance on a safari tour guide’s shorts.
At first glance, these names seem fluffy, and they bear connotations of sugary, whimsical romanticism. Where do such abstract names come from, and why are they a part of something as methodical as writing code?
The answer to these questions begins in 1980s Massachusetts. Originally, the colors were a product of the X Window System (X), a graphical user interface (GUI) released by MIT in 1984. In June of 1986, the first list of GUI colors, which was tuned to the DEC VT240 series terminal, shipped with the third release of X’s tenth version (X10R3). It comprised 69 basic shades, with 138 entries to account for different cases in the color names (e.g., lowercase with spacing like “dark red” versus camel case like “DarkRed”).
The most substantial release, created by Paul Raveling, came in 1989 with X11R4. This update heralded a slew of light neutral tones, and it was a response to complaints from Raveling’s coworkers about color fidelity. (In the ‘80s, colors could vary dramatically from monitor to monitor, depending on the machine vendor. As if to illustrate this, a particularly baffled employee exclaimed, “That’s Wheat???!!!” upon the sight of the hue in the previous text file.) In this version, programmers were introduced to the aforementioned “papaya whip” and “lemon chiffon,” as well as other loftily-named hues like “blanched almond” and “peach puff.”
Raveling drew these names from an unsurprising source: the (now-defunct) paint company Sinclair Paints. It was an arbitrary move; after failing to receive sanctions from the American National Standards Institute (ANSI), which issued standards for Web color properties, Raveling decided to take matters into his own hands. He calibrated the colors for his own HP monitor. “Nuts to ANSI & ‘ANSI standards,’” he complained.
Colors included in John C. Thomas’s RGB file. Thomas added bold shades, adopting names from Crayola.
Later that year, X11 gained a set of bolder colors thanks to another programmer, John C. Thomas. Just as Raveling’s update tweaked shades to assuage user confusion, Thomas’s addressed the following written objection from coworker Bruce Schuchardt in 1989:
“[I] am still shocked and horrified by the default colors in the rgb database. The ‘pink’ color in particular looks like the flesh-tone of someone who has been puking for several hours and would really rather get a bullet in the head than go on living.”
Thomas agreed. Frustrated with inconsistent displays, he started to find it futile to standardize color names. In response, he stated in an e-mail that he “sat down one evening with the handiest standard of subjective color names, a box of 72 Crayola crayons.” That birthed “aquamarine,” “orchid,” and “salmon,” to name a few.
By 2001, the World Wide Web Consortium (W3C) published the first working draft of the CSS 3 Color Module that would include the colors. In light of evolving technologies, the colors had fallen out of use, but the W3C claimed the goal was to “codify current practices.” Every browser supported the colors at this point; consequently, the W3C had been using them in compatibility tests. Incorporating the colors into CSS, then, would prevent sites from breaking.
“It was like a backwards-compatibility thing. They thought, ‘We’ve accidentally been doing this, so we might as well just not break it,’” Sexton told Ars.
Backlash ensued. The color database had been subjected to the whims of so many different programmers that it became deeply disorganized, leading some to argue it had no place in CSS. Critics attacked its naming scheme: “dark gray” was lighter than “gray”; there was a “medium violet red” but no “violet red”; “light goldenrod yellow” had no corresponding “goldenrod yellow.” In total, 17 colors had dark versions, but only 13 had light ones. Color distribution was also uneven, skewing toward reds and greens and away from blues.
Perhaps the most vehement denunciation comes from a 2002 e-mail written by programmer Steven Pemberton: “The X11 colour names are an abomination that should have been stifled at birth, and adding them to CSS is a blemish on the otherwise excellent design of CSS. To say that the X11 colour set and their names have been ‘designed’ is an insult to the word ‘design.’ It is just a mess.”
Another point of contention was cultural exclusion. Some programmers took umbrage at the region-centric nature of names like “dodger blue” and the potential racial undertones of “navajo white” (from Sinclair Paints) and “indian red” (from Crayola, though the crayon has since been renamed in response to the same concerns). Others considered the English-only names alienating.
“I’m not a native English speaker. Imagine my reaction the first time I saw the ‘gainsboro’ color or ‘papaya whip,’” Daniel Glazman, co-chairman of the CSS Working Group, told Ars.
Ostensibly, these repercussions could have been prevented. In the ‘80s, X system programmers had the option to identify colors the way many developers do today: with a hexadecimal value (AKA hex value, e.g., #FFFF00) or an RGB color code (e.g., 255,255,0). These options allow a greater degree of choice and precision, and they’re based on schematic, objective, globally legible systems. Why weren’t they used in the first place?
“It was a recognition that almost nobody likes using the numeric values. People don’t think in terms of F5B as a particular shade of color. Using a name is more natural,” said Jim Fulton, a student at MIT at the time of X’s creation and manager of Raveling’s and Thomas’s files. However, he conceded, “Not every idea works out well.”
In 2014, however, an unexpected event cast the color list in a more favorable light: a new shade. “Rebecca purple” was introduced to honor the life of Rebecca Meyer, the daughter of Eric Meyer, a respected programmer and CSS writer. Rebecca died of brain cancer at the age of six; the hue (#663399) was chosen to reflect her favorite color. (A few developers opposed the addition, maintaining that a set of standards was no place for an emotional tribute. They were dismissed as curmudgeons.)
Still, the general consensus is that these colors’ utility is minimal; they’re best reserved as placeholders (it’s easier to type “tomato” than “#FF6347” when you need a color quickly), for beginner-level design projects, or as the butt of a joke.
“I view it with amusement that the colors seem to have migrated into CSS. I just laugh at it,” Fulton told Ars. “I think if someone were to go and crawl over the top 100 or top 1,000 sites and take a look at how various colors are specified, I’m willing to bet you’d still find close to zero percent using color names beyond ‘white’ or ‘black.’”
“If I’m doing an example to show people how to use an editor or a framework, I use hex values like #C0FFEE or #BADA55. It’s about as useful to use #C0FFEE as it is to use ‘papaya whip,’” Sexton added.
Should the colors have been left without standard names then, sparing the development community from a series of angry e-mails and micro-controversies? According to Fulton, probably. Then again, as programmers navigated the uncharted territory of color-displaying GUIs in the ‘80s, it was only natural to experiment any way they knew how.
“At the time, we were dealing with, in some ways, the very beginnings of the graphical home-computer industry,” he said. “For color devices especially, this was the very beginning.”
Once upon a time, most developers used jQuery to animate things in the browser. Fade this, expand that; simple stuff. As interactive projects got more aggressive and mobile devices burst onto the scene, performance became increasingly important. Flash faded away and talented animators pressed HTML5 to do things it had never done before. They needed better tools for complex sequencing and top-notch performance. jQuery simply wasn’t designed for that. Browsers matured and started offering solutions.
As someone who’s fascinated (bordering on obsessed, actually) with animation and performance, I eagerly jumped on the CSS bandwagon. I didn’t get far, though, before I started uncovering a bunch of major problems that nobody was talking about. I was shocked.
This article is meant to raise awareness about some of the more significant shortcomings of CSS-based animation so that you can avoid the headaches I encountered, and make a more informed decision about when to use JS and when to use CSS for animation.
Lack of independent scale/rotation/position control
In my opinion, this is a glaring weakness in CSS, but if you only do simpler animations that animate the entire transform state at any given time, this won’t be an issue for you.
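CSS has no way to tween, say, rotation and position on separate timelines, because `transform` is a single property; JavaScript engines sidestep this by tracking each component as its own piece of state and composing the transform string on every tick. A minimal sketch of that idea (the function and state shape here are illustrative, not any library's actual API):

```javascript
// Track transform components independently and compose the final
// `transform` string each frame, as JS animation libraries do internally.
function composeTransform({ x = 0, y = 0, rotation = 0, scale = 1 }) {
  return `translate(${x}px, ${y}px) rotate(${rotation}deg) scale(${scale})`;
}

// Two independent "tweens" can update different components of the
// same state object without clobbering each other:
const state = { x: 0, y: 0, rotation: 0, scale: 1 };
state.x = 100;        // a position tween updates only x...
state.rotation = 45;  // ...while a separate rotation tween updates rotation

console.log(composeTransform(state));
// → "translate(100px, 0px) rotate(45deg) scale(1)"
```

Because each "tween" only touches its own component, a click-triggered move never stomps on an in-flight rotation.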
The most frequently cited reason for using CSS for animation is “hardware acceleration”. Sounds yummy, right? Let’s break it down into two parts:
The GPU is highly optimized for tasks like moving pixels around and applying transform matrices and opacity, so modern browsers try to offload those tasks from the CPU to the GPU. The secret is to isolate the animated elements onto their own GPU layers because once a layer is created (as long as its native pixels don’t change), it’s trivial for the GPU to move those pixels around and composite them together. Instead of calculating every single pixel 60 times per second, it can save chunks of pixels (as layers) and just say “move that chunk 10 pixels over and 5 pixels down” (or whatever).
Side note: It’s not wise to give every element its own layer because GPUs have limited video memory. If you run out, things will drastically slow down.
Declaring your animations in CSS allows the browser to determine which elements should get GPU layers, and divvy them up accordingly. Super.
Also note that not all CSS properties get the GPU boost in CSS animations. In fact, most don’t. Transforms (scale, rotation, translation, and skew) and opacity are the primary beneficiaries. So don’t just assume that if you animate with CSS, everything magically gets GPU-juiced. That simply isn’t true.
Offloading calculations to a different thread
The other part of “hardware acceleration” has to do with being able to use a different CPU thread for animation-related calculations. Again, this sounds great in theory but it doesn’t come without costs, and developers often overestimate the benefits.
First of all, only properties that don’t affect document flow can truly be relegated to a different thread. So again, transforms and opacity are the primary beneficiaries. When you spin off other threads there’s overhead involved with managing that process. Since graphics rendering and document layout eat up the most processing resources (by FAR) in most animations (not calculating the intermediate values of property tweens), the benefit of using a separate thread for interpolation is minimal. For example, if 98% of the work during a particular animation is graphics rendering and document layout, and 2% is figuring out the new position/rotation/opacity/whatever values, even if you calculated them 10 times faster, you’d only see about a 1% speed boost overall.
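That 98%/2% arithmetic is just Amdahl's law, and it's easy to check:

```javascript
// Amdahl's law: overall speedup when only a fraction of the total work
// is sped up by a given factor.
function overallSpeedup(fractionImproved, factor) {
  return 1 / ((1 - fractionImproved) + fractionImproved / factor);
}

// If tween calculation is 2% of the work and we make it 10x faster:
const speedup = overallSpeedup(0.02, 10);
console.log(speedup.toFixed(3)); // ≈ 1.018, i.e. under a 2% overall boost
```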
The stress test below creates a certain number of image elements (dots) and animates them from the center to random positions around the edges using random delays, creating a starfield effect. Crank up the number of dots and see how jQuery, GSAP, and Zepto compare. Since Zepto uses CSS transitions for all of its animations, it should perform best, right?
Exactly how much “better”? The original version of the test had a frames-per-second counter for quantifiable results, but it quickly became apparent that there’s no truly accurate way to measure FPS across browsers, especially with CSS animations, and certain browsers were reporting misleading numbers, so I removed it. You can easily gauge relative performance, though, by cranking up the number of dots, switching among engines, and watching how things perform (smooth movement, steady timing and dot dispersion, etc.). After all, the goal is to have animations look good.
Interesting things to note:
A few devices seemed highly optimized for transforms whereas others handled top/left/width/height animations better. Most notably, the older iOS 6 was much better with CSS-animated transforms, but the newer iOS 7 flip-flopped: transforms are now significantly slower there.
Under heavy pressure, CSS transitions were more likely to spray out in bands/rings (this appears to be a synchronization/scheduling issue, possibly due to them being managed in a different thread).
In some browsers (like Chrome), when there were a very high number of dots animating, it completely killed the opacity fade of the text, but only when using CSS animations!
I’d encourage you to do your own testing to see which technology delivers the smoothest animation in your particular project(s). Don’t buy the myth that CSS animations are always faster, and also don’t assume that the speed test above reflects what you’d see in your apps. Test, test, test.
Runtime controls and events
Modern animation is very much tied to interactivity, so it’s incredibly useful to be able to animate from variable starting values to variable ending ones (maybe based on where the user clicks, for example), or to change things on the fly, but declarative CSS-based animation can’t do that.
For simple transitions between two states (i.e. rollovers or expanding menus, etc.), CSS Transitions are great. For sequencing things, however, you generally need to use CSS keyframe animations, which force you to define everything as percentages of the animation’s total duration.
But when you’re animating, don’t you think in terms of time rather than percentages? Like “fade up the opacity for 1 second, then slide to the right for 0.75 seconds, and bounce down to a rest 1 second later”. What happens if you spend hours crafting a complicated sequence in percentages, and then the client says “make that part in the middle 3 seconds longer”? Ouch. You’d need to recalculate ALL of the percentages!
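The pain is easy to make concrete: keyframe offsets are fractions of the total duration, so lengthening any one segment shifts every percentage, not just the ones after it. A quick sketch with made-up timings:

```javascript
// Convert a sequence of segment durations (in seconds) into the
// cumulative keyframe percentages a CSS @keyframes rule would need.
function toKeyframePercents(durations) {
  const total = durations.reduce((a, b) => a + b, 0);
  let elapsed = 0;
  return durations.map((d) => {
    elapsed += d;
    return Math.round((elapsed / total) * 1000) / 10; // one decimal place
  });
}

console.log(toKeyframePercents([1, 0.75, 1]));  // → [36.4, 63.6, 100]
// Make the middle segment 3 seconds longer and EVERY percentage moves:
console.log(toKeyframePercents([1, 3.75, 1]));  // → [17.4, 82.6, 100]
```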
Usually building animations involves a lot of experimentation, especially with timing and eases. This is actually where a seek() method would be quite useful. Imagine building out a 60-second animation piece-by-piece and then finessing the final 5 seconds; you would need to sit through the first 55 seconds every time you want to see the results of your edits to the last parts. Yuck. With a seek() method, you could just drop that into place during production to skip to the part you’re working on, and then remove it when you’re done. Big time-saver.
It is becoming increasingly common to animate canvas-based objects and other 3rd-party library objects, but unfortunately CSS animations can only target DOM elements. That means that if you invest a lot of time and energy in CSS animations, it won’t translate to those other types of projects. You’ll have to switch animation tool sets.
There are a few more workflow-related conveniences that are missing in CSS Animations:
Relative values. Like “animate the rotation 30 degrees more” or “move the element down 100px from where it is when the animation starts”.
Nesting. Imagine being able to create animations that can get nested into another animation which itself can be nested, etc. Imagine controlling that master animation while everything remains perfectly synchronized. This structure would promote modularized code that is much easier to produce and maintain.
Progress reporting. Is a particular animation finished? If not, exactly where is it at in terms of its progress?
Targeted kills. Sometimes it’s incredibly useful to kill all animations that are affecting the “scale” of an element (or whatever properties you want), while allowing the rest to continue.
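To illustrate the “relative values” item above: tween engines accept strings like "+=30", resolved against the current value at the moment the animation starts. A sketch of that resolution logic (not any particular library's implementation):

```javascript
// Resolve a relative value string ("+=30", "-=100") or a plain number
// against the current value, as JS tween engines do at animation start.
function resolveValue(current, value) {
  if (typeof value === "string") {
    if (value.startsWith("+=")) return current + parseFloat(value.slice(2));
    if (value.startsWith("-=")) return current - parseFloat(value.slice(2));
    return parseFloat(value);
  }
  return value;
}

console.log(resolveValue(45, "+=30"));   // → 75  (rotate 30 degrees more)
console.log(resolveValue(200, "-=100")); // → 100 (move 100px back)
```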
You can’t really do any of the following with CSS animations:
Animate along a curve (like a Bezier path).
Use interesting eases like elastic or bounce or a rough ease. There’s a cubic-bezier() option, but it only allows 2 control points, so it’s pretty limited.
Use different eases for different properties in a CSS keyframe animation; eases apply to the whole keyframe.
Physics-based motion. For example, the smooth momentum-based flicking and snap-back implemented in this Draggable demo.
Animate the scroll position.
Directional rotation (like “animate to exactly 270 degrees in the shortest direction, clockwise or counter-clockwise”).
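Directional rotation boils down to normalizing the angular delta; a sketch of the “shortest direction” case:

```javascript
// Given current and target angles (degrees), return the signed delta
// that reaches the target in the shortest direction: positive means
// clockwise, negative means counter-clockwise.
function shortestRotationDelta(from, to) {
  let delta = (to - from) % 360;
  if (delta > 180) delta -= 360;
  if (delta < -180) delta += 360;
  return delta;
}

console.log(shortestRotationDelta(0, 270));  // → -90 (counter-clockwise)
console.log(shortestRotationDelta(350, 10)); // → 20  (crosses 0 clockwise)
```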
CSS-based animation doesn’t work in IE9 and earlier. Most of us hate supporting older browsers (especially IE), but the reality is that some of us have clients who require that support.
Browser prefixes are necessary for many browsers, but you can leverage preprocessing tools to avoid having to manually write them out.
Will the Web Animations spec solve things?
The W3C created a spec called Web Animations that aims to solve a lot of the deficiencies in CSS Animations and CSS Transitions, providing better runtime controls and extra features. It certainly seems like a step forward in many ways, but it still has shortcomings (some of which are probably impossible to overcome, given the need for legacy support of existing CSS specifications; independent transform component control, for example, is unlikely). Some very smart people worked on the spec.
In a recent interview, I was asked about simple and/or replicable steps to take when refactoring CSS. The topics of legacy, Technical Debt, and refactoring are, for one reason or another, at the front of my mind at the moment, so I thought I would write up a short post on an approach to refactoring CSS (or any language) that I call The Three I’s: Identify, Isolate, Implement.
It’s all well and good knowing that ‘your CSS is a mess’, but which bits specifically? Which parts of your codebase are causing you problems right now? Which ones will be the quickest to refactor? Which parts will provide you with the most benefit once refactored? It is important to identify prime candidates for refactoring.
For example, the CSS for your nav might be cumbersome and hard to work with, but if the nav works correctly, is accessible, and you rarely have to edit the CSS for it, refactoring it will likely not provide you with much immediate value: we can probably afford to leave it as it is for a little while longer. However, your layout/grid system might be incredibly fragmented and inconsistent, it might have cross-browser issues, its responsiveness may have been bolted on as an afterthought, and of course it is probably used extensively and frequently. As a result, you might get a lot of value from tackling that first.
Have a clear idea of which parts of your project should be refactored, if at all.
Tip: Try to limit refactoring work to the scope of single features. It’s much safer to refactor a single feature than it is to refactor your entire naming convention, for example. For further detail on this, and much more, you can see my Refactoring CSS Without Losing Your Mind talk at a number of events throughout the rest of 2016.
Once we have identified candidates for refactoring, we need to isolate them before and during working on them. Suppose we have decided to refactor our layout system first: we should rewrite it in isolation, outside of our project.
In order to do this, open a new jsFiddle, CodePen, etc. and begin work there. Do not refactor features back into a stale codebase, as doing so runs the risk of making use of legacy CSS that may itself be refactored in future. For example, we might be using a traditional reset in the current project, which may be replaced by Normalize.css in future—we don’t want to build our brand new layout system on top of a reset that will be removed in six months’ time.
Build the new/refactored version of your feature in complete isolation so that you know it is well encapsulated, and it’s not making use of any legacy.
The final step is to implement your refactored feature. Copy and paste the jsFiddle CSS into the relevant part of your project, and see what happens. 90% of the time, you’ll find that there are some problems: conflicts with existing CSS, naming collisions, existing styles leaking into your new code, etc.
We tackle these details at the implementation stage, and we need to give careful consideration as to where we place the fixes. If the fix solves a problem with the layout system itself, it is usually best to place the fix in the layout system’s partial. If the fix addresses problems arising from conflicts with legacy code, it is often best to place it in a shame.css file. This keeps the legacy fix away from greenfield CSS, meaning it will be much easier to remove once we’ve refactored whatever legacy was causing the problem in the first place.
Move the refactored feature back into the project and tidy things up here.
Identify sensible candidates for refactoring: not all legacy is born equal. Isolate the feature in order to rebuild it: do not lean on out of date code. Implement the refactored feature into the project: do any cleanup work at this point, and in the right place(s).
Say you have a time you’d like to display on your site. You’ve entered it in your time zone. You can always be specific and say something like: 3:00 PM Eastern Standard Time. Then leave it up to whoever is reading to convert it to their local time. Every Time Zone is a great site for that.
I’m sure there are a number of ways to do this. I had to do it recently on a new project, and I followed Dave Rupert’s lead, who implemented it on ShopTalk’s website to display the localized time for our next live show.
Moment.js and Moment Timezone are a pair that can get this done. And if you need to do any other heavier date and time manipulation or relative calculations (e.g. “32 minutes ago”, “12 de Agosto de 2015”), you’ll be in good hands with these libraries.
We’ll also need to detect the time zone itself, and thankfully there is a library for that.
So we’ll be using Moment.js, Moment Timezone, and a time zone detection library.
Package or link those up however you will.
Step 1: Get the Timezone
Might as well stash it in localStorage after you’ve gotten it. That requires less effort on the browser’s part to retrieve next time.
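The snippet below is a hedged sketch of this step: instead of the detection library, it uses the Intl API (available in modern browsers and Node) to read the IANA zone name, falling back to UTC if that fails and stashing the result in localStorage when one exists:

```javascript
// Detect the user's IANA time zone name (e.g. "America/New_York").
// Modern runtimes expose it via Intl; fall back to "UTC" if unavailable.
// Stash it in localStorage so later visits skip the lookup (the guard
// keeps this runnable outside the browser, where localStorage is absent).
function getTimezone() {
  let tz;
  try {
    tz = Intl.DateTimeFormat().resolvedOptions().timeZone;
  } catch (e) {
    tz = null;
  }
  if (!tz) tz = "UTC";
  if (typeof localStorage !== "undefined") {
    localStorage.setItem("tz", tz);
  }
  return tz;
}

console.log(getTimezone()); // e.g. "America/New_York"
```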
If we’re only interested in the time, we can create the date portion of that string with Moment itself, using today’s date (it just makes the formatting way easier). The date actually kinda matters since UGHGKGJHGH daylight saving.
var date = moment().format("YYYY-MM-DD");
Then we’ll create the final string, assuming we have a variable with the time we want:
var stamp = date + "T" + theTime + "Z";
And turn it into a Moment object:
var momentTime = moment(stamp);
Step 3: Localize the time
Now we can adjust/localize that time with Moment Timezone:
var tzTime = momentTime.tz(currTz);
And properly format it for display:
var formattedTime = tzTime.format('h:mm A');
Step 4: Use it
Now we can drop it on the page anywhere we want. If you output the time zone as well, you could even replace the time currently output on the page, since even if detecting the time zone fails it will fall back to UTC and output that as the time zone.
output.textContent = "Time in " + currTz + ": " + formattedTime;
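As an aside, in environments whose Intl API ships full time zone data, the conversion-and-format steps can be approximated without Moment at all; a sketch (the date below is arbitrary, chosen only to make the example deterministic):

```javascript
// Format a UTC instant in a given IANA time zone, roughly matching
// Moment's 'h:mm A' output. Requires an Intl build with time zone data.
function formatInZone(isoUtc, zone) {
  return new Intl.DateTimeFormat("en-US", {
    timeZone: zone,
    hour: "numeric",
    minute: "2-digit",
    hour12: true,
  }).format(new Date(isoUtc));
}

// 15:00 UTC on an August date is 11:00 AM in New York (EDT, UTC-4):
console.log(formatInZone("2015-08-12T15:00:00Z", "America/New_York"));
// e.g. "11:00 AM"
```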
Sometimes it makes sense (perhaps from a CMS perspective) to ask people to input time in a local time zone. But in order to do the conversion correctly, it makes more sense to have that time in UTC.
Moment itself can help with adjusting times; for example, its add() and subtract() methods can shift a parsed time by a given number of hours or minutes before you format it.
How the Web works provides a simplified view of what happens when you view a webpage in a web browser on your computer or phone.
This theory is not essential to writing web code in the short term, but before long you’ll really start to benefit from understanding what’s happening in the background.
Clients and servers
Computers connected to the Web are called clients and servers. Here’s how they interact:
Clients are the typical Web user’s Internet-connected devices (for example, your computer connected to your Wi-Fi, or your phone connected to your mobile network) and Web-accessing software available on those devices (usually a web browser like Firefox or Chrome).
Servers are computers that store webpages, sites, or apps. When a client device wants to access a webpage, a copy of the webpage is downloaded from the server onto the client machine to be displayed in the user’s web browser.
The other parts of the toolbox
The client and server we’ve described above don’t tell the whole story. There are many other parts involved, and we’ll describe them below.
For now, let’s imagine that the Web is a road. On one end of the road is the client, which is like your house. On the other end of the road is the server, which is a shop you want to buy something from.
In addition to the client and the server, we also need to say hello to:
Your Internet connection: Allows you to send and receive data on the Web. It’s basically like the street between your house and the shop.
TCP/IP: Transmission Control Protocol and Internet Protocol are communication protocols that define how data should travel across the Web. This is like the transport mechanisms that let you place an order, go to the shop, and buy your goods. In our example, this is like a car or a bike (or your own two feet).
DNS: Domain Name System Servers are like an address book for websites. When you type a web address in your browser, the browser looks at the DNS before retrieving the website. The browser needs to find out which server the website lives on, so it can send HTTP messages to the right place (see below). This is like looking up the address of the shop so you can access it.
HTTP: Hypertext Transfer Protocol is an application protocol that defines a language for clients and servers to speak to each other. This is like the language you use to order your goods.
Component files: A website is made up of many different files, which are like the different parts of the goods you buy from the shop. These files come in two main types:
Code files: websites are built primarily from HTML, CSS, and JavaScript, the languages that describe a page’s structure, style, and behavior.
Assets: This is a collective name for all the other stuff that makes up a website, such as images, music, video, Word documents, and PDFs.
So what happens, exactly?
When you type a web address into your browser (for our analogy that’s like walking to the shop):
The browser goes to the DNS server and finds the real address of the server that the website lives on (you find the address of the shop).
The browser sends an HTTP request message to the server asking it to send a copy of the website to the client (you go to the shop and order your goods). This message, and all other data sent between the client and the server, is sent across your internet connection using TCP/IP.
Provided the server approves the client’s request, the server sends the client a “200 OK” message, which means “Of course you can look at that website! Here it is”, and then starts sending the website’s files to the browser as a series of small chunks called data packets (the shop gives you your goods, and you bring them back to your house).
The browser assembles the small chunks into a complete website and displays it to you (the goods arrive at your door — new stuff, awesome!).
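The “200 OK” in step 3 is the status line of the HTTP response message; a toy sketch of what such a message looks like and how the status gets read out of it:

```javascript
// A (simplified) HTTP response starts with a status line, then headers,
// a blank line, and the body. Parse the status code out of one:
const rawResponse =
  "HTTP/1.1 200 OK\r\n" +
  "Content-Type: text/html\r\n" +
  "\r\n" +
  "<!doctype html><html>...</html>";

function parseStatus(response) {
  const statusLine = response.split("\r\n")[0]; // "HTTP/1.1 200 OK"
  const [version, code, ...reason] = statusLine.split(" ");
  return { version, code: Number(code), reason: reason.join(" ") };
}

console.log(parseStatus(rawResponse));
// → { version: 'HTTP/1.1', code: 200, reason: 'OK' }
```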
Real web addresses aren’t the nice, memorable strings you type into your address bar to find your favorite websites. They are strings of numbers, like this: 22.214.171.124.
This is called an IP address, and it represents a unique location on the Web. However, it’s not very easy to remember, is it? That’s why Domain Name Servers were invented. These are special servers that match up a web address you type into your browser (like “mozilla.org”) to the website’s real (IP) address.
Websites can be reached directly via their IP addresses. Try going to the Mozilla website by typing 126.96.36.199 into the address bar on a new browser tab.
Earlier we used the term “packets” to describe the format in which the data is sent from server to client. What do we mean here? Basically, when data is sent across the Web, it is sent as thousands of small chunks, so that many different web users can download the same website at the same time. If websites were sent as single big chunks, only one user could download one at a time, which obviously would make the Web very inefficient and not much fun to use.
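The splitting itself can be sketched in a few lines (real packets also carry headers, sequence numbers, and checksums; this only shows the chunk-and-reassemble idea):

```javascript
// Split data into fixed-size chunks ("packets") and reassemble them:
// a toy illustration of why many transfers can interleave on one link.
function toPackets(data, size) {
  const packets = [];
  for (let i = 0; i < data.length; i += size) {
    packets.push(data.slice(i, i + size));
  }
  return packets;
}

const page = "<html><body>Hello, Web!</body></html>";
const packets = toPackets(page, 8);
console.log(packets.length);            // → 5
console.log(packets.join("") === page); // → true
```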