Animation Efficiencies

By Dan Yoachim  ·   February 7, 2015  ·  6 minute read

Topics: Apprenticeship

Slow and stuttering animations can be a death knell for a website. Here's how to do it right.

Dan Yoachim is Neoteric Design’s spring 2015 apprentice, researching topics on interactive documentaries, javascript driven user interfaces, and front end development. 

Animation for the web has been around for a long time. In the early days, motion was handled through the gif format. Introduced in 1987, gifs allowed limited looped animations on an otherwise mostly static web.

In the late 1990s Macromedia Flash (later Adobe Flash) entered the picture, which allowed high quality audio and video to be transferred over the very limited networks of the time. Motion in logos or text became somewhat common, and cartoons done in Flash started to become popular.

Javascript came onto the scene around this time, which allowed for movement of DOM elements themselves. In the early days browser speeds and efficiency were generally considered too slow for this sort of animation, but as performance improved use grew. Now these types of animations are common, and multiple javascript libraries exist to animate all sorts of things.

These worked for a while, but the early tools were fairly limited. The slow connection speeds of the time meant long load times, and limited browser power often resulted in jerky animations. As connection speeds increased, so too did the amount of content, offsetting the old gains. High levels of media, animation, and interaction are a trend that isn’t going to stop anytime soon, so more efficient methods and tools needed to be developed.

CSS animations have been a response to this issue. These specifications appeared toward the end of the 2000s and are proposed for integration into the CSS3 standards. These animations have several advantages over other methods: they can be written directly into stylesheets with no extra files, and they can benefit from hardware acceleration. They also still function should a visitor to your site have javascript disabled. There are still some disadvantages, such as browser compatibility limits and vendor-specific prefixes, but the specifications are still evolving and many of these issues will be addressed in the next few years.
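As a sketch of what such a declaration looks like in practice (the selector, animation name, and timing values here are illustrative, and the -webkit- prefix stands in for the vendor-specific prefixes just mentioned):

```css
/* Prefixed and unprefixed copies of the same keyframes, a common
   pattern while the specifications are still settling */
@-webkit-keyframes slide-in {
  from { -webkit-transform: translateX(-100%); }
  to   { -webkit-transform: translateX(0); }
}
@keyframes slide-in {
  from { transform: translateX(-100%); }
  to   { transform: translateX(0); }
}

.banner {
  -webkit-animation: slide-in 0.5s ease-out;
  animation: slide-in 0.5s ease-out;
}
```

Everything lives in the stylesheet itself: no script files, and the browser is free to hardware-accelerate the transform.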

With any animation, smooth, fluid motion is a high priority. Unlike a recorded video, however, animation in a browser depends greatly on how well that browser handles it, as well as on the efficiency of the animation implementation itself.

Hardware Acceleration

Hardware acceleration is the first thing to be aware of. Under certain conditions the browser will offload some processing to the GPU. This allows for faster processing than the general-purpose CPU, busy handling other browser tasks, could provide. There are five properties that trigger hardware acceleration: the transform functions rotate, scale, skew, and translate, as well as the opacity property.

These five properties allow most common web animations to be more efficient than other methods of accomplishing the same goal. For example, animating an element from left: 0px to left: 1000px accomplishes the same outcome as using transform: translateX(1000px), but only the latter is accelerated. Additionally, changing the left property (as well as many others) requires the browser to recalculate the layout, repaint the affected layers, and re-composite the page. The hardware-accelerated properties avoid this inefficiency by only altering the compositing step, which is much easier for the browser.

/* less efficient */
.myElement:hover { left: 1000px; }

/* more efficient */
.myElement:hover { transform: translateX(1000px); }

Layout Thrashing

Sometimes triggering layout can’t be avoided. With the power of modern browsers this isn’t a huge issue most of the time; after all, all web animation used to work that way. Javascript is often paired with CSS in these cases, and can cause layout thrashing if we’re not careful.

When javascript reads a property of the DOM, the browser will look to the layout tree, but must first determine whether it is valid. If the layout is found to be invalid, the browser will recalculate it before reading the property. Layouts are invalidated by altering any property outside of the five listed above.

This means that alternating between reading and writing elements is significantly more taxing than doing all the reads first, followed by all the writes. This is an extremely easy trap to fall into: grouping a getter with its setter is common in coding, and unless you’re watching the Timeline closely there are no warnings. Further, thrashing can be triggered indirectly through nested functions, making it difficult to spot.

//less efficient: reads and writes interleaved
function resizeElement(element) {
  var elemWidth = element.clientWidth;                 //read
  element.style.width = elemWidth * 2 + 'px';          //write

  var elemHeight = element.clientHeight;               //read
  element.style.height = elemHeight * 2 + 'px';        //write

  element.style.left = otherElement.getLeft() + 'px';  //read (within function) and write
  element.getAndSetMargins();                          //read and write (within function)
}

//more efficient: all reads first, then all writes
function resizeElement(element) {
  var elemWidth = element.clientWidth;            //read
  var elemHeight = element.clientHeight;          //read
  var newLeft = otherElement.getLeft();           //read (within function)

  element.getAndSetMargins();                     //read and write (within function)
  element.style.width = elemWidth * 2 + 'px';     //write
  element.style.height = elemHeight * 2 + 'px';   //write
  element.style.left = newLeft + 'px';            //write
}


Since repainting is very taxing on the browser, we should try to limit the number of times it’s triggered. While fixing the thrashing example above greatly helped our repaint rate, it’s still causing paints to happen individually. But what if we could collect the cases where a repaint is needed and do them all at once? That’s what requestAnimationFrame is for. By passing in a callback function, the browser can queue changes to be completed together during the next scheduled paint. This reduces a lot of redundancy and helps protect the animation against stutters and chop. Plus, requestAnimationFrame has the added benefit of halting our animations in background tabs, since no frames are being drawn there. This can greatly improve performance as it frees up processing power.

function resizeElement(element) {
  var elemWidth = element.clientWidth;    //read
  var elemHeight = element.clientHeight;  //read
  var newLeft = otherElement.getLeft();   //read (within function)

  window.requestAnimationFrame(function () { //complete writes at next animation frame
    element.style.width = elemWidth * 2 + 'px';    //write
    element.style.height = elemHeight * 2 + 'px';  //write
    element.style.left = newLeft + 'px';           //write
  });
}

requestAnimationFrame can even be called recursively, achieving the effect of setInterval. This method has an opposite in cancelAnimationFrame, which allows a request to be removed from the queue similar to unbinding a listener.
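A minimal sketch of that recursive pattern. The names here (`raf`, `tick`, `frameCount`) are illustrative, the cap of five frames is arbitrary, and the synchronous fallback exists only so the snippet also runs outside a browser; in a real page the browser invokes the callback before each repaint.

```javascript
// Use the real requestAnimationFrame when available; otherwise fall back to
// a synchronous stand-in so the sketch can run outside a browser.
var raf = (typeof window !== 'undefined' && window.requestAnimationFrame)
  ? window.requestAnimationFrame.bind(window)
  : function (cb) { cb(); return 0; };

var frameCount = 0;

function tick() {
  frameCount++;        // ...move elements, update styles, etc. here...
  if (frameCount < 5) {
    raf(tick);         // schedule the next frame: a setInterval-like loop
  }
}

raf(tick); // start the loop

// In a browser, a pending request can be removed with
// window.cancelAnimationFrame(id), much like clearing a timer.
```

Because each frame schedules the next, the loop naturally stops producing frames in a background tab, where the browser stops drawing.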

Libraries and Frameworks

Libraries and frameworks allow developers to use advanced features through prewritten code. When it comes to animations there are many of these, but some of the big ones are jQuery, three.js, and D3.js. Though they all have different features, these libraries make it possible to quickly build cross-browser animations or create complicated motion without reinventing the wheel.

Sometimes it’s worth asking whether a framework is required, though. Traversing DOM elements and updating properties is virtually required in javascript animations, so it’s tempting to use jQuery right off the bat. In most cases this is fine, but it’s worth noting the inefficiency behind the scenes. For instance, getting an element by ID in native javascript is roughly 22 times more efficient than the same process through jQuery, due to the jQuery object that needs to be created. This object’s structure also means layout thrashing can be difficult or impossible to avoid. With modern hardware only very intense animations will see an impact, but it’s worth mentioning for those times you do have to eke out performance.

“Seems faster” is faster

While optimizations and efficiencies are important, sometimes being faster on paper isn’t the best decision. As a rule of thumb, whichever action is perceived as faster will be the better option, even if it’s technically slower. Techniques like lazy loading of images and loading scripts at the end of the body tag can help speed up this perception. When working on animations, there may be ways of dividing up processing so that the page feels more responsive to the user.
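The lazy-loading idea can be sketched in a few lines. The `data-src` attribute convention and the `lazyLoad` name are assumptions for illustration, not something prescribed above: images start with no real URL, and the real ones are swapped in only after the page’s load event, so the downloads never delay the initial render.

```javascript
// Assumed markup: <img data-src="real.jpg"> for each deferred image.
function lazyLoad(images) {
  for (var i = 0; i < images.length; i++) {
    images[i].src = images[i].getAttribute('data-src');
  }
}

// Hook it up in a browser; the guard keeps the sketch inert elsewhere.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  window.addEventListener('load', function () {
    lazyLoad(document.querySelectorAll('img[data-src]'));
  });
}
```

The page appears complete almost immediately, and the heavier image traffic happens once the visitor is already looking at content.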