DOM selectivity

Every HTML element on a page becomes a node in the DOM tree, and it is by manipulating this tree that our pages can have dynamic styles. If the tree gets too big, operations in the browser slow down and the user experience suffers. If we misuse the tree, entire parts of it need to be recalculated, further decreasing performance. If we use a very deeply nested tag hierarchy, performance also suffers. And if we attach too many properties to all kinds of elements, the page size grows, resulting in longer download times.

Working with the DOM can be slow, so we need to touch it as rarely as possible, caching elements whenever we can and making sure we don't do the same work twice. Before we can use a DOM node, we need to select it, potentially traversing the whole hierarchy until we find it. Then we can change the style of the selected element, animate it, hide it or remove it from the tree entirely. If we apply many changes sequentially, the browser needs to recalculate the geometric and non-geometric attributes of the page after each one, which is why it is better to batch all changes together and apply them only once.
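As a sketch of the caching idea, here is a minimal selector cache. The `lookup` parameter is my own addition so the caching logic can run outside a browser; in a real page you would pass something like `sel => document.querySelector(sel)`:

```javascript
// Minimal selector cache: each selector is resolved at most once,
// and the found node is reused on every later call.
// `lookup` is the function that actually queries the DOM; making it
// injectable is an assumption of this sketch, not part of any DOM API.
function createSelectorCache(lookup) {
  const cache = new Map();
  return function get(selector) {
    if (!cache.has(selector)) {
      cache.set(selector, lookup(selector));
    }
    return cache.get(selector);
  };
}

// In a browser:
//   const $ = createSelectorCache(sel => document.querySelector(sel));
//   $('#item5');  // hits the DOM once
//   $('#item5');  // returns the cached node, no second traversal
```

The injectable lookup also makes it easy to swap in `getElementById()` or any other selection function without touching the caching logic.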

Because selecting DOM elements is such a common task, we can do it in many ways, and there are even libraries dedicated to it. There are older methods such as getElementById(), getElementsByName() and getElementsByTagName(), and the more recent querySelector() and querySelectorAll(), which are more consistent with CSS selectors. Because of this convenience, people have started to use the new methods everywhere, recommending them and describing them as modern and fast. But some tests have shown the surprising result that they can be slower. So I thought it might be interesting to test the behavior of the most common functions under different circumstances.

I decided to test selector performance while gradually increasing the number of DOM elements from 1 to 10000. To generate them, I used Emmet and expanded the command ul>li#item$*10000, which created that many <li> elements with unique ids. I wanted to have as many ids as possible, to make the functions that pick only a single element work as hard as possible. At the same time, I could use the unique indexes to select the same elements with the functions that return a collection of elements, making them work as hard as possible too.

I created a wrapper for every function and used the Firebug profiler to measure the results. I did multiple measurements, trying to compensate for the increasing variability in the results as the number of DOM nodes grew. This means that the results won't be very accurate, due to rounding and a margin of error in the measurements themselves. Then I took the averages and plotted them on a logarithmic scale (on both the x and y axes) to see the tendencies more clearly. Here is the result:
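A timing wrapper along these lines (a sketch, not the exact code used for the measurements above) could look like this. In a browser, performance.now() would give higher resolution; Date.now() keeps the sketch runnable anywhere:

```javascript
// Runs `fn` repeatedly and returns the total elapsed milliseconds.
// This is a generic sketch of a measurement wrapper; the actual
// measurements above were taken with the Firebug profiler.
function timeIt(fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  return Date.now() - start;
}

// In a page, the wrapped selector calls might look like:
//   timeIt(() => document.getElementById('item42'), 1000);
//   timeIt(() => document.querySelectorAll('li'), 1000);
```

Averaging several such runs, as described above, helps smooth out the variability between individual measurements.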

Execution time for DOM element selection with different JavaScript functions

As we can see, some functions exhibit widely varying performance, while the execution time of others grows much more slowly. It was interesting to see that with a single DOM node querySelectorAll() was the fastest, more than 2 times faster than getElementById() (but I measured only once here!). It was also interesting that querySelector() improved its performance when it had to deal with 10 elements, while querySelectorAll() quickly reached the performance of getElementsByTagName(). With 100 elements querySelectorAll() had almost the same performance as getElementById(), and getElementsByTagName() already became the fastest. The difference between the fastest and slowest function here is (only) 33%. With 1000 elements we already see dramatic changes: querySelector() is the slowest here, and the same difference is already 450%. With 10000 elements querySelectorAll() becomes the slowest: it started first, but finished last. The difference from getElementsByTagName() here was exactly 3600%. Throughout the measurements getElementById() and getElementsByTagName() showed the most consistent results, even if they also started to slow down slightly. Between 100 and 10000 elements the performance of getElementById() drops by around 7% and that of getElementsByTagName() by around 10%, but this can't be seen very well on the logarithmic scale.

Of course, in different situations results will vary, depending on many more factors. So it's always best to test in our own environment and not to follow advice blindly.