Measurements of client-side processing delays

Recently I wrote about “holes in the waterfall”: gaps in the waterfall graph for a web page download where no network activity occurred while the browser processed some JavaScript. To measure how widespread this phenomenon might be, I downloaded the HAR files for the August 15, 2011 run of the HTTP Archive and ran a program that measured the total duration of gaps in each one. To avoid counting intentional gaps where website developers had used timers to refresh the content after the initial page load, I ignored any part of the waterfall that occurred after the onload event.
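The measurement itself is straightforward interval arithmetic: treat each request in the HAR file as an interval on the timeline, merge overlapping intervals, clip everything at the onload event, and whatever time in the window is not covered by any request is "gap" time. A minimal sketch in Python (the original program's implementation isn't shown here; the function name and the Z-suffix timestamp handling are my assumptions, but the field names match the HAR format):

```python
import json
from datetime import datetime


def _parse_ts(ts):
    # HAR timestamps are ISO 8601; normalize a trailing "Z" so that
    # datetime.fromisoformat() accepts it on older Python versions.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def gap_time_ms(har_path):
    """Total gap time (ms) in a HAR waterfall before the onload event.

    Merges each request's [start, start + time] interval, clips the
    intervals to the window between the first request and onload, and
    returns the uncovered time in that window.
    """
    with open(har_path) as f:
        log = json.load(f)["log"]

    page = log["pages"][0]
    page_start = _parse_ts(page["startedDateTime"])
    onload_ms = page["pageTimings"].get("onLoad")
    if onload_ms is None or onload_ms < 0:
        return 0.0  # no onload recorded; nothing to measure

    # Request intervals in ms relative to the page start, clipped
    # to the window [0, onload].
    intervals = []
    for entry in log["entries"]:
        start = (_parse_ts(entry["startedDateTime"])
                 - page_start).total_seconds() * 1000.0
        end = start + entry["time"]
        start, end = max(start, 0.0), min(end, onload_ms)
        if end > start:
            intervals.append((start, end))

    # Merge overlapping intervals and sum the covered time.
    intervals.sort()
    covered = 0.0
    cur_start = cur_end = None
    for s, e in intervals:
        if cur_end is None or s > cur_end:
            if cur_end is not None:
                covered += cur_end - cur_start
            cur_start, cur_end = s, e
        else:
            cur_end = max(cur_end, e)
    if cur_end is not None:
        covered += cur_end - cur_start

    return onload_ms - covered
```

Merging the intervals first matters: requests overlap heavily in a real waterfall, so naively summing per-request durations would undercount the gaps.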

For the 17,001 web pages in the sample data set, the distribution of time spent waiting for the client side looked like this:

Note: I limited the height of the Y-axis to 500 milliseconds to keep the middle of the graph from getting compressed too much. The 99th-percentile time was over 16 seconds. (A logarithmic scale wouldn’t work well for this data set because the first 10% of the Y-values on the left side of the graph were zeros.)

The median time spent waiting for client-side processing was slightly over 50 milliseconds. In an absolute sense, 50 milliseconds is a nontrivial amount of time; it’s on the same order of magnitude as the first-byte response time for a lot of the HTTP requests in this same data set. In relative terms, though, the client-side time was only a single-digit percentage of total page load time (measured from the first HTTP request to the onload event) for almost all of the websites in the sample:

Subjectively speaking, the amount of time spent on the client side is:

  • larger than I expected,
  • still not big enough to be the primary performance bottleneck for most websites,
  • but worth checking when doing website performance optimization work.