What does it mean for a browser to be fast?
What does it mean for a browser to be fast? It turns out to be surprisingly hard to pin down. I was asked about this and tried to give a lightning talk at the recent Ubuntu Developer Summit on how we think about application speed in general, but I don't think I made my points well, so I thought I'd expand a bit here on what you ought to consider when comparing browsers.
Benchmarks
Many start with benchmarks. Benchmarks are loved by the tech press because they produce a score, and you can make nice charts of relative scores. But benchmarks by their nature measure only a very specific thing and can only attempt to simulate what users will experience. For browsers, the majority of benchmarks are JavaScript benchmarks; while nobody disputes JavaScript's importance, it is not the dominant speed factor for simple web pages, and most web pages are simple. I think the importance of recent work on improving JavaScript engines has more to do with the sorts of sites we will create in the future, like this JavaScript NES emulator, though certainly there are plenty of existing sites like Gmail that benefit hugely from the current generation of fast JS engines.
The result of fixating on JavaScript benchmarks is frustrating comments like "Mozilla via Wine is faster than Linux-compiled Mozilla, therefore Mozilla doesn't care about Linux". This misinformed legend came from JavaScript benchmarks. But a browser's JavaScript implementation is nearly identical code across platforms! I imagine the speed difference stemmed from differences in compiler quality, so the gap Mozilla saw in benchmarks ought to apply equally to any other cross-platform browser. I find that comment frustrating on multiple levels: first, the conclusion doesn't follow from the premise; second, the premise matters less than it seems, because JS benchmarks don't matter as much as they're made out to; and on top of that, these benchmarks aren't even measuring platform-specific code, so the conclusion wouldn't follow even if the premise were true.
Newer benchmarks are coming out that attempt to cover more than just JavaScript. A good example is Dromaeo, part of which benchmarks the DOM. But on that note, be wary of third-party benchmarks! John, the author of Dromaeo, knows more about web development than most browser developers do, so I am much less skeptical of his test than I am of others. It is very easy to write a performance test that looks good but doesn't measure anything useful; see e.g. the SunSpider 0.9.1 announcement for a discussion of how a bug in the test framework interacted with power management, and that flaw existed in a test written by experienced browser developers, not random web enthusiasts.
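To make the "easy to get wrong" point concrete, here is a minimal sketch (my own illustration, not SunSpider's actual harness) of the kind of care a timing harness needs: warm-up iterations and a median over several runs, since a single measurement is easily skewed by caches, background work, or power management changing the CPU's clock speed.

```python
# Illustrative micro-benchmark harness (not SunSpider): a single timed run is
# easily thrown off by warm-up effects or the CPU changing clock speed under
# power management, so take several runs and report the median.
import statistics
import time

def benchmark(fn, warmup=3, runs=10):
    for _ in range(warmup):
        fn()  # warm caches and other first-run effects before measuring
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

print(benchmark(lambda: sum(range(10**5))))
```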
Cyclers
A perhaps better measure is the end-to-end performance of loading a real web page; that exercises the JavaScript engine as well as the rest of the web stack: parsing HTML, measuring fonts, and so on. We and Mozilla (and, I assume, the other browser vendors) have test suites that run a set of on-disk pages through the browser. It would be natural for third parties to use these tests to compare browser rendering speeds, except that the pages you'd like to test are real content like Yahoo's homepage, which is copyrighted and therefore not republishable. (I know our set of pages is in a private repo; I glanced through Mozilla's MXR and only found what looks like a placeholder.)
To make these tests repeatable, they load pages off the disk rather than fetching today's version of the web pages they test. (Same as the speed ads; look at the description there for technical notes.) None of the benchmarks discussed so far include network speed, and that's a shame: there are likely a ton of interesting things to be found in that area, like how different browsers use different per-host connection limits (a tradeoff: more requests in parallel, but you have to pay TCP slow start multiple times) or how Chrome will pre-fetch DNS for sites you typically visit on startup (this behavior in fact probably completely dominates any web-rendering or JS difference versus other browsers; visit about:dns in Chrome for more info).
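To make the DNS pre-fetching idea concrete, here is a rough sketch in Python (purely illustrative; Chrome's implementation is of course different and lives in C++): resolve the hostnames you visit most often in background threads at startup, so a later real connection can skip the lookup wherever a resolver cache is available.

```python
# Illustrative sketch of startup DNS pre-fetching; not Chrome's actual code.
# FREQUENT_HOSTS is hypothetical data that would come from browsing history.
import socket
import threading

FREQUENT_HOSTS = ["www.example.com", "mail.example.com", "news.example.com"]

def prefetch(host):
    try:
        # Resolving now warms whatever DNS cache the system or network has,
        # so a later real connection to this host can skip the lookup.
        socket.getaddrinfo(host, 80)
    except socket.gaierror:
        pass  # an unresolvable host just means no benefit, not a failure

def prefetch_dns_on_startup(hosts=FREQUENT_HOSTS):
    for host in hosts:
        # Fire-and-forget: startup must not block on network latency.
        threading.Thread(target=prefetch, args=(host,), daemon=True).start()

if __name__ == "__main__":
    prefetch_dns_on_startup()
```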
Even with network speed included, there are other parts of a browser that affect performance, like the networking stack and the cache. I remember earlier in Chrome's development Mike discovered a network-level bug (my memory is vague, but it was something buffering improperly, probably Nagle) that was causing us to fetch pages later than IE. The tests described above wouldn't have revealed the improvement his fix produced. Depending on how you decide when a page is done loading, they may not even cover the time spent putting the pixels up on the screen. And loading Gmail is a crazy multi-second process involving multiple redirects and progress bars on top of the expected JS and rendering work; I don't think anyone's tests cover Gmail load time yet.
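Since I'm guessing the bug was Nagle-related, here is the general shape of the fix for that class of problem (a generic sketch, not the actual Chrome change): disable Nagle's algorithm on the socket so small writes like an HTTP request go out immediately instead of waiting to be coalesced.

```python
# Generic illustration: disabling Nagle's algorithm with TCP_NODELAY.
import socket

def connect_no_nagle(host, port=80):
    sock = socket.create_connection((host, port))
    # With Nagle enabled (the default), the kernel may hold back small
    # segments until earlier data is ACKed; TCP_NODELAY sends them right away.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```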
Stopwatch
I think that sort of observation, that all tests are by their nature synthetic and don't cover real browsing, is where Microsoft's browser benchmarks of last year came from, in which they claimed IE 8 was the fastest browser available. Unfortunately, a benchmark that amounts to "we paid some people to look really hard at it and they concluded we were the fastest" doesn't convince a lot of people even if your intentions are pure. Though articles like this one (http://arstechnica.com/microsoft/news/2009/03/microsofts-own-speed-tests-show-ie-beating-chrome-firefox.ars) say it didn't pass the smell test, conceptually I think this sort of approach better captures what performance benchmarks are trying to measure. It is too bad this kind of test can't be reproduced reliably, or I'm sure browser developers would all be optimizing for it.
Perception and Jank
But there are still other factors that make people call a browser fast. Continuing our journey from measurable hard numbers to fuzzier stopwatch tests, I assert that what matters more than your measured performance is what the user perceives as your speed, and in that respect here are a few more interesting areas to consider that are much farther away from web pages.
One is UI latency (what we call "jank"). Does the browser respond quickly when you type in the URL bar? When you open a new tab? Peter gave a talk on this which I haven't watched, but it surely goes into a lot of detail. This was the area I had most hoped to impress on the Ubuntu developers as important to consider in the software they develop: little hiccups make an application feel slow even if it can render a thousand pages a second. (For example, I think the package updating tools in Ubuntu are particularly bad in this area.) I think this is the area where we outperform Mozilla the most, and why we've become increasingly popular on Linux, though it's difficult to quantify.
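The general recipe for avoiding that kind of hiccup is to keep slow work off the UI thread. A generic sketch of the pattern (hypothetical helper names, not Chrome code):

```python
# Generic jank-avoidance pattern: never do disk or network work on the UI
# thread; hand it to a worker and deliver the result via a callback.
import threading

def load_favicon_from_disk(url):
    # Hypothetical slow disk read; stands in for any blocking work.
    return b"\x89PNG..."

def on_new_tab(url, show_favicon):
    # Calling load_favicon_from_disk() directly here would freeze the UI for
    # as long as the disk takes; do it on a worker thread instead.
    def worker():
        icon = load_favicon_from_disk(url)
        # A real UI toolkit would require posting this back to the UI thread.
        show_favicon(icon)
    threading.Thread(target=worker, daemon=True).start()
```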
One good example of a jank-reducing tactic is how inline autocomplete works in Chrome. When you type a URL, we attempt to autocomplete it from your browsing history as well as show a drop-down of other things you may be looking for. To make what happens when you press enter predictable, we autocomplete synchronously: there should never be a case where waiting some amount of time before pressing enter produces a different result from pressing enter immediately. But this means we can't autocomplete from data that lives on disk, because waiting for the disk would make the autocomplete laggy. The fix is to preload the entire completion set (it's small compared to your browsing history) into memory on startup. (But not exactly during startup; there's actually a tiny window right after startup where typing in the URL bar doesn't autocomplete.)
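A minimal sketch of that idea (illustrative only, not Chrome's code): keep the small completion set in memory and sorted, so lookups are synchronous and never wait on the disk.

```python
# Illustrative sketch of synchronous inline autocomplete from an in-memory,
# preloaded completion set; lookups never touch the disk, so they can't lag.
import bisect

class InlineAutocompleter:
    def __init__(self, urls):
        # Loaded once at startup; small compared to full browsing history.
        self.urls = sorted(urls)

    def complete(self, prefix):
        """Return the first URL starting with prefix, or None."""
        i = bisect.bisect_left(self.urls, prefix)
        if i < len(self.urls) and self.urls[i].startswith(prefix):
            return self.urls[i]
        return None

completer = InlineAutocompleter(
    ["www.google.com", "news.ycombinator.com", "en.wikipedia.org"])
print(completer.complete("news"))  # -> "news.ycombinator.com"
```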
Startup
We measure and optimize another performance stat that is almost entirely unrelated to the above categories: startup time. In my talk I picked on GNOME's calculator (in response, they've already fixed it!), but there are plenty of other similar demos, like how I just clicked the Ubuntu menu on my laptop and counted to five before the menu came up. I've written posts before with more technical details about the startup work we've done, both benchmarking and fiddling with low-level system bits, but perhaps it's useful to step back and consider why startup time matters.
I was skeptical at first, but now I strongly believe that the startup time of an app sets the expectation for the rest of the app. It's surely something of a placebo effect, but taking one step further along the unmeasurability spectrum, I think what matters even more than the speed your user experiences is the speed your user thinks they're experiencing. When you start up as quickly as a light-weight app, people feel they're using a light-weight app even when that isn't the case (in reality, any browser that can render the web is kind of enormous, us included). For example, despite switching editors a year ago, I still find myself instinctively using vi occasionally just because emacs takes so long to start, and I don't even realize I'm subconsciously avoiding emacs until I type the wrong commands into vi and notice I've forgotten how vi works.
For some discussion of app startup, this Mozilla engineer's blog is definitely worth reading; I intend to steal his good ideas at some point. But more fundamentally, fast startup comes from doing less work at startup, which means careful engineering across all the code.
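A common shape of "doing less work at startup" is lazy initialization: defer anything expensive until it is first needed. A generic sketch (hypothetical names, nothing specific to Chrome or Mozilla):

```python
# Generic illustration of moving work off the startup path: build the
# expensive index the first time it's needed, not while the app launches.
class HistoryIndex:
    def __init__(self, history_path):
        self.history_path = history_path
        self._index = None  # deliberately not built at startup

    def _ensure_index(self):
        if self._index is None:
            # Pretend this reads a large file and builds a lookup structure;
            # doing it lazily keeps that cost out of startup time.
            with open(self.history_path) as f:
                self._index = {line.strip(): n for n, line in enumerate(f)}

    def lookup(self, url):
        self._ensure_index()
        return self._index.get(url)
```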
Conclusion
One rule I've learned from working on Chrome (it's been three years now, whoa) is that if you don't measure the performance of something, that performance will regress. It's just a natural consequence of how software development is done, where more time is spent adding things than removing things. To combat this we use buildbots running performance tests to generate charts (warning: enormous, browser-killing page), and our bots go red if performance regresses on those tests. (They frequently do, and then we fix the code.)
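The mechanics of "the bot goes red" can be as simple as comparing a measured number against a stored baseline with a tolerance. A toy sketch (not our actual buildbot setup; the file name, workload, and threshold are made up):

```python
# Toy sketch of failing a build on a performance regression: measure,
# compare against a recorded baseline, and exit non-zero (red) if slower.
import json
import sys
import time

TOLERANCE = 1.05  # anything more than 5% slower than baseline is a regression

def measure_once():
    # Placeholder workload standing in for a real page-load measurement.
    start = time.perf_counter()
    sum(range(10**6))
    return time.perf_counter() - start

def check(baseline_file="perf_baseline.json"):
    elapsed = measure_once()
    with open(baseline_file) as f:
        baseline = json.load(f)["page_load_seconds"]
    if elapsed > baseline * TOLERANCE:
        print(f"REGRESSION: {elapsed:.3f}s vs baseline {baseline:.3f}s")
        sys.exit(1)  # non-zero exit is what turns the bot red
    print(f"OK: {elapsed:.3f}s (baseline {baseline:.3f}s)")

if __name__ == "__main__":
    check()
```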
If I could have you remember one thing from this post, it is this: benchmarks are useful to the extent you understand the technical details behind them. If you are not a browser developer, it is my professional opinion* that the best way to evaluate which browser is faster is to just try them out yourself on pages you care about. For example, Opera users claim its many features make them able to browse faster, and if that is true for them then I hope they enjoy Opera. And whatever you do, don't repeat that Mozilla JS stat anymore; it really bugs me. :)
* Professional to the extent that it is my opinion as a professional, not that this is in any way a statement on behalf of my employer.