The response time fallacy: It's how many, not how fast. Duh.

So, Marissa Mayer ran an experiment where Google increased the number of search results per page from ten to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%.

Ouch. Why? Why, when users had asked for this, did they seem to hate it?

After a bit of looking, Marissa explained that they found an uncontrolled variable. The page with 10 results took 0.4 seconds to generate. The page with 30 results took 0.9 seconds.

Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.
--Why Front End Performance Matters to Everyone, Not Just the High Traffic Giants via drunkenfist.com
   
This is a classic case of correlation being mistaken for causation. How about 30 results overwhelming the user? Too much scrolling = forget it, I don't care anymore. Oh God, the Internet is so big, I'll never find what I'm looking for. That is what users think when you present them with three times the number of search results they're used to. That's the real explanation; the response-time explanation is superficial at best.

Response time is very important. Of course it is. But superior user experience design will trump negligible performance optimizations every day of the week.

6 responses
Actually, Amazon did some double-blind tests with response time six or seven years ago. When they added an artificial 0.2-0.9 second delay, revenue from visitors decreased linearly in proportion to the delay. I'll try to find the paper they published about it.
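A test like that is easy to rig up, by the way. Roughly something like the following sketch (not Amazon's actual setup; the delay arms, hashing scheme, and Express plumbing are all my own assumptions for illustration):

```typescript
import express from "express";

// Sketch of an artificial-delay experiment like the one described above.
// Users are hashed into fixed delay "arms" so each user always sees the
// same added latency, letting revenue be plotted against delay later.
// Arm values and the hash are illustrative assumptions.
const DELAY_ARMS_MS = [0, 200, 500, 900]; // control plus three treatments

function delayFor(userId: string): number {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return DELAY_ARMS_MS[hash % DELAY_ARMS_MS.length];
}

const app = express();

app.use((req, _res, next) => {
  const userId = String(req.query.uid ?? "anonymous");
  const delayMs = delayFor(userId);
  // Log the arm so revenue can later be correlated with added delay.
  console.log(`uid=${userId} delay=${delayMs}ms path=${req.path}`);
  if (delayMs > 0) {
    setTimeout(next, delayMs); // hold the response back artificially
  } else {
    next();
  }
});

app.get("/", (_req, res) => {
  res.send("search results page");
});

app.listen(3000);
```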
I don't doubt it; I'm just saying that it's crazy to think you can triple the number of results on a page and assume it will have no effect on user satisfaction and traffic.
I think you're misreading the article: their initial assumption was that the extra results were killing traffic (i.e., that more results were overwhelming the user). After they "controlled" (the article's language) for the extra response time, they realized response time was the culprit. They're clearly using the language of scientific experimentation. The whole point is that, initially, they thought as you do and were confused about why users were simultaneously clamoring for something and hating it. Only after further experiments did they confirm that the extra response time was really the bigger problem.
It's not actually a control if it changes between your two tests, e.g., Test A = 10 results, 400 ms; Test B = 30 results, 900 ms.
Yes, but the very fact that they called it an "uncontrolled variable" leads me to believe that they did in fact conduct subsequent experiments to control for it. I doubt Google would be that sloppy in their experimentation. (Sloppy journalism, on the other hand, is fairly prevalent.)
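For what it's worth, actually controlling it would mean varying result count and latency independently, i.e., a 2x2 factorial design rather than two confounded arms. A rough sketch of what the assignment could look like (the arm values and hashing here are hypothetical, not anything Google has described):

```typescript
// 2x2 factorial assignment: result count and added latency vary
// independently, so neither is an uncontrolled variable and each
// effect (and their interaction) can be measured on its own.
interface Arm {
  resultCount: number;  // 10 or 30 results per page
  addedDelayMs: number; // 0 or 500 ms of artificial latency
}

const ARMS: Arm[] = [
  { resultCount: 10, addedDelayMs: 0 },   // baseline
  { resultCount: 10, addedDelayMs: 500 }, // slow, short page
  { resultCount: 30, addedDelayMs: 0 },   // fast, long page
  { resultCount: 30, addedDelayMs: 500 }, // slow, long page
];

// Deterministic bucketing: the same user always lands in the same arm.
function armFor(userId: string): Arm {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return ARMS[hash % ARMS.length];
}
```

The awkward arm is 30 results with no added delay: serving a page three times as long just as fast takes real engineering work, which is presumably why the original test confounded the two in the first place.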
Facebook, Google Reader, and Posterous all seem to have adopted a good solution to the user-overload issue by dynamically expanding the results as the user gets to the end of the first batch.
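On the client side that pattern can be approximated in a few lines. Here's a minimal sketch using IntersectionObserver (the /results endpoint and element ids are made up for illustration, not how any of those sites actually do it):

```typescript
// Minimal infinite-scroll sketch: when a sentinel element at the bottom
// of the results list scrolls into view, fetch and append the next batch.
const list = document.getElementById("results")!;
const sentinel = document.getElementById("load-more-sentinel")!;
let nextPage = 2; // assume page 1 was rendered with the initial response

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  const resp = await fetch(`/results?page=${nextPage}`);
  const items: string[] = await resp.json();
  for (const text of items) {
    const li = document.createElement("li");
    li.textContent = text;
    list.appendChild(li);
  }
  nextPage += 1;
  if (items.length === 0) observer.disconnect(); // nothing left to load
});

observer.observe(sentinel);
```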