
## 8 Reader Comments

AceStar

One thing I know about quicksort is that there is one particular initial state that will cause it to take O(n^2) time (orders of magnitude slower with a large enough set). This can allow an attacker to cause excessive CPU use. Various implementations choose a different split point so that obvious initial states (e.g., already sorted, or already reverse sorted) don’t trigger this, but there is still a particular order that will.

Does merge sort have something similar? Is there a quicksort implementation that eliminates this possibility even to someone who knows the implementation?

Claudia Hernández (article author)

@AceStar

Yes, Quicksort’s worst-case scenario is O(n^2), which happens when the algorithm is unlucky enough to choose a pivot that turns out to be the largest or smallest item in the list. V8 tries to avoid this situation by picking the pivot as the median of the first, middle, and last elements. You can also choose the pivot randomly to prevent a user from deliberately making your sort take a long time. Quicksort is a good example of very different worst-case and average-case performance, so I understand the importance of paying attention to this detail.
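To illustrate the idea, here is a minimal Quicksort sketch using median-of-three pivot selection. This is illustrative code, not V8’s actual implementation; the Lomuto partition scheme used here is just one common choice:

```javascript
// Order arr[lo], arr[mid], arr[hi] so their median ends up at mid,
// then return mid as the pivot index ("median-of-three").
function medianOfThree(arr, lo, hi) {
  const mid = lo + ((hi - lo) >> 1);
  if (arr[lo] > arr[mid]) [arr[lo], arr[mid]] = [arr[mid], arr[lo]];
  if (arr[lo] > arr[hi])  [arr[lo], arr[hi]]  = [arr[hi], arr[lo]];
  if (arr[mid] > arr[hi]) [arr[mid], arr[hi]] = [arr[hi], arr[mid]];
  return mid;
}

function quicksort(arr, lo = 0, hi = arr.length - 1) {
  if (lo >= hi) return arr;
  const pivotIndex = medianOfThree(arr, lo, hi);
  const pivot = arr[pivotIndex];
  // Move the pivot out of the way, then partition (Lomuto scheme):
  // everything smaller than the pivot goes before `store`.
  [arr[pivotIndex], arr[hi]] = [arr[hi], arr[pivotIndex]];
  let store = lo;
  for (let i = lo; i < hi; i++) {
    if (arr[i] < pivot) {
      [arr[i], arr[store]] = [arr[store], arr[i]];
      store++;
    }
  }
  // Put the pivot into its final sorted position.
  [arr[store], arr[hi]] = [arr[hi], arr[store]];
  quicksort(arr, lo, store - 1);
  quicksort(arr, store + 1, hi);
  return arr;
}
```

Note that an already-sorted or reverse-sorted input no longer hits the worst case here, because the median of the three probes lands near the true middle, but as AceStar points out, an adversary who knows the exact strategy can still construct a pathological order; only a randomized pivot removes that possibility in practice.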

On the other hand, Merge sort’s worst-case, best-case, and average-case performance is always O(n log n), so you don’t really have to worry about the initial order of the set.
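A minimal merge sort sketch shows why the cost is the same regardless of input order: the array is always split exactly in half and the halves merged, so there is no pivot luck involved:

```javascript
// Minimal top-down merge sort: splits in half, sorts each half,
// then merges the two sorted halves. Always O(n log n).
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  // Merge step: repeatedly take the smaller head element.
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}
```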

Hope that answered your question!

a1ip

Nice article! It needs to be translated.

iPixel

Really great article. Even if design as such is subjective, you need criticism, as you cannot ignore the audience nor the client you are designing for. Thanks.

Ian Bradbury

Thanks! This is a great read.


furghella

Excuse my stupid question, but I don’t quite understand why more “operations per second” necessarily translates to faster sorting. One could argue that the fewer operations required for a given task, the faster I’ll be done with it, no?

Claudia Hernández (article author)

@furghella It means that in a single second the algorithm is able to perform more complete runs of the test than the others. Think of it as the algorithm being able to sort more sets of elements than the others in the same amount of time. If sorter A and sorter B were each given 10 books to sort by author in 1 minute, and A sorted 7/10 while B sorted 9/10, which one would you say sorted faster?
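As a rough sketch of what an ops/sec number means, here is a hypothetical helper (not JSPerf’s actual harness): run the function as many times as possible within a small time budget, then divide completed runs by elapsed seconds:

```javascript
// Toy ops/sec measurement: repeatedly call fn until the time
// budget runs out, then report completed runs per second.
// A real harness (e.g. Benchmark.js, which powers jsPerf) also
// accounts for timer resolution and margin of error.
function opsPerSec(fn, budgetMs = 200) {
  const start = Date.now();
  let ops = 0;
  while (Date.now() - start < budgetMs) {
    fn();
    ops++;
  }
  return ops / ((Date.now() - start) / 1000);
}
```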

There is more to this than the simple explanation I just gave. The author of JSPerf explained that “The decision of which test is faster is based on more than just ops/sec by also accounting for margin of error. For example, a test with a lower ops/sec but higher margin of error may be statistically indistinguishable from a test with higher ops/sec and lower margin of error”.

Hope it’s clearer!