On 04/20/16 06:01, Warren Block wrote:
> On Tue, 19 Apr 2016, Aleksander Alekseev wrote:
>>> Why Wikipedia, specifically? There are a lot of places that describe
>>> quicksort. How about just
>>>
>>>   Note: This implementation of qsort() is designed to avoid the
>>>   worst-case complexity of N**2 that is often seen with standard
>>
>> I would say that this statement is just false. Worst-case complexity is
>> still N**2. How about something like:
>>
>>   This implementation of qsort() has worst case complexity of N**2.
>>   However measures were taken that make it very unlikely that for some
>>   random input N**2 swaps will be made. It's still possible to generate
>>   such an input on purpose though. See code below for more details.
>
> The quicksort algorithm worst-case is O(N**2), but this implementation
> has been designed to avoid that for most real data.
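To make the quoted point concrete: this is not the libc implementation, but a textbook quicksort with a fixed first-element pivot degrades to quadratic work on the simplest "bad" input of all, an already-sorted array. A minimal sketch with a comparison counter so the blow-up is observable:

```c
#include <stddef.h>
#include <assert.h>

/* Lomuto partition around the first element; *ncmp counts comparisons
 * so the quadratic behavior on adversarial input can be measured. */
static size_t partition_first(int *a, size_t lo, size_t hi, size_t *ncmp)
{
    int pivot = a[lo];
    size_t i = lo;
    for (size_t j = lo + 1; j <= hi; j++) {
        (*ncmp)++;
        if (a[j] < pivot) {
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[lo]; a[lo] = a[i]; a[i] = t;
    return i;
}

/* Naive quicksort: on sorted input every partition is maximally
 * unbalanced, giving N*(N-1)/2 comparisons total. */
static void naive_qsort(int *a, size_t lo, size_t hi, size_t *ncmp)
{
    if (lo >= hi)
        return;
    size_t p = partition_first(a, lo, hi, ncmp);
    if (p > lo)
        naive_qsort(a, lo, p - 1, ncmp);
    naive_qsort(a, p + 1, hi, ncmp);
}
```

For N = 100 already-sorted elements this performs exactly 100*99/2 = 4950 comparisons, while typical input stays near N*log(N). Median-of-three and similar pivot tricks defeat this particular input, but crafted killer inputs for those variants exist too, which is why the worst case remains N**2.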
There is something I don't understand: why does quicksort fall back to
insertion sort, an O(N**2) algorithm, when O(log(N)*log(N)*N) algorithms
exist? I propose one such algorithm as a solution to the "bad"
characteristics of qsort.
The replacement algorithm I propose does not allocate working memory,
does not recurse, and does not use a significant amount of stack space.
Maybe some number theorists want to have a look? I cannot seem to find
it anywhere public.
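For reference only: the benchmark column labeled "qsort w/hpsort" below suggests a heapsort-style fallback, and classic heapsort is the standard example of a sort with exactly the properties described (in-place, iterative, no allocation, O(N*log(N)) worst case). A minimal sketch of plain heapsort, not the unpublished algorithm proposed here:

```c
#include <stddef.h>
#include <assert.h>

/* Sift a[i] down into the max-heap a[0..n-1]. Iterative, so no
 * recursion and O(1) extra space. */
static void sift_down(int *a, size_t i, size_t n)
{
    for (;;) {
        size_t largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i)
            return;
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;
    }
}

/* In-place heapsort: build a max-heap bottom-up, then repeatedly swap
 * the maximum to the end and shrink the heap. O(N*log(N)) worst case. */
static void heapsort_ints(int *a, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = n / 2; i-- > 0; )
        sift_down(a, i, n);
    for (size_t end = n - 1; end > 0; end--) {
        int t = a[0]; a[0] = a[end]; a[end] = t;
        sift_down(a, 0, end);
    }
}
```

Note that libc's qsort cannot use int arrays directly; a real fallback would work on byte spans with the caller's comparison function, as heapsort(3) in libc already does.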
Local test results w/D5241 running statically in userspace:
log2(N)   qsort w/insert sort   qsort w/hpsort   mergesort   data set
  10             1.28                1.12           1.34      random0
  11             2.42                2.43           2.83      random0
  12             5.21                5.2            6.1       random0
  10             1.26                1.14           1.44      random1
  11             2.46                2.46           3.05      random1
  12             5.28                5.29           6.56      random1
  10            10.01                5.1            0.2       bad0
  11            39.12               12.11           0.33      bad0
  12           156.54               28.42           0.58      bad0
The new algorithm (qsort w/hpsort) is in the middle column. Times are in seconds.
Bad0 data set:
email@example.com mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"