On 10/15/12 8:11 AM, Philippe Sigaud wrote:
On Mon, Oct 15, 2012 at 1:04 PM, Era Scarecrow <rtcv...@yahoo.com> wrote:
On Monday, 15 October 2012 at 09:18:12 UTC, Era Scarecrow wrote:
So an example array to be sorted with 16 elements would take on average
about 100 compares, while theoretically you can do it in half that number.
Correction: 16 numbers would be sorted in about 49 compares, while an
optimal sort takes about 45. And for 21 numbers, about 74 compares, while
optimally it's about 63.
These numbers don't seem that large, but at the same time they do.
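(For context: the "optimal" counts here are presumably the
information-theoretic lower bound. A comparison sort must distinguish all
n! orderings of its input, so in the worst case it needs at least

  \lceil \log_2 n! \rceil

comparisons; for n = 16,

  \lceil \log_2 16! \rceil = \lceil \log_2 20922789888000 \rceil = \lceil 44.25 \rceil = 45,

matching the 45 quoted above.)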
Somewhat related: I once played with sorting networks and how to
generate them at compile time in D. It's in my template tutorial on
GitHub. Here are some results:
https://github.com/PhilippeSigaud/D-templates-tutorial/blob/master/templates_around.tex#L560
Discarding LaTeX markup:
 n   Sorting network   Standard sort   Ratio
 5               324             532   1.642
10               555            1096   1.975
15               803            1679   2.091
20              1154            2314   2.005
25              1538            3244   2.109
30              2173            3508   1.614
35              4075            4120   1.011
40              5918            5269   0.890
45              7479            5959   0.797
50              9179            6435   0.701
Where n is the (predetermined) number of elements in an array, and a
ratio of 2.0 means sorting networks are twice as fast as
std.algorithm.sort in this particular case.
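To give the flavor, here is a minimal sketch of the technique (not the
tutorial's actual generator; the comparator list is just Knuth's optimal
5-comparator network for n = 4, hard-coded rather than computed):

import std.algorithm : swap;
import std.conv : to;

// Build, at compile time, one compare-exchange statement per
// comparator in the network.
string networkCode(const int[2][] pairs)
{
    string code;
    foreach (p; pairs)
        code ~= "if (a[" ~ p[0].to!string ~ "] > a[" ~ p[1].to!string ~ "]) "
              ~ "swap(a[" ~ p[0].to!string ~ "], a[" ~ p[1].to!string ~ "]);\n";
    return code;
}

// Knuth's optimal 5-comparator network for 4 elements.
enum int[2][] pairs4 = [[0, 1], [2, 3], [0, 2], [1, 3], [1, 2]];

void networkSort4(T)(ref T[4] a)
{
    // The string is built during compilation and mixed in as code, so
    // the sort compiles to straight-line compare-exchange operations.
    mixin(networkCode(pairs4));
}

unittest
{
    int[4] a = [3, 1, 4, 2];
    networkSort4(a);
    assert(a == [1, 2, 3, 4]);
}

The interesting part is generating the pair list itself at compile time
for arbitrary n, which is what the chapter linked above plays with.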
I've wanted to investigate small sorts using sorting networks for ages, but
never got around to it. That's important for quicksort, because it
produces many small arrays that need sorting.
Could you also test very small sizes (2 to 4), and then test at
1-element increments up to, say, 30 elements? I assume that's where the
major wins come from. I think we could use CT-generated sorting networks for
arrays below a specific size. The converse risk is growth of generated code.
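Something like this rough sketch is what I have in mind (hybridSort, the
size cutoff, and the fallback to std.algorithm.sort are placeholders, not a
vetted design):

import std.algorithm : sort, swap;

void hybridSort(T)(T[] a)
{
    // Dispatch on the runtime length: tiny slices get a fixed
    // comparison network, everything else the general algorithm.
    switch (a.length)
    {
        case 0, 1:
            return;                        // already sorted
        case 2:
            if (a[0] > a[1]) swap(a[0], a[1]);
            return;
        case 3:
            // 3-element network: (0,1) (0,2) (1,2)
            if (a[0] > a[1]) swap(a[0], a[1]);
            if (a[0] > a[2]) swap(a[0], a[2]);
            if (a[1] > a[2]) swap(a[1], a[2]);
            return;
        default:
            sort(a);                       // general-purpose fallback
            break;
    }
}

Every supported size adds its own unrolled network, which is exactly
where the code-growth risk shows up.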
Andrei