[Tim]
1. Merge 2 at a time instead of just 1. That is, first sort the
next 2 elements to be merged (1 compare and a possible swap). Then
binary search to find where the smaller belongs, and a shorter binary
search to find where the larger belongs. Then shift both into place.
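Tim's two-at-a-time idea can be sketched in Python (a hypothetical helper built on the stdlib `bisect`; `insert_pair` is my own name, and any real implementation would live in CPython's C code):

```python
import bisect

def insert_pair(data, n):
    """Sketch of the idea above: data[:n] is sorted; place data[n] and
    data[n+1] using one compare plus two binary searches."""
    x, y = data[n], data[n + 1]
    if y < x:                      # 1 compare and a possible swap
        x, y = y, x
    # Full-width binary search for where the smaller element belongs.
    i = bisect.bisect_right(data, x, 0, n)
    # The larger element can only land at or to the right of i,
    # so its search range is shorter.
    j = bisect.bisect_right(data, y, i, n)
    # Shift both into place with one splice.
    data[i:n + 2] = [x] + data[i:j] + [y] + data[j:n]
```

Two insertions then cost one compare plus two searches over shrinking ranges, and the shifting for both elements is done in a single pass.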
[Armin]
Good
Hi Tim,
On 10 March 2015 at 18:22, Tim Peters tim.pet...@gmail.com wrote:
> 1. Merge 2 at a time instead of just 1. That is, first sort the
> next 2 elements to be merged (1 compare and a possible swap). Then
> binary search to find where the smaller belongs, and a shorter binary
> search to find
I suspect that you will find the Python community extremely conservative
about any changes to its sorting algorithm, given that it took thirteen
years and some really impressive automated verification software to find
this bug:
On Mon, Mar 9, 2015 at 10:53 AM, Steven D'Aprano st...@pearwood.info wrote:
> On Sun, Mar 08, 2015 at 10:57:30PM -0700, Ryan Smith-Roberts wrote:
> > I suspect that you will find the Python community extremely conservative
> > about any changes to its sorting algorithm, given that it took thirteen
> > years
On 15-03-08, nha pham
wrote:
> We can optimize the TimSort algorithm by optimizing its binary insertion sort.
> The current version of binary insertion sort uses this idea:
> Use binary search to find the final position in the sorted list for a new element
> X. Then insert X at that location.
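In Python terms, the current idea reads roughly like this (an illustrative sketch built on the stdlib `bisect`, not the actual C code in CPython's `listobject.c`):

```python
import bisect

def binary_insertion_sort(data):
    """Sketch of the current idea: for each new element x, binary-search
    the sorted prefix data[:i] for its final position, then insert x
    there (shifting the tail right by one)."""
    for i in range(1, len(data)):
        x = data[i]
        pos = bisect.bisect_right(data, x, 0, i)   # find final position
        del data[i]                                # remove x ...
        data.insert(pos, x)                        # ... and re-insert it
    return data
```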
I
I do not know exactly; one thing I can imagine is: it turns the worst case
of binary insertion sort into the best case.
With a sorted array of 32 or 64 items or fewer, built up from zero elements, the
new element you put into the sorted list has a high chance of being the
smallest or the highest of the
It may be that the comparison that you do is between two elements that are
almost always in the same cache line whereas the binary search might often
incur a cache miss.
On Mon, Mar 9, 2015 at 2:49 PM, nha pham phq...@gmail.com wrote:
> I do not know exactly, one thing I can imagine is: it turns
[nha pham phq...@gmail.com]
> Statement_1: With an array of size N or less, we need at most log2(N)
> comparisons to find a value
> (or a position, in case the search misses), using the binary search algorithm.
> proof: This statement is trivial, and I believe someone out there has already
> proved it.
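Statement_1 can be sanity-checked mechanically. Here is a small comparison-counting sketch (my own helper, not code from the thread); for N elements the loop halves the search range each iteration, so it runs at most floor(log2(N)) + 1 times, i.e. about log2(N) comparisons:

```python
import math

def insertion_point(arr, x):
    """Binary search for x's insertion point in sorted arr,
    counting the element comparisons it performs."""
    lo, hi, comps = 0, len(arr), 0
    while lo < hi:
        mid = (lo + hi) // 2
        comps += 1
        if x < arr[mid]:
            hi = mid
        else:
            lo = mid + 1
    return lo, comps

arr = list(range(64))
worst = max(insertion_point(arr, x)[1] for x in range(-1, 65))
assert worst == math.floor(math.log2(len(arr))) + 1   # 7 for N = 64
```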
Thank you very much. I am very happy that I got a reply from Tim Peters.
You are correct, my mistake.
The python code should be:

    for i in range(low+1, high):  # because we already added data[low]
        x = data[i]
        if x >= data[i-1]:

After I fix it, here is the result:
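The loop quoted above can be completed into a runnable sketch (my own reconstruction of the idea under discussion, with the comparison written as `>=`; `binsort_run` and its arguments are my naming, not the benchmark code from the thread):

```python
import bisect

def binsort_run(data, low, high):
    """Reconstruction (a sketch) of the optimization: compare x with its
    left neighbour before searching.  While extending a sorted run
    left-to-right, x >= data[i-1] is the common case, so one compare
    often replaces a whole binary search plus shift."""
    for i in range(low + 1, high):     # data[low] alone is already sorted
        x = data[i]
        if x >= data[i - 1]:
            continue                   # already in place: 1 compare, no shift
        # x < data[i-1], so its slot lies in data[low:i-1]; the narrowed
        # range also saves the search a comparison.
        pos = bisect.bisect_right(data, x, low, i - 1)
        del data[i]
        data.insert(pos, x)
    return data
```

On an already-ascending run this does exactly one comparison per element, which matches the "worst case becomes best case" observation earlier in the thread.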
Thank you for your comment. I admit that I did not think about the proof
before.
Here is my naive proof. I hope you can give me some comments:
==========================================================
# This proof is empirical thinking and is not complete; it just gives
us a closer look.
I hope someone can make it more
[nha pham phq...@gmail.com]
> Thank you very much. I am very happy that I got a reply from Tim Peters.
My pleasure to speak with you too :-)
> You are correct, my mistake.
> The python code should be:
>     for i in range(low+1, high):  # because we already added data[low]
>         x =
We can optimize the TimSort algorithm by optimizing its binary insertion
sort.
The current version of binary insertion sort uses this idea:
Use binary search to find the final position in the sorted list for a new element
X. Then insert X at that location.
I suggest another idea:
Use binary