[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-28 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

> This is already faster in pure Python than list.sort() for cases like:

Seems to depend on the system: it's slower on my laptop but faster on GCE:

Python 3.10.0 on my laptop:
 7.42 s  lexisort
 6.83 s  sort
 5.07 s  groupsort

Python 3.9.2 on Google Compute Engine:
 2.09 s  lexisort
 2.64 s  list.sort
 1.52 s  groupsort

This is my groupsort btw:

from operator import itemgetter

def groupsort(xs):
    # Stable two-stage sort: order by the first element, then sort each
    # run of equal first elements by the remaining elements.
    xs.sort(key=itemgetter(0))
    start = 0
    n = len(xs)
    tail = itemgetter(slice(1, None))  # x -> x[1:]
    for i in range(1, n + 1):
        # A run ends at i when we reach the end or a larger first element.
        if i == n or xs[i-1][0] < xs[i][0]:
            sublist = xs[start:i]
            sublist.sort(key=tail)
            xs[start:i] = sublist
            start = i
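
A quick sanity check (toy data I made up just to illustrate the two stages):

    data = [(1, 'b'), (0, 'z'), (1, 'a'), (0, 'y')]
    groupsort(data)
    assert data == [(0, 'y'), (0, 'z'), (1, 'a'), (1, 'b')]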

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-24 Thread Tim Peters


Change by Tim Peters :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-24 Thread Tim Peters


Tim Peters  added the comment:


New changeset 51ed2c56a1852cd6b09c85ba81312dc9782772ce by Tim Peters in branch 
'main':
bpo-45530: speed listobject.c's unsafe_tuple_compare() (GH-29076)
https://github.com/python/cpython/commit/51ed2c56a1852cd6b09c85ba81312dc9782772ce


--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-24 Thread Tim Peters

Tim Peters  added the comment:

To be concrete, here's an implementation of a full-blown, stable lexicographic 
sort based on "bucket refinement". `xs` is a list of sequences to be sorted in 
lexicographic order. The types of the sequences don't matter (lists, tuples, 
strings, ...). Indeed, since list elements are never compared against each 
other directly, they don't even have to be the same sequence type.

This is already faster in pure Python than list.sort() for cases like:

xs = [random.choices(range(10), k=random.randrange(5, 30))
      for i in range(100)]

However, for cases like strings of the form

'x' * 10_000 + str(i)

it's very much slower than list.sort(), even though it "should be" very much 
faster. That appears to be mostly due to the:

t.sort(key=lambda x: x[k])
xs[lo : hi] = t

lines. list.sort() doesn't accept `lo` and `hi` arguments, so sorting _just_ a 
slice requires copying that slice into a temp list, sorting the temp, then 
copying the sorted temp back in. So dealing with a single `x` position in the 
string prefixes effectively requires physically copying the entire list twice 
over - across the 10,000 prefix positions, that's mad overhead: the list gets 
copied 20 thousand times.

While the need doesn't come up all that often, I'd support adding optional `lo` 
and `hi` arguments to `list.sort()`. This isn't the first time the lack has 
turned a slick approach into a timing disaster for me ;-)

About list.sort()'s merge strategy, I'm glad to be rid of the original. It's 
not just correctness, it's the effort of _reasoning_ about its consequences. It 
was about a full decade before the first proof was published of that it really 
is a worst-case O(N log N) sort. listsort.txt didn't contain a proof because 
the only proof attempts I had were so complicated not even _I_ found them 
compelling ;-)

Vincent Jugé in particular started at the other end, looking for a merge 
strategy that made proof straightforward instead of Byzantine. It's 
straightforward under the "powersort" strategy too, although it relies on "well 
known" results about approximations to optimal binary search trees.

def lexisort(xs):
    # Stable lexicographic sort by bucket refinement. Each work item
    # (lo, hi, k) means: xs[lo:hi] all agree on their first k positions
    # and still need sorting on positions k and beyond.
    buckets = [(0, len(xs), 0)]
    while buckets:
        lo, hi, k = buckets.pop()
        t = []
        for i in range(lo, hi):
            x = xs[i]
            if k >= len(x):
                # Exhausted sequences sort first; keeping their original
                # order preserves stability.
                xs[lo] = x
                lo += 1
            else:
                t.append(x)
        t.sort(key=lambda x: x[k])
        xs[lo : hi] = t
        # Find runs equal at position k; runs longer than 1 still need
        # refinement at position k + 1.
        while lo < hi:
            pivotk = xs[lo][k]
            i = lo + 1
            while i < hi and xs[i][k] == pivotk:
                i += 1
            if i - lo > 1:
                buckets.append((lo, i, k + 1))
            lo = i
        assert lo == hi
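
Checking it against sorted() on the kind of data above (a quick test harness, 
not part of the algorithm itself):

    import random
    xs = [random.choices(range(10), k=random.randrange(5, 30))
          for i in range(100)]
    expected = sorted(xs)
    lexisort(xs)
    assert xs == expected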

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-22 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

Yes, I'm more familiar with the issue in the context of strings or lists. Your 
example of strings like "'x' * 10_000 + str(i)" looks like something I almost 
certainly used before as a counterexample to someone's time complexity claim :-)

In the context of multi-criteria sort I might not have thought of it before, I 
guess because you usually don't have many criteria. But now the optimized 
tuple_elem_compare makes me think we can take even more advantage of it.

I think those type-specific optimized comparison functions are what I skipped 
over when I previously read the sort code; that's why it was new to me. I just 
saw you also use powersort's merge strategy now. Kinda sad - I had seen you 
lament that your own strategy "remains hard to grasp why it's always correct 
now", while I think it's rather straightforward, and I had considered adding a 
little explanation in the doc.

Bucket sort is a name I considered, but I wrote two variants, so I named them 
after their implementations (groupsort first used itertools.groupby).

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-22 Thread Tim Peters


Tim Peters  added the comment:

Stefan, thanks - I think I understand what you're driving at now.

You're (re)discovering that sorting by lexicographic ordering rules is 
_inherently_ suboptimal if list elements can only be compared "starting at 
index 0" each time. Not just tuples, but also, e.g., lists, byte arrays, and 
even strings. Any sequence type whose ordering derives from lexicographic 
generalization of its elements' ordering.

This is quite independent of whether or not CPython tries to exploit type 
homogeneity.

The usual(*) solution is indeed to exploit a kind of bucket sort instead, first 
sorting at elements' index 0 alone, saying all those elements with the same 
value at index 0 constitute "a bucket", and then applying the same idea to each 
bucket but sorting on index 1 instead. Where those in turn are partitioned into 
buckets equal at index 1, and those buckets are similarly each sorted on index 
2. And so on, and so on.
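
One level of that refinement is easy to sketch with itertools.groupby (a toy 
illustration that assumes every element is longer than k, not the real 
implementation):

    from itertools import groupby
    from operator import itemgetter

    def refine_once(bucket, k):
        # Sort one bucket on position k, then split it into sub-buckets
        # that are equal at position k and still need sorting at k + 1.
        bucket.sort(key=itemgetter(k))
        return [list(g) for _, g in groupby(bucket, key=itemgetter(k))]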

No doubt that can be a big win, and even in the absence of the type homogeneity 
tricks. It's far beyond the scope of _this_ PR, though, and is really an 
entirely different approach.

I've thought about it at times, but it's "a real project" to do it right. In 
effect, it would implement an optimized generalization of the SO OP's 
"automagically convert multikey to multisort" wish.

Fine by me if someone pursues that, but I don't have the energy for it, nor 
much interest either in doing a half-hearted job of it.

I always expected that if it came up at all, it would be in the context of 
sorting lists of strings.  For example, consider:

xs = ['x' * 10_000 + str(i) for i in range(1_000_000)]
random.shuffle(xs)

Even with type homogeneity tricks, Python's sorting of the result is very far 
from optimal. Every single comparison starts by burning time skipping over the 
(at least) ten thousands equal characters at the start.

The "bucket sort" (or it could be viewed as a form of MSD radix sort) quickly 
discovers that all the characters at index 0 are equal, and also all those at 
index 1, ..., and all those at index 9_999. The "real work" of sorting is then 
reduced to working on the (at most) 6 characters remaining.

(*) But I'm lying ;-) The actual usual solution is to use "multikey quicksort", 
because it's easy to code and works "in place". But it's not a stable sort. 
String sorting doesn't care about that, but sorting other kinds of sequences 
can care a lot.
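
For the curious, a rough Python sketch of multikey quicksort (using temp lists 
for clarity, so not literally in place, and glossing over smarter pivot 
selection):

    def mkqsort(xs, lo=0, hi=None, k=0):
        # Sorts xs[lo:hi] lexicographically, assuming all elements already
        # agree on their first k positions. NOT stable - the drawback above.
        if hi is None:
            hi = len(xs)
        if hi - lo < 2:
            return
        # Sequences exhausted at depth k sort before everything else.
        done = [x for x in xs[lo:hi] if len(x) <= k]
        rest = [x for x in xs[lo:hi] if len(x) > k]
        xs[lo:hi] = done + rest
        lo += len(done)
        if hi - lo < 2:
            return
        pivot = xs[(lo + hi) // 2][k]
        lt = [x for x in xs[lo:hi] if x[k] < pivot]
        eq = [x for x in xs[lo:hi] if x[k] == pivot]
        gt = [x for x in xs[lo:hi] if x[k] > pivot]
        xs[lo:hi] = lt + eq + gt
        mkqsort(xs, lo, lo + len(lt), k)              # still on position k
        mkqsort(xs, lo + len(lt), lo + len(lt) + len(eq), k + 1)  # equal at k
        mkqsort(xs, lo + len(lt) + len(eq), hi, k)    # still on position k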

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-22 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

> It's not surprising the 2nd positions are rarely equal _given_ that the 
> universe has been reduced to the 100 tuples with the same first element.

Yes, exactly.

> In any case, I don't see the point to this exercise ;-)

Just to illustrate why I disagree with what I responded to there. I'm not 
convinced that many matches at the first tuple position are really an 
indication that there'll be many matches at the second position as well, so 
that one should "admit defeat" and that it's "arrogance" to hope for only a few 
matches at the second position. I'd wager it's rather typical that the matching 
rate at the second position is significantly lower. If you sort people by last 
and then first name, I think you'd get the same effect as with my above random 
example, for the same reason. If you sort a set of 2D coordinates, you'll even 
*never* get matches at the second position. Even with that SO question's data 
and its massive duplication (especially at the second position) *and* its 
correlation between first and second position (since both "sorted(x)" and 
"x[::2]" are derived from the same x) and its 43% matching rate at the first 
position, there's still only a 27% matching rate at the second position (lower 
than the 1/3 break-even point).

> BTW, don't overlook that tuple_elem_compare can _only_ be used on the very 
> first elements of the tuples.

Yes, that's why I only talked about PyObject_RichCompareBool there.

> > 1) Sort the list only by first position (comparing with only
> > one tuple_elem_compare).

> Not sure what that means, exactly. (the "with only one tuple_elem_compare" 
> part;

Just wanted to make clear it wouldn't use two tuple_elem_compare calls or a 
PyObject_RichCompareBool.

> > 2) Identify equal-first-position-runs (with tuple_elem_compare)
> > and sort each run independently (comparing only positions 1+).

> What are you aiming at? There was no motivation given here.

Much fewer comparisons, especially PyObject_RichCompareBool comparisons. For 
example for sorting a million pairs with first/second element chosen from a 
population of 100,000, I get these numbers of comparisons (per element):

                 current   groupsort
---------------------------------------------------------
== first          18.60     none     (PyObject_RichCompareBool)
 < first          16.19    19.60     (tuple_elem_compare)
== second          2.41     2.36     (PyObject_RichCompareBool)
 < second          2.41     2.36     (PyObject_RichCompareBool)
---------------------------------------------------------
total slow        23.42     4.72     (PyObject_RichCompareBool)
total fast        16.19    19.60     (tuple_elem_compare)
---------------------------------------------------------
total             39.61    24.32

(With "current" I mean before your optimizations, which I can't test yet.)

> Well, this issue report is about unsafe_tuple_compare(), so, ya, if the 
> changes you envision aren't limited to that, I'd say you want to open a 
> different issue report and a different PR :-)

Ok, I might try that.

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-21 Thread Tim Peters


Tim Peters  added the comment:

> I see you mentioned that PyObject_RichCompareBool(..., Py_EQ) might be
> faster for example because it checks identity.

For example, in tupsort.py replace

xs = [random() for _ in range(length)]

with

xs = ['z' * 100 for _ in range(length)]

Then sorting that directly is _slower_ than wrapping each string in a 1-tuple 
first. That's so today, and also so in the latest version of the PR (but not in 
the original PR code).

> Why not do an identity check before the ms->tuple_elem_compare calls? Too
> expensive and rarely successful?

It's a cheap test, but, ya, "rarely successful" outside of special cases. 
PyObject_RichCompareBool() already does that special-casing now, and the PR now 
adjusts to use PyObject_RichCompareBool(Py_EQ) instead when the pair of 
ms->tuple_elem_compare calls isn't paying off. It's enough that one part of the 
chain rarely pays off, but gets nearly all the potential benefit when it does 
pay off. Duplicating the special-casing would double the overhead with scant 
additional payoff when it pays.

> I sorted a million tuples where both first and second element
> are randomly chosen from a population of 10,000.

So you expect about 100 repetitions of each value in each tuple position.

> So their amounts of duplication were the same. But these are the
> statistics from sorting:
> - First position:   18,603,981 equality comparisons, 29.87% equal
> - Second position:   5,556,203 equality comparisons,  0.09% equal

Well, for any given fixed value in the first tuple position, you expect about 
100 tuples total to have that value in the first position, and the second 
position in those contains a random sample (with repetition) of 100 elements 
out of 10,000. It's not surprising the 2nd positions are rarely equal _given_ 
that the universe has been reduced to the 100 tuples with the same first 
element.

In any case, I don't see the point to this exercise ;-)

BTW, don't overlook that tuple_elem_compare can _only_ be used on the very 
first elements of the tuples. The pre-scan developed no idea whatsoever of the 
types of tuple elements other than the first.


> One more idea: What do you think of sorting lists of tuples
> (or tuple keys) in two stages?

Primarily that I'm not looking for a term project here - I'm looking to get an 
easy win from simple changes to one function.

What's there in the PR now is designed to play well with how sorting works. 
When there are many duplicates, a merge will find about 7 comparison attempts 
in a row where the first elements are equal, and the code adjusts to use 
PyObject_RichCompareBool(Py_EQ) for at least all but the first compare. 
Galloping mode will continue with that for a brief time at the start, but 
probably soon hit a chain of cases all of which can be resolved solely with 
tuple_elem_compare calls, and the code adjusts to that too, never using 
PyObject_RichCompareBool(Py_EQ) again after the first time it sees that return 
"no, not equal".

OTOH, when there are no duplicates, PyObject_RichCompareBool(Py_EQ) isn't 
helpful, and in fact will never be called.

> 1) Sort the list only by first position (comparing with only one
>tuple_elem_compare).

Not sure what that means, exactly. (the "with only one tuple_elem_compare" 
part; I know what it means to sort by the first position)

> 2) Identify equal-first-position-runs (with tuple_elem_compare) and
> sort each run independently (comparing only positions 1+).

What are you aiming at? There was no motivation given here.

> Some experiments I did with this looked good, not sure if too
> off-topic to post here...

Well, this issue report is about unsafe_tuple_compare(), so, ya, if the changes 
you envision aren't limited to that, I'd say you want to open a different issue 
report and a different PR :-)

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-21 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

I see you mentioned that PyObject_RichCompareBool(..., Py_EQ) might be faster 
for example because it checks identity. Why not do an identity check before the 
ms->tuple_elem_compare calls? Too expensive and rarely successful?
 
> Extending the idea to positions beyond the first is dubious on those grounds: 
> if the first tuple positions in fact often match, then the optimization has 
> already backfired. Time to admit defeat then, not double down on failed 
> arrogance ;-)
 
I don't see that. First and second positions are quite different.

For example I sorted a million tuples where both first and second element are 
randomly chosen from a population of 10,000. So their amounts of duplication 
were the same. But these are the statistics from sorting:
- First position:   18,603,981 equality comparisons, 29.87% equal
- Second position:   5,556,203 equality comparisons,  0.09% equal
Many first positions matched, almost no second positions matched.

One more idea: What do you think of sorting lists of tuples (or tuple keys) in 
two stages?
1) Sort the list only by first position (comparing with only one 
tuple_elem_compare).
2) Identify equal-first-position-runs (with tuple_elem_compare) and sort each 
run independently (comparing only positions 1+).
Some experiments I did with this looked good, not sure if too off-topic to post 
here...

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-20 Thread Tim Peters


Tim Peters  added the comment:

It's rare that an optimization is a _pure_ win. Some cases win, others lose. 
There's almost never "a proof" of net gain ending with "QED".

Of course it's dead easy to construct examples where "many duplicates in the 
first tuple position" rules, and even easy to construct realistic examples.

But, as you already said in the SO report, in those cases it's a better idea to 
do multiple single-key sorts instead of a single multiple-keys-in-a-tuple sort. 
The base sorting algorithm can exploit duplicates in a single-key sort - 
indeed, the more duplicates there are, the more benefit it can get.

The sorting HOWTO could do a service by discussing this for CPython's sorting 
algorithm - it's not obvious, and can have major effects.

In the meantime, I'm only aiming to speed tuple keys for cases where they're 
well-suited to begin with: the first tuple elements _are_ mostly distinct. 
Giving up a significant win for the relative handful of people who know how to 
exploit the algorithm is not worth it, to me, to avoid making suboptimal uses 
even less optimal.

I'm more a pragmatist than a Utopian ;-)

Extending the idea to positions beyond the first is dubious on those grounds: 
if the first tuple positions in fact often match, then the optimization has 
already backfired. Time to admit defeat then, not double down on failed 
arrogance ;-)

One idea I'm playing with: keep track of how often, during a tuple-key sort, a 
compare _is_ resolved by the first elements. Then adjust 
`unsafe_tuple_compare()` to switch to "the other" approach when "the current" 
approach isn't working out.

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-20 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

Hmm. What do you think about using "<" inside the loop for the remaining tuple 
elements as well? It's even less likely that both the first and the second 
elements are equal.

When the second elements differ, this saves 0.5 PyObject_RichCompareBool 
(average 1.5 "<" instead of one "==" and one "<").

When the second elements are equal, this costs 1 PyObject_RichCompareBool more 
(two "<" instead of one "==").

That's fewer PyObject_RichCompareBool calls if they're equal less than 1/3 of 
the time.
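
Spelling the arithmetic out (assuming the first "<" resolves half of the 
unequal cases), with p the fraction of comparisons where the second elements 
are equal:

    two "<" calls:   2*p + 1.5*(1 - p) = 1.5 + 0.5*p   expected calls
    "==" then "<":   1*p + 2*(1 - p)   = 2 - p         expected calls

    1.5 + 0.5*p < 2 - p   <=>   1.5*p < 0.5   <=>   p < 1/3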

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-20 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

Ok, I agree, the change only possibly "breaks" expectations that were already 
broken (including my naive sorted-vs-minmax).

Yes, I know sort itself only asks `<` (I've actually read most of its code and 
all its documentation). That's why I was irritated for a moment when I saw a 
`>`.

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Tim Peters  added the comment:

> Elliot shortly after retreated from the approach, saying he
> "rewrote unsafe_tuple_compare to move the less-than after
> the equality testing, to make sure it's 100% consistent".

I remember at the time having no idea what he meant by that comment - and I 
still have no idea ;-)

> The proposed method would change comparisons of *the same* tuples.
> You'd for example no longer have
> "sorted(tuples) == [min(tuples), max(tuples)]"
> for a list of two tuples.

Since very little is defined about the result of sorted() in the absence of a 
total ordering (just that it's _some_ permutation of the original), that's not 
surprising. It's already the case anyway (using the released 3.10.0):

>>> ts = [(float("nan"), 1), (float("nan"), 1.0)]
>>> sorted(ts) == [min(ts), max(ts)]
False

> I recently saw a case where despite the "only <" promise, sorting
> caused an "x > y" comparison (because "y < x" wasn't implemented).

Sorting itself never asks for __gt__, only __lt__. What the _rest_ of the 
language does to implement __lt__ is out of sorting's control. For example, 
tuple.__lt__ and list.__lt__ go on to invoke __eq__ on contained elements. A 
more general subsystem in the language falls back on x.__gt__(y) if y.__lt__(x) 
isn't implemented. Sorting has no control over any of that.

> tupsort.py does "range(2", should be "range(1" I think)

Good catch - thanks! Repaired now :-)

--
Added file: https://bugs.python.org/file50373/tupsort.py




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

I misinterpreted you then, sorry. I guess also because Elliot shortly after 
retreated from the approach, saying he "rewrote unsafe_tuple_compare to move the 
less-than after the equality testing, to make sure it's 100% consistent".

I'd say the inconsistency the program showed was different. Yes, the xs got 
sorted differently than the ys, but the xs differed from the ys. The proposed 
method would change comparisons of *the same* tuples. You'd for example no 
longer have "sorted(tuples) == [min(tuples), max(tuples)]" for a list of two 
tuples.

Also curious factoid: I recently saw a case where despite the "only <" promise, 
sorting caused an "x > y" comparison (because "y < x" wasn't implemented).

(tiny note: tupsort.py does "range(2", should be "range(1" I think)

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Tim Peters  added the comment:

Stefan, I looked at that old PR and can't find anywhere I suggested that he 
change the unsafe_tuple_compare() logic. I just _asked_ him "I'm curious about 
what the patched Python prints for this program:".

And, in fact, that program showed that CPython was _already_ inconsistent with 
how NaNs were treated during tuple comparison (object identity overriding 
float.__eq__).

In any case, no, I have no problem at all with inferring "x == y" from "not (x 
< y) and not (y < x)".

Curious factoid: in [x, y].sort(), CPython never asks "x < y?". Because that's 
irrelevant ;-) The list is already sorted if and only if "not (y < x)". Which 
is how "x <= y" is spelled, because the implementation promises to do only "<" 
comparisons.
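
Easy to watch with a throwaway wrapper (illustration only):

    class Loud:
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            print(f"{self.v} < {other.v}?")
            return self.v < other.v

    [Loud('x'), Loud('y')].sort()  # prints only: y < x?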

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

> What justifies "shouldn't"?

I based that on your old comments. Looked like tuple comparison results in 
sorting shouldn't differ from tuple comparison results elsewhere. I had 
actually proposed this exact method in a comment under your stackoverflow 
answer, but deleted it when I found that it had been discussed and discarded 
back then. But if it feels ok now, then good :-)

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Tim Peters  added the comment:

Stefan, I have scant memory of ever caring, but, if I did, I got over it ;-) 

>>> math.nan == math.nan
False
>>> {math.nan : 5}[math.nan]
5

That is, PyObject_RichCompareBool() takes object identity as overriding __eq__; 
that's why the dict lookup works. But this one doesn't:

>>> {math.nan : 5}[float("nan")]
Traceback (most recent call last):
  ...
KeyError: nan

Although that may change too.

I used to care a little, but not at all anymore. There's no sense trying to 
_make_ sense of what sorting could possibly mean in the absence of a total 
ordering.

> If you sort objects that always return True for both `<` and `==`,

A case of "garbage in, garbage out" to me.

> this would also have the opposite problem, considering tuple u smaller
> than v when it shouldn't.

What justifies "shouldn't"? If u[0] < v[0], then by the definition of 
lexicographic ordering, u < v. But if u[0] == v[0], which apparently is _also_ 
the case, then the same definition says the ordering of u and v is inherited 
from the ordering of u[1:] and v[1:]. There's no principled way of declaring 
one of those possibly contradicting definitions "the right one".

> That said, maybe the function could be optimized for
> known "well-behaving" types?

A type is well-behaving to me if and only if it implements a total ordering. If 
a type doesn't, what you get is an implementation accident, and code relying on 
any specific accident is inherently buggy.

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Stefan Pochmann


Stefan Pochmann  added the comment:

That's how Elliot did it in the original proposal: 
https://bugs.python.org/issue28685

Back then you pointed out that "Else we can assume u[0] == v[0]" isn't true for 
float('nan') values:
https://github.com/python/cpython/pull/582#issuecomment-285838656
https://github.com/python/cpython/pull/582#issuecomment-285841789

If you sort objects that always return True for both `<` and `==`, this would 
also have the opposite problem, considering tuple u smaller than v when it 
shouldn't.

That said, maybe the function could be optimized for known "well-behaving" 
types?

--
nosy: +Stefan Pochmann




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
nosy: +rhettinger




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Change by Tim Peters :


--
assignee:  -> tim.peters




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Change by Tim Peters :


--
keywords: +patch
pull_requests: +27345
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/29076




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Tim Peters  added the comment:

The attached tupsort.py gives a simple, focused example. Typical output on my 
box:

float        3.10
(float,)    11.75
[float]     25.68

It's sorting a large list of floats. In the first line the list contains plain 
floats. In the second line, each float was wrapped in a 1-tuple. In the last 
line, wrapped in a singleton list.

Essentially any overhead of any kind is more expensive than merely comparing 
two floats in HW, so overhead is approximately everything here.  The tuple and 
list comparison functions are very similar, and the large advantage of 
"(float,)" over "[float]" is mostly because unsafe_tuple_compare() uses one 
fewer PyObject_RichCompareBool() call to resolve each compare (assuming that all 
floats in the list are distinct, which I didn't check, but is almost certainly 
the case).

Getting rid of its other PyObject_RichCompareBool() should yield another nice 
speed boost.

The pattern is worth addressing because tuples are routinely used as key= 
arguments to achieve multi-key sorting.
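
The attached file isn't reproduced here, but a minimal version of the 
experiment looks something like this (just the gist, not tupsort.py itself):

    import random
    import time

    xs = [random.random() for _ in range(1_000_000)]
    for label, wrap in [("float", lambda x: x),
                        ("(float,)", lambda x: (x,)),
                        ("[float]", lambda x: [x])]:
        ys = list(map(wrap, xs))
        t0 = time.perf_counter()
        ys.sort()
        print(f"{label:10} {time.perf_counter() - t0:6.2f}")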

--
Added file: https://bugs.python.org/file50372/tupsort.py




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


Tim Peters  added the comment:

FYI, this is fallout from a StackOverflow mystery:

https://stackoverflow.com/questions/69468552/efficiency-of-sorting-by-multiple-keys-in-python/69610671#

--




[issue45530] Improve listobject.c's unsafe_tuple_compare()

2021-10-19 Thread Tim Peters


New submission from Tim Peters :

The code could typically be faster if it did what its comments imply it does: 
skip the expense of PyObject_RichCompareBool() entirely for the first pair of 
tuple elements. It actually always calls PyObject_RichCompareBool() on the 
first pair, and only if that says "not equal" does it use the faster 
ms->tuple_elem_compare to resolve "ok, so it's less than?".

Instead it could do the first pair before, and entirely apart from, the loop, 
along the lines of:

- Use ms->tuple_elem_compare to see whether u[0] < v[0]. If so, or if an error, 
we're done.

- Use ms->tuple_elem_compare to see whether v[0] < u[0]. If so, or if an error, 
we're done.

Else we can assume u[0] == v[0], and go on to compare u[1:] to v[1:] via a 
trivial variation of the current code.

In cases where the first pair does resolve it (common!), replacing a 
PyObject_RichCompareBool() with a ms->tuple_elem_compare can be significantly 
faster overall.
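
In pure-Python terms the proposed logic is roughly (just a model - the real 
code is C in listobject.c, with `elem_lt` standing in for the type-specialized 
ms->tuple_elem_compare):

    def tuple_lt(u, v, elem_lt):
        # Resolve the first pair with two specialized "<" calls.
        if elem_lt(u[0], v[0]):
            return True
        if elem_lt(v[0], u[0]):
            return False
        # Else assume u[0] == v[0]; compare the tails as before.
        for i in range(1, min(len(u), len(v))):
            if u[i] == v[i]:      # PyObject_RichCompareBool(Py_EQ)
                continue
            return u[i] < v[i]    # PyObject_RichCompareBool(Py_LT)
        return len(u) < len(v)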

--
components: Interpreter Core
messages: 404360
nosy: tim.peters
priority: normal
severity: normal
stage: needs patch
status: open
title: Improve listobject.c's unsafe_tuple_compare()
type: performance
versions: Python 3.11
