I wondered what the performance would be and tested the following code:

#!/usr/bin/env python3
import random
import time

random.seed( hash('Testing Keys') )

lt_calls = 0
key_calls = 0

class MyObject:
    def __init__( self, value1, value2 ):
        self.value1 = value1
        self.value2 = value2

    def __lt__(self, other):
        global lt_calls
        lt_calls += 1

        if self.value1 < other.value1:
            return True
        else:
            return self.value2 < other.value2

    def key(self):
        global key_calls
        key_calls += 1

        return self.value1, self.value2

lt_list = []

for value1 in reversed(range(10000)):
    value2 = value1 - 50

    lt_list.append( MyObject( value1, value2 ) )

random.shuffle( lt_list )

key_list = lt_list[:]

print( len(lt_list) )

s = time.time()
key_list.sort( key=MyObject.key )
e = time.time() - s
print( 'key %.6fs %6d calls' % (e, key_calls) )

s = time.time()
lt_list.sort()
e = time.time() - s
print( ' lt %.6fs %6d calls' % (e, lt_calls) )


It outputs this for me with Python 3.6.0:

10000
key 0.010628s  10000 calls
 lt 0.053690s 119886 calls

It seems that providing a key is ~5 times faster than depending on __lt__.
(I even used a short circuit to see if __lt__ could beat key.)
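A minimal sketch of why key= tends to win: sort(key=...) behaves roughly like the classic decorate-sort-undecorate pattern, so the key function runs once per element (n calls), while the comparison-based sort then only compares the cheap precomputed tuples, of which there are O(n log n) comparisons. This is an illustrative equivalent, not CPython's actual implementation:

```python
# Hedged sketch: sort(key=...) ~ decorate-sort-undecorate.
# The key is computed once per item; the sort then compares tuples,
# never calling back into the original objects' __lt__.
def sort_with_key(items, key):
    # The index i keeps the sort stable and breaks ties
    # without ever comparing the original items.
    decorated = [(key(item), i, item) for i, item in enumerate(items)]
    decorated.sort()
    return [item for _, _, item in decorated]

data = [(3, 'c'), (1, 'a'), (2, 'b')]
print(sort_with_key(data, key=lambda t: t[0]))
```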

I often have more than one way to sort an object. It's easy for me to provide
a set of key functions that meet the needs of each sort context. I'm not sure
what extra value the __sort_key__ would offer over providing a key method as
I did.
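For example, the "one key method per sort context" approach might look like the sketch below (the class and method names here are illustrative, not from the test above; operator.attrgetter is shown as the stdlib alternative for plain-attribute keys):

```python
# Hypothetical sketch: several key methods on one class, one per sort
# context, passed as unbound methods to sorted(key=...).
from operator import attrgetter

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def by_name(self):
        # Case-insensitive name ordering.
        return self.name.lower()

    def by_age(self):
        # Age first, name as tie-breaker.
        return self.age, self.name.lower()

people = [Person('Carol', 30), Person('alice', 25), Person('Bob', 30)]
by_name = sorted(people, key=Person.by_name)
by_age = sorted(people, key=Person.by_age)
# attrgetter works just as well when the key is a plain attribute:
oldest_first = sorted(people, key=attrgetter('age'), reverse=True)
```

Since sorted() accepts any callable, an unbound method like Person.by_name is called with each instance as its argument, so no wrapper lambda is needed.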

Barry

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
