Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Use cases for a "may be hashable" pretest are much weaker (and typically
> hypothetical), but there are cases where it makes a certain amount of sense.
> For example, if you have a set-based fast path if all the objects being
> handled are hashable, and a list-based slow path if one or more aren't
> hashable, and the hashes themselves may be expensive to calculate (e.g. some
> of the objects may be large strings), then it may make sense to perform a
> precheck to ensure that all of the objects are at least *potentially*
> hashable before you try to put any of them into a set.
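For concreteness, the fallback strategy under discussion might look like this. This is a hypothetical `dedupe` example of my own, not code from the thread; it just assumes the `TypeError` comes from hashing an unhashable item:

```python
def dedupe(items):
    """Remove duplicates from a list, preserving order."""
    # Fast path: set-based, O(n). Membership testing on a set hashes
    # each item, so an unhashable item raises TypeError here.
    try:
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result
    except TypeError:
        # Slow path: list-based, O(n**2), but works for unhashable
        # items too. Worst case we redo the fast-path work once.
        result = []
        for item in items:
            if item not in result:
                result.append(item)
        return result
```

When every item is hashable, the `try` costs essentially nothing; when one isn't, the wasted fast-path work is bounded by the (already larger) slow-path cost.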
Why doesn't it make sense to assume the "fast path" and switch to the "slow path" on failure? (Assuming the "fast path" wins on big-O, or at least by a nontrivial constant factor.) In the worst case, such a method can only be twice as slow as a hypothetical zero-overhead "are all items hashable?" check. And since evaluation only falls back to the "slow path" when it has to, the wasted "fast path" work may be inconsequential next to the "slow path" running time. Hashability prechecks on the "fast path", by contrast, may *not* be inconsequential relative to the normal running time of the "fast path".

In that sense, hashable() is a waste of time: it slows down the "fast path" by up to a constant factor, without improving the "slow path" by anything more than a constant factor. And those constants are different -- relatively large in the case of the "fast path", relatively small in the case of the "slow path".

Hashable is getting a big, fat -1 from me. On the other hand, I'm a full supporter of callable(). I don't much care about iterable(), but I'm leaning towards -1 on it as well.

- Josiah

_______________________________________________
Python-3000 mailing list
Python-3000@python.org
http://mail.python.org/mailman/listinfo/python-3000
Unsubscribe: http://mail.python.org/mailman/options/python-3000/archive%40mail-archive.com