-- Adam Rice <[EMAIL PROTECTED]> on 07/13/01 19:06:21 +0100

> But hashes waste masses of memory. This sort of thing is why Lisp
> programmers sneer at Perl programmers. The most efficient way to do set
> operations is with sorted arrays*, but Perl doesn't have primitives for
> those so it's not Fun.

If your data comes to you sorted and de-duped, fine.  Sorting
overhead is not trivial, nor is the extra time spent dealing
with dups in most cases.  The net result is that hashes can be
a nice, fast way to handle these cases.
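For concreteness, here's a rough sketch of the hash approach I
mean -- the data and names are made up, not from anyone's actual
code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Two unsorted lists with duplicates (hypothetical data).
    my @a = qw(foo bar baz foo qux);
    my @b = qw(bar qux quux bar);

    # Build a lookup hash from the first list; duplicate keys
    # collapse for free, no sort pass needed.
    my %in_a;
    @in_a{@a} = ();

    # Intersection: keep elements of @b that appear in %in_a,
    # de-duping the output with a second hash.
    my %seen;
    my @both = grep { exists $in_a{$_} && !$seen{$_}++ } @b;

    print "@both\n";   # bar qux

One pass to build the hash, one pass over the other list, and
you never had to sort or de-dup either input first.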

"Masses of memory" may also be a non-issue now that a GB of 
266MHz double-clock core costs under $300.  And if you have
enough data that the hashing overhead is significant in core
then you have a big enough list that sorting it would also
cause some pain.

