On Feb 2, 2005, at 8:48 PM, Stewart Stremler wrote:
>> Anything other than a toy lisp implements those structures exactly as you would expect, with the O() guarantees you would also expect.
> Weren't the first lisp implementations built that way? Or do those count as "toy implementations" as well?
Sure, the *first* implementations were probably built that way. However, most of the various dialects by 1980 had much better implementations.
>> Amortized vectors allocated from a data-specific memory area are really
>> friendly to modern processors (4-8 element vectors get sucked into
>> processor cache lines whenever *any* element gets accessed).
> Surely that doesn't have to be visible to the programmer?
Yes and no, but the problem is not confined to Lisp. I can build my vectors out of cons-cells -- then I get O(n) accesses and O(n) length, but inserts and deletes are O(1). Or I can build my cons-cells out of vectors -- then I get O(1) accesses and O(1) length, but inserts and deletes are O(n), as I have to copy things around.
How visible is O() performance to a programmer? Depends what they are doing.
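The tradeoff can be sketched quickly -- in Python rather than Lisp, purely for illustration; the names `cons`, `nth`, and `list_length` are just placeholders for the two representations:

```python
# Cons-cell representation: O(n) access and length, O(1) splice at a known node.
def cons(head, tail):
    return (head, tail)

def nth(cell, i):
    # Walk i links down the chain: O(n) access.
    while i > 0:
        cell = cell[1]
        i -= 1
    return cell[0]

def list_length(cell):
    # Must traverse the whole chain: O(n) length.
    n = 0
    while cell is not None:
        n += 1
        cell = cell[1]
    return n

lst = cons(10, cons(20, cons(30, None)))
print(nth(lst, 2))       # -> 30
print(list_length(lst))  # -> 3

# Flat-vector representation: O(1) access and length, O(n) insert/delete.
v = [10, 20, 30]
v.insert(1, 15)  # shifts every later element over: O(n)
print(v[2])      # -> 20, constant-time indexing
```

Same abstract sequence, opposite cost profiles -- which is exactly why the choice can leak through to the programmer.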
> Do these have different accessors in CL? That is, if I write a generic
> function that, oh, sums the elements of a collection, would I need an
> implementation for each sort of data-structure (list, tree, vector,
> etc.) or would the same code handle all types?
The generic sequence accessors normally paper over this problem, so the same code tends to work for all types.
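In CL terms, the sequence functions (LENGTH, ELT, REDUCE, and friends) accept lists and vectors alike, so (reduce #'+ seq) sums either one. A rough Python analogue of such representation-blind code (a sketch, not CL; `sum_elements` is an illustrative name):

```python
from array import array

def sum_elements(seq):
    # One body handles any sequence type via the shared iteration
    # protocol, much as CL's REDUCE works on both lists and vectors.
    total = 0
    for x in seq:
        total += x
    return total

print(sum_elements([1, 2, 3]))              # linked-ish Python list
print(sum_elements((1, 2, 3)))              # immutable tuple
print(sum_elements(array('i', [1, 2, 3])))  # packed numeric vector
```

All three calls return 6; the representation only shows up in the asymptotics, not in the calling code.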
-a
--
KPLUG-List mailing list [email protected] http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
