On 4/28/20 2:18 AM, Guido Stepken wrote:
E.g. Python's deque doesn't show any performance loss here.
The performance of a particular python data structure has no bearing on
the fact that your original statement:
In most Lisp languages, you only can "append" to a list, never "prepend"
..is incorrect.
It’s hard to predict cache behavior without an actual workload, so I would
recommend using cachegrind with a real program (not a benchmark) to
evaluate the cost of doing things one way or the other.
On Tue, Apr 28, 2020 at 05:24 Guido Stepken wrote:
Certainly. But how is it *implemented* internally? Mostly you suffer a
massive performance loss when prepending, because the complete linked list
gets moved to a new place in memory. If the internal representation is a
double cell, one value, pointer to next, then you quickly suffer CPU cache
misses.
Wild ju
On Mon, Apr 27, 2020 at 02:38:35PM -0700, Wilhelm Fitzpatrick wrote:
> > Correct. The first character(s) need to be accessed more prominently.
> >
> Hmm... because checking the first
> byte/character is just a mask, rather than
> shift & mask?
Yes, but most of all it concerns the first up to 8 characters.
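The idea can be sketched in Python (an illustrative model, assuming short names are packed little-endian into a single machine word, which is not necessarily PicoLisp's actual layout): the first character is then a plain mask of the low byte, while later characters also need a shift.

```python
# Sketch: pack a short ASCII name (up to 8 bytes) into one 64-bit word,
# little-endian, so the first character lands in the lowest byte.
def pack_name(name: str) -> int:
    data = name.encode("ascii")
    assert len(data) <= 8, "only short names fit in one word"
    return int.from_bytes(data, "little")

word = pack_name("abc")
first = word & 0xFF           # first character: just a mask
second = (word >> 8) & 0xFF   # later characters: shift, then mask

print(chr(first), chr(second))  # a b
```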
On 4/27/20 2:42 PM, Guido Stepken wrote:
In most Lisp languages, you only can "append" to a list, never "prepend":
"Prepend", aka "add to the beginning" seems the natural (and
non-destructive) operation of Lisp, e.g.
(cons 9 '(1 2 3)) -> (9 1 2 3)
..perhaps that is what you meant?
-wilhelm
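Wilhelm's point can be shown with a minimal cons-cell model in Python (an illustrative sketch, not PicoLisp's implementation): prepending allocates one new cell and shares the existing tail, so nothing is copied or moved, and the original list is untouched.

```python
# Minimal cons-cell model: each cell is a (car, cdr) pair.
def cons(car, cdr):
    return (car, cdr)

def to_list(cell):
    """Walk the cdr chain and collect the cars."""
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

xs = cons(1, cons(2, cons(3, None)))  # (1 2 3)
ys = cons(9, xs)                      # (9 1 2 3) -- tail shared with xs

print(to_list(ys))  # [9, 1, 2, 3]
print(to_list(xs))  # [1, 2, 3] -- unchanged, prepend is non-destructive
```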
In most Lisp languages, you only can "append" to a list, never "prepend".
Python, e.g., has such structures:
https://www.geeksforgeeks.org/deque-in-python/
The "*d*ouble *e*nded *queue*". Appending, prepending, Python simply lets
you do what you want! ;-)
Mighty, mighty stuff, highly efficient!
Thanks for the insights, Alex!
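For reference, basic usage of the deque from the linked page looks like this; both ends support constant-time append and pop.

```python
from collections import deque

d = deque([1, 2, 3])
d.append(4)       # O(1) at the right end
d.appendleft(0)   # O(1) at the left end

print(list(d))               # [0, 1, 2, 3, 4]
print(d.popleft(), d.pop())  # 0 4
```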
On 4/27/20 1:53 PM, Alexander Burger wrote:
This means a pointer to a cell points directly to its CAR, while a pointer to a
symbol points to its VAL. In both cases (car ...) or (val ...) is just a single
pointer dereference (memory fetch).
Ah, so car and val are
The "move-to-front" heuristic is equivalent to the LRU caching policy.
The "move-up-by-one" heuristic converges to the LFU (least frequently used)
caching policy.
Perhaps LFU is what you really want?
BTW, "move-up-by-one" has a side-effect on every invocation, as does
"move-to-front".
However, you
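The two heuristics can be sketched in Python over a plain list (the function names are mine, not from any library):

```python
def move_to_front(lst, key):
    """Move the accessed key to the head -- LRU-like ordering."""
    lst.insert(0, lst.pop(lst.index(key)))

def move_up_by_one(lst, key):
    """Swap the accessed key one step toward the head -- converges
    toward a frequency-based (LFU-like) ordering."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]

xs = ["a", "b", "c", "d"]
move_to_front(xs, "c")
print(xs)  # ['c', 'a', 'b', 'd']

ys = ["a", "b", "c", "d"]
move_up_by_one(ys, "c")
print(ys)  # ['a', 'c', 'b', 'd']
```

Note that both mutate the list on every lookup, which is the side effect discussed above.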
Hi Wilhelm,
> that (get ...) has a side effect! It moves the
> key that was accessed to the head of the
> symbol "tail". I assume this is a performance
> optimization to cause recently accessed
> properties to "bubble up" to the front of the
> list in case they are re-accessed soon.
Yes, exactly.
I've been digging down to really understand the symbol implementation in
Picolisp, since symbols are used for so many purposes within the language.
One surprising thing I learned last night is that (get ...) has a side
effect! It moves the key that was accessed to the head of the symbol
"tail"
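That observable behavior can be modeled in Python (the class and method names are mine; this mimics only the effect, not PicoLisp's actual symbol tail layout): a property lookup returns the value and moves the hit entry to the front of the list.

```python
# Hypothetical model of a symbol's property list as an association list,
# where a successful lookup moves the entry to the front.
class Sym:
    def __init__(self):
        self.props = []  # list of (key, value) pairs, head first

    def put(self, key, val):
        self.props.insert(0, (key, val))

    def get(self, key):
        for i, (k, v) in enumerate(self.props):
            if k == key:
                # side effect: bubble the accessed key to the head
                self.props.insert(0, self.props.pop(i))
                return v
        return None

s = Sym()
s.put("a", 1); s.put("b", 2); s.put("c", 3)
v = s.get("a")
print(v)                           # 1
print([k for k, _ in s.props])     # ['a', 'c', 'b'] -- 'a' moved to the head
```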