On Monday, 4 June 2012 at 20:40:03 UTC, Dmitry Olshansky wrote:
[snip]
> Sorry, my Trie implementation focused on the "construct once - read everywhere" case.
My cases will likely have quite low amortized number of reads per insert / delete.

>> How do you handle situations where a "does not exist" result is needed?

> Easy: say you have a 2-level trie with 4 entries per level (for the sake of simplicity), and say the final value type is int.

> Then, when the data is highly redundant or the initial data set just contains a small number of values (both cases are equivalent; that's why tries are easily invertible, btw):

> LVL 0: [0, 1, 0, 0]
> The first level always occupies its full size (it's asserted that there is no index >= 4).
> LVL 1: [0, 0, 0, 0] [1, 0, 2, 3]
> Note again - only full pages, no cheating with half-pages, but we can save on the number of them (note the almost obligatory zero page).

> So it contains only 1, 2 and 3, at indexes 4, 6 and 7 respectively; T.init is our way of saying NOT EXISTS ... yes, I think that should be user-definable. This way there are no checks anywhere - only shift, add, dereference, shift, add, dereference...
Smart
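For the record, the layout described above can be sketched as follows. This is a minimal illustration in Python rather than the thread's D, and the names (`LVL0`, `LVL1`, `lookup`, `PAGE_BITS`) are mine, not from Dmitry's code:

```python
# 2-level trie with 4-entry pages, a shared all-zero page,
# and 0 (standing in for T.init) as the NOT-EXISTS sentinel.

# Level 0: one full page of page indexes into level 1.
LVL0 = [0, 1, 0, 0]

# Level 1: pages stored back to back; page 0 is the shared
# "almost obligatory" zero page.
LVL1 = [0, 0, 0, 0,   # page 0: all T.init
        1, 0, 2, 3]   # page 1: the actual values

PAGE_BITS = 2                      # log2(entries per page)
PAGE_MASK = (1 << PAGE_BITS) - 1

def lookup(key):
    """Branch-free lookup: shift, dereference, shift+add, dereference."""
    page = LVL0[key >> PAGE_BITS]                      # pick a level-1 page
    return LVL1[(page << PAGE_BITS) + (key & PAGE_MASK)]
```

Keys 4, 6 and 7 come back as 1, 2 and 3; every other key lands in the zero page (or in the zero slot of page 1) and yields the sentinel, with no existence check on the lookup path.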

>> For example, feeding them one by one would allow ignoring the key encoding (and thus using multiple encodings simultaneously just as easily as a single one).

> It's just as easy with the whole thing. Treat it as bytes ;)
Except when equivalent keys in different encodings should be treated as equal. But now I can see that my counter-example is only partially valid - walkLength could be used instead of length (more expensive, though), and dchars everywhere.
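The point about encodings can be illustrated with a quick sketch (Python here rather than D, and the sample string is my own): as raw bytes the "same" key differs per encoding, while decoding to code points - the dchar/walkLength approach - unifies them.

```python
# Treating keys as raw bytes makes identical text in different
# encodings look like distinct keys; decoding to code points fixes that.

key = "привет"                        # any non-ASCII text shows the effect
utf8  = key.encode("utf-8")
utf16 = key.encode("utf-16-le")

assert utf8 != utf16                  # as byte strings, the keys differ
assert len(utf8) != len(key)          # byte length != code-point count

# Decoding both to code points recovers a single canonical key,
# analogous to iterating dchars and counting with walkLength
# instead of length.
assert utf8.decode("utf-8") == utf16.decode("utf-16-le") == key
```

The cost Dmitry hints at is real: decoding on every comparison is more expensive than a flat byte walk, which is the trade-off behind "treat it as bytes".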

>> I guess making my own mistakes is necessary anyway.

> It could be enlightening - just don't give up too quickly, and don't jump to conclusions. In fact, try to be sympathetic with the "losing party", as in: "hm, this way is much slower - too bad, I have to optimize it somehow". In other words, make sure you have squeezed all you can out of the "slow" method.
This deserves quoting somewhere :) Thanks a lot and have a good night! (It's late in Russia, isn't it?)
