>
> I'd suggest starting with the basic algorithm without any abstraction 
> (just hard-code in the type you want to store), then benchmark/tweak 
> the algorithm, and only then try to make it general. 


This is my conclusion too. Abstracting the code is a lot of churn if we're 
not yet sure the performance works without abstraction. We'll be able to 
help better on the Go front once there's a concrete foundation.
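
For what it's worth, here is a minimal sketch of that approach in Go, with 
the stored type hard-coded to uint32 and a benchmark stub next to it so the 
layout can be tuned before anything is generalised (all names here are made 
up for illustration, not taken from your code):

// bifarray_test.go - a hard-coded prototype, no abstraction yet.
package bifarray

import "testing"

// store is a flat array of uint32; the element type is hard-coded so
// the layout and algorithm can be tuned before any generalisation.
type store struct {
    data []uint32
}

func newStore(rows uint) *store {
    return &store{data: make([]uint32, 1<<rows)}
}

func BenchmarkFill(b *testing.B) {
    s := newStore(20)
    mask := len(s.data) - 1
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        s.data[i&mask] = uint32(i) // stand-in for the real insert step
    }
}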

Matt

On Tuesday, April 24, 2018 at 9:22:21 AM UTC-5, Louki Sumirniy wrote:
>
> Reading through the Wikipedia description of a heap, and especially a 
> binary heap... it looks a lot like a heap. But that's not a sexy name! 
> Technically it isn't a heap, though, because it orders values left to 
> right across each row, whereas a heap only orders bottom to top between 
> parent and child.
>
> I am stripping down my code and declaring the struct fields directly as 
> function types, and I will remove the method interface completely, as I 
> don't need it. That way I can isolate the externally visible functions 
> and 'hide' the internals, while still tying the scope of the functions 
> to the structure.
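
If I'm reading that right, the shape would be something like the sketch 
below, with the operations as function-typed fields closed over the struct 
rather than methods on an interface (the names are placeholders, not taken 
from your code):

package bifarray

// Tree keeps its storage unexported and exposes behaviour through
// function-typed fields that New binds to the instance.
type Tree struct {
    store []uint32 // flat array: row r occupies indices 2^r-1 .. 2^(r+1)-2

    Insert func(v uint32)
    Search func(v uint32) (index int, found bool)
}

func New(rows uint) *Tree {
    t := &Tree{store: make([]uint32, (1<<rows)-1)}
    t.Insert = func(v uint32) {
        // walk from the root at index 0, stepping left or right by comparison
    }
    t.Search = func(v uint32) (int, bool) {
        // same walk, reporting the index where v was found
        return 0, false
    }
    return t
}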
>
> If I really need a secondary value to flag empty or not, then the 
> memory-efficient way would be a bit-indexed flag for this property: one 
> bit of flag for every 32 bits of data, kept in an orthogonal entry in 
> the structure. You are correct about the zero value: any single 32-bit 
> hash is all zeroes with probability only 1/2^32, but with 2^30 entries 
> in the tree that works out to an expected 2^30/2^32 = 1/4 zero values 
> per full tree, which is far too frequent to ignore. That would have 
> been a problem! So for 30 rows of 32-bit values I need about 2^30 array 
> elements and 2^30/8 bytes to store the empty/full bits; I can't think 
> of a more space-efficient way to store this property. That is about 
> 134 MB for 30 rows. It can, however, be extended at the same time as 
> the tree, so its size stays proportional to the tree array, and it's 
> not a big overhead compared to the roughly 4 GB the 30-row value store 
> itself will use.
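
For the bit-indexed flag, I picture something along these lines: an 
orthogonal bitmap with one bit per slot, here assuming 64-bit words (again 
just a sketch):

package bifarray

// bitmap records which slots of the value array are occupied, so a
// stored value of zero is never mistaken for an empty slot.
// One bit per slot: 2^30 slots cost 2^30/8 bytes, about 134 MB.
type bitmap []uint64

func newBitmap(slots int) bitmap { return make(bitmap, (slots+63)/64) }

func (b bitmap) set(i int)       { b[i/64] |= 1 << (uint(i) % 64) }
func (b bitmap) clear(i int)     { b[i/64] &^= 1 << (uint(i) % 64) }
func (b bitmap) full(i int) bool { return b[i/64]&(1<<(uint(i)%64)) != 0 }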
>
> If these numbers seem excessive: the store covers 2^30 entries, a 
> quarter of the total range of possible 32-bit hashes (or half-hashes). 
> In the application I have in mind, most of the time trees probably 
> won't grow to more than about 15 rows, which is not that much, before 
> the required pattern is found in a hashchain, at least until the number 
> of miners on the network gets really big. I am writing a Proof of Work 
> algorithm, and I was aiming for something that would scale down to 
> short time periods at low difficulty.
>
> This tree store would satisfy that by allowing solutions to pop out at 
> a particular position in a hash chain derived from a nonce, so workers 
> would find solutions in a much more temporally distributed manner than 
> with Cuckoo Cycle and Equihash, which both use array sorts to do their 
> searches. Hence this type of structure. The reason for wanting a 
> compact underlying storage is precisely to maximise caching efficiency 
> in the CPU: I want to write a solver that can't be sped up 
> significantly by a faster memory bus or lower memory-cell retrieval 
> latency. Random access also still carries a small but significant 
> extra latency, hence the wish to reduce the memory footprint and keep 
> access as linear as possible. It makes absolutely no sense to use a 
> reference-based binary tree for values that are as small as the 
> references themselves, since that automatically means at least four 
> times the memory.
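
To make the size argument concrete, this is the contrast as I understand 
it: in the flat layout the links are pure index arithmetic, while a 
pointer-based node spends far more on references than on its 32-bit 
payload (sketch only):

package bifarray

// In the flat layout, parent/child links are pure index arithmetic on
// the backing slice, so no per-node pointers are stored at all.
func left(i int) int   { return 2*i + 1 }
func right(i int) int  { return 2*i + 2 }
func parent(i int) int { return (i - 1) / 2 }

// A reference-based node, by contrast, spends two 8-byte pointers (plus
// padding and allocator overhead) on every 4-byte value.
type node struct {
    value       uint32
    left, right *node
}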
>
> OK, again, big thanks for your help with this. I really will stop 
> ruminating, I promise!
>
