On Thursday, 10 February 2022 at 01:43:54 UTC, H. S. Teoh wrote:
On Thu, Feb 10, 2022 at 01:32:00AM +0000, MichaelBi via Digitalmars-d-learn wrote:
thanks, very helpful! i am using an assocArray now...

Are you sure that's what you need?

Depends. If you declare it as, say, TYPE[long], you effectively get a sparse array; as long as only a fraction of the indices are actually populated (say, a hundred million entries or so), it will probably be fine.
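A minimal sketch of that idea (the element type and indices here are just placeholders): a D associative array keyed by an integer behaves like a sparse array, where only the entries you write actually consume memory.

```d
// Sparse-array sketch: double[ulong] stores only populated indices,
// no matter how far apart they are.
void main()
{
    double[ulong] sparse;             // "array" with 2^64 virtual slots
    sparse[42]              = 3.14;   // only these three entries exist
    sparse[1_000_000_007]   = 2.71;
    sparse[999_999_999_999] = 1.0;

    // Unpopulated indices can fall back to a default value:
    auto v = sparse.get(7, 0.0);      // 0.0 -- index 7 was never written
    assert(v == 0.0);
    assert(sparse.length == 3);       // cost scales with entries, not span
}
```

The memory cost scales with the number of populated entries (plus per-entry AA overhead), not with the largest index used.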

Depending on what you're storing (say, only a few bits per entry), you can probably use a BitArray to pack the values. Even so, 10^12 one-bit entries would take up roughly 116 GiB; that won't work. I wonder how the 25 GB figure was calculated.
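For illustration (the sizes here are arbitrary), std.bitmanip.BitArray packs one bit per entry, and the arithmetic at the end shows why 10^12 single-bit entries still need on the order of 116 GiB:

```d
// BitArray sketch: one bit of storage per boolean entry.
import std.bitmanip : BitArray;

void main()
{
    BitArray flags;
    flags.length = 1_000_000;        // 10^6 bits -> ~122 KiB of storage
    flags[123_456] = true;
    assert(flags[123_456]);
    assert(!flags[0]);

    // 10^12 bits / 8 bits-per-byte / 2^30 bytes-per-GiB:
    enum double gib = 1e12 / 8 / (1UL << 30);
    assert(gib > 116 && gib < 117);  // ~116.4 GiB -- still far too big
}
```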

Though data of that size sounds more like a job for a database. So it might be better to make an index-access wrapper that reads from and writes to storage media behind the scenes, swapping a single 1 MiB block (or some other power-of-two size) in and out on each read/write. Again, if the data isn't too dense, you could get away with using a zram drive and leaving the allocation/compression to the OS so it all stays in memory (though that's not going to be a universal workaround and only works on Linux). Alternatively you could compress blocks of data with zlib and have the array-like object handle swapping them in and out, but I'm more iffy on that.
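A hedged sketch of that last "compressed blocks" idea, assuming a made-up BlockStore type: each fixed-size block is kept zlib-compressed, and opIndex/opIndexAssign decompress only the block currently being touched (here the compressed blobs stay in memory; a real version might write them to disk instead).

```d
// Hypothetical BlockStore: fixed-size blocks, zlib-compressed at rest,
// with a one-block decompressed cache.
import std.zlib : compress, uncompress;

struct BlockStore
{
    enum blockBytes = 1 << 20;       // 1 MiB blocks, as suggested above
    ubyte[][] blobs;                 // one compressed blob per block
    ubyte[] cache;                   // currently decompressed block
    size_t cachedBlock = size_t.max;

    this(size_t totalBytes)
    {
        // All blocks start as zeros; zero blocks compress tiny and can
        // share one blob until they are first written.
        auto zeroBlob = cast(ubyte[]) compress(new ubyte[blockBytes]);
        blobs.length = (totalBytes + blockBytes - 1) / blockBytes;
        foreach (ref b; blobs) b = zeroBlob;
    }

    private void load(size_t blk)
    {
        if (blk == cachedBlock) return;
        if (cachedBlock != size_t.max)           // write back old block
            blobs[cachedBlock] = cast(ubyte[]) compress(cache);
        cache = cast(ubyte[]) uncompress(blobs[blk], blockBytes);
        cachedBlock = blk;
    }

    ubyte opIndex(size_t i)
    {
        load(i / blockBytes);
        return cache[i % blockBytes];
    }

    void opIndexAssign(ubyte v, size_t i)
    {
        load(i / blockBytes);
        cache[i % blockBytes] = v;
    }
}

void main()
{
    auto store = BlockStore(4 << 20);  // 4 MiB virtual array, 4 blocks
    store[3_000_000] = 42;             // touches block 2
    assert(store[3_000_000] == 42);    // reload across a block switch
    assert(store[0] == 0);
}
```

How well this pays off depends entirely on how compressible the data is; dense random data would gain nothing.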

If the array just holds, say, the results of a formula, you could instead use a range with index support that generates each value on demand. That uses very little space (depending on the minimum state needed to generate the values), though it may be slower than direct memory access.
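For example (using i*i as a stand-in for whatever formula actually generates the data), iota plus map gives a lazy random-access range with O(1) memory, recomputing each value at the point of access:

```d
// A "virtual array" of 10^12 elements that computes values on demand.
import std.algorithm : map;
import std.range : iota;

void main()
{
    auto virtualArray = iota(0UL, 1_000_000_000_000UL).map!(i => i * i);
    assert(virtualArray.length == 1_000_000_000_000UL);
    assert(virtualArray[1_000_000] == 1_000_000_000_000UL);
    assert(virtualArray[3] == 9);
}
```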
