@Niminem - I have done the `std/memfiles` thing a lot to avoid "recalculating 
The World" only because your process exited. >1 value is not hard and you're 
already close. <https://github.com/c-blake/nio> is really a generic data layer 
("like CSV but running off **_live_** mmaps" because, once you have an 
MMU/virtual memory, memory-is-memory). It has a few ways to handle string 
data in `FileArray`s.
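To make the >1-value pattern concrete, here is a minimal sketch (file name, record layout, and float payload are all my own assumptions, not nio's actual format) of saving results into a `std/memfiles` mapping in one run and loading them back in the next:

```nim
# Sketch: persist computed float results across process runs via std/memfiles,
# so an exit does not force "recalculating The World".  Layout is assumed:
# just n raw float64s, no header.
import std/memfiles

const path = "results.dat"          # assumed scratch file name

proc save(vals: openArray[float]) =
  var mf = memfiles.open(path, mode = fmReadWrite,
                         newFileSize = vals.len * sizeof(float))
  let p = cast[ptr UncheckedArray[float]](mf.mem)
  for i, v in vals: p[i] = v        # write straight into the mapping
  mf.close                          # munmap; OS flushes pages to the file

proc load(n: int): seq[float] =
  var mf = memfiles.open(path)      # read-only map of the existing file
  let p = cast[ptr UncheckedArray[float]](mf.mem)
  result = newSeq[float](n)
  for i in 0 ..< n: result[i] = p[i]
  mf.close

save([1.0, 2.0, 3.0, 4.0])          # "process run 1"
echo load(4)                        # "process run 2": @[1.0, 2.0, 3.0, 4.0]
```

A real layout would also record `n` (e.g. `mf.size div sizeof(float)`) rather than passing it back in.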

The simplest self-contained, end-to-end application example I have is probably 
the ultimate identifier bike-shedder's tool: <https://github.com/c-blake/thes>. 
Something with a more ornate variable-length list allocation is the older and 
less simple <https://github.com/c-blake/suggest>. 
<https://github.com/c-blake/ndup> has a couple more examples.

<https://github.com/c-blake/cligen> has an alternative layering of file memory 
maps and some utility/parsing code in cligen/mfile & cligen/mslice, which is 
used by `wgt.nim` in <https://github.com/c-blake/bu> along with the (sorry, 
rather partial) <https://github.com/c-blake/adix/blob/master/adix/oats.nim>, a 
new-style concept hash table with "more delegated" memory handling. That lets 
`wgt` run off a live, file-resident hash table, which is much like a 
"database", but one trusting OSes to flush data to devices and with no 
coordination controls for multi-simultaneous-process access. Giving up either 
assumption pulls in a lot of locking/etc. complexity that many use cases may 
well not need, and so that should be layered in judiciously.
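To illustrate just the core idea of a file-resident hash table, here is a toy that is *not* the adix/oats API: it assumes a fixed power-of-2 capacity, int64 keys with 0 meaning "empty", a single process, never-full tables, and no crash consistency - exactly the simplifying assumptions described above:

```nim
# Toy open-addressed hash table whose slot array lives in a file mapping,
# so the "database" persists across process runs.  NOT the oats API.
import std/memfiles

type Slot = object
  key, val: int64                       # key 0 means "empty slot"

const cap = 1024                        # fixed capacity; power of 2

proc openTab(path: string): MemFile =
  memfiles.open(path, mode = fmReadWrite, newFileSize = cap * sizeof(Slot))

proc slots(mf: MemFile): ptr UncheckedArray[Slot] =
  cast[ptr UncheckedArray[Slot]](mf.mem)

proc put(mf: MemFile, key, val: int64) =
  let s = mf.slots
  var i = int(key.uint64 mod cap.uint64)  # trivial hash; fine for a demo
  while s[i].key != 0 and s[i].key != key:
    i = (i + 1) and (cap - 1)             # linear probing; assumes not full
  s[i] = Slot(key: key, val: val)

proc get(mf: MemFile, key: int64): int64 =
  let s = mf.slots                      # returns 0 for absent keys
  var i = int(key.uint64 mod cap.uint64)
  while s[i].key != 0:
    if s[i].key == key: return s[i].val
    i = (i + 1) and (cap - 1)

var mf = openTab("tab.db")              # hypothetical file name
mf.put(42, 99)
mf.close                                # table outlives this "process"
var mf2 = memfiles.open("tab.db", mode = fmReadWrite)
echo mf2.get(42)                        # 99
mf2.close
```

Growth, deletion, string keys, and crash safety are where the real complexity (and libraries like oats) come in.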

If all those examples in my code are not enough, @Vindaar has recently done 
<https://github.com/Vindaar/forked> and added a std/memfiles alternative to 
<https://github.com/Vindaar/flatBuffers>. If you map a file in a RAM 
filesystem (like /dev/shm on Linux), populate it in one process, and then 
read it in another, you realize 1-write & 1-read communication (aka zero 
overhead comms), much like shared-memory multi-threading, but A) **_opting in 
to danger_** for surgically scoped memory and B) as a bonus having that memory 
outlive your process. C) If you are using the file system as your primal 
allocator on a modern FS like btrfs, ZFS, bcachefs, .., then you can also save 
on IO with transparent data decompression (though modern NVMe drives can often 
go faster than even multi-threaded decompression, and some OSes, like Linux, 
will also compress RAM).
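A minimal sketch of that 1-write/1-read pattern (the path and message are my own assumptions, and the two `block`s stand in for two separate processes; requires a Linux-style /dev/shm):

```nim
# Process A populates a mapping under /dev/shm (a RAM filesystem); process B
# later maps the same path and reads it - no pipe, no serialization round-trip.
import std/memfiles

const path = "/dev/shm/demo.msg"       # assumed; any tmpfs path works
var msg = "hello from process A"

block writer:                          # imagine this in process A
  var mf = memfiles.open(path, mode = fmReadWrite, newFileSize = msg.len)
  copyMem(mf.mem, msg[0].addr, msg.len)
  mf.close

block reader:                          # ... and this in process B, later
  var mf = memfiles.open(path)
  var got = newString(mf.size)
  copyMem(got[0].addr, mf.mem, mf.size)
  mf.close
  echo got                             # hello from process A
```

The reader here copies into a `string` only for printing; real zero-copy consumers would work on the mapping directly.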

Nice benefits, but there are costs, namely needing to "name memory areas" in 
the filesystem, to write your own allocators against them, and to cope with 
much of the standard library working against types like `string` rather than 
`openArray[char]`.
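A small sketch of that last friction point, assuming a toy two-word file: procs taking `openArray[char]` can run straight off the mapping, while a `string`-only proc forces a copy first:

```nim
# string-vs-openArray friction with mapped files: the bytes are already in
# place, but any proc demanding a `string` makes you materialize a copy.
import std/[memfiles, strutils]

writeFile("words.txt", "alpha beta")   # toy input file
var mf = memfiles.open("words.txt")
let bytes = cast[ptr UncheckedArray[char]](mf.mem)

# Zero-copy: openArray[char]-accepting procs work on the mapping directly.
echo bytes.toOpenArray(0, mf.size - 1).contains('b')   # true

# Copying: strutils.split wants a string, so we must copy the bytes out.
var s = newString(mf.size)
copyMem(s[0].addr, mf.mem, mf.size)
echo s.split(' ')                                      # @["alpha", "beta"]
mf.close
```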

I should say little of this is really "new".. it's almost as old as processes 
themselves. <https://en.wikipedia.org/wiki/MIT-SHM> was doing it back in 1991, 
and other systems before that, as a way to optimize large message transfers. It 
has always seemed "under-attended" to me in the designs of PLs and programming 
systems, though. Maybe that relates to deployment-portability concerns, since I 
guess some embedded CPU providers still skimp on MMUs, but to me virtual 
memory, files, and folders/directories are all done deals. Anyway, hopefully 
something in the above helps.
