I am planning to take a stab at fixing 
https://github.com/cloudius-systems/osv/issues/884. As explained in the 
issue, a possible solution would require a data structure that supports 
efficient reads from and writes to the file, as well as efficient memory 
allocation. 

One candidate is std::deque. Another structure that fits the bill is 
std::unordered_map, which could store file segments of constant size (a 4K 
page, or possibly larger, but still a power of 2). For every write we would 
simply allocate any missing file segments; for every read we would find the 
range of segments to copy from. The segments would be keyed by 
(offset / segment size) and ideally page-aligned. 
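
To make the idea a bit more concrete, here is a rough sketch of what I have 
in mind. The names and details are illustrative only (this is not actual OSv 
code); alignment, locking and truncation are left out, and a plain heap 
allocation stands in for the page-aligned allocation we would really want:

#include <cstdint>
#include <cstring>
#include <memory>
#include <unordered_map>
#include <algorithm>

class segmented_file_data {
    // One page per segment; could be a larger power of 2.
    static constexpr size_t segment_size = 4096;
    // Segments keyed by (offset / segment_size). Missing keys are holes.
    std::unordered_map<uint64_t, std::unique_ptr<char[]>> segments;
    uint64_t file_size = 0;

    char* get_or_allocate(uint64_t index) {
        auto& seg = segments[index];
        if (!seg) {
            // Zero-filled; a real implementation would allocate page-aligned memory.
            seg.reset(new char[segment_size]());
        }
        return seg.get();
    }

public:
    void write(const char* buf, uint64_t offset, size_t len) {
        uint64_t end = offset + len;
        while (offset < end) {
            uint64_t index = offset / segment_size;
            size_t seg_off = offset % segment_size;
            size_t n = std::min<uint64_t>(segment_size - seg_off, end - offset);
            // Allocate the segment on demand and copy the chunk into it.
            std::memcpy(get_or_allocate(index) + seg_off, buf, n);
            buf += n;
            offset += n;
        }
        file_size = std::max(file_size, end);
    }

    size_t read(char* buf, uint64_t offset, size_t len) const {
        if (offset >= file_size) return 0;
        uint64_t end = std::min<uint64_t>(offset + len, file_size);
        size_t done = 0;
        while (offset < end) {
            uint64_t index = offset / segment_size;
            size_t seg_off = offset % segment_size;
            size_t n = std::min<uint64_t>(segment_size - seg_off, end - offset);
            auto it = segments.find(index);
            if (it != segments.end()) {
                std::memcpy(buf + done, it->second.get() + seg_off, n);
            } else {
                // A hole in a sparse file reads back as zeros.
                std::memset(buf + done, 0, n);
            }
            done += n;
            offset += n;
        }
        return done;
    }
};
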

This data structure would also support sparse files and help address 
https://github.com/cloudius-systems/osv/issues/979 by optimizing memory 
usage when mmap()-ing ramfs files. 

I wonder what others think about this approach.

Waldek
