> On Apr 19, 2019, at 12:46 PM, Carl Edquist <[email protected]> wrote:
> 
> For instance - if you have a 30GB db file on a 64bit system with <= 2GB ram, 
> you can still mmap the whole file, and benefit from that mmap.  If the 
> portion of the db that gets used for a query fits within the available 
> pagecache ram, it's a clear win.  (It's not like the whole file automatically 
> gets read from disk into the pagecache for the mmap.)

Oops, you’re right. I somehow lost sight of the obvious when replying…
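
(For what it's worth, here's a rough sketch of that lazy paging. The path and
the 100MB "query window" below are made up, and it assumes 4K pages, but it
shows the point: mapping the file costs address space, not RAM, and only the
pages you actually touch get faulted in.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/data/big.db", O_RDONLY);   /* hypothetical 30GB file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Maps the whole file, but on a 64-bit system this only reserves
         * address space -- nothing is read from disk yet. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch one byte per (assumed 4K) page across a 100MB window; only
         * those pages get faulted into the pagecache, not the whole file. */
        size_t sum = 0;
        for (off_t off = 0; off < st.st_size && off < 100*1024*1024; off += 4096)
            sum += p[off];

        printf("touched a 100MB window, sum = %zu\n", sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }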

> But even if the whole file is used for the query (that is, more than fits 
> into pagecache/ram), it still has the benefit of avoiding the system calls 
> for the file seeks/reads.  (Either way the kernel needs to swap disk pages 
> into/out of the pagecache.)

Sort of. Most current OSs have a unified buffer cache, wherein the filesystem
cache and VM pages share the same RAM. A page fault and an explicit file read
therefore incur similar amounts of work. The big benefit is that clean
memory-mapped pages can simply be evicted from RAM when the space is needed for
other stuff, whereas the pages of a malloc-ed cache are anonymous memory, count
as dirty, and have to be written to swap before the RAM can be reused. (That
said, I am not a kernel or filesystem guru, so I am probably oversimplifying.)
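
To make both halves of that concrete -- the per-read syscall that the mapping
avoids, and the clean file-backed pages the kernel is free to drop -- here's a
rough Linux-flavored sketch (the path is made up, and mincore() isn't portable
everywhere):

    #define _DEFAULT_SOURCE   /* for mincore() with glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/data/big.db", O_RDONLY);   /* hypothetical db file */
        if (fd < 0) { perror("open"); return 1; }

        long pagesz = sysconf(_SC_PAGESIZE);
        size_t npages = 256;
        size_t len = npages * (size_t)pagesz;

        /* Read path 1: explicit I/O -- one pread() syscall per access. */
        char buf[64];
        for (size_t i = 0; i < 16; i++)
            pread(fd, buf, sizeof buf, (off_t)(i * (size_t)pagesz));

        /* Read path 2: the mapping -- after the first fault on a page,
         * further accesses are plain loads with no kernel round trip. */
        unsigned char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        volatile unsigned char sink = 0;
        for (size_t i = 0; i < 16; i++)
            sink += p[i * (size_t)pagesz];
        (void)sink;

        /* Either way the data sits in the same pagecache.  The mapped pages
         * are clean and file-backed, so under memory pressure the kernel can
         * simply drop them and re-read from the file later (no swap-out);
         * mincore() reports which of them happen to be resident right now. */
        unsigned char vec[256];
        if (mincore(p, len, vec) == 0) {
            int resident = 0;
            for (size_t i = 0; i < npages; i++)
                resident += vec[i] & 1;
            printf("%d of %zu mapped pages currently resident\n",
                   resident, npages);
        }

        munmap(p, len);
        close(fd);
        return 0;
    }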

But yeah, I agree with you that it seems odd to have a compiled-in restriction 
on the maximum memory-map size.
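
For reference, the compile-time cap is SQLITE_MAX_MMAP_SIZE (0x7fff0000, just
under 2GB, on most builds), and PRAGMA mmap_size quietly clamps to it. A quick
sketch of how that shows up, with a made-up path:

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int ncol, char **vals, char **names)
    {
        (void)unused;
        for (int i = 0; i < ncol; i++)
            printf("%s = %s\n", names[i], vals[i] ? vals[i] : "NULL");
        return 0;
    }

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("/data/big.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Ask for ~30GB of memory-mapped I/O ... */
        sqlite3_exec(db, "PRAGMA mmap_size=30000000000;", NULL, NULL, NULL);

        /* ... then read the pragma back to see what's actually in effect --
         * the request gets clamped to SQLITE_MAX_MMAP_SIZE. */
        sqlite3_exec(db, "PRAGMA mmap_size;", print_row, NULL, NULL);

        sqlite3_close(db);
        return 0;
    }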

—Jens