On 31 January 2016 at 15:09, Yannick Duchêne <yannick_duchene at yahoo.fr>
wrote:

> If it's memory mapped, it's less an efficiency issue,
>

Hm, can you elaborate on this assertion? I don't think I agree.

Let's say sqlite wants to access a page in the DB/journal. In the case of
normal file access this is a call to pread/ReadFile, and in the
memory-mapped case a call to memcpy.

Now, the data in question may or may not already be in the OS's disk cache.
If it is, pread/ReadFile/memcpy proceeds without delay. If the data is not
in the cache, pread/ReadFile blocks until the i/o is complete. Similarly,
memcpy will encounter a page fault and the process will block until the OS
completes the i/o required to fill the page in memory. I'll grant there's
an extra syscall per i/o in the normal file access mode, but this cost is
_vanishingly_ small compared to the time required to load data from disk.
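
To make the comparison concrete, here is a rough sketch of reading one page
both ways. This is not SQLite's actual code; the file name, page size and
offset are made up for illustration:

    /* Sketch only: compare plain pread() with mmap()+memcpy() for one page. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void){
      char page[4096];
      off_t offset = 0;                       /* made-up page offset */
      int fd = open("test.db", O_RDONLY);
      if( fd<0 ) return 1;

      /* Plain file access: one syscall per page. Blocks inside the kernel
      ** if the data is not already in the OS disk cache. */
      pread(fd, page, sizeof(page), offset);

      /* Memory-mapped access: map once up front, then a plain memcpy.
      ** Blocks on a page fault if the data is not already resident. */
      struct stat st;
      fstat(fd, &st);
      void *map = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      if( map!=MAP_FAILED && st.st_size>=(off_t)sizeof(page) ){
        memcpy(page, (char*)map + offset, sizeof(page));
        munmap(map, st.st_size);
      }
      close(fd);
      return 0;
    }

Either way, a cache miss costs a real disk read; the mmap path just saves
the syscall overhead on the hits.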

Have I misunderstood the mechanism behind memory-mapped file access?
-Rowan
