On Monday, 15 April 2024 at 16:13:41 UTC, Andy Valencia wrote:
On Monday, 15 April 2024 at 08:05:25 UTC, Patrick Schluter wrote:
The setup of a memory mapped file is relatively costly. For smaller files it is a net loss and read/write beats it hands down.

Interestingly, this performance deficit shows up even when run against the largest conveniently available file on my system, libQt6WebEngineCore.so.6.4.2 at 148 megs. But since the same behaviour reproduces in its C counterpart, it is not a reflection of D at all.

As you say, truly random access might play to mmap's strengths.

Indeed, my statement concerning file size is misleading. It's the number of operations done on the file that matters; bigger files simply tend to attract more operations. I have measurements from our system (Linux servers), where we have big index files, representing a ternary tree, that are generally memory mapped. These files are several hundred megabytes in size and access to them is almost random. The files still grow, but the growing parts are not memory mapped; they are accessed with pread() and pwrite() calls instead. For reads of exactly 64 bytes (the size of a record), access via pread() takes exactly twice as long as the corresponding memory copy from the mapped region.
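
For illustration, here is a minimal D sketch of the two access patterns described above: reading one 64-byte record either through a memory-mapped view or with a pread() call. The file name, mapped size, and record offset are invented for the example; our real index files are of course laid out differently.

```d
// Sketch: read one 64-byte record via a memory-mapped view and via pread().
// File name, mapped size and record offset are hypothetical.
import core.sys.posix.fcntl : open, O_RDONLY;
import core.sys.posix.sys.mman : mmap, munmap, MAP_FAILED, MAP_PRIVATE, PROT_READ;
import core.sys.posix.unistd : close, pread;
import std.stdio : writeln;
import std.string : toStringz;

enum recordSize = 64;   // size of one index record

void main()
{
    immutable fileName = "index.dat";        // hypothetical index file
    immutable size_t mappedSize = 1 << 20;   // assumed size of the mapped region
    immutable size_t recordOffset = 4096;    // assumed position of the record

    int fd = open(fileName.toStringz, O_RDONLY);
    if (fd < 0) { writeln("open failed"); return; }
    scope (exit) close(fd);

    // Variant 1: memory-mapped access; once the page is resident,
    // reading the record is just a memory copy.
    void* base = mmap(null, mappedSize, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base != MAP_FAILED)
    {
        scope (exit) munmap(base, mappedSize);
        ubyte[] view = (cast(ubyte*) base)[0 .. mappedSize];
        ubyte[recordSize] rec1;
        rec1[] = view[recordOffset .. recordOffset + recordSize];
        writeln("mmap: first byte = ", rec1[0]);
    }

    // Variant 2: pread(); each record access pays one system call.
    ubyte[recordSize] rec2;
    auto n = pread(fd, rec2.ptr, recordSize, recordOffset);
    if (n == recordSize)
        writeln("pread: first byte = ", rec2[0]);
}
```

The comparison in our measurements is between these two shapes of access: variant 2 costs a system call per 64-byte record, while variant 1 is only a memory copy for pages already resident, which is where the factor of two comes from.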


My real point is that, whichever API I used, coding in D was far less tedious; I like the resulting code, and it showed no meaningful performance cost.

