Hans-Peter Diettrich wrote on Wed, 14 Jul 2010:

Marco van de Voort wrote:
Mapping does not change that picture (the head still has to move if you
access a previously unread block). Mapping is mainly about:
- zero-copy access to the file contents
- using the VM system to cache _already accessed_ blocks
- backing RAM pages with the original file, so they never end up in the swap file

Apart from specific scenarios, memory mapping can easily be slower than direct reads. The main reason is that you get round trips to the OS via hardware traps whenever you trigger a page fault, instead of doing one or more system calls, which are relatively cheap compared to such traps. The potential saving of a few memory copies, especially for files in the 2-500 KB range, is very unlikely to compensate for this.
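
For reference, a minimal sketch of the mapping path on a Unix-like target, using the BaseUnix wrappers; the file name and the absence of error handling are purely illustrative. The pointer returned by FpMmap gives zero-copy access to the file contents, but every first touch of a not-yet-resident page is a trap into the kernel.

program mapdemo;

uses
  BaseUnix;

var
  fd   : cint;
  info : Stat;
  buf  : pbyte;
  i    : sizeint;
  sum  : sizeint;
begin
  fd := FpOpen('input.pas', O_RDONLY);   { hypothetical file name }
  if fd < 0 then
    Halt(1);
  FpFStat(fd, info);

  { map the whole file read-only; nothing is copied at this point }
  buf := FpMmap(nil, info.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if buf = pointer(-1) then              { MAP_FAILED }
    Halt(1);

  { the first access to each page that is not yet resident causes a
    page fault, i.e. a round trip into the kernel }
  sum := 0;
  i := 0;
  while i < info.st_size do
    begin
      inc(sum, buf[i]);
      inc(i, 4096);
    end;

  FpMunmap(buf, info.st_size);
  FpClose(fd);
end.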

I see the biggest benefit in the many possible optimizations in the scanner and parser, which can be implemented *only if* an entire file resides in memory.

Then just read it into a buffer in one shot.
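
For illustration, a minimal sketch of that approach using the RTL's TFileStream (the file name is hypothetical): one open, one read, and the scanner gets a single contiguous buffer covering the whole file.

program readdemo;

uses
  SysUtils, Classes;

var
  fs  : TFileStream;
  buf : ansistring;
  len : longint;
begin
  fs := TFileStream.Create('input.pas', fmOpenRead or fmShareDenyWrite);
  try
    len := fs.Size;
    SetLength(buf, len);
    if len > 0 then
      fs.ReadBuffer(buf[1], len);   { one read call for the entire file }
  finally
    fs.Free;
  end;
  { the scanner/parser can now address any position in buf directly }
end.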

If memory management and (string) copies really are as expensive as some people say, then these *additional* optimizations should give the speed gain that is actually achievable.

a) the memory management overhead primarily comes from allocating and freeing machine instruction (and, to a lesser extent, node tree) instances
b) the string copy cost I mentioned primarily comes from getting symbol names for the purpose of generating RTTI and assembler symbol names
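
To make (a) concrete, here is a generic illustration of the allocation pattern (this is not FPC's actual class, just a stand-in): every generated machine instruction becomes one small heap-allocated object, so compiling a large unit means a great many Create/Free pairs and the corresponding heap manager round trips.

program allocdemo;

{$mode objfpc}

type
  { purely illustrative stand-in for a machine instruction object }
  TInstr = class
    opcode : word;
    ops    : array[0..2] of pointer;
    next   : TInstr;
  end;

var
  list, p : TInstr;
  i       : integer;
begin
  list := nil;
  { one heap allocation per emitted instruction ... }
  for i := 1 to 100000 do
    begin
      p := TInstr.Create;
      p.next := list;
      list := p;
    end;
  { ... and one deallocation when the list is dropped again; this
    alloc/free traffic is where the memory management overhead comes from }
  while list <> nil do
    begin
      p := list.next;
      list.Free;
      list := p;
    end;
end.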


Jonas

PS: please update the subject when changing the topic
