Marco van de Voort wrote:
In our previous episode, Hans-Peter Diettrich said:
For me, a much higher priority when doing rewrites might be
multithreading of the compiler itself.
That's questionable, depending on the real bottlenecks in compiler operation. I suspect that disk I/O is the narrowest bottleneck, and that one cannot be widened by parallel processing.

No, that has to be solved by bigger granularity (compiling more units in
one go). That avoids ppu reloading and limits directory searching (there is
a cache, IIRC), freeing up more bandwidth for source loading.

ACK. The compiler should process as many units as possible in one go - but this is more a matter of the surrounding framework (Make, Lazarus...), which should pass complete lists of units (projects, packages) to the compiler.

As a workaround, a dedicated server process could hold the most recently processed unit objects in RAM, for use in immediately following compilations of other units. But this would only cure the symptoms, not the cause of slow compiles :-(
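
Just to illustrate the idea, here is a rough sketch of such a most-recently-used unit cache. All names (TCachedUnit, GetUnit, the limit of 64 entries) are made up for the example; nothing of this exists in the compiler:

program unitcache;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils;

type
  { stand-in for a parsed unit / loaded ppu image }
  TCachedUnit = class
    Name: string;
    Image: TMemoryStream;
    destructor Destroy; override;
  end;

destructor TCachedUnit.Destroy;
begin
  Image.Free;
  inherited Destroy;
end;

const
  MaxCachedUnits = 64;            { arbitrary limit for the example }

var
  Cache: TStringList;             { unit name -> TCachedUnit, most recent last }

function GetUnit(const AName: string): TCachedUnit;
var
  i: Integer;
begin
  i := Cache.IndexOf(AName);
  if i >= 0 then
  begin
    Result := TCachedUnit(Cache.Objects[i]);
    Cache.Move(i, Cache.Count - 1);      { mark as most recently used }
    Exit;
  end;
  { cache miss: here the real ppu would be loaded and parsed }
  Result := TCachedUnit.Create;
  Result.Name := AName;
  Result.Image := TMemoryStream.Create;
  if Cache.Count >= MaxCachedUnits then
  begin
    Cache.Objects[0].Free;               { evict the oldest entry }
    Cache.Delete(0);
  end;
  Cache.AddObject(AName, Result);
end;

var
  i: Integer;
begin
  Cache := TStringList.Create;
  GetUnit('system');
  GetUnit('sysutils');
  GetUnit('system');                     { second request is served from RAM }
  writeln('units held in RAM: ', Cache.Count);
  for i := 0 to Cache.Count - 1 do
    Cache.Objects[i].Free;
  Cache.Free;
end.

A real server process would of course also have to invalidate cached entries when sources or ppus change on disk, and guard the cache against concurrent access.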


Not only compiling can go in parallel; I assume one could also load a ppu in
parallel? (And so overlap the blocking time of the disk I/O with the
parsing of the .ppu contents.)

It may be a good idea to implement two models, one that reads entire files and one that keeps the current (buffered) access. Depending on disk fragmentation it may be faster to read an entire (unfragmented) source or ppu file at once, before requests for other files can cause disk seeks and slow down continued reading from other places. Both models could even be used concurrently, if an arbitration is possible based on certain system (load) parameters. In the simplest case an adjustable source file buffer size (command line argument) would allow file processing to be measured and optimised from outside the compiler, independently of any (future) internal threading.
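
To make the two models concrete, here is a small sketch. The buffer size is taken from the command line purely for illustration; the compiler currently has no such switch:

program readmodels;
{$mode objfpc}{$H+}
uses
  SysUtils, Classes;

{ model 1: slurp the whole file in a single request,
  so no other request can cause a seek in between }
function ReadWholeFile(const AFileName: string): TMemoryStream;
begin
  Result := TMemoryStream.Create;
  Result.LoadFromFile(AFileName);
end;

{ model 2: scan the file through a buffer of adjustable size }
procedure ReadBuffered(const AFileName: string; ABufSize: LongInt);
var
  fs: TFileStream;
  buf: array of Byte;
  got: LongInt;
begin
  fs := TFileStream.Create(AFileName, fmOpenRead or fmShareDenyWrite);
  try
    SetLength(buf, ABufSize);
    repeat
      got := fs.Read(buf[0], ABufSize);
      { here the scanner would consume 'got' bytes }
    until got <= 0;
  finally
    fs.Free;
  end;
end;

var
  whole: TMemoryStream;
  bufsize: LongInt;
begin
  if ParamCount < 1 then
    Halt(1);
  whole := ReadWholeFile(ParamStr(1));
  writeln('whole file read: ', whole.Size, ' bytes in one go');
  whole.Free;
  { the buffer size could come from a command line switch }
  bufsize := StrToIntDef(ParamStr(2), 64 * 1024);
  if bufsize <= 0 then
    bufsize := 64 * 1024;
  ReadBuffered(ParamStr(1), bufsize);
end.

With such a test program one could at least measure whether whole-file reads pay off on a fragmented disk before touching the compiler itself.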
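
And regarding the parallel ppu loading mentioned above: a minimal sketch of what overlapping the blocking disk I/O might look like, assuming one worker thread per ppu file (TPpuLoader and the rest are invented names, not compiler code):

program parppuload;
{$mode objfpc}{$H+}
uses
  {$IFDEF UNIX}cthreads,{$ENDIF}
  Classes, SysUtils;

type
  { one worker per ppu: the blocking disk read happens off the main thread }
  TPpuLoader = class(TThread)
  private
    FFileName: string;
    FData: TMemoryStream;
  protected
    procedure Execute; override;
  public
    constructor Create(const AFileName: string);
    destructor Destroy; override;
    property FileName: string read FFileName;
    property Data: TMemoryStream read FData;
  end;

constructor TPpuLoader.Create(const AFileName: string);
begin
  FFileName := AFileName;
  FData := TMemoryStream.Create;
  inherited Create(False);               { start reading immediately }
end;

destructor TPpuLoader.Destroy;
begin
  FData.Free;
  inherited Destroy;
end;

procedure TPpuLoader.Execute;
begin
  if FileExists(FFileName) then
    FData.LoadFromFile(FFileName);       { the blocking I/O }
end;

var
  loaders: array of TPpuLoader;
  i: Integer;
begin
  SetLength(loaders, ParamCount);
  for i := 0 to ParamCount - 1 do
    loaders[i] := TPpuLoader.Create(ParamStr(i + 1));
  for i := 0 to High(loaders) do
  begin
    loaders[i].WaitFor;                  { a real compiler would parse each image as it arrives }
    writeln(loaders[i].FileName, ': ', loaders[i].Data.Size, ' bytes loaded');
    loaders[i].Free;
  end;
end.

Whether this actually gains anything depends on the disk and on how much of the parsing can really overlap with the reads.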

DoDi
