Florian Klaempfl wrote:
> Memory throughput is a bottleneck, I/O not really. So multithreading has a real advantage on NUMA systems and on systems where different cores have dedicated caches. One or two years ago I did some experiments with asynchronous assembler calls, and it already improved compilation times significantly on platforms that use an external assembler.
Good to know :-)
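For illustration, here is a minimal sketch of what such asynchronous assembler calls could look like using TThread and TProcess; the TAsmJob class, the hard-coded 'as' invocation and the file names are invented for this example and are not taken from the compiler sources.

{ Minimal sketch: each .s file is handed to a worker thread that runs
  the external assembler, while the main program (the "compiler")
  continues with the next unit. }
program asyncasm;
{$mode objfpc}{$H+}
uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils, Process;

type
  TAsmJob = class(TThread)
  private
    FAsmFile: string;
  protected
    procedure Execute; override;
  public
    constructor Create(const AAsmFile: string);
  end;

constructor TAsmJob.Create(const AAsmFile: string);
begin
  FAsmFile := AAsmFile;
  inherited Create(False);        { start the thread immediately }
end;

procedure TAsmJob.Execute;
var
  p: TProcess;
begin
  p := TProcess.Create(nil);
  try
    p.Executable := 'as';
    p.Parameters.Add(FAsmFile);
    p.Parameters.Add('-o');
    p.Parameters.Add(ChangeFileExt(FAsmFile, '.o'));
    p.Options := [poWaitOnExit];
    p.Execute;                    { blocks this worker, not the compiler }
  finally
    p.Free;
  end;
end;

var
  jobs: array of TAsmJob;
  i: Integer;
begin
  SetLength(jobs, 3);
  for i := 0 to High(jobs) do
    jobs[i] := TAsmJob.Create(Format('unit%d.s', [i + 1]));
  { ...the compiler would keep emitting further .s files here... }
  for i := 0 to High(jobs) do
  begin
    jobs[i].WaitFor;              { collect all jobs before linking }
    jobs[i].Free;
  end;
end.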
> The problem is that the whole compiler is not designed to do so. This could be solved by an approach we have wanted to implement for years: split the compilation process into tasks (like parse unit X, load unit Y, generate code for unit X) with dependencies between them. This should also solve the fundamental problems with unit loading/compilation that sometimes cause internal errors. The first step would be to do this without multithreading; later we could try to execute several tasks in parallel.
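A sequential version of such a task list, matching the proposed first step without multithreading, could look roughly like this; TTaskKind, TCompileTask and RunTasks are invented names, not actual compiler types.

{ Sketch of a task list with dependencies, executed single-threaded. }
program comptasks;
{$mode objfpc}{$H+}
uses
  Classes;

type
  TTaskKind = (tkLoadUnit, tkParseUnit, tkCodegenUnit);

  TCompileTask = class
    Kind: TTaskKind;
    Descr: string;
    Done: Boolean;
    DependsOn: TFPList;           { list of TCompileTask }
    constructor Create(AKind: TTaskKind; const ADescr: string);
  end;

constructor TCompileTask.Create(AKind: TTaskKind; const ADescr: string);
begin
  Kind := AKind;
  Descr := ADescr;
  DependsOn := TFPList.Create;
end;

function AllDepsDone(t: TCompileTask): Boolean;
var
  i: Integer;
begin
  Result := True;
  for i := 0 to t.DependsOn.Count - 1 do
    if not TCompileTask(t.DependsOn[i]).Done then
      Exit(False);
end;

procedure RunTasks(tasks: TFPList);
var
  i: Integer;
  progress: Boolean;
  t: TCompileTask;
begin
  repeat
    progress := False;
    for i := 0 to tasks.Count - 1 do
    begin
      t := TCompileTask(tasks[i]);
      if (not t.Done) and AllDepsDone(t) then
      begin
        WriteLn('executing: ', t.Descr);   { parse/load/codegen would go here }
        t.Done := True;
        progress := True;
      end;
    end;
  until not progress;             { stops when everything ran (or a cycle is left) }
end;

var
  tasks: TFPList;
  loadY, parseX, genX: TCompileTask;
begin
  tasks := TFPList.Create;
  loadY  := TCompileTask.Create(tkLoadUnit,    'load unit Y');
  parseX := TCompileTask.Create(tkParseUnit,   'parse unit X');
  genX   := TCompileTask.Create(tkCodegenUnit, 'code gen unit X');
  parseX.DependsOn.Add(loadY);    { parsing X needs Y's interface }
  genX.DependsOn.Add(parseX);
  tasks.Add(genX); tasks.Add(parseX); tasks.Add(loadY);
  RunTasks(tasks);                { cleanup omitted for brevity }
end.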
I need to learn more about the available threading features (blocking, synchronization, ...). IMO compilation should be done in two steps, with the first step providing the interface of the used units, either from a .ppu file or by a new parse. Once this information is available, the threads compiling the using units can resume their work. The final code generation can then occur in further threads.
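A minimal sketch of that blocking pattern, assuming FPC's RTL event primitives (RTLEventCreate/RTLEventWaitFor/RTLEventSetEvent); the provider and consumer classes are invented for illustration.

{ The thread compiling a using unit waits until the interface of the
  used unit is available, either loaded from a .ppu or freshly parsed. }
program ifacewait;
{$mode objfpc}{$H+}
uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils;

var
  IfaceReady: PRTLEvent;          { signalled once unit Y's interface is known }

type
  TInterfaceProvider = class(TThread)
  protected
    procedure Execute; override;
  end;

  TUsingUnit = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TInterfaceProvider.Execute;
begin
  { load Y.ppu or parse Y's interface section here }
  Sleep(100);                     { stands in for the real work }
  RTLEventSetEvent(IfaceReady);   { wake the waiting thread }
end;

procedure TUsingUnit.Execute;
begin
  RTLEventWaitFor(IfaceReady);    { block until Y's interface exists }
  WriteLn('resuming: the unit that uses Y can now be parsed');
  { final code generation could later be handed to yet another thread }
end;

var
  prov: TInterfaceProvider;
  user: TUsingUnit;
begin
  IfaceReady := RTLEventCreate;
  prov := TInterfaceProvider.Create(False);
  user := TUsingUnit.Create(False);
  user.WaitFor;
  prov.WaitFor;
  user.Free;
  prov.Free;
  RTLEventDestroy(IfaceReady);
end.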
At least I now know what to look for in my parser redesign. It seems to be a good idea to reduce the number of global links, so that in a subsequent compiler redesign multiple threads can do their work independently.
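As a sketch of what fewer global links could mean in practice, parser state could be bundled in a context object owned by each thread instead of unit-level globals; TParserContext and ParseUnit below are invented names.

{ Every parser routine receives its context explicitly, so two threads
  parsing different units never touch shared variables. }
program ctxsketch;
{$mode objfpc}{$H+}

type
  TParserContext = class
    CurrentUnit: string;
    TokenPos: Integer;
    { ...scanner state, symbol table references, error counters... }
  end;

procedure ParseUnit(ctx: TParserContext; const AUnitName: string);
begin
  ctx.CurrentUnit := AUnitName;
  ctx.TokenPos := 0;
  { ...parse using only ctx... }
  WriteLn('parsed ', ctx.CurrentUnit);
end;

var
  ctx1, ctx2: TParserContext;
begin
  ctx1 := TParserContext.Create;
  ctx2 := TParserContext.Create;
  ParseUnit(ctx1, 'unitX');       { could run in thread 1 }
  ParseUnit(ctx2, 'unitY');       { could run in thread 2 }
  ctx1.Free;
  ctx2.Free;
end.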
DoDi