Reinier Olislagers wrote:
> On 16/03/2014 19:45, Marco van de Voort wrote:
>> In our previous episode, Reinier Olislagers said:
>>> The build FAQ (that I have) states:
>>> 1. Does make -j work reliably on Windows, too?
>>> 2. I intend to detect the number of logical cores as per
>>> http://wiki.lazarus.freepascal.org/Example_of_multi-threaded_application:_array_of_threads#1._Detect_number_of_cores_available.
>>> and run that many jobs. Is that a good idea?
>> No, not even on Windows. You need the number of physical cores, not
>> logical ones. That method will return double the number of cores on
>> systems with hyperthreading. (I tested on my system and indeed got 8
>> instead of 4.)

> Do you see a performance drop-off, instability/crashes, or both when
> increasing the job count past the number of physical cores?

The limited testing of make and similar tools that I've done on systems with hyperthreading showed that optimal performance matched the logical (HT) count, not the physical core count. Obviously "your mileage may vary" depending on cache architecture and the amount of RAM.
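For reference, the wiki method boils down to asking the OS for its logical processor count. Here's a minimal FPC sketch of that query; the Windows path uses GetSystemInfo, and the Unix path assumes glibc/Linux (the _SC_NPROCESSORS_ONLN value of 84 and the inline sysconf declaration are Linux-specific shorthand, not taken from the wiki page):

program CountCores;
{$mode objfpc}

uses
  {$ifdef windows}Windows{$else}ctypes{$endif};

{$ifndef windows}
const
  _SC_NPROCESSORS_ONLN = 84; // glibc/Linux value; differs on other Unices
function sysconf(name: cint): clong; cdecl; external 'c';
{$endif}

// Returns the number of LOGICAL processors: on a hyperthreaded
// quad-core this reports 8, exactly as Marco observed.
function LogicalCoreCount: integer;
{$ifdef windows}
var
  si: SYSTEM_INFO;
{$endif}
begin
  {$ifdef windows}
  GetSystemInfo(si);
  Result := si.dwNumberOfProcessors;
  {$else}
  Result := sysconf(_SC_NPROCESSORS_ONLN);
  {$endif}
end;

begin
  writeln('Logical processors: ', LogicalCoreCount);
end.

Neither call distinguishes physical cores from hyperthreads; getting the physical count takes something like GetLogicalProcessorInformation on Windows or parsing /proc/cpuinfo on Linux.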

Irrespective of whether we're counting cores or hyperthreads, for a system that isn't grossly NUMA I'd expect time-to-completion to improve roughly logarithmically with the number of CPUs in play, i.e. with diminishing returns. I'd suspect that by some metric there's a "point of inflection" roughly corresponding to the number of CPUs or cores, but that's not to say that performance can't be improved by going slightly higher.
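If you'd rather locate that inflection point empirically than guess at it, a sweep over -j values will show where the curve flattens. A rough sketch using TProcess, assuming GNU make on the PATH and an "all" target (both placeholders for whatever tree you're building; no error checking):

program JobSweep;
{$mode objfpc}{$H+}

uses
  SysUtils, Process;

// Run make with the given arguments and wait for it to finish.
procedure RunMake(const args: array of string);
var
  p: TProcess;
  a: string;
begin
  p := TProcess.Create(nil);
  try
    p.Executable := 'make';
    for a in args do
      p.Parameters.Add(a);
    p.Options := [poWaitOnExit];
    p.Execute;
  finally
    p.Free;
  end;
end;

var
  jobs: integer;
  started: TDateTime;
begin
  for jobs := 1 to 16 do
  begin
    RunMake(['clean']); // start each measurement from a clean tree
    started := Now;
    RunMake(['-j' + IntToStr(jobs), 'all']);
    writeln('-j', jobs, ': ', FormatDateTime('hh:nn:ss', Now - started));
  end;
end.

Elapsed time plotted against the job count should flatten out around whatever the effective CPU count is for that machine; past that point, extra jobs mostly just contend for cache and RAM.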

For a large-scale NUMA system, or for something like MOSIX or an explicit build farm, all bets are off.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]
_______________________________________________
fpc-devel maillist  -  [email protected]
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel
