On 01.03.2011 20:19, dsimcha wrote:
> == Quote from jasonw ([email protected])'s article
>> dsimcha Wrote:
>>> Ok, so that's one issue to cross off the list.  To summarize the discussion
>>> so far, most of it has revolved around the issue of automatically determining
>>> how many CPUs are available and therefore how many threads the default pool
>>> should have.  Previously, std.parallelism had been using core.cpuid for this
>>> task.  This module doesn't work yet on 64 bits and doesn't (and isn't supposed
>>> to) determine how many sockets/physical CPUs are available.  This was a point
>>> of miscommunication.
>>>
>>> std.parallelism now uses OS-specific APIs to determine the total number of
>>> cores available across all physical CPUs.  This appears to Just Work (TM) on
>>> 32-bit Windows, 32- and 64-bit Linux, and 32-bit Mac OS.
>> Does a Hyperthreaded machine have 2x as many cores & worker threads?  In a
>> Pentium 4 HT might reduce throughput, in a Core i7 increase it.
> 
> Someone please check on this for me.  I'd assume that these OS functions
> return the number of logical CPUs, but they don't really seem to document it
> and I don't have the relevant hardware.
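
For whoever has a hyperthreaded machine and wants to check: below is a quick
sketch that asks the OS directly and prints what it reports.  It assumes
sysconf(_SC_NPROCESSORS_ONLN) on Linux and GetSystemInfo() on Windows are the
calls in question (std.parallelism's actual detection code may use something
else), so just compare the output against the number of physical cores you
know you have.

// cpucount.d -- print the CPU count the OS reports, to see whether it
// includes logical (hyperthreaded) CPUs or only physical cores.
import std.stdio;

void main()
{
    version (Windows)
    {
        import core.sys.windows.windows;  // GetSystemInfo, SYSTEM_INFO

        SYSTEM_INFO si;
        GetSystemInfo(&si);
        writeln("GetSystemInfo: ", si.dwNumberOfProcessors, " processors");
    }
    else version (linux)
    {
        import core.sys.posix.unistd : sysconf;

        // _SC_NPROCESSORS_ONLN is a glibc extension; 84 is glibc's value,
        // declared by hand here in case druntime doesn't provide it.
        enum _SC_NPROCESSORS_ONLN = 84;
        writeln("sysconf: ", sysconf(_SC_NPROCESSORS_ONLN), " CPUs online");
    }
    else
    {
        writeln("No sketch for this platform; check `sysctl hw.ncpu` by hand.");
    }
}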

Who cares, the Pentium4 sucks anyway :P
Intel decided to implement Hyperthreading and to report 2 cores when there
really is only one, so it should be treated like 2 cores. If that makes things
slower... bad luck (why did Intel introduce Hyperthreading if it makes things
slower, anyway?). Pentium4 users (and probably also Pentium D users, since
that's a dual-core Pentium4) can still set the number of worker threads
manually (see the sketch below).
Furthermore, people (hopefully!) don't use a Pentium4 for serious heavy
calculations.
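
A minimal sketch of setting the worker count by hand, assuming the API as
described in the review docs (totalCPUs, defaultPoolThreads and the TaskPool
constructor; the names might differ slightly in the current snapshot):

import std.parallelism;
import std.stdio;

void main()
{
    // What std.parallelism detected through the OS-specific APIs.
    writeln("Detected CPUs: ", totalCPUs);

    // Pentium4 owners who find HT hurts can cap the default pool
    // before its first use...
    defaultPoolThreads = 1;

    // ...or build a dedicated pool with an explicit worker count.
    auto pool = new TaskPool(1);
    scope(exit) pool.finish(true);  // wait for the workers before exiting

    auto squares = new double[1000];
    foreach (i, ref x; pool.parallel(squares))
        x = cast(double) i * i;
}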

Cheers,
- Daniel
