Brian Beesley wrote:
>
> I think there may be an issue with multi(core) processor systems here. The 
> relative timings were AFAIK derived on uniprocessor systems. If you are 
> running several instances of P-1 stage 2 in parallel you may well be cache 
> thrashing to an unacceptable degree. 


That is a possible problem I hadn't considered.


> However this situation will probably not 
> persist as after the system has been running for a while the multiple 
> instances will "desynchronise" so that it will be unusual for more than one 
> instance to be running P-1 stage 2.
>   

You're likely correct again.  I re-joined, and fired up my 
newly-water-cooled Quad Core on four exponents at once, one per core.  
They're "in sync" and all on P-1 stage 2... though proceeding at 
different rates (Core 0 obviously gets fewer cycles, since it also 
handles the OS's housekeeping).  Eventually they'll desync... though 
probably not for a couple of full exponents (unless I help them along 
by purposely pausing cores).



> Setting memory to 8MB (which used to be the default) disables P-1 stage 2, 
> but 
> allows stage 1 (with a larger limit to compensate); this may be the short 
> term answer to this "problem".
>   


8 MB is still the default.  I didn't know 8 MB disabled stage 2.   
You're right yet again -- the "problem" was not in the defaults, but in 
my misunderstanding of the settings I was altering in the first place.   
It sounds like the changes George wrote about for v25 will address this 
anyway, though, with multi-core-specific features designed right in -- 
which is a good idea, since many casual users may be uncomfortable 
trying to set up core affinity and such, and manage four separate config 
files and four separate command line alterations and four system 
tray^W^Wnotification area icons, etc.  
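(For anyone finding this in the archive later: if I recall right, the 
memory allowance Brian mentions lives in local.txt -- prime.ini in older 
versions -- as a line like the one below.  Treat the exact file name and 
spelling as a sketch from memory, not gospel; check your version's docs.)

```
Memory=8
```

With only 8 MB allowed, the client skips stage 2 and compensates with a 
larger stage 1 bound, which is the short-term workaround Brian describes.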

Having fun writing threaded code yet, George?  

Mutexes and Semaphores and Threads (Oh My!). 
Mutexes and Semaphores and Threads (Oh My!). 
Mutexes and Semaphores and Threads (Oh My!). 

Jeff


_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
