Interesting how in the consumer PC world, they're starting to realize the 
challenge of parallelizing effectively.  This article talks about the whole 
cores-vs-clock-speed tradeoff, theorizing that a power dissipation limit 
results in clock speed * number of cores = constant.

http://www.marco.org/2013/08/10/ivy-bridge-ep-prices

"Many applications still only max out one or two cores effectively, so for most 
usage, a higher clock speed is better than more cores if you can’t have both. 
But for highly parallelizable tasks, such as video processing, 3D rendering, 
and scientific research,…"
"And for all of those applications that don’t parallelize well (hi, Adobe and 
LAME (http://lame.sourceforge.net/)!), the higher-core, lower-clocked, 
more-expensive CPUs will probably perform worse than the cheaper, fewer-core, 
higher-clocked ones."

Amdahl's law strikes again <grin>
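A quick sketch of what Amdahl's law predicts, assuming a parallel fraction p
(the p values and core counts below are illustrative, not from the article):

```python
# Amdahl's law: speedup on n cores when only a fraction p of the
# work parallelizes.  Numbers below are made up to show the shape
# of the curve, not measurements.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of a workload with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        for n in (2, 4, 12):
            print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):5.2f}x")
```

Even at p = 0.90, twelve cores only buy you about a 5.7x speedup, which is why
the lower-clocked, higher-core parts can lose on mixed workloads.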
And I wonder how many of those video processing, 3D rendering, and scientific 
research tasks actually have off-the-shelf user applications that can 
effectively use multiple cores?  Not everyone is coding up their own solutions, 
particularly for a MacPro (the subject of the article).  I'm also not sure that 
parallelizing onto N cores running at X/N clock rate is faster than running 1 
core at X clock rate.  If you are rendering animation frames (and for a lot of 
Finite Element codes), there's a fixed number of arithmetic operations needed 
to get the job done, and whether you do N parallel streams at rate X/N or 
1 stream at rate X, the aggregate throughput is the same.
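That intuition works out to simple arithmetic under the speed*cores = constant 
budget (a toy model that ignores parallelization overhead; the W and X values 
are hypothetical):

```python
# Toy model: W total operations under a power budget where
# clock_rate * n_cores == X (constant).  Ignores all overhead,
# so the split across cores cancels out of the wall time.

def wall_time(total_ops: float, n_cores: int, budget: float) -> float:
    """Time to finish total_ops split over n_cores, each clocked
    at budget / n_cores operations per second."""
    per_core_rate = budget / n_cores
    ops_per_core = total_ops / n_cores
    return ops_per_core / per_core_rate  # algebraically total_ops / budget

X = 3.0e9   # hypothetical budget: 3 Gops/s aggregate
W = 1.0e12  # hypothetical job: 1e12 operations

print(wall_time(W, 1, X))  # 1 core at the full clock
print(wall_time(W, 4, X))  # 4 cores at X/4 each -- same wall time
```

The n_cores terms cancel, so under this idealized model the job finishes in 
W/X seconds either way.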

Potentially, of course, once you bite the bullet to parallelize, and you do it 
in a scalable manner, you can presumably scale to architectures where you have 
N cores all running at full speed (e.g., a classic cluster).  I wonder, though, 
whether the end-user application codes actually do that, or whether they 
design for the "single user on a single box" model.  That is, they design to 
use multiple cores in the same box, but don't really design for multiple 
boxes, in terms of concurrency, latency between nodes, etc.
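One way to see where the single-box design breaks down: add an inter-node 
communication term to the toy model.  The step count, latency figure, and 
workload below are completely made-up assumptions, just to show how a sync 
cost that is invisible inside one box caps strong scaling across boxes:

```python
# Toy cluster model: fixed work W, n nodes each at full rate X,
# but every step pays an all-node synchronization latency.
# All numbers are hypothetical.

def cluster_time(total_ops: float, n_nodes: int, rate: float,
                 steps: int, latency_s: float) -> float:
    """Compute time plus a per-step synchronization cost."""
    compute = total_ops / (n_nodes * rate)
    sync = steps * latency_s
    return compute + sync

W, X = 1.0e10, 3.0e9
for n in (1, 8, 64):
    t = cluster_time(W, n, X, steps=1000, latency_s=50e-6)
    print(f"{n:3d} nodes: {t:8.4f} s")
```

With these numbers, 64 nodes get roughly a 33x speedup rather than 64x, 
because the fixed 0.05 s of synchronization starts to dominate the shrinking 
compute time.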


James Lux, P.E.
Task Manager, FINDER – Finding Individuals for Disaster and Emergency Response
Co-Principal Investigator, SCaN Testbed (née CoNNeCT) Project
Jet Propulsion Laboratory
4800 Oak Grove Drive, MS 161-213
Pasadena CA 91109
+(818)354-2075

_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
