And this is precisely why I like MPI-based solutions (or message passing in 
general).  It forces the software architecture to explicitly decouple the 
threads in a timing sense (none of that "we'll use a shared-memory semaphore" 
stuff), so it tends to be more easily scaled/ported to other architectures.
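
As a rough sketch of what I mean (a toy example of my own, assuming a plain 
C/MPI environment, not anybody's real code): the only coupling between the two 
ranks below is the explicit message itself; there is no shared state and no 
shared-memory timing assumption between them.

/* minimal MPI example: two ranks coupled only by an explicit message.
   Compile with mpicc, run with "mpirun -np 2 ./a.out". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                      /* sender owns its data outright */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the receive is the synchronization point; no semaphore needed */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}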

However, it is a BIG conceptual jump in architecture for most applications 
that aren't in the embarrassingly parallel, "just fire off a parallel thread" 
kind of bucket.

It's comparable to the fear and trepidation inspired by asynchronous logic 
designs.  Harder to prove correct, etc.

Jim Lux


-----Original Message-----
From: Douglas Eadline [mailto:[email protected]] 
Sent: Monday, August 12, 2013 9:35 AM
To: Lux, Jim (337C)
Cc: [email protected]
Subject: Re: [Beowulf] thermal/power limits

..snip..

> Potentially, of course, once you bite the bullet to parallelize, and 
> you do it in a scalable manner, then you can presumably scale to 
> architectures where you have N cores running at full speed (e.g. a 
> classic cluster).  I wonder, though, whether the end-user application 
> codes actually do that, or whether they design for the "single user on 
> a single box" model.  That is, they design to use multiple cores in 
> the same box, but don't really design for multiple boxes, in terms of 
> concurrency, latency between nodes, etc.

This, in my mind, is not an easy question to answer. Assuming an application 
can use more cores in a scalable fashion, the issue with SMP multi-core is how 
many effective cores you get vs. actual -- due to memory contention.
In my tests, "it all depends on the application."
One of the nice things about MPI codes is the ability to run on 16 separate 
nodes, one 16-way node, or anything in between.
OpenMP has no way to get off the motherboard, but it will soon open the door 
to the on-board SIMD units. OpenMP does not guarantee an automatic win over 
MPI on multi-core either.
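
To make that concrete, here is a rough sketch (my own toy example, not from 
the tests above, assuming a standard C/MPI setup). The same source computes a 
partial sum on each rank and reduces it to rank 0; whether the 16 ranks land 
on 16 nodes, one 16-way node, or anything in between is decided by how you 
invoke the launcher, not by the code.

/* each rank sums its own slice of 0..N-1, then the partial sums are
   combined on rank 0; placement of ranks is up to the MPI launcher */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long local = 0, total = 0;
    const long N = 1000000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* round-robin decomposition: rank r takes i = r, r+size, r+2*size, ... */
    for (long i = rank; i < N; i += size)
        local += i;

    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld across %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}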


--
Doug


>
>
> James Lux, P.E.
> Task Manager, FINDER - Finding Individuals for Disaster and Emergency Response
> Co-Principal Investigator, SCaN Testbed (née CoNNeCT) Project
> Jet Propulsion Laboratory
> 4800 Oak Grove Drive, MS 161-213
> Pasadena CA 91109
> +(818)354-2075
>
>




_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
