On 5 October 2010 14:23, Bogdan Costescu <[email protected]> wrote:
>
> HPC usage is a mixture of point-to-point and collective
> communications; most (all?) MPI libraries use low-level point-to-point
> communications to achieve collective ones over Ethernet. Another
> important point is that the collective communications can be started
> by any of the nodes - it's not one particular node which generates
> data and then spreads it to the others; it's also relatively common
> that 2 or more nodes reach the point of collective communication at
> the same time, leading to a higher load on the interconnect, maybe
> congestion.

True indeed.
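To make that concrete: over plain Ethernet a collective like MPI_Bcast is,
roughly speaking, reduced inside the library to a set of ordinary
point-to-point sends (real implementations use trees and pipelining; the
linear loop below is only a sketch of the idea, not any particular MPI's
actual code):

#include <mpi.h>

/* Naive broadcast built purely from point-to-point calls,
 * illustrating how a collective decomposes over Ethernet. */
static void naive_bcast(void *buf, int count, MPI_Datatype type,
                        int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        /* Root sends the same buffer to every other rank in turn. */
        for (int dst = 0; dst < size; ++dst)
            if (dst != root)
                MPI_Send(buf, count, type, dst, 0, comm);
    } else {
        /* Every other rank does a single point-to-point receive. */
        MPI_Recv(buf, count, type, root, 0, comm, MPI_STATUS_IGNORE);
    }
}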
However, this device might be very interesting if you redefine your
parallel processing paradigm.
How about problems where you send out identical datasets to (say) a
farm of GPUs?
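For that kind of workload the host side is essentially one broadcast
followed by independent per-node work. A minimal sketch, assuming one MPI
rank per GPU node (the placeholder fill and the printf stand in for real
dataset loading and a kernel launch):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)   /* elements in the shared dataset */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *data = malloc(N * sizeof *data);
    if (rank == 0) {
        /* Only the root prepares the dataset (placeholder fill). */
        for (int i = 0; i < N; ++i)
            data[i] = (float)i;
    }

    /* The same dataset goes to every node; a broadcast-friendly
     * interconnect would serve exactly this traffic pattern. */
    MPI_Bcast(data, N, MPI_FLOAT, 0, MPI_COMM_WORLD);

    /* Here each rank would hand `data` to its local GPU and compute
     * independently, with no further node-to-node traffic. */
    printf("rank %d received %d floats\n", rank, N);

    free(data);
    MPI_Finalize();
    return 0;
}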