On Mar 27, 2009, at 6:20, Nifty Tom Mitchell wrote:
On Thu, Mar 26, 2009 at 09:03:30PM -0700, Greg Lindahl wrote:
On Thu, Mar 26, 2009 at 11:32:23PM -0400, Dow Hurst DPHURST wrote:
We've got a couple of weeks max to finalize spec'ing a new cluster. Has
anyone knowledge of lowering latency for NAMD by implementing a
multi-rail IB solution using MVAPICH or Intel's MPI?
I have been tracking the developments in Advanced Message Queue Protocol (AMQP)
and there has been some progress in implementation. Does this middleware have
any applicability in the HPC cluster domain? What would you use it for?
Adrian Wong
The only way I got under 1 usec in the PingPong test or with
ib_[write/send/read]_lat is with QDR and back to back (i.e. no switch).
With a switch I get 1.1[3-7] usec [HP-MPI, OpenMPI, MVAPICH].
It does not matter which MPI, although I have to agree with Greg that multirail
also increases latency.
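For reference, a minimal sketch of what such a PingPong latency test measures: two ranks bounce a small message back and forth and report half the round-trip time. This is only an illustration (iteration count and message size are assumed); the numbers quoted above came from the usual IMB/perftest tools, not from this code.

/* pingpong_lat.c - minimal latency sketch; run with two ranks,
 * e.g. mpirun -np 2 ./pingpong_lat */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000
#define MSG_BYTES 8   /* small message: latency-dominated */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_BYTES];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)   /* half the round trip = one-way latency */
        printf("latency: %.2f usec\n", (t1 - t0) / ITERS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}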
Multirail
http://www.penguincomputing.com/cluster_computing
Can the above be of any help to you?
Regards
Prajeev
On Fri, Mar 27, 2009 at 11:16 AM, Dow Hurst DPHURST dphu...@uncg.edu wrote:
To: beowulf@beowulf.org
From: Greg Lindahl lind...@pbm.com
Sent by: beowulf-boun...@beowulf.org
From the blurb on SGI MPT (their ICE systems have multirail):
http://www.sgi.fr/WP_MPT_SGI.pdf
SGI MPT utilizes multiple InfiniBand rails to perform message
pathway distribution and message striping. Message pathway
distribution is done by strategically mapping individual routes
(source to
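As an aside, message striping at the MPI level can be pictured roughly as in the sketch below: one large message is split into chunks that are posted as concurrent non-blocking sends. This is only a conceptual illustration; the actual rail selection in SGI MPT (or any multirail MPI) happens inside the library, below what portable MPI code can control, and the buffer size and stripe count here are assumptions.

/* stripe_send.c - conceptual striping sketch; run with two ranks.
 * A multirail MPI does this kind of splitting internally and maps
 * each chunk to a different HCA/rail; plain MPI code like this
 * cannot choose the rail itself. */
#include <mpi.h>
#include <stdlib.h>

#define NSTRIPES  2
#define MSG_BYTES (8 * 1024 * 1024)

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_BYTES);
    MPI_Request req[NSTRIPES];
    size_t chunk = MSG_BYTES / NSTRIPES;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* split the message into NSTRIPES chunks sent/received concurrently */
    for (int s = 0; s < NSTRIPES; s++) {
        char *p = buf + s * chunk;
        if (rank == 0)
            MPI_Isend(p, chunk, MPI_CHAR, 1, s, MPI_COMM_WORLD, &req[s]);
        else if (rank == 1)
            MPI_Irecv(p, chunk, MPI_CHAR, 0, s, MPI_COMM_WORLD, &req[s]);
    }
    if (rank < 2)
        MPI_Waitall(NSTRIPES, req, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    free(buf);
    return 0;
}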
On Mar 27, 2009, at 18:20, Craig Tierney wrote:
What about using multi-rail to increase message rate? That isn't
the same as latency, but if you put messages on both wires you
should get more.
Exactly why we saw almost 2x speedup on message rate (latency)
sensitive apps using Platform
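A message-rate test in that spirit could look like the sketch below: keep a window of small non-blocking sends in flight and count messages per second, rather than timing one message at a time. Window size, message size and iteration count are illustrative assumptions, not figures from the thread.

/* msgrate.c - small-message rate sketch; run with two ranks */
#include <mpi.h>
#include <stdio.h>

#define WINDOW    64     /* messages in flight per iteration */
#define ITERS     1000
#define MSG_BYTES 8

int main(int argc, char **argv)
{
    int rank;
    char buf[WINDOW][MSG_BYTES];
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        /* post a whole window of sends/receives, then wait for all */
        for (int w = 0; w < WINDOW; w++) {
            if (rank == 0)
                MPI_Isend(buf[w], MSG_BYTES, MPI_CHAR, 1, w,
                          MPI_COMM_WORLD, &req[w]);
            else if (rank == 1)
                MPI_Irecv(buf[w], MSG_BYTES, MPI_CHAR, 0, w,
                          MPI_COMM_WORLD, &req[w]);
        }
        if (rank < 2)
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("message rate: %.0f msgs/sec\n",
               (double)ITERS * WINDOW / (t1 - t0));

    MPI_Finalize();
    return 0;
}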
JC on a bike... I need multirail email for resiliency. If this email
is blank I will shoot myself.
On Mar 27, 2009, at 19:09, Joshua mora acosta wrote:
So, as a way to quantify whether multirail helps on network-latency-driven
workloads, a synthetic benchmark could be built to show the impact of
balancing these requests among multiple HCAs bound to different network
paths or core pairs, such as an all-to-[all,gather,scatter] or barrier.
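Something along these lines could serve as that synthetic benchmark: time small-message MPI_Alltoall and MPI_Barrier over many iterations, then rerun the same binary with single-rail vs. multirail setups and different HCA/core bindings and compare. A sketch only; iteration count and message size are assumptions.

/* collectives_lat.c - latency-dominated collective sketch */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* one 8-byte element to/from every other rank: latency-dominated */
    double *sendbuf = calloc(nprocs, sizeof(double));
    double *recvbuf = calloc(nprocs, sizeof(double));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Alltoall(sendbuf, 1, MPI_DOUBLE, recvbuf, 1, MPI_DOUBLE,
                     MPI_COMM_WORLD);
    double t_a2a = (MPI_Wtime() - t0) / ITERS;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Barrier(MPI_COMM_WORLD);
    double t_bar = (MPI_Wtime() - t0) / ITERS;

    if (rank == 0)
        printf("alltoall: %.2f usec  barrier: %.2f usec\n",
               t_a2a * 1e6, t_bar * 1e6);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}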