Over IB, I'm not sure there is much of a drawback.  It might be slightly slower 
to establish QPs, but I don't think that matters much.

Over iWARP, rdmacm can cause connection storms as you scale to thousands of MPI 
processes.
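
FWIW, if you do decide to make rdmacm the default, the usual ways to set an MCA 
parameter system-wide are below (a sketch assuming a standard Open MPI install; 
the exact prefix/paths on your system may differ):

    # per-job, on the mpirun command line
    mpirun --mca btl_openib_cpc_include rdmacm ...

    # via the environment (e.g., in a module file or shell profile)
    export OMPI_MCA_btl_openib_cpc_include=rdmacm

    # system-wide, in <prefix>/etc/openmpi-mca-params.conf
    btl_openib_cpc_include = rdmacm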


On Apr 20, 2011, at 5:03 PM, Brock Palen wrote:

> We had another user hit the bug that causes collectives (this time 
> MPI_Bcast()) to hang on IB; it was fixed by setting:
> 
> btl_openib_cpc_include rdmacm
> 
> My question is: if we set this as the default on our system with an 
> environment variable, does it introduce any performance or other issues we 
> should be aware of?
> 
> Is there a reason we should not use rdmacm?
> 
> Thanks!
> 
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
> 
> 
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

