On Oct 31, 2011, at 12:33 PM, Ralph Castain wrote:

>> A colleague who reads this list pointed out to me that the
>> problem is probably because the cluster that I'm using has
>> QLogic infiniband cards that apparently require 
>> OMPI_MCA_orte_precondition_transports to be set.  That
>> may be the answer to my question.
> 
> That was my next question :-)
> 
> Your colleague is correct. Alternatively, you can tell OMPI to ignore the psm 
> interface to those cards by either configuring it out (--without-psm) or at 
> run time by setting the envar OMPI_MCA_mtl=^psm
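
For concreteness, those two options would look something like this (./my_app
below is just a stand-in for your program):

% ./configure --without-psm ...     # build time: compile OMPI without the PSM MTL
% export OMPI_MCA_mtl=^psm          # run time: tell OMPI to skip the PSM MTL
% mpirun -np 4 ./my_app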

Note that your original workaround:

>>> I can work around the problem by setting the
>>> "OMPI_MCA_orte_precondition_transports" environment variable
>>> before running the program using the command:
>>> 
>>> % eval "export `mpirun env | grep OMPI_MCA_orte_precondition_transports`"

might actually be a good idea here, because PSM is the most performant 
interface on QLogic cards.
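
(If you're not sure what that eval line actually does: it runs "env" under
mpirun, greps out the per-job transport key that mpirun generated, and exports
it into your shell. Unrolled, it's conceptually:

% mpirun env | grep OMPI_MCA_orte_precondition_transports
OMPI_MCA_orte_precondition_transports=<value>
% export OMPI_MCA_orte_precondition_transports=<value>

where <value> is whatever key mpirun generated for that invocation.)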

However, if your application doesn't care (i.e., it isn't sensitive to 
extremely low latency, high message injection rates, or high bandwidth), then 
it might not matter.
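
In that case, you can just disable PSM per-run on the mpirun command line
(the ^ means "everything except"), e.g.:

% mpirun --mca mtl ^psm -np 4 ./my_app

(Again, ./my_app stands in for your program.)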

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
