We’re seeing some abnormal performance behavior when running an OpenMPI 1.4.4
application on RHEL 6.4 with Mellanox OFED 1.5.3.  Under certain circumstances,
system CPU time starts dominating and performance tails off severely.  The same
job run over TCP does not show this behavior.  Is there a resource that documents
which OpenMPI versions are compatible with which OFED releases?  For example,
does OpenMPI need to be rebuilt when the OFED version changes?
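
In case it helps, this is roughly how we're selecting the transport for the
InfiniBand vs. TCP comparison (command lines approximate; the application name
and rank count below are just placeholders):

    mpirun --mca btl openib,self,sm -np 64 ./my_app    # run over InfiniBand (openib BTL)
    mpirun --mca btl tcp,self,sm -np 64 ./my_app        # same job forced over TCP
    ompi_info | grep btl                                 # list the BTL components this build includes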
