>> Our jobs range from 4 cores to 1000 cores. The FAQ page states that MXM 
>> was used in the past only for >128 ranks, but in 1.6 it is used for rank 
>> counts of any size.
> 
> 
> That is a reasonable threshold if you use the openib BTL with RC (the 
> default). Since XRC provides better scalability, you may move the threshold 
> up. Bottom line: you have to experiment and see what works for you :)

You sound like our vendors: "what is your app?" We are a generic HPC provider 
on campus, so we don't have a standard workload, unless "everything" counts as 
a workload.

We will do some testing, and we are setting up a time to talk to our Mellanox 
SA to try to understand these components better.

Note that most of our users run just fine with the standard peer-to-peer 
queues and the default, out-of-the-box Open MPI.
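
For anyone experimenting along these lines, here is a rough sketch of the 
relevant MCA knobs. The parameter names (`mtl_mxm_np`, 
`btl_openib_receive_queues`) are per the Open MPI 1.6 docs, but the rank 
counts and queue sizes below are purely illustrative, not tuned 
recommendations:

```shell
# Force the MXM MTL (every node in the job must have an MXM-capable
# Mellanox HCA; MXM and PSM cannot mix in one MPI session):
mpirun -np 256 --mca pml cm --mca mtl mxm ./app

# Restore the old behavior of only using MXM above a rank-count
# threshold (in 1.6 the mtl_mxm_np default is 0, i.e. always use MXM):
mpirun -np 256 --mca mtl_mxm_np 128 ./app

# Stay on the openib BTL but switch from RC to XRC queue pairs; the
# leading "X," entries in receive_queues select XRC, and the sizes here
# are only example values:
mpirun -np 256 --mca pml ob1 --mca btl openib,self,sm \
       --mca btl_openib_receive_queues \
       X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32 \
       ./app
```

These are config fragments, so the same settings can also go in 
`openmpi-mca-params.conf` or `OMPI_MCA_*` environment variables rather 
than on the command line.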

> 
> -Pasha
> 
>> 
>> Brock Palen
>> www.umich.edu/~brockp
>> CAEN Advanced Computing
>> bro...@umich.edu
>> (734)936-1985
>> 
>> 
>> 
>> On Jan 22, 2013, at 2:58 PM, Shamis, Pavel wrote:
>> 
>>>> We just learned about MXM, and most of our cards are Mellanox ConnectX 
>>>> cards (though not all; we have islands of pre-ConnectX and QLogic 
>>>> hardware supported in the same Open MPI environment).
>>>> 
>>>> Will MXM correctly fall through to PSM on QLogic gear, and fall through 
>>>> to OpenIB on pre-ConnectX cards?
>>> 
>>> Do you want to run MXM and PSM in the same MPI session? You can't do 
>>> that; MXM and PSM use different network protocols.
>>> If you want to use MXM in your MPI job, all nodes must be configured to 
>>> use MXM.
>>> 
>>> On the other hand, the openib BTL should support mixed environments out 
>>> of the box.
>>> 
>>> - Pasha
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
> 
> 

