On Thu, 16 Mar 2006, Jean Latour wrote:

> My questions are :
> a)  Is OpenMPI doing in this case TCP/IP over IB ? (I guess so)

If the path to the mvapi library is correct, then Open MPI will use mvapi,
not TCP over IB. There is a simple way to check: "ompi_info --param btl
mvapi" will print all the parameters attached to the mvapi driver. If
there is no mvapi in the output, then mvapi was not correctly detected.
But I don't think that's the case, because if I remember correctly we have
a safeguard at configure time: if you specify one of the drivers and we
are not able to correctly use its libraries, configure will abort.
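
For example, on one of the compute nodes (assuming ompi_info is in your
path) something like:

  ompi_info --param btl mvapi | grep -i mvapi

should list the btl_mvapi_* parameters if the mvapi component was built;
an empty output means the component is not there.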


> b) Is it possible to improve significantly these values by changing the 
> defaults ?

By default we use a very conservative approach: we never leave the memory
pinned down, and that decreases the performance for a ping-pong. There are
pros and cons to this, too long to explain here, but in general we see
better performance for real-life applications with our default approach,
and that is our main goal.

Now, if you want to get better performance for the ping-pong test, please
read the FAQ at http://www.open-mpi.org/faq/?category=infiniband.

These are the three flags that affect the mvapi performance for the
ping-pong case (add them to $HOME/.openmpi/mca-params.conf):
btl_mvapi_flags=6
mpi_leave_pinned=1
pml_ob1_leave_pinned_pipeline=1
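
If it is easier to experiment, the same parameters can also be passed
directly on the mpirun command line with --mca (the executable name below
is only a placeholder for your benchmark):

  mpirun -np 2 --mca btl_mvapi_flags 6 --mca mpi_leave_pinned 1 \
         --mca pml_ob1_leave_pinned_pipeline 1 ./your_pingpong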

>
>   I have used several mca btl parameters but without improving the maximum
> bandwidth.
>  For example :  --mca btl mvapi   --mca btl_mvapi_max_send_size 8388608

It is difficult to improve the maximum bandwidth without leave_pinned
activated, but you can improve the bandwidth for medium-size messages.
Play with btl_mvapi_eager_limit to set the crossover point between the
eager (short) and rendezvous protocols. "ompi_info --param btl mvapi" will
give you the full list of parameters as well as their descriptions.
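
For example, to push the eager limit up to 64 KB for a single run (the
value here is only an illustration, not a recommendation):

  mpirun -np 2 --mca btl_mvapi_eager_limit 65536 ./your_pingpong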

>
> c) Is it possible that other IB hardware implementations have better
>   performance with OpenMPI ?

The maximum bandwidth depends on several factors. One of the most
important is the maximum bandwidth of your node's bus. To reach 800 MB/s
and more you definitely need a PCI Express (x16) host bus ...

>
> d) Is it possible to use specific IB drivers  for optimal performance  ? 
> (should reach almost 800 MB/sec)

Once these three options are set, you should see an improvement in the
bandwidth.
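
If you do not have a benchmark at hand, here is a minimal ping-pong sketch
in C (a quick illustration of mine, not a tuned benchmark; it reuses the
same buffer, which is the case where leave_pinned helps):

/* pingpong.c -- minimal bandwidth sketch, illustrative only.
 * Compile: mpicc pingpong.c -o pingpong
 * Run:     mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int len = 1 << 20;     /* 1 MB messages */
    const int iters = 100;
    int rank, i;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = (char *)malloc(len);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)  /* 2 * len bytes cross the wire per iteration */
        printf("%.1f MB/s\n", 2.0 * iters * len / ((t1 - t0) * 1e6));

    free(buf);
    MPI_Finalize();
    return 0;
}

Run it once with the default settings and once with the three MCA
parameters above; the difference should give you a rough idea of what
leave_pinned buys on your nodes.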

Let me know if this does not solve your problem.

   george.

"We must accept finite disappointment, but we must never lose infinite
hope."
                                   Martin Luther King
