Re: [OMPI devel] Infiniband memory usage with XRC

2010-05-23 Thread Pavel Shamis (Pasha)
~ 2300 KB - is that the difference per machine or per MPI process? In OMPI XRC mode we allocate some additional resources that may consume some memory (the hash table), but even so ~2M sounds like too much to me. When I have time I will try to calculate the "reasonable" difference. Pasha Sylvain J

Re: [OMPI devel] Infiniband memory usage with XRC

2010-05-19 Thread Sylvain Jeaugey
On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote: Sylvain Jeaugey wrote: The XRC protocol seems to create shared receive queues, which is a good thing. However, comparing the memory used by an "X" queue versus an "S" queue, we can see a large difference. Digging a bit into the code, we found some

Re: [OMPI devel] Infiniband memory usage with XRC

2010-05-17 Thread Pavel Shamis (Pasha)
Sylvain Jeaugey wrote: The XRC protocol seems to create shared receive queues, which is a good thing. However, comparing the memory used by an "X" queue versus an "S" queue, we can see a large difference. Digging a bit into the code, we found some So, do you see that X consumes more than S? This

Re: [OMPI devel] Infiniband memory usage with XRC

2010-05-17 Thread Sylvain Jeaugey
Thanks Pasha for these details. On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote: blocking is the receive queues, because they are created during MPI_Init, so in a way, they are the "basic fare" of MPI. BTW SRQ resources are also allocated on demand. We start with a very small SRQ and it is incre
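The on-demand SRQ growth Pasha describes (start small, grow as needed rather than pre-allocating the maximum) can be sketched roughly as follows. All names and numbers here are hypothetical illustrations, not Open MPI's actual parameters:

```python
# Hypothetical sketch of on-demand SRQ sizing: begin with a small
# shared receive queue and grow it geometrically up to a cap,
# instead of posting the maximum number of receives up front.

SRQ_INITIAL = 32        # hypothetical initial receive-element count
SRQ_MAX = 4096          # hypothetical hard cap on posted receives
GROWTH_FACTOR = 2       # double the queue on each resize

def posted_after_resizes(resizes):
    """Receive elements posted after a given number of on-demand resizes."""
    return min(SRQ_INITIAL * GROWTH_FACTOR ** resizes, SRQ_MAX)

# Growth is geometric, so idle jobs keep the small footprint while
# busy jobs reach the cap in only a few resizes:
sizes = [posted_after_resizes(i) for i in range(8)]
print(sizes)  # [32, 64, 128, 256, 512, 1024, 2048, 4096]
```

The design point is that receive-side memory scales with actual traffic rather than with the worst case configured at MPI_Init.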

Re: [OMPI devel] Infiniband memory usage with XRC

2010-05-17 Thread Pavel Shamis (Pasha)
Please see below. When using XRC queues, Open MPI is indeed creating only one XRC queue per node (instead of one per process). The problem is that the number of send elements in this queue is multiplied by the number of processes on the remote host. So, what are we getting from this? Not much, e
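The point about multiplied send elements can be illustrated with a rough, hypothetical estimate. With per-process ("S") queues, each remote peer gets its own queue of sd_num elements; with one XRC queue per remote node whose element count is multiplied by the processes on that node, the send-side totals come out the same (names, sizes, and counts below are illustrative, not Open MPI's actual values):

```python
# Hypothetical back-of-envelope comparison of send-side queue memory.

WQE_BYTES = 128          # hypothetical size of one send element

def s_queue_elements(remote_procs, sd_num):
    # "S" style: one send queue of sd_num elements per remote process
    return remote_procs * sd_num

def xrc_queue_elements(remote_nodes, procs_per_node, sd_num):
    # XRC style: one queue per remote node, but sized
    # sd_num * procs_per_node, as described in the thread
    return remote_nodes * procs_per_node * sd_num

# Example: 16 remote nodes x 8 processes each, sd_num = 32
s = s_queue_elements(16 * 8, 32)
x = xrc_queue_elements(16, 8, 32)
print(s, x)                              # 4096 4096 -- identical element counts
print(s * WQE_BYTES == x * WQE_BYTES)    # True -- no send-side memory saved
```

This matches the "not much" conclusion: multiplying the element count by the remote process count cancels the saving from collapsing per-process queues into one per-node queue.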