Re: [OMPI users] processes hang with openmpi-dev-602-g82c02b4

2014-12-23 Thread Kawashima, Takahiro
Hi Siegmar, A heterogeneous environment is not officially supported. The README of Open MPI master says: --enable-heterogeneous: Enable support for running on heterogeneous clusters (e.g., machines with different endian representations). Heterogeneous support is disabled by default because it
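
For reference, a minimal sketch of building with that flag; the install prefix below is only an illustration, and the caveats above still apply:

    ./configure --enable-heterogeneous --prefix=/opt/openmpi
    make all install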

Re: [OMPI users] Whether to use the IB BTL or not

2014-12-23 Thread Howard Pritchard
Hi Gary, The decision occurs within the MPI processes themselves (during the call to MPI_Init), so after the orte daemons have started on the nodes. The BTLs report their "latency" and "bandwidth" up the stack to the PML/BML layer, which then decides based on these metrics which BTL to use to
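
A rough way to watch that selection happen, assuming a build that includes the openib and tcp BTLs (./a.out is just a placeholder executable):

    mpirun -np 2 --mca btl_base_verbose 100 ./a.out
    mpirun -np 2 --mca btl openib,self ./a.out    # restrict the list, so no silent fallback to tcp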

Re: [OMPI users] Whether to use the IB BTL or not

2014-12-23 Thread Gary Jackson
I'm not having any trouble getting it to start, and it's definitely using the openib btl. I was just wondering how it decides whether the openib btl is appropriate before going down the btl list to tcp, when all mpirun gets is a hostname and no other information about connectivity on the

Re: [OMPI users] Whether to use the IB BTL or not

2014-12-23 Thread Howard Pritchard
Hello Gary, It depends on how Open MPI was built, and on MCA parameters passed to the job, either via settings in an MCA params conf file, on the mpirun command line, or through environment variables. If you have MXM (Mellanox) or PSM (QLogic/Intel) installed on the system where your Open MPI was built, you may
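
As a sketch of the three mechanisms mentioned above; the pml value shown (cm, which hands point-to-point off to an MTL such as MXM or PSM) is only an example:

    mpirun -np 2 --mca pml cm ./a.out                      # mpirun command line
    export OMPI_MCA_pml=cm                                 # environment variable
    echo "pml = cm" >> $HOME/.openmpi/mca-params.conf      # per-user MCA params file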

[OMPI users] Whether to use the IB BTL or not

2014-12-23 Thread Gary Jackson
How does Open MPI decide whether to use the IB BTL between a given pair of hosts, assuming an IB interface is available? -- Gary

Re: [OMPI users] send partial vector from all to all + ALLGATHERV

2014-12-23 Thread Diego Avesani
Dear all, I get it. CALL MPI_ALLGATHERV(sendbuf(MPI%MCstart:MPI%MCend), MPI%nmc, MPI_INTEGER, MCrank, MCncGlobal, MCdisplay, MPI_INTEGER, COMM_CART, MPI%iErr) I have to use: displs [in] integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place
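
A self-contained sketch of that pattern, with hypothetical variable names in place of Diego's MPI%... fields; each rank contributes a block of rank+1 integers, and displs is built from the gathered counts:

    program allgatherv_sketch
      use mpi
      implicit none
      integer :: ierr, rank, nproc, i, nlocal
      integer, allocatable :: sendbuf(:), recvbuf(:), recvcounts(:), displs(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

      nlocal = rank + 1                  ! uneven block sizes, just for illustration
      allocate(sendbuf(nlocal))
      sendbuf = rank

      ! every rank needs everyone's count before it can build the displacements
      allocate(recvcounts(nproc), displs(nproc))
      call MPI_ALLGATHER(nlocal, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                         MPI_COMM_WORLD, ierr)
      displs(1) = 0
      do i = 2, nproc
         displs(i) = displs(i-1) + recvcounts(i-1)   ! offset in recvbuf for rank i-1
      end do

      allocate(recvbuf(sum(recvcounts)))
      call MPI_ALLGATHERV(sendbuf, nlocal, MPI_INTEGER, &
                          recvbuf, recvcounts, displs, MPI_INTEGER, &
                          MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
    end program allgatherv_sketch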

[OMPI users] send partial vector from all to all + ALLGATHERV

2014-12-23 Thread Diego Avesani
Dear all, In my program, I have created a vector and each processor assigns a value to a part of it: do i=MPI%start,MPI%end ; sendbuf(i)=MPI%rank ; enddo MPI%start and MPI%end define the starting and ending positions in the vector. Now, I would like each processor to know all the

Re: [OMPI users] best function to send data

2014-12-23 Thread Diego Avesani
Dear Brock Palen, you are right, I rushed too much. However, I have studied MPI, not very much, but I know it a little bit. Thanks a lot, also for the link to the courses. Diego On 22 December 2014 at 06:04, Brock Palen wrote: > Diego, > > That is what you want. > > This isn't