Re: [OMPI users] Simple openmpi-mca-params.conf question

2015-04-06 Thread Ray Sheppard
Thanks again! Ray On 4/6/2015 8:58 PM, Ralph Castain wrote: Yep - it will automatically pick it up. The file should be in the /etc directory. On Apr 6, 2015, at 5:49 PM, Ray Sheppard wrote: Thanks Ralph, The FAQ had me putting in prefixes to that line and I just never

Re: [OMPI users] Simple openmpi-mca-params.conf question

2015-04-06 Thread Ralph Castain
Yep - it will automatically pick it up. The file should be in the /etc directory. > On Apr 6, 2015, at 5:49 PM, Ray Sheppard wrote: > > Thanks Ralph, > The FAQ had me putting in prefixes to that line and I just never figured it > out. I have just dumbly added these things
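
A quick way to confirm that a value set in openmpi-mca-params.conf is actually being picked up is to query it with ompi_info; the exact flags differ slightly between releases, so treat this as a sketch for the 1.8 series:

    ompi_info --param btl tcp --level 9 | grep btl_tcp_if_exclude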

Re: [OMPI users] Simple openmpi-mca-params.conf question

2015-04-06 Thread Ray Sheppard
Thanks Ralph, The FAQ had me putting in prefixes to that line and I just never figured it out. I have just dumbly added these things to my mpirun line. I have one other question. When I write into the system conf file, will mpirun know to look there (which seems to be what the file says) or

Re: [OMPI users] Simple openmpi-mca-params.conf question

2015-04-06 Thread Ralph Castain
btl_tcp_if_exclude=eth2 should work > On Apr 6, 2015, at 5:09 PM, Ray Sheppard wrote: > > Hello list, > I have been given permission to impose my usual defaults on the system. I > have been reading documentation for the openmpi-mca-params.conf file. > "ompi_info --param
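
For reference, a minimal sketch of what the resulting $prefix/etc/openmpi-mca-params.conf could contain; one parameter = value pair per line, without the "--mca" prefix used on the mpirun command line (the comment line is illustrative):

    # site-wide MCA defaults
    btl_tcp_if_exclude = eth2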

[OMPI users] Simple openmpi-mca-params.conf question

2015-04-06 Thread Ray Sheppard
Hello list, I have been given permission to impose my usual defaults on the system. I have been reading the documentation for the openmpi-mca-params.conf file. "ompi_info --param all all" did not help. All the FAQs seemed to do was confuse me. I cannot seem to understand how to instantiate a

Re: [OMPI users] Different HCA from different OpenMP threads (same rank using MPI_THREAD_MULTIPLE)

2015-04-06 Thread Ralph Castain
I’m afraid Rolf is correct. We can only define the binding pattern at time of initial process execution, which is well before you start spinning up individual threads. At that point, we no longer have the ability to do binding. That said, you can certainly have your application specify a

Re: [OMPI users] Different HCA from different OpenMP threads (same rank using MPI_THREAD_MULTIPLE)

2015-04-06 Thread Rolf vandeVaart
It is my belief that you cannot do this, at least with the openib BTL. The IB card to be used for communication is selected during the MPI_Init() phase based on where the process is bound. You can see some of this selection by using the --mca btl_base_verbose 1 flag. There is a bunch
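
A sketch of a run that surfaces the selection messages Rolf mentions; the process count and application name are placeholders:

    mpirun -np 2 --mca btl_base_verbose 1 ./my_mpi_app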

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Ralph Castain
That would imply that the issue is in the underlying C implementation in OMPI, not the Java bindings. The reproducer would definitely help pin it down. If you change the size2 values to the ones we sent you, does the program by chance work? > On Apr 6, 2015, at 1:44 PM, Hamidreza Anvari

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Hamidreza Anvari
I'll try that as well. Meanwhile, I found that my C++ code is running fine on a machine running OpenMPI 1.5.4, but I receive the same error under OpenMPI 1.8.4 for both Java and C++. On Mon, Apr 6, 2015 at 2:21 PM, Howard Pritchard wrote: > Hello HR, > > Thanks! If you

Re: [OMPI users] OpenMPI 1.8.2 problems on CentOS 6.3

2015-04-06 Thread Ralph Castain
Hmmm…well, that shouldn’t be the issue. To check, try running it with “bind-to none”. If you can get a backtrace telling us where it is crashing, that would also help. > On Apr 6, 2015, at 12:24 PM, Lane, William wrote: > > Ralph, > > For the following two different
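
A sketch of the suggested check, reusing the shape of William's command quoted below with only the binding option changed (the hostfile and benchmark binary are placeholders):

    $MPI_DIR/bin/mpirun -np $NSLOTS --report-bindings --hostfile hostfile-no_slots --bind-to none ./benchmark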

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Howard Pritchard
Hello HR, Thanks! If you have Java 1.7 installed on your system would you mind trying to test against that version too? Thanks, Howard 2015-04-06 13:09 GMT-06:00 Hamidreza Anvari : > Hello, > > 1. I'm using Java/Javac version 1.8.0_20 under OS X 10.10.2. > > 2. I have

Re: [OMPI users] [mpich-discuss] Buffered sends are evil?

2015-04-06 Thread Jeff Hammond
While we are tilting at windmills, can we also discuss the evils of MPI_Cancel for MPI_Send, everything about MPI_Alltoallw, how MPI_Reduce_scatter is named wrong, and any number of other pet peeves that people have about MPI-3? :-D The MPI standard contains many useful functions and at least a

Re: [OMPI users] OpenMPI 1.8.2 problems on CentOS 6.3

2015-04-06 Thread Lane, William
Ralph, For the following two different commandline invocations of the LAPACK benchmark $MPI_DIR/bin/mpirun -np $NSLOTS --report-bindings --hostfile hostfile-no_slots --mca btl_tcp_if_include eth0 --hetero-nodes --use-hwthread-cpus --bind-to hwthread --prefix $MPI_DIR

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Hamidreza Anvari
Hello, 1. I'm using Java/Javac version 1.8.0_20 under OS X 10.10.2. 2. I have used the following configuration for making OpenMPI: ./configure --enable-mpi-java --with-jdk-bindir="/System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands"

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Ralph Castain
I’ve talked to the folks who wrote the Java bindings. One possibility we identified is that there may be an error in your code when you did the translation > My immediate thought is that each process cannot receive more elements than > were sent to it. That's the reason for the truncation
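
To make that count constraint concrete, here is a minimal, self-contained sketch of a matched allToAllv call with the 1.8 Java bindings; the buffers, counts, and class name are illustrative and not taken from HR's program. The key point is that rank i's recvCounts[j] must equal rank j's sendCounts[i]:

    import mpi.*;

    public class AllToAllvSketch {
        public static void main(String[] args) throws MPIException {
            MPI.Init(args);
            int rank = MPI.COMM_WORLD.getRank();
            int size = MPI.COMM_WORLD.getSize();

            // Each rank sends (rank + 1) ints to every peer.
            int[] sendCounts = new int[size];
            int[] recvCounts = new int[size];
            int[] sendDispls = new int[size];
            int[] recvDispls = new int[size];
            for (int j = 0; j < size; j++) {
                sendCounts[j] = rank + 1; // what I send to rank j
                recvCounts[j] = j + 1;    // must match what rank j sends to me
            }
            for (int j = 1; j < size; j++) {
                sendDispls[j] = sendDispls[j - 1] + sendCounts[j - 1];
                recvDispls[j] = recvDispls[j - 1] + recvCounts[j - 1];
            }

            // Buffers sized exactly to the totals implied by the counts.
            int[] sendBuf = new int[sendDispls[size - 1] + sendCounts[size - 1]];
            int[] recvBuf = new int[recvDispls[size - 1] + recvCounts[size - 1]];

            MPI.COMM_WORLD.allToAllv(sendBuf, sendCounts, sendDispls, MPI.INT,
                                     recvBuf, recvCounts, recvDispls, MPI.INT);
            MPI.Finalize();
        }
    }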

Re: [OMPI users] OpenMPI 1.8.4 - Java Library - allToAllv()

2015-04-06 Thread Howard Pritchard
Hello HR, It would also be useful to know which java version you are using, as well as the configure options used when building open mpi. Thanks, Howard 2015-04-05 19:10 GMT-06:00 Ralph Castain : > If not too much trouble, can you extract just the alltoallv portion and >

[OMPI users] Different HCA from different OpenMP threads (same rank using MPI_THREAD_MULTIPLE)

2015-04-06 Thread Filippo Spiga
Dear Open MPI developers, I wonder if there is a way to address this particular scenario using MPI_T or other strategies in Open MPI. I saw a similar discussion a few days ago; I assume the same challenges apply in this case, but I just want to check. Here is the scenario: We have a system