I was having problems with 1.2.5 hanging during collective operations
(MPI_Gather and MPI_Barrier):
2008/3/27 Matt Hughes :
> A similar problem was reported in this message, and a 1.3 nightly was
> reported to work:
> http://www.open-mpi.org/community/lists/users/2008/01/4891.php
>
> I tested
The source file is attached, and compile instructions are on the first
line. The program takes input from readline(), does mpi stuff (spawn
and merge), and prints the line index and line length to the terminal.
To reproduce, compile with should_merge set to true, then paste the
contents of main.c
I neglected to consider that. My apologies to Terry Frankcombe,
as he was correct. Now I have a follow-up
question: how do we set the non-interactive PATH on a per-user basis?
Although my default shell is bash, it seems my .bashrc and
.bash_login are not read in non-interactive mode.
Hi,
I don't know if it is my sample code or if it is a problem with MPI_Scatter()
on an inter-communicator (maybe similar to the problem we found with
MPI_Allgather() on an inter-communicator a few weeks ago), but a simple program I
wrote freezes during its second iteration of a loop doing an MPI_Scatt
Hi Joao,
Thanks for the bug report! You do not have to call free/disconnect
before MPI_Finalize. If you do not, they will be called automatically.
Unfortunately, there was a bug in the code that did the free/disconnect
automatically. This is fixed in r18079.
Thanks again,
Tim
Joao Vicente
On Apr 3, 2008, at 5:52 PM, Will Portnoy wrote:
Do you mean that you are starting it via ./my_mpi_program?
Yes.
This is quite odd; OMPI shouldn't be interfering much in this scenario
-- our IO forwarding stuff mostly comes into play when mpirun is used.
However, I'd like to understand wha
Looks like an interactive vs. non-interactive PATH problem. Please do
a "ssh node02 printenv" and see if you get what you expect in the PATH.
george.
On Apr 3, 2008, at 11:41 PM, trnja...@umn.edu wrote:
OpenMPI does not use PATH, at least not by default (or my default).
Node 1:
PATH=/usr/k
Hi,
I'm just in the process of moving our application from LAM/MPI to
OpenMPI, mainly because OpenMPI makes it easier for a user to run
multiple jobs (MPI universes) simultaneously. This is useful if a user
wants to run smaller experiments without disturbing a large experiment
running in the backgr