[OMPI users] problem with .bashrc setting of openmpi

2010-08-13 Thread sunita
Dear Open MPI users, I installed openmpi-1.4.1 in my user area and then set the path for Open MPI in the .bashrc file as follows. However, I am still getting the following error message whenever I start a parallel molecular dynamics simulation using GROMACS. So every time I am starting the MD job, I ...
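The exact .bashrc lines are truncated in the preview. As a hedged illustration only, a user-area setup of this kind typically looks like the sketch below (the install prefix is an assumption, not taken from the post), with PATH appended and LD_LIBRARY_PATH prepended, matching what Gus Correa's reply further down infers:

    # Hypothetical .bashrc entries for a user-area Open MPI install.
    # $HOME/openmpi-1.4.1 is an illustrative prefix, not from the original post.
    export PATH=$PATH:$HOME/openmpi-1.4.1/bin                        # appended
    export LD_LIBRARY_PATH=$HOME/openmpi-1.4.1/lib:$LD_LIBRARY_PATH  # prepended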

Re: [OMPI users] problem with .bashrc setting of openmpi

2010-08-13 Thread Cristobal Navarro
Hello Sunita, what Linux distribution is this? On Fri, Aug 13, 2010 at 1:57 AM, wrote: > Dear Open MPI users, > > I installed openmpi-1.4.1 in my user area and then set the path for > Open MPI in the .bashrc file as follows. However, I am still getting the following > error ...

Re: [OMPI users] problem with .bashrc setting of openmpi

2010-08-13 Thread Terry Dontje
sun...@chem.iitb.ac.in wrote: Dear Open MPI users, I installed openmpi-1.4.1 in my user area and then set the path for Open MPI in the .bashrc file as follows. However, I am still getting the following error message whenever I start a parallel molecular dynamics simulation using GROMACS. So every ...

Re: [OMPI users] users Digest, Vol 1658, Issue 2

2010-08-13 Thread ananda.mudar
Josh, I am having problems compiling the sources from the latest trunk. It complains that libgomp.spec is missing even though that file exists on my system. I will see if I have to change any other environment variables to get a successful compilation. I will keep you posted. BTW, were you successful ...
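When the toolchain reports libgomp.spec as missing even though it is installed, one common cause is that the file's directory is not on the compiler's search path. A hedged sketch of the usual diagnosis (all paths illustrative, not from the thread):

    # Find where libgomp.spec actually lives (location varies by distro/GCC build)
    find /usr -name libgomp.spec 2>/dev/null
    # Suppose it is in /usr/lib64: expose that directory to the compiler and linker
    export LIBRARY_PATH=/usr/lib64:$LIBRARY_PATH
    export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH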

Re: [OMPI users] users Digest, Vol 1658, Issue 2

2010-08-13 Thread Joshua Hursey
I probably won't have an opportunity to work on reproducing this on 1.4.2. The trunk has a bunch of bug fixes that probably will not be backported to the 1.4 series (things have changed too much since that branch). So I would suggest trying the 1.5 series. -- Josh On Aug 13, 2010, at ...

Re: [OMPI users] problem with .bashrc setting of openmpi

2010-08-13 Thread Jeff Squyres
You might want to make sure that this .bashrc is both the same and executed properly upon both interactive and non-interactive logins on all the systems that you are running on. On Aug 13, 2010, at 1:57 AM, sun...@chem.iitb.ac.in wrote: > Dear Open MPI users, > > I installed ...
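A quick way to test what Jeff describes is to compare the environment an interactive login sees with what a non-interactive shell gets, since mpirun's remote daemons start under the latter. A sketch (the hostname is a placeholder):

    # Non-interactive remote shells skip some startup files; check what they inherit.
    ssh node01 'echo $PATH; which mpiexec'
    # Compare against the same commands run in an interactive login on that node.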

Re: [OMPI users] problem with .bashrc setting of openmpi

2010-08-13 Thread Gus Correa
Hi Sunita My guess is that you are picking up a wrong mpiexec, because of the way you set your PATH. What do you get from "which mpiexec"? Try *pre-pending* the OpenMPI path to the existing PATH, instead of appending it (that's what you did with the LD_LIBRARY_PATH): export ...
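A minimal sketch of the prepend Gus is recommending, again assuming the illustrative $HOME/openmpi-1.4.1 prefix (the real prefix is truncated in the preview):

    # Prepend so this install's binaries and libraries are found first
    export PATH=$HOME/openmpi-1.4.1/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/openmpi-1.4.1/lib:$LD_LIBRARY_PATH
    # Verify that the intended mpiexec now wins
    which mpiexec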

Re: [OMPI users] Checkpointing mpi4py program

2010-08-13 Thread ananda.mudar
Josh, I have stack traces of all 8 Python processes from when I observed the hang after successful completion of the checkpoint. They are in the attached document. Please see if these stack traces provide any clue. Thanks Ananda From: Ananda Babu Mudar (WT01 - Energy ...
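The attachment itself is not in the archive. For reference, a common way to capture such stack traces from hung processes (not necessarily the method Ananda used) is gdb in batch mode; a sketch with a placeholder PID:

    # Attach to one hung rank and dump backtraces for all of its threads;
    # 1234 is a placeholder -- repeat for each process, e.g. via pgrep python.
    gdb -p 1234 -batch -ex 'thread apply all bt'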

Re: [OMPI users] Checkpointing mpi4py program

2010-08-13 Thread Joshua Hursey
Nope. I probably won't get to it for a while. I'll let you know if I do. On Aug 13, 2010, at 12:17 PM, wrote: > OK, I will do that. > > But did you try this program on a system where the latest trunk is > installed? Were you successful in ...

Re: [OMPI users] OpenMPI Run-Time "Freedom" Question

2010-08-13 Thread Michael E. Thomadakis
On 08/12/10 21:53, Jed Brown wrote: Or OMPI_CC=icc-xx.y mpicc ... If we enable a different set of run-time library paths for the Intel compilers than those used to build OMPI, then when we compile and execute the MPI app, these new run-time libs will be accessible for the OMPI libs to run against ...
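Open MPI's wrapper compilers honor the OMPI_CC environment variable, so the substitution Jed suggests can be sketched as follows (the compiler name and source file are illustrative):

    # Compile with the Intel compiler instead of the compiler Open MPI was
    # built with; mpicc still supplies its own include paths and MPI libraries.
    OMPI_CC=icc mpicc hello.c -o hello
    mpirun -np 4 ./hello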

Re: [OMPI users] Abort

2010-08-13 Thread David Ronis
Thanks to all who replied. First, I'm running openmpi 1.4.2. Second, coredumpsize is unlimited, and indeed I DO get core dumps when I'm running a single-processor version. Third, the problem isn't stopping the program, MPI_Abort does that just fine; rather, it's getting a coredump. According ...

Re: [OMPI users] Abort

2010-08-13 Thread Jeff Squyres
On Aug 13, 2010, at 1:18 PM, David Ronis wrote: > Second, coredumpsize is unlimited, and indeed I DO get core dumps when > I'm running a single-processor version. What launcher are you using underneath Open MPI? You might want to make sure that the underlying launcher actually sets the ...
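A direct way to check the limit Jeff is asking about is to have the launched processes report it themselves; a sketch:

    # Each launched shell prints the core-file limit it actually inherited.
    # If this prints 0 while your login shell says unlimited, the launcher
    # (or a non-interactive startup file) is resetting the limit.
    mpirun -np 2 sh -c 'ulimit -c'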

Re: [OMPI users] Abort

2010-08-13 Thread David Ronis
I'm using mpirun and the nodes are all on the same machine (an 8-CPU box with an Intel i7). coresize is unlimited: ulimit -a core file size (blocks, -c) unlimited David On Fri, 2010-08-13 at 13:47 -0400, Jeff Squyres wrote: > On Aug 13, 2010, at 1:18 PM, David Ronis wrote: > > > ...