Re: [OMPI users] Segmentation fault / Address not mapped (1) with 2-node job on Rocks 5.2

2010-06-22 Thread Ralph Castain
Sorry for the problem - the issue is a bug in the handling of the pernode option in 1.4.2. This has been fixed and awaits release in 1.4.3. On Jun 21, 2010, at 5:27 PM, Riccardo Murri wrote: > Hello, > > I'm using OpenMPI 1.4.2 on a Rocks 5.2 cluster. I compiled it on my > own to have a

[OMPI users] Segmentation fault / Address not mapped (1) with 2-node job on Rocks 5.2

2010-06-21 Thread Riccardo Murri
Hello, I'm using OpenMPI 1.4.2 on a Rocks 5.2 cluster. I compiled it on my own to have a thread-enabled MPI (the OMPI coming with Rocks 5.2 apparently only supports MPI_THREAD_SINGLE), and installed into ~/sw. To test the newly installed library I compiled a simple "hello world" that comes with
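[Editor's note] A quick way to confirm whether a rebuilt Open MPI actually provides more than MPI_THREAD_SINGLE is to request a level with MPI_Init_thread and print what the library reports back. The following is a minimal sketch of my own (not part of the original post), assuming it is compiled with the mpic++ from the new installation in ~/sw:

    // Sketch: report the thread support level the library actually grants.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char *argv[]) {
        int provided = MPI_THREAD_SINGLE;
        // Ask for the highest level; 'provided' tells us what we really get.
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        std::printf("requested MPI_THREAD_MULTIPLE, provided level = %d\n", provided);
        MPI_Finalize();
        return 0;
    }

If the rebuild worked, the provided level should be higher than MPI_THREAD_SINGLE (the constants are ordered SINGLE < FUNNELED < SERIALIZED < MULTIPLE).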

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-24 Thread Iris Pernille Lohmann
: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] > On Behalf Of Iris Pernille Lohmann > Sent: 04 November 2009 10:20 > To: Open MPI Users > Subject: Re: [OMPI users] segmentation fault: Address not mapped > > Hi Jeff, > > Thanks for your reply. > > There

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-23 Thread Jed Brown
On Mon, 23 Nov 2009 10:39:28 -0800, George Bosilca wrote: > In the case of Open MPI we use pointers, which are different than int > in most cases I just want to comment that Open MPI's opaque (to the user) pointers are significantly better than int because it offers type
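[Editor's note] The type-safety point can be shown with a small sketch of my own (not from the thread): with Open MPI's pointer-valued handles the compiler rejects a mis-typed argument that an integer-valued handle would silently accept.

    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        // With Open MPI, MPI_Comm is a distinct pointer type, so a stray integer
        // is caught at compile time:
        //   MPI_Comm_rank(42, &rank);   // error: cannot convert 'int' to 'MPI_Comm'
        // With an int-based implementation the same call compiles and only fails at runtime.
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Finalize();
        return 0;
    }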

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-23 Thread George Bosilca
may give you an idea. Thanks, Iris Lohmann -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Iris Pernille Lohmann Sent: 04 November 2009 10:20 To: Open MPI Users Subject: Re: [OMPI users] segmentation fault: Address not mapped Hi Jeff

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-23 Thread Iris Pernille Lohmann
-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Iris Pernille Lohmann Sent: 04 November 2009 10:20 To: Open MPI Users Subject: Re: [OMPI users] segmentation fault: Address not mapped Hi Jeff, Thanks for your reply. There are no core files associated with the crash. Based on your answer

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-04 Thread Iris Pernille Lohmann
Users Subject: Re: [OMPI users] segmentation fault: Address not mapped Many thanks for all this information. Unfortunately, it's not enough to know what's going on. :-( Do you know for sure that the application is correct? E.g., is it possible that a bad buffer is being passed to MPI_Isend? I

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-02 Thread Jeff Squyres
Many thanks for all this information. Unfortunately, it's not enough to know what's going on. :-( Do you know for sure that the application is correct? E.g., is it possible that a bad buffer is being passed to MPI_Isend? I note that it is fairly odd to fail in MPI_Isend itself because
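[Editor's note] For readers wondering what a "bad buffer" looks like in practice, here is an illustrative sketch of my own (not the code from the report): a buffer handed to MPI_Isend must stay valid until the matching MPI_Wait or MPI_Test completes the request, otherwise the library can end up touching unmapped memory and segfault.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size >= 2 && rank == 0) {
            std::vector<double> buf(1024, 1.0);
            MPI_Request req;
            MPI_Isend(&buf[0], (int)buf.size(), MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
            // The classic bug: freeing, resizing, or letting 'buf' go out of scope
            // here, before the request completes, hands MPI an invalid address.
            MPI_Wait(&req, MPI_STATUS_IGNORE);   // correct: 'buf' is still alive here
        } else if (size >= 2 && rank == 1) {
            std::vector<double> buf(1024);
            MPI_Recv(&buf[0], (int)buf.size(), MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }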

[OMPI users] segmentation fault: Address not mapped

2009-10-26 Thread Iris Pernille Lohmann
Dear list members I am using openmpi 1.3.3 with OFED on an HP cluster with Red Hat Linux. Occasionally (not always) I get a crash with the following message: [hydra11:09312] *** Process received signal *** [hydra11:09312] Signal: Segmentation fault (11) [hydra11:09312] Signal code: Address not

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Jeff Squyres
On Jul 7, 2009, at 8:08 AM, Catalin David wrote: Thank you very much for the help and assistance :) Using -isystem /users/cluster/cdavid/local/include the program now runs fine (loads the correct mpi.h). This is very fishy. If mpic++ is in /users/cluster/cdavid/local/bin, and that

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Catalin David
Thank you very much for the help and assistance :) Using -isystem /users/cluster/cdavid/local/include the program now runs fine (loads the correct mpi.h). Thank you again, Catalin On Tue, Jul 7, 2009 at 12:29 PM, Catalin David wrote: > #include <stdio.h> > #include <mpi.h> > int

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Catalin David
#include <stdio.h> #include <mpi.h> int main(int argc, char *argv[]) { printf("%d %d %d\n", OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION); return 0; } returns: test.cpp: In function ‘int main(int, char**)’: test.cpp:11: error: ‘OMPI_MAJOR_VERSION’ was not declared in this scope

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Dorian Krause
Catalin David wrote: Hello, all! Just installed Valgrind (since this seems like a memory issue) and got this interesting output (when running the test program): ==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s) ==4616==at 0x43656BD: syscall (in

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Ashley Pittman
This is the error you get when an invalid communicator handle is passed to an MPI function; the handle is dereferenced, so you may or may not get a SEGV from it depending on the value you pass. The 0x44a0 address is an offset from 0x4400, the value of MPI_COMM_WORLD in mpich2, my guess
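[Editor's note] One way to act on this observation is a quick diagnostic of my own devising (not from the thread): print the raw value of MPI_COMM_WORLD as seen by the suspect build. MPICH2 defines it as a fixed integer constant (0x44000000), whereas Open MPI hands out a pointer into the library, so the printed value hints at which implementation the code was really compiled against.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        // Under Open MPI this prints a library address; under MPICH2, a value
        // in the neighborhood of 0x44000000.
        std::printf("MPI_COMM_WORLD handle value: %p\n", (void *)MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }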

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Catalin David
Hello, all! Just installed Valgrind (since this seems like a memory issue) and got this interesting output (when running the test program): ==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s) ==4616==at 0x43656BD: syscall (in /lib/tls/libc-2.3.2.so) ==4616==by

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread Catalin David
On Mon, Jul 6, 2009 at 3:26 PM, jody wrote: > Hi > Are you also sure that you have the same version of Open-MPI > on every machine of your cluster, and that it is the mpicxx of this > version that is called when you run your program? > I ask because you mentioned that there was

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread jody
Hi Are you also sure that you have the same version of Open-MPI on every machine of your cluster, and that it is the mpicxx of this version that is called when you run your program? I ask because you mentioned that there was an old version of Open-MPI present... did you remove this? Jody On Mon,

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread Catalin David
On Mon, Jul 6, 2009 at 2:14 PM, Dorian Krause wrote: > Hi, > >> >> //Initialize step >> MPI_Init(&argc, &argv); >> //Here it breaks!!! Memory allocation issue! >> MPI_Comm_size(MPI_COMM_WORLD, &size); >> std::cout<<"I'm here"<<std::endl;

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread Dorian Krause
Hi, //Initialize step MPI_Init(&argc, &argv); //Here it breaks!!! Memory allocation issue! MPI_Comm_size(MPI_COMM_WORLD, &size); std::cout<<"I'm here"<<std::endl;

[OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread Catalin David
Dear all, I have recently started working on a project using OpenMPI. Basically, I have been given some c++ code, a cluster to play with and a deadline in order to make the c++ code run faster. The cluster was a bit crowded, so I started working on my laptop (g++ 4.3.3 -- Ubuntu repos, OpenMPI

Re: [OMPI users] Segmentation fault: Address not mapped

2008-08-03 Thread Jeff Squyres
On Aug 1, 2008, at 6:07 PM, James Philbin wrote: I'm just using TCP so this isn't a problem for me. Any ideas what could be causing this segfault? This is not really enough information to diagnose what your problem is. Can you please send all the information listed here:

Re: [OMPI users] Segmentation fault: Address not mapped

2008-08-01 Thread James Philbin
Hi, I'm just using TCP so this isn't a problem for me. Any ideas what could be causing this segfault? James

Re: [OMPI users] Segmentation fault: Address not mapped

2008-08-01 Thread Jeff Squyres
On Jul 30, 2008, at 8:31 AM, James Philbin wrote: OK, to answer my own question, I recompiled OpenMPI appending '--with-memory-manager=none' to configure and now things seem to run fine. I'm not sure how this might affect performance, but at least it's working now. If you're not using

Re: [OMPI users] Segmentation fault: Address not mapped

2008-07-30 Thread James Philbin
Hi, OK, to answer my own question, I recompiled OpenMPI appending '--with-memory-manager=none' to configure and now things seem to run fine. I'm not sure how this might affect performance, but at least it's working now. Maybe this can be put in the FAQ? James On Wed, Jul 30, 2008 at 2:02 AM,

[OMPI users] Segmentation fault: Address not mapped

2008-07-29 Thread James Philbin
Hi, I'm running an mpi module in python (pypar), but I believe (after googling) that this might be a problem with openmpi. When I run: 'python -c "import pypar"', I get: [titus:21965] *** Process received signal *** [titus:21965] Signal: Segmentation fault (11) [titus:21965] Signal code: Address