Re: [OMPI users] Signal code: Address not mapped (1) error

2010-03-18 Thread Dorian Krause
Hi, since you are using std::string in your structure, you should allocate the memory with "new" instead of "malloc". Otherwise the constructor of std::string is never called, and things like the length() of a string may not give the desired result, leading Boost to iterate over too many chars.

Re: [OMPI users] Progress in MPI_Win_unlock

2010-02-04 Thread Dorian Krause
Hi Dave, thanks for your answer. The question to me is: is an MPI process supposed to eventually exit, or can it be a server process running for eternity? In the latter case, no progress will be made ... I think it might be helpful to users to give a clarification in the standard (e.g. in an

[OMPI users] Progress in MPI_Win_unlock

2010-02-03 Thread Dorian Krause
Dear list, from some small tests I ran it appears to me that progress in passive-target one-sided communication is only guaranteed if the origin issues some "deeper" MPI function (i.e., a simple MPI_Comm_rank is not sufficient). Can someone confirm this experimental observation? I have two

Re: [OMPI users] segfault when combining OpenMPI and GotoBLAS2

2010-01-20 Thread Dorian Krause
On 1/20/10 5:38 PM, Eloi Gaudry wrote: Hi, FYI, this issue is solved with the latest version of the library (v2-1.11), at least on my side. Eloi Hi Eloi, Thanks a lot for the message. The issues are fixed on my side too. Dorian

Re: [OMPI users] segfault when combining OpenMPI and GotoBLAS2

2010-01-19 Thread Dorian Krause
doesn't show up?! I'm interested in digging further, but I need some advice/hints on where to go from here. Thanks, Dorian On 1/19/10 1:29 PM, Jeff Squyres wrote: Can you get a core dump, or otherwise see exactly where the seg fault is occurring? On Jan 18, 2010, at 8:34 AM, Dorian Krause
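One standard way to get the core dump Jeff asks for (the binary name and process count here are placeholders, not taken from the original post):

```shell
# Allow core files to be written in this shell before reproducing the crash.
ulimit -c unlimited

# Reproduce the segfault; a "core" (or core.<pid>) file should appear.
mpirun -np 2 ./a.out

# Load the executable together with the core file and print the backtrace.
gdb ./a.out core
# (gdb) bt
```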

[OMPI users] segfault when combining OpenMPI and GotoBLAS2

2010-01-18 Thread Dorian Krause
Hi, has anyone successfully combined OpenMPI and GotoBLAS2? I'm facing segfaults in any program which combines the two libraries (as shared libs). The segmentation fault seems to occur in MPI_Init(). The gdb backtrace is: Program received signal SIGSEGV, Segmentation fault. [Switching to

Re: [OMPI users] Deadlock in MPI_File_write_all on Infiniband

2009-10-13 Thread Dorian Krause
... Just my $0.02 ... Thanks Edgar Dorian Krause wrote: Dear list, the attached program deadlocks in MPI_File_write_all when run with 16 processes on two 8 core nodes of an Infiniband cluster. It runs fine when I a) use tcp or b) replace MPI_File_write_all by MPI_File_write I'm using openmpi V

[OMPI users] Deadlock in MPI_File_write_all on Infiniband

2009-10-12 Thread Dorian Krause
Dear list, the attached program deadlocks in MPI_File_write_all when run with 16 processes on two 8-core nodes of an Infiniband cluster. It runs fine when I a) use tcp or b) replace MPI_File_write_all by MPI_File_write. I'm using Open MPI v1.3.2 (but I checked that the problem is also

Re: [OMPI users] problems with one sided MPI

2009-09-01 Thread Dorian Krause
Hi Marcus, Marcus Daniels wrote: Hi, I'm trying to do passive one-sided communication, unlocking a receive buffer when it is safe and then re-locking it when data has arrived. Locking also occurs for the duration of a send. I also tried using post/wait and start/put/complete, but with that

Re: [OMPI users] question about algorithms for collective communication

2009-08-23 Thread Dorian Krause
Hi, a similar question was recently discussed on the mailing list: http://www.open-mpi.org/community/lists/users/2009/08/10458.php George Markomanolis wrote: Dear all, I am trying to figure out the algorithms that are used for some collective communications (allreduce, bcast, alltoall). Is

Re: [OMPI users] Open MPI and env. variables (LD_LIBRARY_PATH and PATH) - complete and utter Open MPI / Linux noob

2009-08-02 Thread Dorian Krause
Hi, Dominik Táborský wrote: Okay, now it's getting more confusing since I just found out that it somehow stopped working for me! Anyway, let's find a solution. I found out that there is a difference between ssh node1 echo $PATH In this case the $PATH variable is expanded by the shell
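The quoting difference being discussed can be shown without a cluster; below, `sh -c` stands in for the remote shell that ssh would start (the `node1` lines are illustrative only):

```shell
# Unquoted: $PATH is expanded by the *local* shell before ssh runs,
# so the remote side just echoes the local PATH value:
#   ssh node1 echo $PATH
# Single-quoted: the literal string reaches the remote shell, which
# expands it with the remote environment:
#   ssh node1 'echo $PATH'

# Same effect reproduced locally, with sh -c playing the remote shell:
PATH_DEMO="local-value"
sh -c "echo $PATH_DEMO"   # expanded here, child prints: local-value
sh -c 'echo $PATH_DEMO'   # expanded by the child; var not exported, prints nothing
```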

Re: [OMPI users] strange IMB runs

2009-07-29 Thread Dorian Krause
Hi, --mca mpi_leave_pinned 1 might help. Take a look at the FAQ for various tuning parameters. Michael Di Domenico wrote: I'm not sure I understand what's actually happened here. I'm running IMB on an HP superdome, just comparing the PingPong benchmark HP-MPI v2.3 Max ~ 700-800MB/sec
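The suggested flag goes on the mpirun command line; a sketch (the benchmark name and process count are illustrative, only the `mpi_leave_pinned` parameter comes from the post):

```shell
# Keep registered memory pinned between transfers, which can help
# large-message bandwidth on InfiniBand; see the Open MPI FAQ for
# this and related tuning parameters.
mpirun --mca mpi_leave_pinned 1 -np 16 ./IMB-MPI1 PingPong
```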

Re: [OMPI users] Mismatch between sent and received data

2009-07-24 Thread Dorian Krause
Hi, you do not send the trailing '\0' which is used to determine the string length. I assume that chdata[i] has at least length 5 (otherwise you overrun your memory). Replace the "4" by "5" in MPI_Isend and MPI_Recv and everything should work (if I understand the problem correctly). Dorian. Alexey

Re: [OMPI users] Help: HPL Compile Problems

2009-07-12 Thread Dorian Krause
Hi, you can ignore MP... if you set the compiler and linker to mpicc. In my makefile for hpl I have # ---------------------------------------- # - MPI directories - library ---------- #

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-07 Thread Dorian Krause
Catalin David wrote: Hello, all! Just installed Valgrind (since this seems like a memory issue) and got this interesting output (when running the test program): ==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s) ==4616==at 0x43656BD: syscall (in

Re: [OMPI users] Segmentation fault - Address not mapped

2009-07-06 Thread Dorian Krause
Hi, // Initialize step MPI_Init(&argc, &argv); // Here it breaks!!! Memory allocation issue! MPI_Comm_size(MPI_COMM_WORLD, &size); std::cout << "I'm here" <<

Re: [OMPI users] MPI and C++

2009-07-03 Thread Dorian Krause
I'm sorry, I meant boost.mpi ... Luis Vitorio Cargnini wrote: Hi, I'm writing a C++ application that will use MPI. My problem is: I want to use the C++ bindings, and from that come my doubts. All the examples I found use it almost like C, except for the fact of adding the

Re: [OMPI users] MPI and C++

2009-07-03 Thread Dorian Krause
Hi, Luis Vitorio Cargnini wrote: Hi, I'm writing a C++ application that will use MPI. My problem is: I want to use the C++ bindings, and from that come my doubts. All the examples I found use it almost like C, except for the fact of adding the namespace MPI:: before the

Re: [OMPI users] Onesided + derived datatypes

2008-12-12 Thread Dorian Krause
] = { nan, nan, nan} mem[6] = { nan, nan, nan} mem[7] = { nan, nan, nan} mem[8] = { nan, nan, nan} mem[9] = { nan, nan, nan} Dorian > -Original Message- > From: "Dorian Krause" <doriankra...@web.de> > Ge

Re: [OMPI users] Onesided + derived datatypes

2008-12-12 Thread Dorian Krause
Thanks George (and Brian :)). The MPI_Put error is gone. Did you take a look at the problem that with the block_indexed type the PUT doesn't work? I'm still getting the following output (V1 corresponds to the datatype created with MPI_Type_create_indexed_block while the V2 type is created with