Re: [OMPI users] Shared Memory - Eager VS Rendezvous

2012-05-23 Thread Simone Pellegrini
On 05/23/2012 03:05 PM, Jeff Squyres wrote: On May 23, 2012, at 6:05 AM, Simone Pellegrini wrote: If process A sends a message to process B and the eager protocol is used then I assume that the message is written into a shared memory area and picked up by the receiver when the receive

Re: [OMPI users] Shared Memory - Eager VS Rendezvous

2012-05-23 Thread Simone Pellegrini
on OpenMPI 1.5.x installed on top of a Linux kernel 2.6.32? cheers, Simone On 05/22/2012 05:29 PM, Simone Pellegrini wrote: Dear all, I would like to have a confirmation on the assumptions I have on how OpenMPI implements the rendezvous protocol for shared memory. If process A sends a message

[OMPI users] Shared Memory - Eager VS Rendezvous

2012-05-22 Thread Simone Pellegrini
Dear all, I would like to have confirmation of the assumptions I have about how OpenMPI implements the rendezvous protocol for shared memory. If process A sends a message to process B and the eager protocol is used, then I assume that the message is written into a shared memory area and picked

Re: [OMPI users] Why is the eager limit set to 12K?

2012-05-09 Thread Simone Pellegrini
tests have been published somehow/somewhere? cheers, Simone On May 7, 2012, at 9:25 AM, Simone Pellegrini wrote: Hello, I have one of those 1M dollar questions I guess, but why the eager limit threshold for Infiniband is set to 12KB by default in OpenMPI? I would like to know where this value

[OMPI users] Why is the eager limit set to 12K?

2012-05-07 Thread Simone Pellegrini
Hello, I have one of those 1M dollar questions I guess, but why is the eager limit threshold for InfiniBand set to 12KB by default in OpenMPI? I would like to know where this value comes from. I am not wondering whether this is a good setting for this parameter, but just why this is
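For anyone who wants to experiment with that threshold rather than only ask where it comes from: in the Open MPI 1.x series the openib eager limit is an ordinary MCA parameter, settable in a parameter file or with `--mca` on the mpirun line (the value below is just an example; check the exact parameter name on your build with `ompi_info --param btl openib`):

```conf
# $HOME/.openmpi/mca-params.conf
# Raise the InfiniBand eager/rendezvous switchover from the 12KB default.
btl_openib_eager_limit = 65536
```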

[OMPI users] send/recv implementation details

2012-04-19 Thread Simone Pellegrini
Hello everybody, I am measuring some timings for MPI_Send/MPI_Recv. I am doing a single communication between 2 processes and repeating it several times to get meaningful values. The message being sent varies from 64 bytes up to 16 MB, doubling the size each time (64, 128, 256, ..., 8M, 16M).

Re: [OMPI users] MPI_Spawn error: Data unpack would read past end of buffer" (-26) instead of "Success"

2011-09-07 Thread Simone Pellegrini
ll be considered an "abnormal termination" This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). ---------- any hints from this? cheers, Simone On Sep 6, 2011, at 1:20 P

Re: [OMPI users] MPI_Spawn error: Data unpack would read past end of buffer" (-26) instead of "Success"

2011-09-06 Thread Simone Pellegrini
On 09/06/2011 04:58 PM, Ralph Castain wrote: On Sep 6, 2011, at 12:49 PM, Simone Pellegrini wrote: On 09/06/2011 02:57 PM, Ralph Castain wrote: Hi Simone Just to clarify: is your application threaded? Could you please send the OMPI configure cmd you used? yes, it is threaded

Re: [OMPI users] MPI_Spawn error: Data unpack would read past end of buffer" (-26) instead of "Success"

2011-09-06 Thread Simone Pellegrini
is occurring there. The problem is that the error is totally nondeterministic. Sometimes it happens, sometimes not, but the error message gives me no clue where the error is coming from. Is it a problem in my code or internal to MPI? cheers, Simone On Sep 6, 2011, at 3:01 AM, Simone Pellegrini wrote: Dear all

[OMPI users] MPI_Spawn error: Data unpack would read past end of buffer" (-26) instead of "Success"

2011-09-06 Thread Simone Pellegrini
Dear all, I am developing an MPI application which uses MPI_Spawn heavily. Usually everything works fine for the first hundred spawns, but after a while the application exits with a curious message: [arch-top:27712] [[36904,165],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in

Re: [OMPI users] MPI_Spawn and process allocation policy

2011-08-17 Thread Simone Pellegrini
: Smells like a bug - I'll take a look. On Aug 16, 2011, at 9:10 AM, Simone Pellegrini wrote: On 08/16/2011 02:11 PM, Ralph Castain wrote: That should work, then. When you set the "host" property, did you give the same name as was in your machine file? Debug options that might h

Re: [OMPI users] MPI_Spawn and process allocation policy

2011-08-16 Thread Simone Pellegrini
c.at:02647] [[34621,0],0] plm:base:launch wiring up iof [kreusspitze.dps.uibk.ac.at:02647] [[34621,0],0] plm:base:launch completed for job [34621,4] [kreusspitze.dps.uibk.ac.at:02647] [[34621,0],0] plm:base:receive job [34621,4] launched cheers, Simone P. On Aug 16, 2011, at 5:09 AM, Simone P

Re: [OMPI users] MPI_Spawn and process allocation policy

2011-08-16 Thread Simone Pellegrini
On 08/16/2011 12:30 PM, Ralph Castain wrote: What version are you using? OpenMPI 1.4.3 On Aug 16, 2011, at 3:19 AM, Simone Pellegrini wrote: Dear all, I am developing a system to manage MPI tasks on top of MPI. The architecture is rather simple, I have a set of scheduler processes which

[OMPI users] MPI_Spawn and process allocation policy

2011-08-16 Thread Simone Pellegrini
Dear all, I am developing a system to manage MPI tasks on top of MPI. The architecture is rather simple: I have a set of scheduler processes which take care of managing the resources of a node. The idea is to have 1 (or more) of those schedulers allocated on each node of a cluster and then

[OMPI users] Implementing a new BTL module in MCA

2010-08-03 Thread Simone Pellegrini
Dear all, I need to implement an MPI layer on top of a message passing library which is currently used in a particular device where I have to run MPI programs ( very vague, I know :) ). Instead of reinventing the wheel, my idea was to reuse most of the Open MPI implementation and just add a

[OMPI users] Open MPI runtime parameter tuning on a custom cluster

2010-07-02 Thread Simone Pellegrini
Dear Open MPI community, I would like to know from expert system administrators whether there is any "standardized" way of tuning Open MPI runtime parameters. I need to tune performance on a custom cluster, so I would like some hints in order to proceed in the correct direction.

Re: [OMPI users] Changing the MPIRUN/MPIEXEC semantics

2009-07-03 Thread Simone Pellegrini
no assumption that can be made? thanks again, regards Simone On Jul 3, 2009, at 9:36 AM, Simone Pellegrini wrote: Dear all, current implementation of mpirun starts the executable in different nodes. For some reason I need to start different MPI applications across nodes and I want t

[OMPI users] Changing the MPIRUN/MPIEXEC semantics

2009-07-03 Thread Simone Pellegrini
Dear all, the current implementation of mpirun starts the same executable on different nodes. For some reason I need to start different MPI applications across nodes, and I want to use MPI to communicate among these applications. In short, I want to break down the SPMD model, something like: mpirun
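What this message asks for already exists in Open MPI: mpirun supports MPMD launches, either inline with `:`-separated program blocks or through an application context file. A hypothetical example (the program names `scheduler` and `worker` are made up; all processes end up in a single MPI_COMM_WORLD and can communicate with ordinary MPI calls):

```conf
# Inline MPMD form, all on one mpirun command line:
#   mpirun -np 1 ./scheduler : -np 4 ./worker
#
# Equivalent application context file, launched as: mpirun --app appfile
-np 1 ./scheduler
-np 4 ./worker
```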

[OMPI users] Request for C/C++ MPI applications kernels

2009-07-03 Thread Simone Pellegrini
Dear all, I apologize to the moderator of the mailing list if my message is not strictly related to the Open MPI library. I am a PhD student at the University of Innsbruck; my topic is optimization of MPI applications. During my research I have collected several transformations that can

Re: [OMPI users] Shared Memory (SM) module and shared cache implications

2009-06-25 Thread Simone Pellegrini
applications on multi-core architectures. I would be very interested in collaborating on such a project. Can you give me more details about it (links/pointers)? regards, Simone On Jun 25, 2009, at 2:39 AM, Simone Pellegrini wrote: Hello, I have a simple question for the shared memory (sm) module

[OMPI users] Shared Memory (SM) module and shared cache implications

2009-06-25 Thread Simone Pellegrini
Hello, I have a simple question for the shared memory (sm) module developers of Open MPI. In the current implementation, is there any advantage to having a shared cache among communicating processes? For example, let's say we have P1 and P2 placed on the same CPU on 2 different physical cores

Re: [OMPI users] MPI processes hang when using OpenMPI 1.3.2 and Gcc-4.4.0

2009-05-04 Thread Simone Pellegrini
.) *) Does the problem occur at lower np? *) Does the problem correlate with the compiler version? (I.e., GCC 4.4 versus 4.3.3.) *) What is the failure rate? How many times should I expect to run to see failures? *) How large is N? Eugene Loh wrote: Simone Pellegrini wrote: Dear all, I have

[OMPI users] MPI processes hang when using OpenMPI 1.3.2 and Gcc-4.4.0

2009-04-30 Thread Simone Pellegrini
Dear all, I have successfully compiled and installed openmpi 1.3.2 on an 8-socket quad-core machine from Sun. I used both Gcc-4.4 and Gcc-4.3.3 during the compilation phase, but when I try to run simple MPI programs the processes hang. Actually this is the kernel of the application I am

Re: [OMPI users] mpirun/exec requires ssh?

2009-03-25 Thread Simone Pellegrini
Hi, I installed the patch provided by Ralph and everything works fine now! thanks a lot, regards Simone Jeff Squyres wrote: On Mar 24, 2009, at 4:24 PM, Simone Pellegrini wrote: @eNerd:~$ mpirun --np 2 ls mpirun: symbol lookup error: mpirun: undefined symbol: orted_cmd_line FWIW

Re: [OMPI users] mpirun/exec requires ssh?

2009-03-24 Thread Simone Pellegrini
Hello everyone, I have the same problem when I try to install openmpi 1.3.1 on my laptop (Ubuntu 8.10 running on a dual-core machine). I did the same installation on Ubuntu 8.04 and everything worked, but here, no matter what I do, every time I type mpirun the system prompts for a password.