On 05/23/2012 03:05 PM, Jeff Squyres wrote:
On May 23, 2012, at 6:05 AM, Simone Pellegrini wrote:
If process A sends a message to process B and the eager protocol is used, then I assume that the message is written into a shared memory area and picked up by the receiver when the receive is posted. Is this the behaviour on Open MPI 1.5.x installed on top of a Linux kernel 2.6.32?
cheers, Simone
On 05/22/2012 05:29 PM, Simone Pellegrini wrote:
Dear all,
I would like to have a confirmation on the assumptions I have on how
OpenMPI implements the rendezvous protocol for shared memory.
If process A sends a message to process B and the eager protocol is used, then I assume that the message is written into a shared memory area and picked up by the receiver when the receive is posted.
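Since the thread keeps coming back to the eager/rendezvous split, here is the decision in miniature. This is a toy sketch of the general technique, not Open MPI's actual implementation; the limit value and the buffer data structures are assumptions (the real limit is an MCA parameter such as btl_sm_eager_limit, and is version-dependent):

```python
# Toy model of the eager vs. rendezvous decision -- NOT Open MPI's code.
# EAGER_LIMIT and the buffer mechanics are illustrative placeholders.

EAGER_LIMIT = 4096  # bytes (assumed value)

def send(message: bytes, shared_buffer: list, pending_rts: list) -> str:
    """Simulate the sender's side of the protocol choice."""
    if len(message) <= EAGER_LIMIT:
        # Eager: copy the whole payload into the shared memory area
        # right away; the sender does not wait for the receiver.
        shared_buffer.append(message)
        return "eager"
    # Rendezvous: only post a "request to send" (RTS); the payload is
    # transferred after the receiver posts a matching receive.
    pending_rts.append(len(message))
    return "rendezvous"

shm, rts = [], []
print(send(b"x" * 100, shm, rts))      # small message -> eager
print(send(b"x" * 100_000, shm, rts))  # large message -> rendezvous
```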
tests
has been published somehow/somewhere?
cheers, Simone
On May 7, 2012, at 9:25 AM, Simone Pellegrini wrote:
Hello,
I have one of those 1M dollar questions I guess, but why is the eager limit
threshold for InfiniBand set to 12KB by default in Open MPI? I would
like to know where this value comes from. I am not wondering whether
this is a good setting for this parameter, but just why this is the default.
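For what it's worth, the limit in question is an ordinary MCA parameter, so it can be inspected and overridden without rebuilding anything. A command-line sketch, assuming a standard Open MPI install (parameter names and ompi_info output format vary between versions):

```
# Show the openib BTL parameters, including the eager limit:
ompi_info --param btl openib | grep eager_limit

# Override the limit (in bytes) for a single run:
mpirun --mca btl_openib_eager_limit 32768 -np 2 ./a.out
```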
Hello everybody,
I am measuring some timings for MPI_Send/MPI_Recv. I am doing a single
communication between 2 processes and I repeat this several times to get
meaningful values. The message being sent varies from 64 bytes up to 16
MB, doubling the size each time (64, 128, 256, …, 8M, 16M).
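The sweep described above has a simple shape, sketched here in plain Python with a stand-in for the actual MPI_Send/MPI_Recv exchange (the repetition count and the reporting format are made up):

```python
import time

def transfer(buf: bytes) -> None:
    """Stand-in for the timed MPI_Send/MPI_Recv exchange."""
    pass

# 64 bytes doubling up to 16 MB: 64, 128, 256, ..., 8M, 16M (19 sizes)
sizes = [64 * 2**k for k in range(19)]
assert sizes[-1] == 16 * 1024 * 1024

REPS = 1000  # repetitions per size, to get meaningful averages
for size in sizes:
    buf = b"\0" * size
    start = time.perf_counter()
    for _ in range(REPS):
        transfer(buf)
    elapsed = time.perf_counter() - start
    # Mean time per message; for a ping-pong, halve it for one-way time.
    print(f"{size:>9} B  {elapsed / REPS * 1e6:8.2f} us/msg")
```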
ll be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
----------
any hints from this?
cheers, Simone
On Sep 6, 2011, at 1:20 P
On 09/06/2011 04:58 PM, Ralph Castain wrote:
On Sep 6, 2011, at 12:49 PM, Simone Pellegrini wrote:
On 09/06/2011 02:57 PM, Ralph Castain wrote:
Hi Simone
Just to clarify: is your application threaded? Could you please send the OMPI
configure cmd you used?
yes, it is threaded
is occurring there.
The problem is that the error is totally nondeterministic. Sometimes it
happens, sometimes not, but the error message gives me no clue where it
is coming from. Is it a problem in my code or inside MPI?
cheers, Simone
On Sep 6, 2011, at 3:01 AM, Simone Pellegrini wrote:
Dear all,
I am developing an MPI application which makes heavy use of MPI_Spawn. Usually
everything works fine for the first hundred spawns, but after a while the
application exits with a curious message:
[arch-top:27712] [[36904,165],0] ORTE_ERROR_LOG: Data unpack would read
past end of buffer in
Smells like a bug - I'll take a look.
On Aug 16, 2011, at 9:10 AM, Simone Pellegrini wrote:
On 08/16/2011 02:11 PM, Ralph Castain wrote:
That should work, then. When you set the "host" property, did you give the same
name as was in your machine file?
Debug options that might h
c.at:02647] [[34621,0],0] plm:base:launch wiring
up iof
[kreusspitze.dps.uibk.ac.at:02647] [[34621,0],0] plm:base:launch
completed for job [34621,4]
[kreusspitze.dps.uibk.ac.at:02647] [[34621,0],0] plm:base:receive job
[34621,4] launched
cheers, Simone P.
On Aug 16, 2011, at 5:09 AM, Simone P
On 08/16/2011 12:30 PM, Ralph Castain wrote:
What version are you using?
OpenMPI 1.4.3
On Aug 16, 2011, at 3:19 AM, Simone Pellegrini wrote:
Dear all,
I am developing a system to manage MPI tasks on top of MPI. The
architecture is rather simple: I have a set of scheduler processes which
take care of managing the resources of a node. The idea is to have 1 (or
more) of these schedulers allocated on each node of a cluster and then
Dear all,
I need to implement an MPI layer on top of a message passing library
which is currently used in a particular device where I have to run MPI
programs ( very vague, I know :) ).
Instead of reinventing the wheel, my idea was to reuse most of the Open
MPI implementation and just add a
Dear Open MPI community,
I would like to know from expert system administrators if they know of
any "standardized" way of tuning Open MPI runtime parameters.
I need to tune the performance on a custom cluster so I would like to
have some hints in order to proceed in the correct direction.
no assumptions that can be made?
thanks again, regards Simone
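One low-tech starting point, for what it's worth: Open MPI reads per-user MCA parameters from $HOME/.openmpi/mca-params.conf, so tuned values can be recorded there instead of on every command line. A sketch of such a file; the parameter names are 1.x-era examples and the values are placeholders to be measured on the cluster, not recommendations:

```
# $HOME/.openmpi/mca-params.conf -- read by every mpirun for this user.

# Restrict the transports that are considered:
btl = self,sm,openib

# Eager limits in bytes (placeholders -- tune by benchmarking):
btl_sm_eager_limit = 4096
btl_openib_eager_limit = 12288
```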
On Jul 3, 2009, at 9:36 AM, Simone Pellegrini wrote:
Dear all,
the current implementation of mpirun starts the same executable on
different nodes. For some reason I need to start different MPI
applications across the nodes and I want to use MPI to communicate among
these applications. In short, I want to break down the SPMD model,
something like:
mpirun
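As far as I know, mpirun already supports exactly this through its MPMD colon syntax: several executables are launched as one job and share a single MPI_COMM_WORLD. A sketch with made-up executable names:

```
# Ranks 0-1 run ./scheduler, ranks 2-5 run ./worker, one MPI_COMM_WORLD:
mpirun -np 2 ./scheduler : -np 4 ./worker
```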
Dear all,
I apologize to the moderators of the mailing list if my message is not
strictly related to the Open MPI library.
I am a PhD student at the University of Innsbruck, my topic is
optimization of MPI applications. During my research I have collected
several transformations that can
applications on multi-core architectures. I would be very interested
in collaborating on such a project. Can you give me more details about it
(links/pointers)?
regards, Simone
On Jun 25, 2009, at 2:39 AM, Simone Pellegrini wrote:
Hello,
I have a simple question for the shared memory (sm) module developers of
Open MPI.
In the current implementation, is there any advantage in having a shared
cache among communicating processes?
For example, let's say we have P1 and P2 placed on the same CPU on 2
different physical cores
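Whatever the sm module does internally, any experiment on this should pin the two processes so the cache topology is actually known. A sketch using mpirun's binding options; the flag spelling changed across releases (1.4-era --bind-to-core versus the later --bind-to core), so check mpirun --help for the installed version:

```
# Bind the two ranks to cores and print where they landed
# (1.4-era spelling; adjust for newer Open MPI):
mpirun -np 2 --bind-to-core --report-bindings ./a.out
```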
*) Does the problem occur at lower np?
*) Does the problem correlate with the compiler version? (I.e., GCC
4.4 versus 4.3.3.)
*) What is the failure rate? How many times should I expect to run to
see failures?
*) How large is N?
Eugene Loh wrote:
Simone Pellegrini wrote:
Dear all,
I have successfully compiled and installed openmpi 1.3.2 on an 8-socket
quad-core machine from Sun.
I have used both GCC 4.4 and GCC 4.3.3 during the compilation phase, but
when I try to run simple MPI programs the processes hang. Actually this is
the kernel of the application I am
Hi,
I installed the patch provided by Ralph and everything works fine now!
thanks a lot,
regards Simone
Jeff Squyres wrote:
On Mar 24, 2009, at 4:24 PM, Simone Pellegrini wrote:
@eNerd:~$ mpirun --np 2 ls
mpirun: symbol lookup error: mpirun: undefined symbol: orted_cmd_line
FWIW
Hello everyone,
I have the same problem when I try to install openmpi 1.3.1 on my laptop
(Ubuntu 8.10 running on a dual-core machine).
I did the same installation on Ubuntu 8.04 and everything works, but
here, no matter what I do, every time I type mpirun the system prompts
for the password.