So, on the first run I seem to have run into a bit of an issue. All the
Quadrics modules are compiled and loaded. I can ping between nodes
over the Quadrics interfaces. But when I try to run one of the hello
MPI examples from Open MPI, I get:
first run, the process hung - killed with Ctrl-C
though it does
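(For reference, the "hello world" I am running is along these lines -- a
minimal sketch, not the exact example source shipped with Open MPI:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes   */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

compiled with mpicc and launched with something like "mpirun -np 2
./hello" across the Quadrics-connected nodes.)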
Hi,
On 07.07.2009, at 22:12, Lengyel, Florian wrote:
Hi,
I may have overlooked something in the archives (not to mention
Googling) -- if so, I apologize -- but I have been unable to find info
on this particular problem. OpenMPI+SGE tight integration works on
E6600 core duo systems but not on Q9550 quads.
Could use some troubleshooting assistance.
Does Open MPI/Quadrics require the Quadrics kernel patches in order to
operate, or to operate at full speed? Or are the Quadrics modules
sufficient?
On Thu, Jul 2, 2009 at 1:52 PM, Ashley Pittman wrote:
> On Thu, 2009-07-02 at 09:34 -0400, Michael Di Domenico wrote:
>> Jeff,
>>
>> Okay, thanks. I'll
I am attempting to use coll_tuned_dynamic_rules_filename to tune Open
MPI 1.3.2. Based on my testing, it appears that the dynamic rules file
*only* influences the algorithm selection for MPI_COMM_WORLD. Any
duplicate communicators will only use the fixed or forced rules, which
may have much worse performance
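To make the comparison concrete, a minimal sketch (process count and rule
file name are placeholders): the same MPI_Allreduce issued once on
MPI_COMM_WORLD and once on a duplicated communicator, where only the
former appears to pick up the dynamic rules:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int x = 1, sum;
        MPI_Comm dup;

        MPI_Init(&argc, &argv);

        /* Appears to honor the dynamic rules file: */
        MPI_Allreduce(&x, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* Appears to fall back to the fixed/forced rules: */
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);
        MPI_Allreduce(&x, &sum, 1, MPI_INT, MPI_SUM, dup);

        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }

launched with:

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_dynamic_rules_filename ./rules.conf \
           -np 16 ./a.out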
(Sorry if this is posted twice; I sent the same email yesterday but it
never appeared on the list.)
Hi, I am attempting to debug a memory corruption in an MPI program
using Valgrind. However, when I run under Valgrind I get semi-random
segfaults and Valgrind messages in the Open MPI library
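(For anyone reproducing this: the usual way to put every rank under
Valgrind is to make valgrind the launched executable, e.g.

    mpirun -np 2 valgrind --leak-check=full --track-origins=yes ./my_app

where ./my_app stands in for the real program; --track-origins needs
Valgrind >= 3.4.)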
You probably want to use an MPI tracing tool that can break down the
times spent inside and outside of the MPI library. User vs. system
time, as you noted, can get quite blurred.
On Jul 6, 2009, at 12:48 PM, Ross Boylan wrote:
Let total time on my slot 0 process be S+C+B+I
= serial computation
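Absent a full tracing tool, a crude version of that breakdown can be done
by hand with MPI_Wtime around the communication calls. A sketch, where the
loop body stands in for the real work:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        double in_mpi = 0.0, start, total, t;
        int i;

        MPI_Init(&argc, &argv);
        start = MPI_Wtime();

        for (i = 0; i < 1000; ++i) {
            /* ... serial computation phase ... */

            t = MPI_Wtime();
            MPI_Barrier(MPI_COMM_WORLD); /* stand-in for real communication */
            in_mpi += MPI_Wtime() - t;
        }

        total = MPI_Wtime() - start;
        printf("time in MPI: %.3f s of %.3f s total\n", in_mpi, total);

        MPI_Finalize();
        return 0;
    }

Measuring elapsed wall clock this way sidesteps the user/system blurring,
since MPI_Wtime does not care how the cycles were spent.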
OK, after all the considerations, I'll try Boost today, run some
experiments, and see whether I can use it or whether I'll still avoid
it. But as Raimond said, I think, the problem is being dependent on a
rich, incredible, amazing toolset that still implements only
MPI-1 and does not implement all
On Jul 7, 2009, at 8:08 AM, Catalin David wrote:
Thank you very much for the help and assistance :)
Using -isystem /users/cluster/cdavid/local/include the program now
runs fine (loads the correct mpi.h).
This is very fishy.
If mpic++ is in /users/cluster/cdavid/local/bin, and that directory
You might want to use a tracing library to see exactly where your
synchronization issues are occurring. It may depend on the
communication pattern between your nodes and the timing between them.
Additionally, your network switches' performance characteristics may
come into play here: a
I think you face a common trade-off:
- use a well-established, debugged, abstraction-rich library
- write all of that stuff yourself
FWIW, I think the first one is a no-brainer. There's a reason they
wrote Boost.MPI: it's complex, difficult stuff, and is perfect as
middleware for others to
Thank you very much for the help and assistance :)
Using -isystem /users/cluster/cdavid/local/include the program now
runs fine (loads the correct mpi.h).
Thank you again,
Catalin
On Tue, Jul 7, 2009 at 12:29 PM, Catalin David wrote:
> #include <stdio.h>
> #include <mpi.h>
> int main(int argc, char *argv[])
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
    printf("%d %d %d\n", OMPI_MAJOR_VERSION,
           OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    return 0;
}
returns:
test.cpp: In function ‘int main(int, char**)’:
test.cpp:11: error: ‘OMPI_MAJOR_VERSION’ was not declared in this scope
test.cpp
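(The compile line here was presumably plain "mpic++ test.cpp -o test";
the fix that emerged later in this thread was to point the preprocessor
at the right tree first, i.e.

    mpic++ -isystem /users/cluster/cdavid/local/include test.cpp -o test

so that the matching mpi.h is found before the stray one.)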
Catalin David wrote:
Hello, all!
Just installed Valgrind (since this seems like a memory issue) and got
this interesting output (when running the test program):
==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
==4616==    at 0x43656BD: syscall (in /lib/tls/libc-2.3
This is the error you get when an invalid communicator handle is passed
to an MPI function; the handle is dereferenced, so you may or may not
get a SEGV from it depending on the value you pass.
The 0x44a0 address is an offset from 0x4400, the value of
MPI_COMM_WORLD in MPICH2; my guess would
Hi,
On Mon, Jul 06, 2009 at 03:24:07PM -0400, Luis Vitorio Cargnini wrote:
> Thanks, but I really do not want to use Boost.
> Is it easier? It certainly is, but I want to do it using only MPI
> itself and not be dependent on a library, or on templates like the
> majority of Boost, a huge set of templates
Hello, all!
Just installed Valgrind (since this seems like a memory issue) and got
this interesting output (when running the test program):
==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
==4616==    at 0x43656BD: syscall (in /lib/tls/libc-2.3.2.so)
==4616==    by 0
> IF boost is attached to MPI 3 (or whatever), AND it becomes part of the
> mainstream MPI implementations, THEN you can have the discussion again.
Hi,
At the moment, I think that Boost.MPI only supports MPI 1.1, and even
then, some additional work remains to be done, at least regarding the
complex datatypes
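For readers who have not seen it, a minimal sketch of the kind of
abstraction being discussed -- sending a std::string, which plain MPI-1
would require manual packing for (needs linking against boost_mpi and
boost_serialization):

    #include <iostream>
    #include <string>
    #include <boost/mpi.hpp>

    namespace mpi = boost::mpi;

    int main(int argc, char *argv[])
    {
        mpi::environment env(argc, argv); // wraps MPI_Init/MPI_Finalize
        mpi::communicator world;          // wraps MPI_COMM_WORLD

        if (world.rank() == 0) {
            world.send(1, 0, std::string("hello from rank 0"));
        } else if (world.rank() == 1) {
            std::string msg;
            world.recv(0, 0, msg);        // serialization handled for us
            std::cout << msg << std::endl;
        }
        return 0;
    }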
Hi Luis,
Luis Vitorio Cargnini wrote:
Your suggestion is a great and interesting idea. My only fear is
getting used to Boost and not being able to get rid of it anymore,
because one thing is sure: the abstraction added by Boost is
impressive; it turns
I should add that I fully understand
Terry Frankcombe wrote:
I understand Luis' position completely. He wants an MPI program, not a
program that's written in some other environment, no matter how
attractive that may be. It's like the difference between writing a
numerical program in standard-conforming Fortran and writing it in
Hi,
I'm new to Open MPI and currently I have set up openmpi-1.3.3a1r21566 on my
Linux machines. I have run some of the available examples and also noticed
there are some test modules under /openmpi-1.3.3a1r21566/test. Are these
tests run batchwise? If so, how? Or are these tests supposed to run individually
Hi everyone,
I built an RPM for openmpi-1.3.2 with the openmpi.spec and buildrpm.sh from
http://www.open-mpi.org/software/ompi/v1.3/srpm.php
I changed buildrpm.sh as follows:
prefix="/usr/local/openmpi/intel/1.3.2"
specfile="openmpi.spec"
#rpmbuild_options=${rpmbuild_options:-"--define 'mflags
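(If I remember the script correctly, it is then invoked with the release
tarball as its argument, e.g.

    ./buildrpm.sh openmpi-1.3.2.tar.bz2

picking up the prefix and rpmbuild options from the variables edited
above.)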