[OMPI users] Syntax error in remote rsh execution

2007-10-18 Thread Jorge Parra

Hi,

When trying to execute an application that spawns processes on another node, I 
obtain the following message:


# ./mpirun --hostfile /root/hostfile -np 2 greetings
Syntax error: "(" unexpected (expecting ")")
--
Could not execute the executable "/opt/OpenMPI/OpenMPI-1.1.5b/exec/bin/greetings": Exec format error

This could mean that your PATH or executable name is wrong, or that you do not
have the necessary permissions.  Please ensure that the executable is able to be
found and executed.
--

and on the remote node:

# pam_rhosts_auth[183]: user root has a `+' user entry
pam_rhosts_auth[183]: allowed to root@192.168.1.102 as root
PAM_unix[183]: (rsh) session opened for user root by (uid=0)
in.rshd[184]: root@192.168.1.102 as root: cmd='( ! [ -e ./.profile ] || . ./.profile; orted --bootproxy 1 --name 0.0.1 --num_procs 3 --vpid_start 0 --nodename 192.168.1.103 --universe root@(none):default-universe --nsreplica "0.0.0;tcp://192.168.1.102:32774" --gprreplica "0.0.0;tcp://192.168.1.102:32774" --mpi-call-yield 0 )'
PAM_unix[183]: (rsh) session closed for user root

I suspect the command that rsh is trying to execute on the remote node 
fails. It seems to me that the opening parenthesis in cmd='( ! is not 
interpreted correctly, which causes the syntax error. This might prevent 
.profile from running and correctly setting PATH. Therefore, "greetings" 
is not found.
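
For what it's worth, a quick check I could run by hand (my own guess at a 
minimal test, not something I have tried yet) would be to send the same kind 
of compound command to the remote node's shell directly:

   rsh 192.168.1.103 '( ! [ -e ./.profile ] || . ./.profile; echo shell ok )'

If that reports the same syntax error, it would confirm that the remote 
/bin/sh does not accept the command line that orted's bootproxy builds.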


I am attaching to this email the appropriate configuration files of my 
system and of Open MPI on it. This is a system on an isolated network, so I 
don't care too much about security; that is why I am using rsh on it.


I would really appreciate any suggestions to correct this problem.

Thank you,

Jorge

logs.tar.gz
Description: GNU Zip compressed data


[OMPI users] Merging Intracommunicators

2007-10-18 Thread Murat Knecht
Hi,
I have a question regarding merging intracommunicators.
Using MPI_Comm_spawn, I create child processes on designated machines,
retrieving an intercommunicator each time.
With MPI_Intercomm_merge it is possible to get an intracommunicator
containing the master process(es) and the newly spawned child process.
The problem is merging these intracommunicators into a single one.

I understand there is the possibility of using the intracommunicator created
on the first try in order to spawn the second child, merging this one into
the intracomm, and continuing like this (a rough sketch of this iterative
approach is included at the end of this mail).
This brings some considerable administrative overhead with it, as all
already-spawned children must (be informed to) participate in the spawn
call.
I would rather merge all intercommunicators together at the end, using
only the master process for spawning.
Both of these possibilities have been mentioned in the following post.

http://www.lam-mpi.org/MailArchives/lam/2003/06/6226.php

While I understand the first one, I do not follow the second - I cannot
seem to find any method to merge multiple inter- or intracommunicators into
a single intracommunicator.
Groups cannot be used to collect the children and retrieve the intracomm
either, because groups only allow subgrouping within an already existing
intracommunicator.
Is there an easy way to merge them, or did I misread the post above?
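
For reference, here is a rough sketch (my own, untested, with a made-up child
executable name "./worker") of how I understand the first, iterative approach
on the parent side:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char child_exe[] = "./worker";  /* hypothetical child program */
    MPI_Comm everyone;              /* grows by one child per iteration */

    MPI_Init(&argc, &argv);
    everyone = MPI_COMM_WORLD;

    for (int i = 0; i < 3; i++) {
        MPI_Comm intercomm, merged;

        /* Collective over 'everyone': all previously spawned children
         * must also call MPI_Comm_spawn here -- this is exactly the
         * administrative overhead mentioned above. */
        MPI_Comm_spawn(child_exe, MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, everyone, &intercomm, MPI_ERRCODES_IGNORE);

        /* Fold the new child into a single intracommunicator.  The child
         * does the same on its side via MPI_Comm_get_parent() and
         * MPI_Intercomm_merge(parent, 1, &merged). */
        MPI_Intercomm_merge(intercomm, 0, &merged);
        MPI_Comm_free(&intercomm);
        if (everyone != MPI_COMM_WORLD)
            MPI_Comm_free(&everyone);
        everyone = merged;
    }

    int size;
    MPI_Comm_size(everyone, &size);
    printf("merged intracommunicator now has %d processes\n", size);

    MPI_Finalize();
    return 0;
}

This works as far as I can tell, but it is exactly the pattern I would like
to avoid.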

Thanks & best regards,
Murat


Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Gurhan
Hello,

configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386
-mtune=pentium4 -fno-strict-aliasing -I. -c conftest.c
configure:33925: $? = 0
configure:33935: gfortran   conftestf.f90 conftest.o -o conftest
/usr/bin/ld: warning: i386 architecture of input file `conftest.o' is
incompatible with i386:x86-64 output
configure:33942: $? = 0
configure:33990: ./conftest
configure:33997: $? = 139
configure:34006: error: Could not determine size of LOGICAL

Is this correct? We are feeding a 32-bit object file to be linked into
a 64-bit output executable. When the target is i386, shouldn't -m32
-march=i386 also be passed to gfortran in the instance above, unless
this is intended as a negative test?

Thanks,
gurhan


On 10/18/07, Jim Kusznir  wrote:
> Attached is the requested info.  There's not much here, though...it
> dies pretty early in.
>
> --Jim
>
> On 10/17/07, Jeff Squyres  wrote:
> > On Oct 17, 2007, at 12:35 PM, Jim Kusznir wrote:
> >
> > > checking if Fortran 90 compiler supports LOGICAL... yes
> > > checking size of Fortran 90 LOGICAL... ./configure: line 34070:  7262
> > > Segmentation fault  ./conftest 1>&5 2>&1
> > > configure: error: Could not determine size of LOGICAL
> >
> > Awesome!  It looks like gfortran itself is seg faulting.
> >
> > Can you send all the information listed on the getting help page?
> >
> >  http://www.open-mpi.org/community/help/
> >
> > That will help confirm/deny whether it's gfortran itself that is seg
> > faulting.  If it's gfortran that's seg faulting, there's not much
> > that Open MPI can do...
> >
> > --
> > Jeff Squyres
> > Cisco Systems
> >
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>


Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
Yes, a memory bug has been my primary focus due to the not entirely 
consistent nature of this problem; I valgrind'ed the app a number of 
times, to no avail though. Will post again if anything new comes up... 
Thanks!
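
P.S. For the archives, here is a small sketch (untested, simplified from the 
real app) of the same receive sequence, but keeping the receive status instead 
of MPI_STATUS_IGNORE so the actual byte count and tag can be logged if/when I 
get back to this:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Probe for the next message from rank 0, allocate a buffer of the
 * probed size, receive it, and log what was actually received. */
static char *recv_from_server(int *nbytes)
{
    MPI_Status pstat, rstat;
    int count = 0, got = 0;
    char *buffer;

    MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &pstat);
    MPI_Get_elements(&pstat, MPI_BYTE, &count);

    buffer = malloc(count);
    MPI_Recv(buffer, count, MPI_BYTE, 0, pstat.MPI_TAG,
             MPI_COMM_WORLD, &rstat);

    MPI_Get_count(&rstat, MPI_BYTE, &got);
    fprintf(stderr, "MPI_Recv done: tag= %d, received %d of %d bytes\n",
            rstat.MPI_TAG, got, count);

    *nbytes = got;
    return buffer;
}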


Jeff Squyres wrote:
Yes, that's the normal progression.  For some reason, OMPI appears to  
have decided that it had not yet received the message.  Perhaps a  
memory bug in your application...?  Have you run it through valgrind,  
or some other memory-checking debugger, perchance?


On Oct 18, 2007, at 12:35 PM, Daniel Rozenbaum wrote:

  

Unfortunately, so far I haven't even been able to reproduce it on a
different cluster. Since I had no success getting to the bottom of this
problem, I've been concentrating my efforts on changing the app so that
there's no need to send very large messages; I might be able to find
time later to create a short example that shows the problem.

FWIW, when I was debugging it, I peeked a little into Open MPI code, and
found that the client's MPI_Recv gets stuck in mca_pml_ob1_recv(), after
it determines that "recvreq->req_recv.req_base.req_ompi.req_complete ==
false" and calls opal_condition_wait().

Jeff Squyres wrote:


Can you send a short test program that shows this problem, perchance?


On Oct 3, 2007, at 1:41 PM, Daniel Rozenbaum wrote:


  

Hi again,

I'm trying to debug the problem I posted on several times recently;
I thought I'd try asking a more focused question:

I have the following sequence in the client code:
MPI_Status stat;
int count;
ret = MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
assert(ret == MPI_SUCCESS);
ret = MPI_Get_elements(&stat, MPI_BYTE, &count);
assert(ret == MPI_SUCCESS);
char *buffer = malloc(count);
assert(buffer != NULL);
ret = MPI_Recv((void *)buffer, count, MPI_BYTE, 0, stat.MPI_TAG,
MPI_COMM_WORLD, MPI_STATUS_IGNORE);
assert(ret == MPI_SUCCESS);
fprintf(stderr, "MPI_Recv done\n");

Each MPI_ call in the lines above is surrounded by debug prints
that print out the client's rank, current time, the action about to
be taken with all its parameters' values, and the action's result.
After the first cycle (receive message from server -- process it --
send response -- wait for next message) works out as expected, the
next cycle gets stuck in MPI_Recv. What I get in my debug prints is
more or less the following:
MPI_Probe(source= 0, tag= MPI_ANY_TAG, comm= MPI_COMM_WORKD, status= )
MPI_Probe done, source= 0, tag= 2, error= 0
MPI_Get_elements(status= , dtype= MPI_BYTE, count= )
MPI_Get_elements done, count= 2731776
MPI_Recv(buf= , count= 2731776, dtype= MPI_BYTE, src= 0,
tag= 2, comm= MPI_COMM_WORLD, stat= MPI_STATUS_IGNORE)

My question then is this - what would cause MPI_Recv to not return,
after the immediately preceding MPI_Probe and MPI_Get_elements
return properly?

Thanks,
Daniel




Re: [OMPI users] which alternative to OpenMPI should I choose?

2007-10-18 Thread Jeff Squyres

On Oct 18, 2007, at 9:24 AM, Marcin Skoczylas wrote:


  PML add procs failed
  --> Returned "Unreachable" (-12) instead of "Success" (0)
-- 


*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)


Yoinks -- OMPI is determining that it can't use the TCP BTL to reach  
other hosts.



I assume this could be because of:

$ /sbin/route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.125.17.0    *               255.255.255.0   U     0      0        0 eth1
192.168.12.0    *               255.255.255.0   U     0      0        0 eth1
161.254.0.0     *               255.255.0.0     U     0      0        0 eth1
default         192.125.17.1    0.0.0.0         UG    0      0        0 eth1


192.125 -- is that supposed to be a private address?  If so, that's  
not really the Right way to do things...


So "narrowly scoped netmasks" which (as it's written in the FAQ)  
are not

supported in the OpenMPI. I asked for a workaround on this newsgroup
some time ago - but no answer uptill now. So my question is: what
alternative should I choose that will work in such configuration?


We haven't put in a workaround because (to be blunt) we either forgot  
about it and/or not enough people have asked for it.  Sorry.  :-(


It probably wouldn't be too hard to put in an MCA parameter to say  
"don't do netmask comparisons; just assume that every IP address is  
reachable by every other IP address."
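
Usage would presumably look something like this (note: a made-up parameter 
name, purely to illustrate -- it does not exist today):

   mpirun --mca btl_tcp_assume_reachable 1 --hostfile ./../hostfile -np 10 ./src/smallTest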


George -- did you mention that you were working on this at one point?


Do you
have some experience with other MPI implementations, for example LAM/MPI?


LAM/MPI should be able to work just fine in this environment; it  
doesn't do any kind of reachability computations like Open MPI does  
-- it blindly assumes that every MPI process is reachable by every  
other MPI process.


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Jeff Squyres
Yes, that's the normal progression.  For some reason, OMPI appears to  
have decided that it had not yet received the message.  Perhaps a  
memory bug in your application...?  Have you run it through valgrind,  
or some other memory-checking debugger, perchance?


On Oct 18, 2007, at 12:35 PM, Daniel Rozenbaum wrote:


Unfortunately, so far I haven't even been able to reproduce it on a
different cluster. Since I had no success getting to the bottom of this
problem, I've been concentrating my efforts on changing the app so that
there's no need to send very large messages; I might be able to find
time later to create a short example that shows the problem.

FWIW, when I was debugging it, I peeked a little into Open MPI code, and
found that the client's MPI_Recv gets stuck in mca_pml_ob1_recv(), after
it determines that "recvreq->req_recv.req_base.req_ompi.req_complete ==
false" and calls opal_condition_wait().

Jeff Squyres wrote:

Can you send a short test program that shows this problem, perchance?


On Oct 3, 2007, at 1:41 PM, Daniel Rozenbaum wrote:



Hi again,

I'm trying to debug the problem I posted on several times recently;
I thought I'd try asking a more focused question:

I have the following sequence in the client code:
MPI_Status stat;
int count;
ret = MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
assert(ret == MPI_SUCCESS);
ret = MPI_Get_elements(&stat, MPI_BYTE, &count);
assert(ret == MPI_SUCCESS);
char *buffer = malloc(count);
assert(buffer != NULL);
ret = MPI_Recv((void *)buffer, count, MPI_BYTE, 0, stat.MPI_TAG,
MPI_COMM_WORLD, MPI_STATUS_IGNORE);
assert(ret == MPI_SUCCESS);
fprintf(stderr, "MPI_Recv done\n");

Each MPI_ call in the lines above is surrounded by debug prints
that print out the client's rank, current time, the action about to
be taken with all its parameters' values, and the action's result.
After the first cycle (receive message from server -- process it --
send response -- wait for next message) works out as expected, the
next cycle gets stuck in MPI_Recv. What I get in my debug prints is
more or less the following:
MPI_Probe(source= 0, tag= MPI_ANY_TAG, comm= MPI_COMM_WORKD, status= )
MPI_Probe done, source= 0, tag= 2, error= 0
MPI_Get_elements(status= , dtype= MPI_BYTE, count= )
MPI_Get_elements done, count= 2731776
MPI_Recv(buf= , count= 2731776, dtype= MPI_BYTE, src= 0,
tag= 2, comm= MPI_COMM_WORLD, stat= MPI_STATUS_IGNORE)

My question then is this - what would cause MPI_Recv to not return,
after the immediately preceding MPI_Probe and MPI_Get_elements
return properly?

Thanks,
Daniel



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Jeff Squyres
Ah, I see the real problem: your C and Fortran compilers are not  
generating compatible code.  Here's the relevant snippet from config.log:


configure:33849: checking size of Fortran 90 LOGICAL
configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -fno-strict-aliasing -I. -c conftest.c
configure:33925: $? = 0
configure:33935: gfortran   conftestf.f90 conftest.o -o conftest
/usr/bin/ld: warning: i386 architecture of input file `conftest.o' is incompatible with i386:x86-64 output
configure:33942: $? = 0
configure:33990: ./conftest
configure:33997: $? = 139
configure:34006: error: Could not determine size of LOGICAL

Specifically, when OMPI's configure is checking the size of various  
Fortran types, it compiles a simple C object file and then compiles/links  
a simple Fortran program against that C object file.


In this case, you're using different flags for C and Fortran, and  
they're not compatible -- so the test program doesn't build properly.  
However, the fun part is that gfortran still gives a return status of 0,  
so configure thinks that the compile succeeded and tries to run the  
resulting executable.  The resulting executable seg faults (not  
gfortran), and things go downhill from there.


From the top of your config.log, you invoked configure with the  
following command line:


./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info


I don't see you overriding CFLAGS in there anywhere, but it's  
possible you set that CFLAGS environment variable before invoking  
configure.


The solution here is to make the compiler flags for all 4 compilers  
(C, C++, F77, F90) produce object code for the same bitness/etc.  So  
if you're using -m32 for the C compiler, then you also need to setenv  
FCFLAGS and FFLAGS to -m32 (I'm pretty sure that's the right flag for  
gfortran; RTFM if it's not).
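
Something along these lines (a sketch only -- use setenv instead of export if 
you're in csh, and keep whatever other configure arguments you were already 
passing, shown here as "..."):

   export CFLAGS="-m32 -march=i386 -mtune=pentium4"
   export CXXFLAGS="-m32 -march=i386 -mtune=pentium4"
   export FFLAGS=-m32
   export FCFLAGS=-m32
   ./configure --build=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu ...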


Let me know if that works.



On Oct 18, 2007, at 12:42 PM, Jim Kusznir wrote:


Attached is the requested info.  There's not much here, though...it
dies pretty early in.

--Jim

On 10/17/07, Jeff Squyres  wrote:

On Oct 17, 2007, at 12:35 PM, Jim Kusznir wrote:


checking if Fortran 90 compiler supports LOGICAL... yes
checking size of Fortran 90 LOGICAL... ./configure: line 34070:  7262
Segmentation fault  ./conftest 1>&5 2>&1
configure: error: Could not determine size of LOGICAL


Awesome!  It looks like gfortran itself is seg faulting.

Can you send all the information listed on the getting help page?

 http://www.open-mpi.org/community/help/

That will help confirm/deny whether it's gfortran itself that is seg
faulting.  If it's gfortran that's seg faulting, there's not much
that Open MPI can do...

--
Jeff Squyres
Cisco Systems

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Jim Kusznir
Attached is the requested info.  There's not much here, though...it
dies pretty early in.

--Jim

On 10/17/07, Jeff Squyres  wrote:
> On Oct 17, 2007, at 12:35 PM, Jim Kusznir wrote:
>
> > checking if Fortran 90 compiler supports LOGICAL... yes
> > checking size of Fortran 90 LOGICAL... ./configure: line 34070:  7262
> > Segmentation fault  ./conftest 1>&5 2>&1
> > configure: error: Could not determine size of LOGICAL
>
> Awesome!  It looks like gfortran itself is seg faulting.
>
> Can you send all the information listed on the getting help page?
>
>  http://www.open-mpi.org/community/help/
>
> That will help confirm/deny whether it's gfortran itself that is seg
> faulting.  If it's gfortran that's seg faulting, there's not much
> that Open MPI can do...
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


output.tgz
Description: GNU Zip compressed data


Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
Unfortunately, so far I haven't even been able to reproduce it on a 
different cluster. Since I had no success getting to the bottom of this 
problem, I've been concentrating my efforts on changing the app so that 
there's no need to send very large messages; I might be able to find 
time later to create a short example that shows the problem.


FWIW, when I was debugging it, I peeked a little into Open MPI code, and 
found that the client's MPI_Recv gets stuck in mca_pml_ob1_recv(), after 
it determines that "recvreq->req_recv.req_base.req_ompi.req_complete == 
false" and calls opal_condition_wait().


Jeff Squyres wrote:

Can you send a short test program that shows this problem, perchance?


On Oct 3, 2007, at 1:41 PM, Daniel Rozenbaum wrote:

  

Hi again,

I'm trying to debug the problem I posted on several times recently;  
I thought I'd try asking a more focused question:


I have the following sequence in the client code:
MPI_Status stat;
int count;
ret = MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
assert(ret == MPI_SUCCESS);
ret = MPI_Get_elements(&stat, MPI_BYTE, &count);
assert(ret == MPI_SUCCESS);
char *buffer = malloc(count);
assert(buffer != NULL);
ret = MPI_Recv((void *)buffer, count, MPI_BYTE, 0, stat.MPI_TAG,
MPI_COMM_WORLD, MPI_STATUS_IGNORE);
assert(ret == MPI_SUCCESS);
fprintf(stderr, "MPI_Recv done\n");
Each MPI_ call in the lines above is surrounded by debug prints
that print out the client's rank, current time, the action about to
be taken with all its parameters' values, and the action's result.
After the first cycle (receive message from server -- process it --
send response -- wait for next message) works out as expected, the
next cycle gets stuck in MPI_Recv. What I get in my debug prints is
more or less the following:
MPI_Probe(source= 0, tag= MPI_ANY_TAG, comm= MPI_COMM_WORKD, status= )
MPI_Probe done, source= 0, tag= 2, error= 0
MPI_Get_elements(status= , dtype= MPI_BYTE, count= )
MPI_Get_elements done, count= 2731776
MPI_Recv(buf= , count= 2731776, dtype= MPI_BYTE, src= 0,
tag= 2, comm= MPI_COMM_WORLD, stat= MPI_STATUS_IGNORE)
My question then is this - what would cause MPI_Recv to not return,  
after the immediately preceding MPI_Probe and MPI_Get_elements  
return properly?


Thanks,
Daniel





[OMPI users] which alternative to OpenMPI should I choose?

2007-10-18 Thread Marcin Skoczylas

Hello,

I'm having trouble running my software after our administrators changed 
the cluster configuration. It was working perfectly before, but now 
I get these errors:


$ mpirun --hostfile ./../hostfile -np 10 ./src/smallTest
--
Process 0.1.1 is unable to reach 0.1.4 for MPI communication.
If you specified the use of a BTL component, you may have
forgotten a component (such as "self") in the list of
usable components.
--
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

 PML add procs failed
 --> Returned "Unreachable" (-12) instead of "Success" (0)
--
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)

I assume this could be because of:

$ /sbin/route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.125.17.0    *               255.255.255.0   U     0      0        0 eth1
192.168.12.0    *               255.255.255.0   U     0      0        0 eth1
161.254.0.0     *               255.255.0.0     U     0      0        0 eth1
default         192.125.17.1    0.0.0.0         UG    0      0        0 eth1

So "narrowly scoped netmasks" which (as it's written in the FAQ) are not 
supported in the OpenMPI. I asked for a workaround on this newsgroup 
some time ago - but no answer uptill now. So my question is: what 
alternative should I choose that will work in such configuration? Do you 
have some experience in other MPI implementations, for example LamMPI?


Thank you for your support.

regards, Marcin



Re: [OMPI users] IB latency on Mellanox ConnectX hardware

2007-10-18 Thread Jeff Squyres

On Oct 18, 2007, at 7:56 AM, Gleb Natapov wrote:


Open MPI v1.2.4 (and newer) will get around 1.5us latency with 0 byte
ping-pong benchmarks on Mellanox ConnectX HCAs.  Prior versions of
Open MPI can also achieve this low latency by setting the
btl_openib_use_eager_rdma MCA parameter to 1.


Actually setting btl_openib_use_eager_rdma to 1 will not help. The
reason is that it is 1 by default anyway, but Open MPI disables eager
rdma because it can't find HCA description in the ini file and cannot
distinguish between default value and value that user set explicitly.


Arrgh; that's a fun (read: annoying) bug.  Well, it's not a total  
loss -- you can still get the same performance in older Open MPI  
versions by adding the following to the end of the  
$prefix/share/openmpi/mca-btl-openib-hca-params.ini file:


[Mellanox Hermon]
vendor_id = 0x2c9,0x5ad,0x66a,0x8f1,0x1708
vendor_part_id = 25408,25418,25428
use_eager_rdma = 1
mtu = 2048

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] IB latency on Mellanox ConnectX hardware

2007-10-18 Thread Gleb Natapov
On Wed, Oct 17, 2007 at 05:43:14PM -0400, Jeff Squyres wrote:
> Several users have noticed poor latency with Open MPI when using the  
> new Mellanox ConnectX HCA hardware.  Open MPI was getting about 1.9us  
> latency with 0 byte ping-pong benchmarks (e.g., NetPIPE or  
> osu_latency).  This has been fixed in OMPI v1.2.4.
> 
> Short version:
> --
> 
> Open MPI v1.2.4 (and newer) will get around 1.5us latency with 0 byte  
> ping-pong benchmarks on Mellanox ConnectX HCAs.  Prior versions of  
> Open MPI can also achieve this low latency by setting the  
> btl_openib_use_eager_rdma MCA parameter to 1.

Actually setting btl_openib_use_eager_rdma to 1 will not help. The
reason is that it is 1 by default anyway, but Open MPI disables eager
rdma because it can't find HCA description in the ini file and cannot
distinguish between default value and value that user set explicitly.

> 
> Longer version:
> ---
> 
> Until OMPI v1.2.4, Open MPI did not include specific configuration  
> information for ConnectX hardware, which forced Open MPI to choose  
> the conservative/safe configuration of not using RDMA for short  
> messages (using send/receive semantics instead).  This increases  
> point-to-point latency in benchmarks.
> 
> OMPI v1.2.4 (and newer) includes the relevant configuration  
> information that enables short message RDMA by default on Mellanox  
> ConnectX hardware.  This significantly improves Open MPI's latency on  
> popular MPI benchmark applications.
> 
> The same performance can be achieved on prior versions of Open MPI by  
> setting the btl_openib_use_eager_rdma MCA parameter to 1.  The main  
> difference between v1.2.4 and prior versions is that the prior  
> versions do not set this MCA parameter value by default for ConnectX  
> hardware (because ConnectX did not exist when prior versions of Open  
> MPI were released).
> 
> This information is also now described on the FAQ:
> 
> http://www.open-mpi.org/faq/?category=openfabrics#mellanox-connectx-poor-latency
> 
> -- 
> Jeff Squyres
> Cisco Systems
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

--
Gleb.


Re: [OMPI users] Compile test programs

2007-10-18 Thread Jeff Squyres
These programs are mainly for internal testing of Open MPI, and are  
actually being phased out.  We don't actively test them anymore, so I  
can't vouch for how well they'll work or not.


A top-level "make test" used to make them.


On Oct 18, 2007, at 4:44 AM, Neeraj Chourasia wrote:


Hi all,

Could someone suggest how to compile the programs given in the test  
directory of the source code? There are a couple of directories  
within test which contain sample programs showing the usage of the  
data structures used by Open MPI. I am able to compile some of  
the directories because a Makefile was created when running the  
configure script, but a few of them, like runtime, don't have a  
Makefile.


Please help me compile them.

-Neeraj


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Compile test programs

2007-10-18 Thread Amit Kumar Saha
On 18 Oct 2007 08:44:36 -, Neeraj Chourasia 
wrote:
>
> Hi all,
>
> Could someone suggest how to compile the programs given in the test
> directory of the source code? There are a couple of directories within test
> which contain sample programs showing the usage of the data structures used
> by Open MPI. I am able to compile some of the directories because a Makefile
> was created when running the configure script, but a few of them, like
> runtime, don't have a Makefile.
>
> Please help me compile them.


Did you try doing it using 'mpicc'? Sorry, I can't try it now.

--Amit

-- 
Amit Kumar Saha
me blogs@ http://amitksaha.blogspot.com
URL:http://amitsaha.in.googlepages.com


[OMPI users] Compile test programs

2007-10-18 Thread Neeraj Chourasia
Hi all,

Could someone suggest how to compile the programs given in the test directory 
of the source code? There are a couple of directories within test which contain 
sample programs showing the usage of the data structures used by Open MPI. I am 
able to compile some of the directories because a Makefile was created when 
running the configure script, but a few of them, like runtime, don't have a 
Makefile.

Please help me compile them.

-Neeraj