Re: [OMPI users] Application using OpenMPI 1.2.3 hangs, error messages in mca_btl_tcp_frag_recv

2007-09-28 Thread Daniel Rozenbaum




Good Open MPI gurus,

I've further reduced the size of the experiment that reproduces the
problem. My array of requests now has just 10 entries, and by the time
the server gets stuck in MPI_Waitany() and three of the clients are
stuck in MPI_Recv(), the array still has three unprocessed Isend()'s
and three unprocessed Irecv()'s.

I've upgraded to Open MPI 1.2.4, but this made no difference.

Are there any internal logging or debugging facilities in Open MPI that
would allow me to further track the calls that eventually result in the
error in mca_btl_tcp_frag_recv() ?
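(One knob that does exist, although I'm not sure how much it will
reveal for this case, is the BTL framework's verbosity MCA parameter;
something along these lines should make the TCP BTL log more about
what it is doing:

    mpirun --mca btl_base_verbose 100 -np 6 ...

ompi_info lists the available MCA parameters and their current values.)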

Thanks,
Daniel


Daniel Rozenbaum wrote:

  
Here's some more info on the problem I've been struggling with; my
apologies for the lengthy posts, but I'm a little desperate here :-)
  
I was able to reduce the size of the experiment that reproduces the
problem, both in terms of input data size and the number of slots in
the cluster. The cluster now consists of 6 slots (5 clients), with two
of the clients running on the same node as the server and three others
on another node. This allowed me to follow Brian's
advice and run the server and all the clients under gdb and make
sure none of the processes terminates (normally or abnormally) when the
server reports the "readv failed" errors; this is indeed the case.
  
I then followed Jeff's advice and added a debug loop just prior to the
server calling MPI_Waitany(), identifying the entries in the requests
array which are not MPI_REQUEST_NULL, and then tracing back these
requests. What I found was the following:
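For reference, a minimal sketch of that kind of debug loop, where
nreq, index, and status stand in for the server's actual variables and
<stdio.h>/<mpi.h> are assumed to be included:

    int i;
    /* Report which requests are still outstanding before blocking. */
    for (i = 0; i < nreq; i++) {
        if (requests[i] != MPI_REQUEST_NULL) {
            fprintf(stderr, "requests[%d] still pending before MPI_Waitany\n", i);
        }
    }
    MPI_Waitany(nreq, requests, &index, &status);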
  
At some point during the run, the server calls MPI_Waitany() on an
array of requests consisting of 96 elements, and gets stuck in it
forever; the only thing that happens at some point thereafter is that
the server reports a couple of "readv failed" errors:
  
[host1][0,1,0][btl_tcp_frag.c:202:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed with errno=110
[host1][0,1,0][btl_tcp_frag.c:202:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed with errno=110
  
  According to my debug prints, just before that last call to
MPI_Waitany() the array requests[] contains 38 entries which are not
MPI_REQUEST_NULL. Half of these entries correspond to calls to Isend(),
half to Irecv(). Specifically, for example, entries
4,14,24,34,44,54,64,74,84,94 are used for Isend()'s from the server to
client #3 (of 5), and entries 5,15,...,95 are used for Irecv()'s from
the same client.
  
I traced back what's going on, for instance, with requests[4]. As I
mentioned, it corresponds to a call to MPI_Isend() initiated by the
server to client #3 (of 5). By the time the server gets stuck in
Waitany(), this client has already correctly processed the first
Isend() from master in requests[4], returned its response in
requests[5], and the server received this response properly. After
receiving this response, the server Isend()'s the next task to this
client in requests[4], and this is correctly reflected in "requests[4]
!= MPI_REQUEST_NULL" just before the last call to Waitany(), but for
some reason this send doesn't seem to go any further.
  
Looking at all the other requests[] entries corresponding to Isend()'s
initiated by the server to the same client (14, 24, ..., 94), they are
all also not MPI_REQUEST_NULL, and none of them makes any further
progress either.
  
One thing that might be important is that the messages the server is
sending to the clients in my experiment are quite large, ranging from
hundreds of kilobytes to several megabytes, the largest being around
9 MB. The largest messages occur at the beginning of the run and are
processed correctly, though.
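In case the message sizes turn out to matter: the TCP BTL's tunable
parameters, including its size-related limits, can be listed with
ompi_info (the exact parameter names depend on the version, so its
output is the authoritative reference):

    ompi_info --param btl tcp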
  
Also, I ran the same experiment on another cluster that uses slightly
different
hardware and network infrastructure, and could not reproduce the
problem.
  
Hope at least some of the above makes some sense. Any additional advice
would be greatly appreciated!
Many thanks,
Daniel
  





Re: [MTT users] Problems running MTT with already installed MPICH-MX

2007-09-28 Thread Ethan Mallove
On Fri, Sep/28/2007 01:39:22PM, Tim Mattox wrote:
> This might also be a problem:
> > [MPI install: MPICH-MX]
> > mpi_get = mpich-mx
> > module = MPICH2
> 
> As far as I can tell, the install module should be something like this:
> module = Analyze::MPICH
> 
> But there isn't an MPICH module for that yet...  there are ones for OMPI,
> CrayMPI, HPMPI and IntelMPI.
> 
> The Analyze::OMPI module won't work, since it will try to use ompi_info...
> so you can lie and say it's one of the others, or...  I'm trying my own hacked
> Analyze::MPICH module to see if I understand how this works.
> 

It's okay to lie in this case :-) E.g., using Analyze::HPMPI
will just tell MTT that your MPI Install has C, C++, F77,
and F90 bindings so that it will proceed to running tests.
We should probably come up with a cleaner solution someday
though. Maybe an Install/Analyze/Generic.pm or something ...?
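For example, a minimal sketch of that workaround for the
[MPI install: MPICH-MX] section quoted below, assuming Analyze::HPMPI
behaves as described above:

  [MPI install: MPICH-MX]
  mpi_get = mpich-mx
  # Borrow the HP-MPI analyze module so MTT assumes the C/C++/F77/F90
  # bindings are present and proceeds to the test phases
  module = Analyze::HPMPI
  save_stdout_on_success = 1
  merge_stdout_stderr = 0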

-Ethan


> On 9/28/07, Ethan Mallove  wrote:
> > Hi Jelena,
> >
> > Change this line:
> >
> >   alreadyinstalled_dir = /usr/local/mpich-mx/bin
> >
> > To this:
> >
> >   alreadyinstalled_dir = /usr/local/mpich-mx
> >
> > Does that work?
> >
> > -Ethan
> >
> >
> > On Thu, Sep/27/2007 10:46:55PM, Jelena Pjesivac-Grbovic wrote:
> > > Hello,
> > >
> > > I am trying to test the MPICH-MX using MTT on our clusters and I am 
> > > hitting
> > > the wall.
> > > I was able to get Open MPI to run (did not try trunk yet - but nightly
> > > builds worked).
> > >
> > > The problem is that all phases seem to go through (including Test RUN and
> > > Reported) but nothing happens.
> > > I have attached the stripped down ini file (only with mpich-mx stuff)
> > > and output of the command
> > >
> > >./client/mtt --scratch /home/pjesa/mtt/scratch2 \
> > >--file
> > > /home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_mpich-mx.ini
> > > \
> > >--debug --verbose --print-time --no-section 'skampi imb osu'
> > >
> > > I think that it must be something stupid because the almost same script
> > > which downloads nightly open mpi build works.
> > >
> > > Thanks!
> > > Jelena
> > >  --
> > > Jelena Pjesivac-Grbovic -- Pjesa, Ph.D.
> > > Graduate Research Assistant
> > > Innovative Computing Laboratory
> > > Computer Science Department, UTK
> > > Claxton Complex 350
> > > (865) 974 - 6722 (865) 974 - 6321
> > > jpjes...@utk.edu
> > >
> > > "The only difference between a problem and a solution is that
> > >  people understand the solution."
> > >   -- Charles Kettering
> >
> > Content-Description: simplified ini file
> > > #==
> > > # Generic OMPI core performance testing template configuration
> > > #==
> > >
> > > [MTT]
> > > # Leave this string so that we can identify this data subset in the
> > > # database
> > > # OMPI Core: Use a "test" label until we're ready to run real results
> > > description = [testbake]
> > > #description = [2007 collective performance bakeoff]
> > > # OMPI Core: Use the "trial" flag until we're ready to run real results
> > > trial = 1
> > >
> > > # Put other values here as relevant to your environment.
> > > #hostfile = PBS_NODEFILE
> > > hostfile = /home/pjesa/mtt/runs/machinefile
> > >
> > > #--
> > >
> > > [Lock]
> > > # Put values here as relevant to your environment.
> > >
> > > #==
> > > # MPI get phase
> > > #==
> > >
> > > [MPI get: MPICH-MX]
> > > mpi_details = MPICH-MX
> > >
> > > module = AlreadyInstalled
> > > alreadyinstalled_dir = /usr/local/mpich-mx/bin
> > > alreadyinstalled_version = 1.2.7
> > >
> > > #module = Download
> > > #download_url = 
> > > http://www.myri.com/ftp/pub/MPICH-MX/mpich-mx_1.2.7..5.tar.gz
> > > ## You need to obtain the username and password from Myricom
> > > #download_username = 
> > > #download_password = 
> > >
> > > #==
> > > # Install MPI phase
> > > #==
> > >
> > > #--
> > > [MPI install: MPICH-MX]
> > > mpi_get = mpich-mx
> > > module = MPICH2
> > > save_stdout_on_success = 1
> > > merge_stdout_stderr = 0
> > >
> > > #==
> > > # MPI run details
> > > #==
> > >
> > > [MPI Details: MPICH-MX]
> > >
> > > # You may need different hostfiles for running by slot/by node.
> > > exec = mpirun --machinefile () -np _np() _executable() 
> > > _argv()
> > > network = mx
> > >
> > >
> > > 

Re: [MTT users] Problems running MTT with already installed MPICH-MX

2007-09-28 Thread Jelena Pjesivac-Grbovic


 Yes, I tried Analyze::MPICH and that complained that it does not exist.
 Will try the other ones and see if anything happens.

 Thanks,
 Jelena


On Fri, 28 Sep 2007, Tim Mattox wrote:


This might also be a problem:

[MPI install: MPICH-MX]
mpi_get = mpich-mx
module = MPICH2


As far as I can tell, the install module should be something like this:
module = Analyze::MPICH

But there isn't an MPICH module for that yet...  there are ones for OMPI,
CrayMPI, HPMPI and IntelMPI.

The Analyze::OMPI module won't work, since it will try to use ompi_info...
so you can lie and say it's one of the others, or...  I'm trying my own 
hacked

Analyze::MPICH module to see if I understand how this works.

On 9/28/07, Ethan Mallove  wrote:

Hi Jelena,

Change this line:

  alreadyinstalled_dir = /usr/local/mpich-mx/bin

To this:

  alreadyinstalled_dir = /usr/local/mpich-mx

Does that work?

-Ethan


On Thu, Sep/27/2007 10:46:55PM, Jelena Pjesivac-Grbovic wrote:

Hello,

I am trying to test the MPICH-MX using MTT on our clusters and I am 
hitting

the wall.
I was able to get Open MPI to run (did not try trunk yet - but nightly
builds worked).

The problem is that all phases seem to go through (including Test RUN and
Reported) but nothing happens.
I have attached the stripped down ini file (only with mpich-mx stuff)
and output of the command

   ./client/mtt --scratch /home/pjesa/mtt/scratch2 \
   --file
/home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_mpich-mx.ini
\
   --debug --verbose --print-time --no-section 'skampi imb osu'

I think that it must be something stupid because the almost same script
which downloads nightly open mpi build works.

Thanks!
Jelena
 --
Jelena Pjesivac-Grbovic -- Pjesa, Ph.D.
Graduate Research Assistant
Innovative Computing Laboratory
Computer Science Department, UTK
Claxton Complex 350
(865) 974 - 6722 (865) 974 - 6321
jpjes...@utk.edu

"The only difference between a problem and a solution is that
 people understand the solution."
  -- Charles Kettering


Content-Description: simplified ini file

#==
# Generic OMPI core performance testing template configuration
#==

[MTT]
# Leave this string so that we can identify this data subset in the
# database
# OMPI Core: Use a "test" label until we're ready to run real results
description = [testbake]
#description = [2007 collective performance bakeoff]
# OMPI Core: Use the "trial" flag until we're ready to run real results
trial = 1

# Put other values here as relevant to your environment.
#hostfile = PBS_NODEFILE
hostfile = /home/pjesa/mtt/runs/machinefile

#--

[Lock]
# Put values here as relevant to your environment.

#==
# MPI get phase
#==

[MPI get: MPICH-MX]
mpi_details = MPICH-MX

module = AlreadyInstalled
alreadyinstalled_dir = /usr/local/mpich-mx/bin
alreadyinstalled_version = 1.2.7

#module = Download
#download_url = 
http://www.myri.com/ftp/pub/MPICH-MX/mpich-mx_1.2.7..5.tar.gz

## You need to obtain the username and password from Myricom
#download_username = 
#download_password = 

#==
# Install MPI phase
#==

#--
[MPI install: MPICH-MX]
mpi_get = mpich-mx
module = MPICH2
save_stdout_on_success = 1
merge_stdout_stderr = 0

#==
# MPI run details
#==

[MPI Details: MPICH-MX]

# You may need different hostfiles for running by slot/by node.
exec = mpirun --machinefile () -np _np() _executable() 
_argv()

network = mx


#==
# Test get phase
#==

[Test get: netpipe]
module = Download
download_url = 
http://www.scl.ameslab.gov/netpipe/code/NetPIPE_3.6.2.tar.gz


#--

[Test get: osu]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/osu

#--

[Test get: imb]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/IMB_2.3

#--

[Test get: skampi]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/skampi-5.0.1

#==
# Test build phase

Re: [MTT users] Problems running MTT with already installed MPICH-MX

2007-09-28 Thread Tim Mattox
This might also be a problem:
> [MPI install: MPICH-MX]
> mpi_get = mpich-mx
> module = MPICH2

As far as I can tell, the install module should be something like this:
module = Analyze::MPICH

But there isn't an MPICH module for that yet...  there are ones for OMPI,
CrayMPI, HPMPI and IntelMPI.

The Analyze::OMPI module won't work, since it will try to use ompi_info...
so you can lie and say it's one of the others, or...  I'm trying my own hacked
Analyze::MPICH module to see if I understand how this works.

On 9/28/07, Ethan Mallove  wrote:
> Hi Jelena,
>
> Change this line:
>
>   alreadyinstalled_dir = /usr/local/mpich-mx/bin
>
> To this:
>
>   alreadyinstalled_dir = /usr/local/mpich-mx
>
> Does that work?
>
> -Ethan
>
>
> On Thu, Sep/27/2007 10:46:55PM, Jelena Pjesivac-Grbovic wrote:
> > Hello,
> >
> > I am trying to test the MPICH-MX using MTT on our clusters and I am hitting
> > the wall.
> > I was able to get Open MPI to run (did not try trunk yet - but nightly
> > builds worked).
> >
> > The problem is that all phases seem to go through (including Test RUN and
> > Reported) but nothing happens.
> > I have attached the stripped down ini file (only with mpich-mx stuff)
> > and output of the command
> >
> >./client/mtt --scratch /home/pjesa/mtt/scratch2 \
> >--file
> > /home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_mpich-mx.ini
> > \
> >--debug --verbose --print-time --no-section 'skampi imb osu'
> >
> > I think that it must be something stupid because the almost same script
> > which downloads nightly open mpi build works.
> >
> > Thanks!
> > Jelena
> >  --
> > Jelena Pjesivac-Grbovic -- Pjesa, Ph.D.
> > Graduate Research Assistant
> > Innovative Computing Laboratory
> > Computer Science Department, UTK
> > Claxton Complex 350
> > (865) 974 - 6722 (865) 974 - 6321
> > jpjes...@utk.edu
> >
> > "The only difference between a problem and a solution is that
> >  people understand the solution."
> >   -- Charles Kettering
>
> Content-Description: simplified ini file
> > #==
> > # Generic OMPI core performance testing template configuration
> > #==
> >
> > [MTT]
> > # Leave this string so that we can identify this data subset in the
> > # database
> > # OMPI Core: Use a "test" label until we're ready to run real results
> > description = [testbake]
> > #description = [2007 collective performance bakeoff]
> > # OMPI Core: Use the "trial" flag until we're ready to run real results
> > trial = 1
> >
> > # Put other values here as relevant to your environment.
> > #hostfile = PBS_NODEFILE
> > hostfile = /home/pjesa/mtt/runs/machinefile
> >
> > #--
> >
> > [Lock]
> > # Put values here as relevant to your environment.
> >
> > #==
> > # MPI get phase
> > #==
> >
> > [MPI get: MPICH-MX]
> > mpi_details = MPICH-MX
> >
> > module = AlreadyInstalled
> > alreadyinstalled_dir = /usr/local/mpich-mx/bin
> > alreadyinstalled_version = 1.2.7
> >
> > #module = Download
> > #download_url = 
> > http://www.myri.com/ftp/pub/MPICH-MX/mpich-mx_1.2.7..5.tar.gz
> > ## You need to obtain the username and password from Myricom
> > #download_username = 
> > #download_password = 
> >
> > #==
> > # Install MPI phase
> > #==
> >
> > #--
> > [MPI install: MPICH-MX]
> > mpi_get = mpich-mx
> > module = MPICH2
> > save_stdout_on_success = 1
> > merge_stdout_stderr = 0
> >
> > #==
> > # MPI run details
> > #==
> >
> > [MPI Details: MPICH-MX]
> >
> > # You may need different hostfiles for running by slot/by node.
> > exec = mpirun --machinefile () -np _np() _executable() 
> > _argv()
> > network = mx
> >
> >
> > #==
> > # Test get phase
> > #==
> >
> > [Test get: netpipe]
> > module = Download
> > download_url = http://www.scl.ameslab.gov/netpipe/code/NetPIPE_3.6.2.tar.gz
> >
> > #--
> >
> > [Test get: osu]
> > module = SVN
> > svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/osu
> >
> > #--
> >
> > [Test get: imb]
> > module = SVN
> > svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/IMB_2.3
> >
> > 

Re: [MTT users] Problems running MTT with already installed MPICH-MX

2007-09-28 Thread Jelena Pjesivac-Grbovic
Disregard the previous email... mpich-mx works well, but MTT is still
not running tests with it :(



Thanks,
Jelena

On Fri, 28 Sep 2007, Ethan Mallove wrote:


Hi Jelena,

Change this line:

 alreadyinstalled_dir = /usr/local/mpich-mx/bin

To this:

 alreadyinstalled_dir = /usr/local/mpich-mx

Does that work?

-Ethan


On Thu, Sep/27/2007 10:46:55PM, Jelena Pjesivac-Grbovic wrote:

Hello,

I am trying to test the MPICH-MX using MTT on our clusters and I am hitting
the wall.
I was able to get Open MPI to run (did not try trunk yet - but nightly
builds worked).

The problem is that all phases seem to go through (including Test RUN and
Reported) but nothing happens.
I have attached the stripped down ini file (only with mpich-mx stuff)
and output of the command

   ./client/mtt --scratch /home/pjesa/mtt/scratch2 \
   --file
/home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_mpich-mx.ini
\
   --debug --verbose --print-time --no-section 'skampi imb osu'

I think that it must be something stupid because the almost same script
which downloads nightly open mpi build works.

Thanks!
Jelena
 --
Jelena Pjesivac-Grbovic -- Pjesa, Ph.D.
Graduate Research Assistant
Innovative Computing Laboratory
Computer Science Department, UTK
Claxton Complex 350
(865) 974 - 6722 (865) 974 - 6321
jpjes...@utk.edu

"The only difference between a problem and a solution is that
 people understand the solution."
  -- Charles Kettering


Content-Description: simplified ini file

#==
# Generic OMPI core performance testing template configuration
#==

[MTT]
# Leave this string so that we can identify this data subset in the
# database
# OMPI Core: Use a "test" label until we're ready to run real results
description = [testbake]
#description = [2007 collective performance bakeoff]
# OMPI Core: Use the "trial" flag until we're ready to run real results
trial = 1

# Put other values here as relevant to your environment.
#hostfile = PBS_NODEFILE
hostfile = /home/pjesa/mtt/runs/machinefile

#--

[Lock]
# Put values here as relevant to your environment.

#==
# MPI get phase
#==

[MPI get: MPICH-MX]
mpi_details = MPICH-MX

module = AlreadyInstalled
alreadyinstalled_dir = /usr/local/mpich-mx/bin
alreadyinstalled_version = 1.2.7

#module = Download
#download_url = http://www.myri.com/ftp/pub/MPICH-MX/mpich-mx_1.2.7..5.tar.gz
## You need to obtain the username and password from Myricom
#download_username = 
#download_password = 

#==
# Install MPI phase
#==

#--
[MPI install: MPICH-MX]
mpi_get = mpich-mx
module = MPICH2
save_stdout_on_success = 1
merge_stdout_stderr = 0

#==
# MPI run details
#==

[MPI Details: MPICH-MX]

# You may need different hostfiles for running by slot/by node.
exec = mpirun --machinefile () -np _np() _executable() 
_argv()
network = mx


#==
# Test get phase
#==

[Test get: netpipe]
module = Download
download_url = http://www.scl.ameslab.gov/netpipe/code/NetPIPE_3.6.2.tar.gz

#--

[Test get: osu]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/osu

#--

[Test get: imb]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/IMB_2.3

#--

[Test get: skampi]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/skampi-5.0.1

#==
# Test build phase
#==

[Test build: netpipe]
test_get = netpipe
save_stdout_on_success = 1
merge_stdout_stderr = 1
stderr_save_lines = 100

module = Shell
shell_build_command = 

Re: [OMPI users] aclocal.m4 booboo?

2007-09-28 Thread Brian Barrett

On Sep 27, 2007, at 6:44 PM, Mostyn Lewis wrote:


Today's SVN.

A generated configure has this in it:


I'm not able to replicate this using an SVN checkout of the trunk --
you might want to make sure you have a proper install of all the
autotools.  If you are using another branch from SVN, you cannot use
recent CVS copies of Libtool; you'll have to use the same version
specified here:


http://www.open-mpi.org/svn/building.php

Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/




Re: [OMPI users] Run MPI-based program without mpirun

2007-09-28 Thread Rainer Keller
Dear Yura,
On Friday 28 September 2007 15:44, Yu. Vishnevsky wrote:
> Is it possible to somehow build an MPI-based program with Open MPI in order to
> run it later on another SMP computer without having Open MPI installed?
>
> Such a possibility existed in MPICH1 (-p4pg), but it would be better for me to
> use Open MPI.
Sure: Build everything statically, both the library (--disable-shared
--enable-static) and the application (e.g. with gcc -static, with icc -static,
or -i-static (v9.1) / -static-intel (v10)).
Then move the binary of your application and the ompi-tools to another (SMP)
machine which does not have the usual installation of ompi-tools/orte
available.
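A minimal sketch of that recipe with gcc (the prefix, file names, and the
exact set of tools to copy are placeholders, not a tested recipe):

    # Build Open MPI statically into a separate prefix
    ./configure --prefix=$HOME/ompi-static --disable-shared --enable-static
    make all install
    # Build the application statically against it
    $HOME/ompi-static/bin/mpicc -static -o myapp myapp.c
    # Copy myapp plus the ompi-tools (mpirun, orted, ...) to the target machine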

However, be aware that in case of error messages the help files are not
available, so the output probably will not be very meaningful...

Also, depending on the interconnect that is configured and installed, you may
need to have certain system/interconnect libraries (such as for IB) available.

Hope this helps.

With best regards,
Rainer
-- 

Dipl.-Inf. Rainer Keller   http://www.hlrs.de/people/keller
 HLRS  Tel: ++49 (0)711-685 6 5858
 Nobelstrasse 19  Fax: ++49 (0)711-685 6 5832
 70550 Stuttgartemail: kel...@hlrs.de 
 Germany AIM/Skype:rusraink

"Emails save time, not printing them saves trees!"


Re: [OMPI users] Open MPI on 64 bits intel Mac OS X

2007-09-28 Thread Brian Barrett


On Sep 28, 2007, at 4:56 AM, Massimo Cafaro wrote:


Dear all,

when I try to compile my MPI code on 64-bit Intel Mac OS X, the
build fails since the Open MPI library has been compiled for 32
bits. Can you please provide, in the next version, the ability to
choose between 32 and 64 bits at configure time, or even better,
compile both modes by default?


To reproduce the problem, simply compile an MPI application on 64-bit
Intel Mac OS X using mpicc -arch x86_64. The 64-bit linker
complains as follows:


ld64 warning: in /usr/local/mpi/lib/libmpi.dylib, file is not of  
required architecture
ld64 warning: in /usr/local/mpi/lib/libopen-rte.dylib, file is not  
of required architecture
ld64 warning: in /usr/local/mpi/lib/libopen-pal.dylib, file is not  
of required architecture


and a number of undefined symbols are shown, one for each MPI
function used in the application.


This is already possible.  Simply use the configure options:

  ./configure ... CFLAGS="-arch x86_64" CXXFLAGS="-arch x86_64" OBJCFLAGS="-arch x86_64"


Also set FFLAGS and FCFLAGS to "-m64" if you have the gfortran/g95
compiler installed.  The common installs of either don't speak the
-arch option, so you have to use the more traditional -m64.
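Putting the two suggestions together (keeping the "..." as a placeholder
for whatever other configure arguments are in use):

  ./configure ... CFLAGS="-arch x86_64" CXXFLAGS="-arch x86_64" \
      OBJCFLAGS="-arch x86_64" FFLAGS="-m64" FCFLAGS="-m64"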


Hope this helps,

Brian


[OMPI users] OpenMPI Giving problems when using -mca btl mx, sm, self

2007-09-28 Thread Hammad Siddiqi


Hello,

I am using Sun HPC Toolkit 7.0 to compile and run my C MPI programs.

I have tested the Myrinet installation using Myricom's own test programs.
The Myricom software stack I am using is MX, the version is
mx2g-1.1.7, and mx_mapper is also used.
We have 4 nodes with 8 dual-core processors each (Sun Fire V890), and
the operating system is
Solaris 10 (SunOS indus1 5.10 Generic_125100-10 sun4u sparc
SUNW,Sun-Fire-V890).


The contents of machine file are:
indus1
indus2
indus3
indus4

The output of *mx_info* on each node is given below

==
*indus1*
==

MX Version: 1.1.7rc3cvs1_1_fixes
MX Build: @indus4:/opt/mx2g-1.1.7rc3 Thu May 31 11:36:59 PKT 2007
2 Myrinet boards installed.
The MX driver is configured to support up to 4 instances and 1024 nodes.
===
Instance #0:  333.2 MHz LANai, 66.7 MHz PCI bus, 2 MB SRAM
   Status: Running, P0: Link up
   MAC Address:00:60:dd:47:ad:7c
   Product code:   M3F-PCIXF-2
   Part number:09-03392
   Serial number:  297218
   Mapper: 00:60:dd:47:b3:e8, version = 0x7677b8ba, configured
   Mapped hosts:   10


                                      ROUTE COUNT
INDEX    MAC ADDRESS          HOST NAME        P0
-----    -----------          ---------        --

  0) 00:60:dd:47:ad:7c indus1:0  1,1
  2) 00:60:dd:47:ad:68 indus4:0  8,3
  3) 00:60:dd:47:b3:e8 indus4:1  7,3
  4) 00:60:dd:47:b3:ab indus2:0  7,3
  5) 00:60:dd:47:ad:66 indus3:0  8,3
  6) 00:60:dd:47:ad:76 indus3:1  8,3
  7) 00:60:dd:47:ad:77 jhelum1:0 8,3
  8) 00:60:dd:47:b3:5a ravi2:0   8,3
  9) 00:60:dd:47:ad:5f ravi2:1   1,1
 10) 00:60:dd:47:b3:bf ravi1:0   8,3
===

==
*indus2*
==

MX Version: 1.1.7rc3cvs1_1_fixes
MX Build: @indus2:/opt/mx2g-1.1.7rc3 Thu May 31 11:24:03 PKT 2007
2 Myrinet boards installed.
The MX driver is configured to support up to 4 instances and 1024 nodes.
===
Instance #0:  333.2 MHz LANai, 66.7 MHz PCI bus, 2 MB SRAM
   Status: Running, P0: Link up
   MAC Address:00:60:dd:47:b3:ab
   Product code:   M3F-PCIXF-2
   Part number:09-03392
   Serial number:  296636
   Mapper: 00:60:dd:47:b3:e8, version = 0x7677b8ba, configured
   Mapped hosts:   10

                                      ROUTE COUNT
INDEX    MAC ADDRESS          HOST NAME        P0
-----    -----------          ---------        --
  0) 00:60:dd:47:b3:ab indus2:0  1,1
  2) 00:60:dd:47:ad:68 indus4:0  1,1
  3) 00:60:dd:47:b3:e8 indus4:1  8,3
  4) 00:60:dd:47:ad:66 indus3:0  1,1
  5) 00:60:dd:47:ad:76 indus3:1  7,3
  6) 00:60:dd:47:ad:77 jhelum1:0 7,3
  8) 00:60:dd:47:ad:7c indus1:0  8,3
  9) 00:60:dd:47:b3:5a ravi2:0   8,3
 10) 00:60:dd:47:ad:5f ravi2:1   8,3
 11) 00:60:dd:47:b3:bf ravi1:0   7,3
===
Instance #1:  333.2 MHz LANai, 66.7 MHz PCI bus, 2 MB SRAM
   Status: Running, P0: Link down
   MAC Address:00:60:dd:47:b3:c3
   Product code:   M3F-PCIXF-2
   Part number:09-03392
   Serial number:  296612
   Mapper: 00:60:dd:47:b3:e8, version = 0x7677b8ba, configured
   Mapped hosts:   10

==
*indus3*
==
MX Version: 1.1.7rc3cvs1_1_fixes
MX Build: @indus3:/opt/mx2g-1.1.7rc3 Thu May 31 11:29:03 PKT 2007
2 Myrinet boards installed.
The MX driver is configured to support up to 4 instances and 1024 nodes.
===
Instance #0:  333.2 MHz LANai, 66.7 MHz PCI bus, 2 MB SRAM
   Status: Running, P0: Link up
   MAC Address:00:60:dd:47:ad:66
   Product code:   M3F-PCIXF-2
   Part number:09-03392
   Serial number:  297240
   Mapper: 00:60:dd:47:b3:e8, version = 0x7677b8ba, configured
   Mapped hosts:   10

                                      ROUTE COUNT
INDEX    MAC ADDRESS          HOST NAME        P0
-----    -----------          ---------        --
  0) 00:60:dd:47:ad:66 indus3:0  1,1
  1)