Re: [OMPI users] OpenMPI on OS X - file is not of required architecture

2009-09-14 Thread Warner Yuen
With IFORT 10.x the compiler defaults to 64-bit at install time, but
there's a script that can be run to switch the compiler to 32-bit mode.


You may want to double-check that it's in 64-bit mode by
executing "ifort -V".



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On Sep 12, 2009, at 4:37 AM, users-requ...@open-mpi.org wrote:


Message: 4
Date: Fri, 11 Sep 2009 15:35:34 -0700
From: Doug Reeder <d...@rain.org>
Subject: Re: [OMPI users] OpenMPI on OS X - file is not of required
architecture
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <3fa2d621-d5c7-4579-94b7-8a0bfa040...@rain.org>
Content-Type: text/plain; charset="us-ascii"; Format="flowed";
DelSp="yes"

Andreas,

Have you checked that ifort is creating 64-bit objects? If I remember
correctly, with 10.1 the default was to create 32-bit objects.

Doug Reeder




Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Warner Yuen
I also had the same problem with IFORT and ICC with Open MPI 1.3.3 on Mac
OS X v10.6. However, I was able to use 10.6 Server successfully with
IFORT 11.1.058 and GCC.



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On Sep 8, 2009, at 7:11 AM, users-requ...@open-mpi.org wrote:


Message: 7
Date: Tue, 8 Sep 2009 07:10:50 -0700
From: Marcus Herrmann <marcus.herrm...@asu.edu>
Subject: Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel
Compilers 11.1.058 => Segmentation fault
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <52515acc-6e3e-4bea-ae91-89b587a7c...@asu.edu>
Content-Type: text/plain;   charset=us-ascii;   format=flowed;  
delsp=yes

Christophe,
the 11.1.058 compilers are not (yet) compatible with Snow Leopard. See
the Intel compiler forums. The GNU compilers, however, work.

Marcus




Re: [OMPI users] openmpi with xgrid

2009-08-14 Thread Warner Yuen

Hi Alan,

Xgrid support for Open MPI is currently broken in the latest version  
of Open MPI. See the ticket below. However, I believe that Xgrid still  
works with one of the earlier 1.2  versions of Open MPI. I don't  
recall for sure, but I think that it's Open MPI 1.2.3.
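If you are unsure what you have installed, ompi_info will show both the version and whether the Xgrid launcher was built (a hedged sketch; in the 1.2 series the launcher shows up as a "pls" component):

ompi_info | head -5          # prints the Open MPI / ORTE version numbers
ompi_info | grep -i xgrid    # an "MCA pls: xgrid ..." line means Xgrid support was built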


#1777: Xgrid support is broken in the v1.3 series
---------------------------+--------------------------------
 Reporter:  jsquyres       |      Owner:  brbarret
     Type:  defect         |     Status:  accepted
 Priority:  major          |  Milestone:  Open MPI 1.3.4
  Version:  trunk          | Resolution:
 Keywords:                 |
---------------------------+--------------------------------

Changes (by bbenton):

 * milestone:  Open MPI 1.3.3 => Open MPI 1.3.4


Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On Aug 14, 2009, at 6:21 AM, users-requ...@open-mpi.org wrote:



Message: 1
Date: Fri, 14 Aug 2009 14:21:30 +0100
From: Alan <alanwil...@gmail.com>
Subject: [OMPI users] openmpi with xgrid
To: us...@open-mpi.org
Message-ID:
<cf58c8d00908140621v18d384f2wef97ee80ca3de...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi there,

I saw that http://www.open-mpi.org/community/lists/users/2007/08/3900.php.

I use fink, and so I changed the openmpi.info file in order to get openmpi with xgrid support.

As you can see:
amadeus[2081]:~/Downloads% /sw/bin/ompi_info
Package: Open MPI root@amadeus.local Distribution
   Open MPI: 1.3.3
  Open MPI SVN revision: r21666
  Open MPI release date: Jul 14, 2009
   Open RTE: 1.3.3
  Open RTE SVN revision: r21666
  Open RTE release date: Jul 14, 2009
   OPAL: 1.3.3
  OPAL SVN revision: r21666
  OPAL release date: Jul 14, 2009
   Ident string: 1.3.3
 Prefix: /sw
Configured architecture: x86_64-apple-darwin9
 Configure host: amadeus.local
  Configured by: root
  Configured on: Fri Aug 14 12:58:12 BST 2009
 Configure host: amadeus.local
   Built by:
   Built on: Fri Aug 14 13:07:46 BST 2009
 Built host: amadeus.local
 C bindings: yes
   C++ bindings: yes
 Fortran77 bindings: yes (single underscore)
 Fortran90 bindings: yes
Fortran90 bindings size: small
 C compiler: gcc
C compiler absolute: /sw/var/lib/fink/path-prefix-10.6/gcc
   C++ compiler: g++
  C++ compiler absolute: /sw/var/lib/fink/path-prefix-10.6/g++
 Fortran77 compiler: gfortran
 Fortran77 compiler abs: /sw/bin/gfortran
 Fortran90 compiler: gfortran
 Fortran90 compiler abs: /sw/bin/gfortran
C profiling: yes
  C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
 C++ exceptions: no
 Thread support: posix (mpi: no, progress: no)
  Sparse Groups: no
 Internal debug support: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
  Heterogeneous support: no
mpirun default --prefix: no
MPI I/O support: yes
  MPI_WTIME support: gettimeofday
Symbol visibility support: yes
  FT Checkpoint support: no  (checkpoint thread: no)
  MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.3.3)
  MCA paffinity: darwin (MCA v2.0, API v2.0, Component v1.3.3)
  MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.3.3)
  MCA carto: file (MCA v2.0, API v2.0, Component v1.3.3)
  MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.3.3)
  MCA timer: darwin (MCA v2.0, API v2.0, Component v1.3.3)
MCA installdirs: env (MCA v2.0, API v2.0, Component v1.3.3)
MCA installdirs: config (MCA v2.0, API v2.0, Component v1.3.3)
MCA dpm: orte (MCA v2.0, API v2.0, Component v1.3.3)
 MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.3.3)
  MCA allocator: basic (MCA v2.0, API v2.0, Component v1.3.3)
  MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: basic (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: inter (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: self (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: sm (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: sync (MCA v2.0, API v2.0, Component v1.3.3)
   MCA coll: tuned (MCA v2.0, API v2.0, Component v1.3.3)
 MCA io: romio (MCA v2.0, API v2.0, Component v1.3.3)
  MCA mpool: fake (MCA v2.0, API v2.0, Component v1.3.3)
  MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.3.3)
  MCA mpool: sm (MCA v2.0, API v2.0, Component v1.3.3)
MCA pml: cm (MCA v2.0, A

[OMPI users] Xgrid and choosing agents...

2009-07-13 Thread Warner Yuen

Hi Jody,

Have you tried turning off Hyper-Threading with the Processor  
Preference Pane?


The Processor preference pane is included in the CHUD package installed with the developer tools. It lives in /Developer/Extras/PreferencePanes; launch it once and it will get added to System Preferences.
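Once installed, the pane can also be opened straight from the shell (a sketch; the exact .prefPane file name inside that directory is an assumption):

open "/Developer/Extras/PreferencePanes/Processor.prefPane"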



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com



On Jul 11, 2009, at 9:00 AM, users-requ...@open-mpi.org wrote:


--

Message: 3
Date: Sat, 11 Jul 2009 07:56:08 -0700
From: Klymak Jody <jkly...@uvic.ca>
Subject: [OMPI users] Xgrid and choosing agents...
To: us...@open-mpi.org
Message-ID: <a6282054-7bcc-4261-9822-ad080b5a6...@uvic.ca>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Hi all,

Sorry in advance if these are naive questions - I'm not experienced in
running a grid...

I'm using openMPI on 4 dual quad-core Xeon Xserves. The 8 cores mimic
16 cores and show up in xgrid as each agent having 16 processors.
However, the processing speed goes down as the number of processors used
exceeds 8, so if possible I'd prefer not to have more than 8 processors
working on each machine at a time.

Unfortunately, if I submit a 16-processor job to xgrid it all goes to
"xserve03". Or even worse, it does so if I submit two separate
8-processor jobs. Is there any way to steer jobs to less-busy agents?

I tried making a hostfile and then specifying the host, but I get:

/usr/local/openmpi/bin/mpirun -n 8 --hostfile hostfile --host xserve01.local ../build/mitgcmuv

Some of the requested hosts are not included in the current allocation for the application:
  ../build/mitgcmuv
The requested hosts were:
  xserve01.local

so I assume --host doesn't work with xgrid?

Is a reasonable alternative to simply not use xgrid and rely on ssh?
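A minimal sketch of that ssh/hostfile route, capping each machine at 8 slots (only xserve01 and xserve03 are named in this thread; the other host names, and the 16-rank run, are assumptions):

cat > hostfile <<EOF
xserve01.local slots=8
xserve02.local slots=8
xserve03.local slots=8
xserve04.local slots=8
EOF
/usr/local/openmpi/bin/mpirun --hostfile hostfile -n 16 ../build/mitgcmuv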

Thanks,  Jody

--
Jody Klymak
http://web.uvic.ca/~jklymak



--





[OMPI users] How do I compile OpenMPI in Xcode 3.1

2009-05-04 Thread Warner Yuen

Admittedly, I don't use Xcode to build Open MPI either.

You can just compile Open MPI from the command line and install
everything in /usr/local/. Make sure that gfortran is in your path
and you should just be able to do a './configure --prefix=/usr/local'.


After the installation, just make sure that your path is set correctly  
when you go to use the newly installed Open MPI. If you don't set your  
path, it will always default to using the version of OpenMPI that  
ships with Leopard.
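A condensed sketch of that command-line build (the gfortran location and make flags here are assumptions, not part of the original instructions):

export PATH=/usr/local/gfortran/bin:$PATH    # wherever your gfortran lives
./configure --prefix=/usr/local
make -j4
sudo make install
export PATH=/usr/local/bin:$PATH             # pick up the new install ahead of Leopard's copy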



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On May 4, 2009, at 9:13 AM, users-requ...@open-mpi.org wrote:




Today's Topics:

  1. Re: How do I compile OpenMPI in Xcode 3.1 (Vicente Puig)


--

Message: 1
Date: Mon, 4 May 2009 18:13:45 +0200
From: Vicente Puig <vpui...@gmail.com>
Subject: Re: [OMPI users] How do I compile OpenMPI in Xcode 3.1
To: Open MPI Users <us...@open-mpi.org>
Message-ID:
<3e9a21680905040913u3f36d3c9rdcd3413bfdcd...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

If I can not make it work with Xcode, which one could I use? Which one do
you use to compile and debug Open MPI?
Thanks

Vincent


2009/5/4 Jeff Squyres <jsquy...@cisco.com>

Open MPI comes pre-installed in Leopard; as Warner noted, since Leopard
doesn't ship with a Fortran compiler, the Open MPI that Apple ships has
non-functional mpif77 and mpif90 wrapper compilers.

So the Open MPI that you installed manually will use your Fortran
compilers, and therefore will have functional mpif77 and mpif90 wrapper
compilers.  Hence, you probably need to be sure to use the "right" wrapper
compilers.  It looks like you specified the full path to ExecPath,
so I'm not sure why Xcode wouldn't work with that (like I mentioned, I
unfortunately don't use Xcode myself, so I don't know why that wouldn't
work).
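A quick hedged check of which wrapper the shell (and hence Xcode) is actually picking up, and what it invokes underneath:

which mpif90        # should point at the manually installed copy, e.g. /usr/local/bin/mpif90
mpif90 --showme     # Open MPI wrappers print the underlying compiler and flags they will use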




On May 4, 2009, at 11:53 AM, Vicente wrote:

Yes, I already have the gfortran compiler in /usr/local/bin, the same path
as my mpif90 compiler. But I've seen that when I use the mpif90 in /usr/bin
and in /Developer/usr/bin, it says:

"Unfortunately, this installation of Open MPI was not compiled with
Fortran 90 support.  As such, the mpif90 compiler is non-functional."



That should be the problem; I will have to change the path to use the
gfortran I have installed.
How could I do it? (Sorry, I am a beginner.)

Thanks.


On 04/05/2009, at 17:38, Warner Yuen wrote:

Have you installed a Fortran compiler? Mac OS X's developer tools  
do

not come with a Fortran compiler, so you'll need to install one if
you haven't already done so. I routinely use the Intel IFORT
compilers with success. However, I hear many good things about the
gfortran compilers on Mac OS X, you can't beat the price of  
gfortran!



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On May 4, 2009, at 7:28 AM, users-requ...@open-mpi.org wrote:




Today's Topics:

1. How do I compile OpenMPI in Xcode 3.1 (Vicente)
2. Re: 1.3.1 -rf rankfile behaviour ?? (Ralph Castain)


--

Message: 1
Date: Mon, 4 May 2009 16:12:44 +0200
From: Vicente <vpui...@gmail.com>
Subject: [OMPI users] How do I compile OpenMPI in Xcode 3.1
To: us...@open-mpi.org
Message-ID: <1c2c0085-940f-43bb-910f-975871ae2...@gmail.com>
Content-Type: text/plain; charset="windows-1252"; Format="flowed";
DelSp="yes"

Hi, I've seen the FAQ "How do I use Open MPI wrapper compilers in
Xcode", but it's only for MPICC. I am using MPIF90, so I did the
same,
but changing MPICC for MPIF90, and also the path, but it did not
work.

Building target “fortran” of project “fortran” with configuration “Debug”


Checking Dependencies
Invalid value 'MPIF90' for GCC_VERSION


The file

[OMPI users] How do I compile OpenMPI in Xcode 3.1

2009-05-04 Thread Warner Yuen
Have you installed a Fortran compiler? Mac OS X's developer tools do
not come with a Fortran compiler, so you'll need to install one if you
haven't already done so. I routinely use the Intel IFORT compilers
with success. However, I hear many good things about the gfortran
compilers on Mac OS X; you can't beat the price of gfortran!



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On May 4, 2009, at 7:28 AM, users-requ...@open-mpi.org wrote:




Today's Topics:

  1. How do I compile OpenMPI in Xcode 3.1 (Vicente)
  2. Re: 1.3.1 -rf rankfile behaviour ?? (Ralph Castain)


--

Message: 1
Date: Mon, 4 May 2009 16:12:44 +0200
From: Vicente <vpui...@gmail.com>
Subject: [OMPI users] How do I compile OpenMPI in Xcode 3.1
To: us...@open-mpi.org
Message-ID: <1c2c0085-940f-43bb-910f-975871ae2...@gmail.com>
Content-Type: text/plain; charset="windows-1252"; Format="flowed";
DelSp="yes"

Hi, I've seen the FAQ "How do I use Open MPI wrapper compilers in
Xcode", but it's only for MPICC. I am using MPIF90, so I did the same,
but changing MPICC for MPIF90, and also the path, but it did not work.

Building target “fortran” of project “fortran” with configuration “Debug”


Checking Dependencies
Invalid value 'MPIF90' for GCC_VERSION


The file "MPIF90.cpcompspec" looks like this:

/**
    Xcode Compiler Specification for MPIF90

*/

{   Type = Compiler;
    Identifier = com.apple.compilers.mpif90;
    BasedOn = com.apple.compilers.gcc.4_0;
    Name = "MPIF90";
    Version = "Default";
    Description = "MPI GNU C/C++ Compiler 4.0";
    ExecPath = "/usr/local/bin/mpif90";   // This gets converted to the g++ variant automatically
    PrecompStyle = pch;
}

and is located in "/Developer/Library/Xcode/Plug-ins"

and when I do mpif90 -v on terminal it works well:

Using built-in specs.
Target: i386-apple-darwin8.10.1
Configured with: /tmp/gfortran-20090321/ibin/../gcc/configure --prefix=/usr/local/gfortran --enable-languages=c,fortran --with-gmp=/tmp/gfortran-20090321/gfortran_libs --enable-bootstrap
Thread model: posix
gcc version 4.4.0 20090321 (experimental) [trunk revision 144983] (GCC)



Any idea??

Thanks.

Vincent

--

Message: 2
Date: Mon, 4 May 2009 08:28:26 -0600
From: Ralph Castain <r...@open-mpi.org>
Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
To: Open MPI Users <us...@open-mpi.org>
Message-ID:
<71d2d8cc0905040728h2002f4d7s4c49219eee29e...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Unfortunately, I didn't write any of that code - I was just fixing the
mapper so it would properly map the procs. From what I can tell, the
proper things are happening there.

I'll have to dig into the code that specifically deals with parsing the
results to bind the processes. Afraid that will take awhile longer -
pretty dark in that hole.


On Mon, May 4, 2009 at 8:04 AM, Geoffroy Pignot <geopig...@gmail.com> wrote:



Hi,

So, there are no more crashes with my "crazy" mpirun command. But the
paffinity feature seems to be broken. Indeed I am not able to pin my
processes.

Simple test with a program using your plpa library :

r011n006% cat hostf
r011n006 slots=4

r011n006% cat rankf
rank 0=r011n006 slot=0   > bind to CPU 0 , exact ?

r011n006% /tmp/HALMPI/openmpi-1.4a/bin/mpirun --hostfile hostf --rankfile rankf --wdir /tmp -n 1 a.out

PLPA Number of processors online: 4
PLPA Number of processor sockets: 2
PLPA Socket 0 (ID 0): 2 cores
PLPA Socket 1 (ID 3): 2 cores


Ctrl+Z
r011n006%bg

r011n006% ps axo stat,user,psr,pid,pcpu,comm | grep gpignot
R+   gpignot3  9271 97.8 a.out

In fact, whatever the slot number I put in my rankfile, a.out always runs
on CPU 3. I was looking for it on CPU 0 according to my cpuinfo file
(see below).
The result is the same if I try another syntax (rank 0=r011n006 slot=0:0
> bind to socket 0 - core 0, exact?)

Thanks in advance

Geoffroy

PS: I run on rhel5

r011n006% uname -a
Linux r011n006 2.6.18-92.1.1NOMAP32.el5 #1 SMP Sat Mar 15 01:46:39 CDT 2008 x86_64 x86_64 x86_64 GNU/Linux

My configu

[OMPI users] build failed using intel compilers on mac os

2008-10-10 Thread Warner Yuen
If using the Intel v10.1.x compilers to build a 64-bit version: with a
default installation, Intel invokes the 64-bit compiler by default. But
yes, you can use the "-m64" flag as well.
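For reference, a hedged sketch of driving a 64-bit Open MPI build with the Intel compilers and -m64 (the install prefix is just an example):

export CC=icc CXX=icpc F77=ifort FC=ifort
export CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
./configure --prefix=/usr/local/openmpi-intel64
make && make install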



Warner Yuen
Scientific Computing
Consulting Engineer
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859



On Oct 9, 2008, at 10:15 PM, users-requ...@open-mpi.org wrote:


Message: 2
Date: Thu, 9 Oct 2008 17:28:38 -0400
From: Jeff Squyres <jsquy...@cisco.com>
Subject: Re: [OMPI users] build failed using intel compilers on mac os
x
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <897c21db-cb73-430c-b306-8e492b247...@cisco.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

The CXX compiler should be icpc, not icc.


On Oct 7, 2008, at 11:08 AM, Massimo Cafaro wrote:




Dear all,

I tried to build the latest v1.2.7 open-mpi version on Mac OS X
10.5.5 using the intel c, c++ and fortran compilers v10.1.017 (the
latest ones released by intel). Before starting the build I have
properly configured the CC, CXX, F77 and FC environment variables
(to icc and ifort). The build failed due to undefined symbols.

I am attaching a log of the failed build process.
Any clue? Am I doing something wrong?

Also, to build a 64-bit version, is it enough to supply the -m64 option
in the corresponding environment variables?
Thank you in advance and best regards,

Massimo






Re: [OMPI users] Memory question and possible bug in 64bit addressing under Leopard!

2008-04-25 Thread Warner Yuen
I believe that you can also add this to your LIB or LDFLAGS for 64-bit  
applications:


-Wl,-stack_addr,0xF1000 -Wl,-stack_size,0x6400


Warner Yuen
Scientific Computing
Consulting Engineer
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133



On Apr 25, 2008, at 1:11 PM, users-requ...@open-mpi.org wrote:


--

Message: 6
Date: Fri, 25 Apr 2008 14:10:57 -0600
From: Brian Barrett <brbar...@open-mpi.org>
Subject: Re: [OMPI users] Memory question and possible bug in 64bit
addressing under Leopard!
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <c9986452-b565-454d-8f79-e7c5e0571...@open-mpi.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

On Apr 25, 2008, at 2:06 PM, Gregory John Orris wrote:


produces a core dump on a machine with 12Gb of RAM.

and the error message

mpiexec noticed that job rank 0 with PID 75545 on node mymachine.com
exited on signal 4 (Illegal instruction).

However, substituting in

float *X = new float[n];
for
float X[n];

Succeeds!



You're running off the end of the stack, because of the large amount
of data you're trying to put there.  OS X by default has a tiny stack
size, so codes that run on Linux (which defaults to a much larger
stack size) sometimes show this problem.  Your best bets are either to
increase the max stack size or (more portably) just allocate
everything on the heap with malloc/new.
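For the "increase the max stack size" route, a minimal sketch from a bash shell (the 65532 KB value is an assumption about the OS X hard limit; check "ulimit -Hs" on your own machine first):

ulimit -s            # current soft stack limit, in KB
ulimit -Hs           # hard limit
ulimit -s 65532      # raise the soft limit for this shell before running mpirun
mpirun -np 4 ./a.out # run from the same shell so the new limit applies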

Hope this helps,

Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/




Re: [OMPI users] Open MPI v1.2.5 released

2008-01-09 Thread Warner Yuen
Thanks to Brian Barrett, I was able to get through some ugly Intel
compiler bugs during the configure step. I now have OMPI v1.2.5
running nicely under Mac OS X v10.5 Leopard!


However, I have a question about hostfiles. I would like to manually
launch MPI jobs from my head node, but I don't want the jobs to run on
the head node. In LAM/MPI I could add a "hostname schedule=no" line to the
hostfile; is there an equivalent in Open MPI? I'm sure this has come up
before, but I couldn't find an answer in the archives.
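One hedged workaround (an assumption on my part, not a documented equivalent of LAM's schedule=no): list only the compute nodes in the hostfile, since mpirun maps ranks only onto listed hosts, and optionally pass mpirun's --nolocal flag (if your version supports it; see mpirun --help) to keep ranks off the launching node. A sketch with placeholder host and program names:

# hostfile: compute nodes only; the head node is simply not listed
node01 slots=4
node02 slots=4

mpirun --hostfile hostfile --nolocal -np 8 ./myapp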


Thanks,

-Warner

Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133



Re: [OMPI users] Compiling 1.2.4 using Intel Compiler 10.1.007 on Leopard

2007-12-12 Thread Warner Yuen
Hi Jeff,

It seems that the problems are partially the compilers' fault; maybe the updated compilers didn't catch all the problems filed against the last release? Why else would I need to add the "-no-multibyte-chars" flag for pretty much everything that I build with ICC? Also, it's odd that I have to use /lib/cpp when using Intel ICC/ICPC, whereas with GCC things just find their way correctly. Again, IFORT and GCC together seem fine. Lastly... not that I use these... but MPICH-2.1 and MPICH-1.2.7 for Myrinet built just fine.

Here are the output files:

config.log.tgz
Description: Binary data


configure.output.tgz
Description: Binary data


error.log.tgz
Description: Binary data
Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Dec 12, 2007, at 9:00 AM, users-requ...@open-mpi.org wrote:

--

Message: 1
Date: Wed, 12 Dec 2007 06:50:03 -0500
From: Jeff Squyres
Subject: Re: [OMPI users] Problems compiling 1.2.4 using Intel Compiler 10.1.006 on Leopard
To: Open MPI Users
Message-ID: <43bb0bce-e328-4d3e-ae61-84991b27f...@cisco.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

My primary work platform is a MacBook Pro, but I don't specifically develop for OS X, so I don't have any special compilers.

Sorry to ask this because I think the information was sent before, but could you send all the compile/failure information?  http://www.open-mpi.org/community/help/

Re: [OMPI users] Problems compiling 1.2.4 using Intel Compiler 10.1.006 on Leopard

2007-12-11 Thread Warner Yuen
Has anyone gotten Open MPI 1.2.4 to compile with the latest Intel
compilers 10.1.007 on Leopard? I can get Open MPI 1.2.4 to build
with GCC + Fortran IFORT 10.1.007, but I can't get any configuration
to work with Intel's 10.1.007 compilers.

The configuration completes, but the compilation fails fairly early.

My compiler settings are as follows:

For GCC + IFORT (This one works):
export CC=/usr/bin/cc
export CXX=/usr/bin/c++
export FC=/usr/bin/ifort
export F90=/usr/bin/ifort
export F77=/usr/bin/ifort


For using all Intel compilers (The configure works but the compilation  
fails):


export CC=/usr/bin/icc
export CXX=/usr/bin/icpc
export FC=/usr/bin/ifort
export F90=/usr/bin/ifort
export F77=/usr/bin/ifort
export FFLAGS=-no-multibyte-chars
export CFLAGS=-no-multibyte-chars
export CXXFLAGS=-no-multibyte-chars
export CCASFLAGS=-no-multibyte-chars

_defined,suppress -o libasm.la  asm.lo atomic-asm.lo  -lutil
libtool: link: ar cru .libs/libasm.a .libs/asm.o .libs/atomic-asm.o
ar: .libs/atomic-asm.o: No such file or directory
make[2]: *** [libasm.la] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1





Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Nov 22, 2007, at 2:26 AM, users-requ...@open-mpi.org wrote:


--

Message: 2
Date: Wed, 21 Nov 2007 15:15:00 -0500
From: Mark Dobossy <mdobo...@princeton.edu>
Subject: Re: [OMPI users] Problems compiling 1.2.4 using Intel Compiler 10.1.006 on Leopard
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <99ca0551-9bf4-47c0-85c2-6b2126a83...@princeton.edu>
Content-Type: text/plain; charset=US-ASCII; format=flowed

Thanks for the suggestion Jeff.

Unfortunately, that didn't fix the issue.

-Mark

On Nov 21, 2007, at 7:55 AM, Jeff Squyres wrote:


Can you try also adding CCASFLAGS=-no-multibyte-chars?







Re: [OMPI users] users Digest, Vol 749, Issue 1

2007-11-26 Thread Warner Yuen
This is just a stab in the dark, but can you compile Open MPI with
Intel 10.0.xx and then just run the scripts to change the underlying
compilers to use Intel 10.1.xx? That would at least use the latest
IFORT compilers. I haven't tried compiling anything this way, so
again, it's just a stab in the dark.


-Warner


Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com



On Nov 22, 2007, at 2:26 AM, users-requ...@open-mpi.org wrote:


--

Message: 2
Date: Wed, 21 Nov 2007 15:15:00 -0500
From: Mark Dobossy <mdobo...@princeton.edu>
Subject: Re: [OMPI users] Problems compiling 1.2.4 using Intel Compiler 10.1.006 on Leopard
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <99ca0551-9bf4-47c0-85c2-6b2126a83...@princeton.edu>
Content-Type: text/plain; charset=US-ASCII; format=flowed

Thanks for the suggestion Jeff.

Unfortunately, that didn't fix the issue.

-Mark

On Nov 21, 2007, at 7:55 AM, Jeff Squyres wrote:


Can you try also adding CCASFLAGS=-no-multibyte-chars?


On Nov 20, 2007, at 2:45 PM, Mark Dobossy wrote:




[OMPI users] Newbie Hostfile Question

2007-03-30 Thread Warner Yuen
In LAM/MPI, I can use "portal.private schedule=no" if I want to  
launch a job from a specific node but not schedule it for any work. I  
can't seem to find reference to an equivalent in Open MPI.


Thanks.

-Warner


Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859




Re: [OMPI users] Odd behavior with slots=4

2007-03-29 Thread Warner Yuen

George,

Thanks for the tips. It looks like using "-bynode" as opposed to
"-byslot" is the best way to distribute processes when running Amber 9's
Sander module. I confirmed that with MPICH-MX as well. I didn't
realize that these settings were available. This really helps, because
I was getting bummed that I would just have to keep various hostfiles
around, some with slots=XX and some with nothing but the hostname.


Just an FYI on the timings:

-bynode:
real    0m35.035s

-byslot:
real    0m44.856s


Warner Yuen
Scientific Computing Consultant

On Mar 29, 2007, at 9:00 AM, users-requ...@open-mpi.org wrote:


Message: 1
Date: Wed, 28 Mar 2007 12:19:15 -0400
From: George Bosilca <bosi...@cs.utk.edu>
Subject: Re: [OMPI users] Odd behavior with slots=4
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <2a58cf38-0fc4-4289-85e1-315376540...@cs.utk.edu>
Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed

There are multiple answers possible here. One is related to the
over-subscription of your cluster, but I expect that there are at least 4
cores per node if you want to use the slots=4 option. The real
question is: what is the communication pattern in this benchmark, and
how does it match the distribution of the processes you use?

As a matter of fact, when you have XX processes per node and all
of them try to send a message to a remote process (here remote
means on another node), then they will have to share the physical
Myrinet link, which of course will lead to lower global performance
as XX increases (from 1, to 2 and then 4). And this is true regardless
of how you use the MX driver (via the Open MPI MTL or BTL).

Open MPI provides two options to allow you to distribute the processes
based on different criteria. Try using -bynode and -byslot to see if
this affects the overall performance.
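For reference, a hedged sketch of the two invocations (the hostfile and executable names are placeholders):

mpirun --hostfile hostfile -np 32 -bynode ./sander.MPI    # round-robin ranks across nodes first
mpirun --hostfile hostfile -np 32 -byslot ./sander.MPI    # fill all slots on a node before moving on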

   Thanks,
 george.

On Mar 28, 2007, at 9:56 AM, Warner Yuen wrote:


Curious performance when using OpenMPI 1.2 to run Amber 9 on my
Xserve Xeon 5100 cluster. Each cluster node is a dual socket, dual-
core system. The cluster is also running with Myrinet 2000 with MX.
I'm just running some tests with one of Amber's benchmarks.

It seems that my hostfiles affect the performance of the
application. I tried variations of the hostfile to see what would
happen. I did a straight mpirun with no mca options set, using:
"mpirun -np 32"

variation 1: hostname
real    0m35.391s

variation 2: hostname slots=4
real    0m45.698s

variation 3: hostname slots=2
real    0m38.761s


It seems that the best performance I achieve is when I use
variation 1 with only the hostname and execute the command:
 "mpirun --hostfile hostfile -np 32 ". It's
shockingly about 13% better performance than if I use the hostfile
with a syntax of "hostname slots=4".

I also tried variations in my mpirun command; here are the times:

straight mpirun with no mca options
real    0m45.698s

and

"-mca mpi_yield_when_idle 0"
real    0m44.912s

and

 "-mca mtl mx -mca pml cm"
real    0m45.002s




[OMPI users] Odd behavior with slots=4

2007-03-28 Thread Warner Yuen
Curious performance when using OpenMPI 1.2 to run Amber 9 on my
Xserve Xeon 5100 cluster. Each cluster node is a dual-socket, dual-core
system. The cluster is also running with Myrinet 2000 with MX.
I'm just running some tests with one of Amber's benchmarks.


It seems that my hostfiles affect the performance of the application.
I tried variations of the hostfile to see what would happen. I did a
straight mpirun with no mca options set, using: "mpirun -np 32"


variation 1: hostname
real    0m35.391s

variation 2: hostname slots=4
real    0m45.698s

variation 3: hostname slots=2
real    0m38.761s


It seems that the best performance I achieve is when I use variation
1 with only the hostname and execute the command:
 "mpirun --hostfile hostfile -np 32 ". It's
shockingly about 13% better performance than if I use the hostfile
with a syntax of "hostname slots=4".


I also tried variations in my mpirun command; here are the times:

straight mpirun with no mca options
real    0m45.698s

and

"-mca mpi_yield_when_idle 0"
real    0m44.912s

and

 "-mca mtl mx -mca pml cm"
real    0m45.002s





Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133




[OMPI users] BLACS Mac OS X

2006-10-12 Thread Warner Yuen

Hello OMPIers...

I know there have been a lot of emails going back and forth regarding
BLACS recently, but I've ignored most of them until now. Why? Because
now I'm trying to build and test it... so it matters to me. ;-)
Anyway, I've just built BLACS using the latest beta, openmpi-1.1.2rc4,
as well as openmpi-1.1.1.


I am getting the following error when running the BLACS tester. Does
anyone know what might be going on? It worked with other MPIs.


mpirun -np 2 ./xCbtest_MPI-OSX-0

Signal:10 info.si_errno:0(Unknown error: 0) si_code:1(BUS_ADRALN)
Failing at addr:0xe3
*** End of error message ***
Signal:10 info.si_errno:0(Unknown error: 0) si_code:1(BUS_ADRALN)
Failing at addr:0xe3
*** End of error message ***
2 additional processes aborted (not shown)


Thanks for any info.

-Warner


Warner Yuen
Research Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133




[OMPI users] Summary of OMPI on OS X with Intel

2006-07-18 Thread Warner Yuen
Okay... this isn't a performance summary or anything like that. It's
just some information on what I was able to get to work, with a
couple of suggestions from Brian Barrett about building OMPI with
static libraries (possible problem with GNU libtool support for the
Intel compiler on OS X?). I tried a total of six different
configurations. So here's my summary, FWIW:


USING GCC 4.0.1 (build 5341) with and without Intel Fortran (build  
9.1.027):


Config #1: ./configure --with-rsh=/usr/bin/ssh
Successful Build = YES

Config #2: ./configure --disable-shared --enable-static --with-rsh=/usr/bin/ssh

Successful Build = NO
Error:
g++ -O3 -DNDEBUG -finline-functions -Wl,-u -Wl,_munmap -Wl,-multiply_defined -Wl,suppress -o ompi_info components.o ompi_info.o output.o param.o version.o -Wl,-bind_at_load ../../../ompi/.libs/libmpi.a /Users/wyuen/mpi_src/openmpi-1.1/orte/.libs/liborte.a /Users/wyuen/mpi_src/openmpi-1.1/opal/.libs/libopal.a -ldl

/usr/bin/ld: Undefined symbols:
_mpi_fortran_status_ignore_
_mpi_fortran_statuses_ignore_
collect2: ld returned 1 exit status
make[2]: *** [ompi_info] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

Config #3: ./configure --disable-shared --enable-static --disable-mpi-f77 --disable-mpi-f90 --with-rsh=/usr/bin/ssh

Successful Build =YES



USING Intel C (build 9.1.027) with and without Intel Fortran (build 9.1.027)


Config #4: ./configure --disable-mpi-f77 --disable-mpi-f90 --with-rsh=/usr/bin/ssh

Successful Build = NO
Error:
IPO link: can not find "1"
icc: error: problem during multi-file optimization compilation (code 1)
make[2]: *** [libopal.la] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

Config #5: ./configure --disable-shared --enable-static --disable-mpi-f77 --disable-mpi-f90 --with-rsh=/usr/bin/ssh

Successful Build = YES

Config #6: ./configure --disable-shared --enable-static --with-rsh=/usr/bin/ssh

Successful Build = NO
Error:
/opt/intel/cc/9.1.027/bin/icpc -O3 -DNDEBUG -finline-functions -Wl,-u -Wl,_munmap -Wl,-multiply_defined -Wl,suppress -o ompi_info components.o ompi_info.o output.o param.o version.o -Wl,-bind_at_load ../../../ompi/.libs/libmpi.a /Users/wyuen/mpi_src/openmpi-1.1/orte/.libs/liborte.a /Users/wyuen/mpi_src/openmpi-1.1/opal/.libs/libopal.a -ldl

ld: Undefined symbols:
_mpi_fortran_status_ignore_
_mpi_fortran_statuses_ignore_
make[2]: *** [ompi_info] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1



[OMPI users] Problem compiling OMPI with Intel C compiler on Mac OS X

2006-07-14 Thread Warner Yuen
I'm having trouble compiling Open MPI with Mac OS X v10.4.6 with the  
Intel C compiler. Here are some details:


1) I upgraded to the latest versions of Xcode, including GCC 4.0.1 build 5341.
2) I installed the latest Intel update (9.1.027) as well.
3) Open MPI compiles fine using GCC and IFORT.
4) Open MPI fails with ICC and IFORT.
5) MPICH-2.1.0.3 compiles fine with ICC and IFORT (I just had to find out if my compiler worked... sorry!)
6) My Open MPI configuration was: ./configure --with-rsh=/usr/bin/ssh --prefix=/usr/local/ompi11icc
7) Should I have included my config.log?

/opt/intel/cc/9.1.027/bin/icc -dynamiclib ${wl}-flat_namespace ${wl}- 
undefined ${wl}suppress -o .libs/libopal.0.0.0.dylib  class/.libs/ 
opal_free_list.o class/.libs/opal_hash_table.o class/.libs/ 
opal_list.o class/.libs/opal_object.o class/.libs/opal_atomic_lifo.o  
class/.libs/opal_value_array.o memoryhooks/.libs/memory.o  
runtime/.libs/opal_progress.o runtime/.libs/opal_finalize.o  
runtime/.libs/opal_init.o runtime/.libs/opal_params.o threads/.libs/ 
condition.o threads/.libs/mutex.o threads/.libs/thread.o  .libs/ 
libopal.lax/libltdlc.a/ltdl.o  .libs/libopal.lax/libasm.a/asm.o .libs/ 
libopal.lax/libasm.a/atomic-asm.o  .libs/libopal.lax/libevent.a/ 
event.o .libs/libopal.lax/libevent.a/kqueue.o .libs/libopal.lax/ 
libevent.a/select.o .libs/libopal.lax/libevent.a/signal.o  .libs/ 
libopal.lax/libmca_base.a/mca_base_close.o .libs/libopal.lax/ 
libmca_base.a/mca_base_cmd_line.o .libs/libopal.lax/libmca_base.a/ 
mca_base_component_compare.o .libs/libopal.lax/libmca_base.a/ 
mca_base_component_find.o .libs/libopal.lax/libmca_base.a/ 
mca_base_component_repository.o .libs/libopal.lax/libmca_base.a/ 
mca_base_components_close.o .libs/libopal.lax/libmca_base.a/ 
mca_base_components_open.o .libs/libopal.lax/libmca_base.a/ 
mca_base_list.o .libs/libopal.lax/libmca_base.a/ 
mca_base_msgbuf.o .libs/libopal.lax/libmca_base.a/ 
mca_base_open.o .libs/libopal.lax/libmca_base.a/ 
mca_base_param.o .libs/libopal.lax/libmca_base.a/ 
mca_base_parse_paramfile.o  .libs/libopal.lax/libopalutil.a/ 
argv.o .libs/libopal.lax/libopalutil.a/basename.o .libs/libopal.lax/ 
libopalutil.a/cmd_line.o .libs/libopal.lax/libopalutil.a/ 
convert.o .libs/libopal.lax/libopalutil.a/crc.o .libs/libopal.lax/ 
libopalutil.a/daemon_init.o .libs/libopal.lax/libopalutil.a/ 
error.o .libs/libopal.lax/libopalutil.a/few.o .libs/libopal.lax/ 
libopalutil.a/if.o .libs/libopal.lax/libopalutil.a/keyval_lex.o .libs/ 
libopal.lax/libopalutil.a/keyval_parse.o .libs/libopal.lax/ 
libopalutil.a/malloc.o .libs/libopal.lax/libopalutil.a/ 
numtostr.o .libs/libopal.lax/libopalutil.a/opal_environ.o .libs/ 
libopal.lax/libopalutil.a/opal_pty.o .libs/libopal.lax/libopalutil.a/ 
os_create_dirpath.o .libs/libopal.lax/libopalutil.a/os_path.o .libs/ 
libopal.lax/libopalutil.a/output.o .libs/libopal.lax/libopalutil.a/ 
path.o .libs/libopal.lax/libopalutil.a/pow2.o .libs/libopal.lax/ 
libopalutil.a/printf.o .libs/libopal.lax/libopalutil.a/qsort.o .libs/ 
libopal.lax/libopalutil.a/show_help.o .libs/libopal.lax/libopalutil.a/ 
show_help_lex.o .libs/libopal.lax/libopalutil.a/stacktrace.o .libs/ 
libopal.lax/libopalutil.a/strncpy.o .libs/libopal.lax/libopalutil.a/ 
trace.o  .libs/libopal.lax/libmca_maffinity.a/ 
maffinity_base_close.o .libs/libopal.lax/libmca_maffinity.a/ 
maffinity_base_open.o .libs/libopal.lax/libmca_maffinity.a/ 
maffinity_base_select.o .libs/libopal.lax/libmca_maffinity.a/ 
maffinity_base_wrappers.o  .libs/libopal.lax/libmca_memcpy.a/ 
memcpy_base_close.o .libs/libopal.lax/libmca_memcpy.a/ 
memcpy_base_open.o  .libs/libopal.lax/libmca_memory.a/ 
memory_base_close.o .libs/libopal.lax/libmca_memory.a/ 
memory_base_open.o  .libs/libopal.lax/libmca_memory_darwin.a/ 
memory_darwin_component.o  .libs/libopal.lax/libmca_paffinity.a/ 
paffinity_base_close.o .libs/libopal.lax/libmca_paffinity.a/ 
paffinity_base_open.o .libs/libopal.lax/libmca_paffinity.a/ 
paffinity_base_select.o .libs/libopal.lax/libmca_paffinity.a/ 
paffinity_base_wrappers.o  .libs/libopal.lax/libmca_timer.a/ 
timer_base_close.o .libs/libopal.lax/libmca_timer.a/ 
timer_base_open.o  .libs/libopal.lax/libmca_timer_darwin.a/ 
timer_darwin_component.o   -ldl  -Wl,-u -Wl,_munmap -Wl,- 
multiply_defined -Wl,suppress -install_name  /usr/local/ompi11icc/lib/ 
libopal.0.dylib -Wl,-compatibility_version -Wl,1 -Wl,-current_version  
-Wl,1.0

IPO link: can not find "1"
icc: error: problem during multi-file optimization compilation (code 1)
make[2]: *** [libopal.la] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1


Warner Yuen
Research Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133




[OMPI users] OMPI and Xgrid

2006-04-14 Thread Warner Yuen
I did get MrBayes to run with Xgrid-enabled Open MPI. However, it
was set up as more of a "traditional" cluster: the agents all have a
shared NFS directory to the controller, and basically I'm only using
Xgrid as a job scheduler. It doesn't seem as if MrBayes is a "grid"
application but more of an application for a traditional cluster.


You will need to have the following enabled:

1) NFS shared directory across all the machines on the grid.

2) Open-MPI installed locally on all the machines or via NFS. (You'll  
need to compile Open MPI)


3) Here's the part that may make Xgrid not desirable to use for MPI  
applications:


a) Compile with MPI support:

MPI = yes
CC= $(MPIPATH)/bin/mpicc
CFLAGS = -fast

	b) Make sure that Xgrid is set to properly use password-based  
authentication.


	c) Set the environment variables for Open-MPI to use Xgrid as the launcher/scheduler. Assuming bash:


$ export XGRID_CONTROLLER_HOSTNAME=mycomputer.apple.com
$ export XGRID_CONTROLLER_PASSWORD=passwd

You could also add the above to a .bashrc file and have  
your .bash_profile source it.


d) Run the MPI application:

$ mpirun -np X ./myapp

There are a couple of issues:

It turns out that the directory and files that MrBayes creates must  
be readable and writable by all the agents. MrBayes requires more  
than just reading standard input/output but also the creation and  
writing of other intermediate files. For an application like HP  
Linpack that just reads and writes one file, things work fine.  
However, the MrBayes application writes out and reads back two  
additional files for each MPI process that is spawned.


All the files that MrBayes is trying to read/write must have
permissions for user 'nobody'.  This is a bit of a problem, since
you probably (in general) don't want to allow user nobody to write
all over your home directory.  One solution (if possible) would be to
have the application write into /tmp and then collect the files after
the job completes. But I don't know if you can set MrBayes to use a
temporary directory. Perhaps your MrBayes customer can let us know
how to specify a tmpdir.


I don't know how, or if, MrBayes has the option of specifying a temporary
working directory. I have tested the basics of this by executing an
MPI command to copy the *.nex file to /tmp on all the agents. This
seems to allow everything to work, but I can't seem to easily clean the
intermediate files off of the agents after a run, since the
MrBayes application created them and the user doesn't own them.


I'm hoping the OMPI developers can come to the rescue on some of  
these issues, perhaps working in conjunction with some of the Apple  
Xgrid engineers.


Lastly, this is from one of the MrBayes folks:

"Getting help with Xgrid among the phylo community will probably be  
difficult.

Fredrik can't help and probably not anyone with CIPRES either.  Fredrik
recommends mpi since it is unix based and more people use it.

He also does not recommend setting up a cluster in your lab to run  
MrBayes.
This is because of a fault with MrBayes. The way it is currently set  
up is that
the runs are only as fast as the slowest machine, in that if someone  
sits down

to use a machine in the cluster, everything is processed at that speed.
Here we use mpi for in parallel and condor to distribute for non- 
parallel.


And frankly, MrBayes can be somewhat unstable with mpi and seems to  
get hung up

on occasion.

Unfortunately for you, I think running large jobs will be a lot  
easier in a

couple of years."

-Warner

Warner Yuen
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Apr 14, 2006, at 8:52 AM, users-requ...@open-mpi.org wrote:


Message: 2
Date: Thu, 13 Apr 2006 14:33:29 -0400 (EDT)
From: liuli...@stat.ohio-state.edu
Subject: Re: [OMPI users] running a job problem
To: "Open MPI Users" <us...@open-mpi.org>
Message-ID:
<1122.164.107.248.223.1144953209.squir...@www.stat.ohio-state.edu>
Content-Type: text/plain;charset=iso-8859-1

Brian,
It worked when I used the latest version of MrBayes. Thanks. By the way,
do you have any idea how to submit an OMPI job on Xgrid? Thanks again.
Liang


On Apr 12, 2006, at 9:09 AM, liuli...@stat.ohio-state.edu wrote:

We have a Mac network running xgrid and we have successfully installed
MPI. We want to run a parallel version of MrBayes. There was no problem
when we compiled MrBayes using mpicc, but when we tried to run the
compiled MrBayes, we got lots of error messages:

mpiexec -np 4 ./mb -i  yeast_noclock_imp.txt
  Parallel version of

  Parallel version of

  Parallel version of

  Parallel version of

[ea285fltprinter.scc.ohio-state.edu:03327] *** An error occurred in
MPI_comm_size
[ea285fltprinter.scc.ohio

[OMPI users] Building OMPI-1.0.2 on OS X v10.3.9 with IBM XLC +XLF

2006-04-10 Thread Warner Yuen
I'm running Mac OS X v10.3.9 Panther and tried to get Open MPI to
compile with IBM XLC and XLF. The compilation failed; any ideas what
might be going wrong? I used the following settings:


export CC=/opt/ibmcmp/vacpp/6.0/bin/xlc
export CXX=/opt/ibmcmp/vacpp/6.0/bin/xlc++
export CFLAGS="-O3"
export CXXFLAGS="-O3"
export FFLAGS="-O3"
./configure --with-gm=/opt/gm --prefix=/home/warner/mpi_src/ompi102

ranlib .libs/libmpi_c_mpi.a
creating libmpi_c_mpi.la
(cd .libs && rm -f libmpi_c_mpi.la && ln -s ../libmpi_c_mpi.la  
libmpi_c_mpi.la)

Making all in cxx
source='mpicxx.cc' object='mpicxx.lo' libtool=yes \
DEPDIR=.deps depmode=none /bin/sh ../../.././config/depcomp \
/bin/sh ../../../libtool --tag=CXX --mode=compile /opt/ibmcmp/vacpp/ 
6.0/bin/xlc++ -DHAVE_CONFIG_H -I. -I. -I../../../include -I../../../ 
include   -I../../../include -I../../.. -I../../.. -I../../../include  
-I../../../opal -I../../../orte -I../../../ompi  -D_REENTRANT  - 
DNDEBUG -O3  -c -o mpicxx.lo mpicxx.cc

mkdir .libs
/opt/ibmcmp/vacpp/6.0/bin/xlc++ -DHAVE_CONFIG_H -I. -I. -I../../../ 
include -I../../../include -I../../../include -I../../.. -I../../.. - 
I../../../include -I../../../opal -I../../../orte -I../../../ompi - 
D_REENTRANT -DNDEBUG -O3 -c mpicxx.cc  -qnocommon -DPIC -o .libs/ 
mpicxx.o
"../../../ompi/mpi/cxx/group_inln.h", line 100.66: 1540-0216 (S) An  
expression of type "const int [][3]" cannot be converted to type "int  
(*)[3]".
"../../../ompi/mpi/cxx/group_inln.h", line 108.66: 1540-0216 (S) An  
expression of type "const int [][3]" cannot be converted to type "int  
(*)[3]".

make[3]: *** [mpicxx.lo] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1


-Thanks and have an OpenMPI day!

Warner Yuen
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133




[OMPI users] Building OpenMPI on OS X Tiger with gcc-3.3

2006-04-10 Thread Warner Yuen

Hi Charles,

I've only ever seen that error when trying to build OpenMPI with the  
IBM XLC compilers on Tiger. I just now successfully configured and  
built OpenMPI-1.0.2 using gcc 3.3 build 1819 and IBM XLF.


./configure --disable-mpi-f90 --prefix=/hpc/mpis/ompi102f77

Please note that I can also build this with F90 support, as long as
GCC 3.3 is used as my compiler. One curious thing is that if I set my
environment to specifically use XLF, i.e. 'export
F77=/opt/ibmcmp/xlf/8.1/bin/f77', it will fail in the configure step.


Just to be sure, OpenMPI fails to compile with your exact error if I  
first set my C compiler to IBM XLC.


Warner Yuen
Research Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Apr 10, 2006, at 9:00 AM, users-requ...@open-mpi.org wrote:


--

Message: 1
Date: Mon, 10 Apr 2006 11:25:48 -0400
From: Charles Williams <will...@rpi.edu>
Subject: [OMPI users] Building OpenMPI on OS X Tiger with gcc-3.3
To: us...@open-mpi.org
Message-ID: <21a2544d-194e-4655-a0c5-658248dc7...@rpi.edu>
Content-Type: text/plain; charset="us-ascii"

Hi,

I have been attempting to build OpenMPI on my Mac, using the older
gcc-3.3 compiler using rc2r9567.  Things proceed for a while, and
then I get:

Making all in xgrid
/Users/willic3/build/openmpi-buildgcc3.3/orte/dynamic-mca/pls/xgrid
depbase=`echo src/pls_xgrid_component.lo | sed 's|[^/]*$|.deps/&|;s|
\.lo$||'`; \
if /bin/sh ../../../../libtool --mode=compile gcc -DHAVE_CONFIG_H -I.
-I/Users/willic3/build/openmpi-1.0.2rc2r9567/orte/mca/pls/xgrid -
I../../../../include -I../../../../include  -I/Users/willic3/build/
openmpi-buildgcc3.3/include  -I/Users/willic3/build/
openmpi-1.0.2rc2r9567/include -I/Users/willic3/build/
openmpi-1.0.2rc2r9567 -I../../../.. -I../../../../include -I/Users/
willic3/build/openmpi-1.0.2rc2r9567/opal -I/Users/willic3/build/
openmpi-1.0.2rc2r9567/orte -I/Users/willic3/build/
openmpi-1.0.2rc2r9567/ompi  -D_REENTRANT -F XGridFoundation  -MT src/
pls_xgrid_component.lo -MD -MP -MF "$depbase.Tpo" -c -o src/
pls_xgrid_component.lo /Users/willic3/build/openmpi-1.0.2rc2r9567/
orte/mca/pls/xgrid/src/pls_xgrid_component.m; \
then mv -f "$depbase.Tpo" "$depbase.Plo"; else rm -f "$depbase.Tpo";
exit 1; fi
libtool: compile: unable to infer tagged configuration
libtool: compile: specify a tag with `--tag'
make[4]: *** [src/pls_xgrid_component.lo] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

I may be able to avoid this problem by building without xgrid (I'm
going to try that now), but does anyone have any ideas on other
solutions?  I've built the code successfully using the default Tiger
compilers.

Thanks,
Charles

Machine info:

uname -a
Darwin rachel.geo.rpi.edu 8.6.0 Darwin Kernel Version 8.6.0: Tue Mar
7 16:58:48 PST 2006; root:xnu-792.6.70.obj~1/RELEASE_PPC Power
Macintosh powerpc

gcc-3.3 --version
gcc-3.3 (GCC) 3.3 20030304 (Apple Computer, Inc. build 1819)

I configured and built after using 'gcc_select 3.3'.

Configure command:
/Users/willic3/build/openmpi-1.0.2rc2r9567/configure --prefix=/Users/
willic3/geo
frame/tools/openmpi-gcc3.3 --disable-mpi-f90

I can also provide the configure and build logs if that will help.


Charles A. Williams
Dept. of Earth & Environmental Sciences
Science Center, 2C01B
Rensselaer Polytechnic Institute
Troy, NY  12180
Phone:(518) 276-3369
FAX:(518) 276-2012
e-mail:will...@rpi.edu





[OMPI users] Mac OS X 10.4.5 and XGrid, Open-MPI V1.0.1

2006-03-20 Thread Warner Yuen

Hi Frank,

I've used OMPI 1.0.1 with Xgrid. I don't think I ran into the same  
problem as you with the job hanging. But I'll continue just in case  
it helps or helps someone else. The one thing that I noticed was that  
Xgrid/OMPI does not allow an MPI application to write out a file  
other than to standard output.


In my example, when running HP Linpack over Xgrid-enabled OMPI, if
I execute the mpirun with HPL just outputting to the screen,
everything runs fine. However, if I set my hpl.dat file to write out
the results to a file, I get an error:


With 'hpl.dat' set to write to an output file called 'HPL.out' after  
executing: mpirun -d -hostfile myhosts -np 4 ./xhpl


portal.private:00545] [0,1,0] ompi_mpi_init completed
HPL ERROR from process # 0, on line 318 of function HPL_pdinfo:
>>> cannot open file HPL.out. <<<

I've tested this with a couple of other applications as well. For  
now, the only way I can solve it is if I set my working directory to  
allow user nobody to write to my working directory. Hope this helps.


-Warner

Warner Yuen
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Mar 20, 2006, at 9:00 AM, users-requ...@open-mpi.org wrote:


Message: 1
Date: Mon, 20 Mar 2006 08:11:32 +0100
From: Frank <openmpi-u...@fraka-mp.de>
Subject: Re: [OMPI users] Mac OS X 10.4.5 and XGrid, Open-MPI V1.0.1
To: us...@open-mpi.org
Message-ID: <441e55a4.6090...@fraka-mp.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Brian,

this is the full -d option output I got while MPI-running vhone on the
xgrid. The truncation is due to the reported "hang".




[O-MPI users] Xgrid and Open-MPI

2006-02-03 Thread Warner Yuen

Hello Everyone:

Thanks to Brian Barrett's help, I was able to get Open MPI working
with Xgrid using two dual 2.5 GHz PowerMacs. I can submit HP Linpack
jobs fine and get all four CPUs cranking, but I'm having problems
with the applications that I really want to run, MrBayes and GROMACS
(two very different programs... but both are MPI enabled).


I can submit an MPI job to a single Xgrid agent and it runs just
fine, but if I try adding the second agent (the second PowerMac), the
jobs abort after a few seconds.  Running them without Xgrid, with just
Open MPI, the applications run fine. Below is the output... not very
informative, but if anyone has any ideas I'd appreciate it.


For GROMACS I end up with the following:

---
Program mdrun_mpi, VERSION 3.3
Source code file: futil.c, line: 308

File input/output error:
md0.log
---

For MrBayes I get the following:

Overwriting file "testdata.nex.run1.p"
Could not open file "testdata.nex.run1.p"
Memory allocation error on at least one processor
Error in command "Mcmc"
There was an error on at least one processor
Error in command "Execute"
Will exit with signal 1 (error)





Warner Yuen
Research Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133




[O-MPI users] Mac OS X Open-MPI with Myrinet MX

2006-01-27 Thread Warner Yuen
With Mac OS X 10.4.4, I'm having problems compiling Open-MPI 1.0.1  
with Myrinet-MX support. It works fine with Myrinet-GM drivers.


I configured with the following:

./configure --prefix=/hpc/mpis/ompi101 --with-mx=/opt/mx

I'm getting the following error when I run make:

/../.. -I../../../.. -I../../../../include -I../../../../opal -I../../../../orte -I../../../../ompi -D_REENTRANT -O3 -DNDEBUG -fno-strict-aliasing -MT btl_mx_component.lo -MD -MP -MF .deps/btl_mx_component.Tpo -c btl_mx_component.c -fno-common -DPIC -o .libs/btl_mx_component.o

btl_mx_component.c: In function 'mca_btl_mx_component_open':
btl_mx_component.c:108: warning: pointer targets in passing argument 7 of 'mca_base_param_reg_int' differ in signedness
btl_mx_component.c:111: error: 'mca_btl_mx_component_t' has no member named 'mx_timeout'
btl_mx_component.c:117: warning: pointer targets in passing argument 7 of 'mca_base_param_reg_int' differ in signedness

make[4]: *** [btl_mx_component.lo] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

Warner Yuen
Research Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133