Re: [OMPI users] Dual core Intel CPU

2006-08-17 Thread Hugh Merz

On Wed, 16 Aug 2006, Allan Menezes wrote:

Hi Anyone,
   I have an 18-node cluster of heterogeneous machines. I used the FC5 SMP
kernel and OSCAR 5.0 beta.
I tried the following out on a machine with Open MPI versions 1.1 and 1.1.1b4.
The machine has a D-Link DGE-530T 1 Gb/s Ethernet card, a 2.66 GHz dual-core
Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200 RAM. I compiled the
ATLAS libs (ver 3.7.13beta) for this machine and HPL (the xhpl executable) and
ran the following experiment twice.
Content of my "hosts" file 1 for this machine, for the 1st experiment:
a8.lightning.net slots=2
Content of my "hosts" file 2 for this machine, for the 2nd experiment:
a8.lightning.net

On the single node I ran HPL.dat with N = 6840 and NB = 120. With 1024 MB of RAM,
N = sqrt(0.75 * ((1024 - 32 MB video overhead) / 2) * 10^6 / 8) = approx 6840,
i.e. roughly 512 MB of RAM per CPU; otherwise the OS uses the hard drive for
virtual memory. This way the problem resides entirely in RAM.
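Spelling out the arithmetic (a sketch only, reading the memory size in decimal
megabytes): 0.75 * ((1024 - 32) / 2) MB = 0.75 * 496 MB = 372 MB; 372e6 bytes /
8 bytes per double = 46.5e6 matrix elements; sqrt(46.5e6) = approx 6820, taken
here as 6840 (= 57 * NB).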
I ran this command twice, once for each of the two hosts files above:
# mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self -np 1 ./xhpl
In both cases the performance stays around 4.040 GFlops. Since experiment 1
runs with slots=2, i.e. two CPUs, I would expect a 50-100% performance increase
over experiment 2, but I see no difference. Can anybody tell me why this is so?


You are only launching 1 process in both cases. Try `mpirun -np 2 ...` to 
launch 2 processes, which will load each of your processors with an xhpl 
process.
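
A minimal sketch of such an invocation, reusing the prefix and hosts file from
your post (note the option is spelled --hostfile and the btl list takes no space):

# mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self -np 2 ./xhpl

HPL will only occupy P x Q of the launched processes, so HPL.dat's grid also
needs P x Q = 2 (e.g. P=1, Q=2) to use both.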

Please read the faq:

http://www.open-mpi.org/faq/?category=running#simple-spmd-run

It includes a lot of information about slots and how they should be set as well.

Hugh


I have not tried MPICH2.
Thank you,
Regards,
Allan Menezes




[OMPI users] Open MPI and Dual Core Intel CPUs

2006-08-17 Thread Allan Menezes

Hi Anyone,
Sorry about the spam, but I tried MPICH2-1.0.4p1 and got 6 GFlops this way.
I first configured MPICH2 with --with-device=ch3 and -with-comm=ch3:shm and
then ran make, make install. I then tried the first experiment: I ran
'# mpd --ncpus=2 &' and then '# mpiexec -np 1 -f hosts', where hosts is the
file defined below in my email. I got the same 4.00 GFlops; HPL.dat was set to
P=1, Q=1 for one node with two processors.
I then modified HPL.dat to P=1 and Q=2 for the same hosts file of the one
machine, a18.lightning.net, and got 6 GFlops. I shall try the same with Open
MPI, modifying the HPL.dat Ps and Qs, and post my results. Remember, I am
trying this out on a single dual-core machine with an SMP kernel.


For Open MPI, below, in both cases (slots=2 or slots=1) I tried HPL.dat with
P=1 and Q=1. I shall now try P=1 and Q=2 with slots=2 in the hosts file.
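
For reference, the process grid is set by the Ps and Qs lines of HPL.dat.
Assuming the stock HPL.dat layout, the relevant lines for a 1 x 2 grid would
look something like:

1            # of process grids (P x Q)
1            Ps
2            Qs

HPL uses P x Q processes, so this grid corresponds to a run launched with -np 2.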


   I have an 18-node cluster of heterogeneous machines. I used the FC5 SMP
kernel and OSCAR 5.0 beta.
I tried the following out on a machine with Open MPI versions 1.1 and 1.1.1b4.
The machine has a D-Link DGE-530T 1 Gb/s Ethernet card, a 2.66 GHz dual-core
Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200 RAM. I compiled the
ATLAS libs (ver 3.7.13beta) for this machine and HPL (the xhpl executable) and
ran the following experiment twice:

Content of my "hosts" file 1 for this machine, for the 1st experiment:
a8.lightning.net slots=2
Content of my "hosts" file 2 for this machine, for the 2nd experiment:
a8.lightning.net

On the single node I ran HPL.dat with N = 6840 and NB = 120. With 1024 MB of RAM,
N = sqrt(0.75 * ((1024 - 32 MB video overhead) / 2) * 10^6 / 8) = approx 6840,
i.e. roughly 512 MB of RAM per CPU; otherwise the OS uses the hard drive for
virtual memory. This way the problem resides entirely in RAM.
I ran this command twice, once for each of the two hosts files above:
# mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self -np 1 ./xhpl

In both cases the performance stays around 4.040 GFlops. Since experiment 1
runs with slots=2, i.e. two CPUs, I would expect a 50-100% performance increase
over experiment 2, but I see no difference. Can anybody tell me why this is so?
I have not tried MPICH2.
Thank you,
Regards,
Allan Menezes



[OMPI users] Open MPI and Intel dual core CPU

2006-08-17 Thread Allan Menezes

Hi All,
Regarding my previous posts, good news: I get 6 GFlops with Open MPI ver
1.1.1b4 with the hosts file containing the single line 'a18.lightning.net slots=2'.
This time HPL.dat was modified to P=1, Q=2 for the two processes on the same
machine (a dual-core Pentium D 805).

My question:
I have 18 heterogeneous nodes, all x86: almost all single-core (14), plus two
dual-core and two hyperthreading CPUs. What should my Ps and Qs be to benchmark
the true performance?

I am guessing P=4 and Q=6. Am I right?
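
For context, HPL only uses P x Q of the launched processes (any extras sit
idle), so P and Q should be chosen to match the total number of MPI processes.
A hypothetical hosts-file sketch for a mix like this (host names made up, one
slot per core):

a1.lightning.net slots=1
a15.lightning.net slots=2

mpirun -np would then be set to the total slot count, and HPL.dat's P x Q
chosen to equal it.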
Thank you for your consideration
Allan Menezes



Re: [OMPI users] Problem compiling OMPI with Intel C compiler on Mac OS X

2006-08-17 Thread Peter Beerli

Today I ran into the same problem as Warner Yuen (see thread below):

Open MPI does not compile with icc and fails with an error where libtool asks
for --tag. The error is Mac OS X specific.

It occurs in the compile for xgrid, in

openmpi-1.1/orte/mca/pls/xgrid

where the Makefile fails; xgrid uses some Objective-C code that needs to be
compiled with gcc [I guess].


after adjusting the Makefile.in

from

xgrid>grep -n "\-\-tag=OBJC" Makefile.in
216:LTOBJCCOMPILE = $(LIBTOOL)  --mode=compile $(OBJC) $(DEFS) \
220:OBJCLINK = $(LIBTOOL)  --mode=link $(OBJCLD) $(AM_OBJCFLAGS) \
to

xgrid>grep -n "\-\-tag=OBJC" Makefile.in
216:LTOBJCCOMPILE = $(LIBTOOL) --tag=OBJC --mode=compile $(OBJC) $(DEFS) \
220:OBJCLINK = $(LIBTOOL) --tag=OBJC --mode=link $(OBJCLD) $(AM_OBJCFLAGS) \


The change elicits a warning that OBJC is not a known tag, but it keeps going
and compiles fine. I do not use the xgrid portion, so I do not know whether it
is clobbered or not. Standard runs using orterun work fine.


Peter

Brian Barrett wrote in July:

On Jul 14, 2006, at 10:35 AM, Warner Yuen wrote:

> I'm having trouble compiling Open MPI with Mac OS X v10.4.6 with
> the Intel C compiler. Here are some details:
>
> 1) I upgraded to the latest versions of Xcode including GCC 4.0.1
> build 5341.
> 2) I installed the latest Intel update (9.1.027) as well.
> 3) Open MPI compiles fine with using GCC and IFORT.
> 4) Open MPI fails with ICC and IFORT
> 5) MPICH-2.1.0.3 compiles fine with ICC and IFORT (I just had to
> find out if my compiler worked...sorry!)
> 6) My Open MPI configuration used: ./configure --with-rsh=/usr/bin/ssh --prefix=/usr/local/ompi11icc
> 7) Should I have included my config.log?

It looks like there are some problems with GNU libtool's support for
the Intel compiler on OS X. I can't tell if it's a problem with the
Intel compiler or libtool. A quick fix is to build Open MPI with
static libraries rather than shared libraries. You can do this by
adding:

   --disable-shared --enable-static

to the configure line for Open MPI (if you're building in the same
directory where you've already run configure, you want to run make
clean before building again).
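
For the configuration quoted above, that would be something along these lines
(a sketch only, with the Intel compilers still selected however they were
originally, e.g. via the CC/F77 environment variables):

./configure --with-rsh=/usr/bin/ssh --prefix=/usr/local/ompi11icc \
    --disable-shared --enable-static
make clean
make all install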

I unfortunately don't have access to an Intel Mac machine with the
Intel compilers installed, so I can't verify this issue. I believe
one of the other developers does have such a configuration, so I'll
ask him when he's available (might be a week or two -- I believe he's
on vacation). This issue seems to be unique to your exact
configuration -- it doesn't happen with GCC on the Intel Mac nor on
Linux with the Intel compilers.



Brian



--
   Brian Barrett
   Open MPI developer
   http://www.open-mpi.org/







[OMPI users] mpi.h - not conforming to C90 spec

2006-08-17 Thread Jonathan Underwood

Hi,

Compiling an MPI program with the gcc options -pedantic -Wall gives the
following warning:

mpi.h:147: warning: ISO C90 does not support 'long long'

So it seems that the Open MPI implementation doesn't conform to C90. Is
this by design, or should it be reported as a bug?

Thanks,
Jonathan


Re: [OMPI users] mpi.h - not conforming to C90 spec

2006-08-17 Thread Brian Barrett

On Aug 17, 2006, at 4:43 PM, Jonathan Underwood wrote:


Compiling an MPI program with the gcc options -pedantic -Wall gives the
following warning:

mpi.h:147: warning: ISO C90 does not support 'long long'

So it seems that the Open MPI implementation doesn't conform to C90. Is
this by design, or should it be reported as a bug?


Well, MPI_LONG_LONG is a type we're supposed to support, and that means having
'long long' in the mpi.h file.  I'm not really sure how to get around this,
especially since there are a bunch of users out there who rely on MPI_LONG_LONG
to send 64-bit integers around on 32-bit platforms.  So I suppose that it's by
design.
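
For illustration only (a minimal sketch, not code from this thread), this is
the kind of usage that depends on 'long long' being visible in mpi.h:

#include <mpi.h>

int main(int argc, char **argv)
{
    long long value = 42LL;   /* 64 bits even on a 32-bit platform */

    MPI_Init(&argc, &argv);
    /* Broadcast the 64-bit integer from rank 0 to all ranks. */
    MPI_Bcast(&value, 1, MPI_LONG_LONG, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}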


Brian


Re: [OMPI users] mpi.h - not conforming to C90 spec

2006-08-17 Thread Jonathan Underwood

On 18/08/06, Brian Barrett  wrote:

On Aug 17, 2006, at 4:43 PM, Jonathan Underwood wrote:

> Compiling an MPI program with the gcc options -pedantic -Wall gives the
> following warning:
>
> mpi.h:147: warning: ISO C90 does not support 'long long'
>
> So it seems that the Open MPI implementation doesn't conform to C90. Is
> this by design, or should it be reported as a bug?

Well, MPI_LONG_LONG is a type we're supposed to support, and that
means having 'long long' in the mpi.h file.  I'm not really sure how
to get around this, especially since there are a bunch of users out
there who rely on MPI_LONG_LONG to send 64-bit integers around on
32-bit platforms.  So I suppose that it's by design.



OK, that seems reasonable.

I wonder then if the non-C90-conforming parts should be surrounded with
#ifndef __STRICT_ANSI__, which is predefined when gcc is expecting
C90-conforming code. I am not sure whether this is portable to other
compilers, however. Probably not.
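
A rough sketch of that idea (hypothetical code, not what Open MPI's mpi.h
actually does):

/* gcc defines __STRICT_ANSI__ under -ansi / -std=c90, so declarations
 * that need 'long long' could be hidden from strict-C90 builds. */
#ifndef __STRICT_ANSI__
typedef long long my_offset_t;   /* hypothetical 64-bit typedef */
#endif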

Best wishes,
Jonathan