[OMPI users] MPI_Unpublish_name and MPI_Close_port

2012-03-30 Thread Mateus Augusto
Hello,

Is there a correct order in which to call MPI_Unpublish_name and 
MPI_Close_port?

May we have

MPI_Unpublish_name
MPI_Close_port

or

MPI_Close_port
MPI_Unpublish_name

thank you
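For context, the teardown in question looks roughly like this (a sketch;
port_name and service_name come from earlier MPI_Open_port /
MPI_Publish_name calls):

   MPI_Unpublish_name(service_name, MPI_INFO_NULL, port_name);  /* this first? */
   MPI_Close_port(port_name);                                   /* or this first? */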

Re: [OMPI users] redirecting output

2012-03-30 Thread Gus Correa

Have you tried the
--output-filename <filename>
switch to mpirun?
"man mpirun" may help.

If you are running under a resource manager,
such as Torque, the stdout may be retained on
the execution node until the war is over ...
well ... until the job finishes.

Gus Correa
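For example (a sketch; with Open MPI 1.4/1.5 one file per rank is written,
with the rank appended to the name you pass - exact naming varies across
versions):

   mpirun --output-filename joblog -np 4 -machinefile machines.arch Pcrystal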








Re: [OMPI users] redirecting output

2012-03-30 Thread Ralph Castain
Have you looked at "mpirun -h"? There are several options available for 
redirecting output, including redirecting it to files by rank so it is 
separated by application process.

In general, mpirun will send the output to stdout or stderr, based on what your 
process does. The provided options just let you tag it, or separate it by rank 
for convenience.


On Mar 30, 2012, at 9:22 AM, tyler.bal...@huskers.unl.edu wrote:

> I am using openmpi-1.4.5 and I just tried |tee ~/outputfile.txt and it 
> generated the file named outputfile.txt but again it was empty



Re: [OMPI users] redirecting output

2012-03-30 Thread tyler.bal...@huskers.unl.edu
I am using openmpi-1.4.5 and I just tried |tee ~/outputfile.txt and it 
generated the file named outputfile.txt but again it was empty

From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Marc 
Cozzi [co...@nd.edu]
Sent: Friday, March 30, 2012 9:56 AM
To: Open MPI Users
Subject: Re: [OMPI users] redirecting output

Does Pcrystal | tee ./outputfile.txt work?


  --marc


Re: [OMPI users] redirecting output

2012-03-30 Thread Tim Prince

 On 03/30/2012 10:41 AM, tyler.bal...@huskers.unl.edu wrote:



I am using the command mpirun -np nprocs -machinefile machines.arch 
Pcrystal, and the output just scrolls across my terminal. I would like to 
send this output to a file and I cannot figure out how to do so... I have 
tried the usual > FILENAME and > log &; these generate files, 
however they are empty... any help would be appreciated.


If you run under screen, your terminal output should be collected in 
screenlog.  Beats me why some sysadmins don't see fit to install screen.


--
Tim Prince
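A sketch of that approach (screen's -L flag enables logging; output
accumulates in ./screenlog.0):

   screen -L
   mpirun -np nprocs -machinefile machines.arch Pcrystal
   # detach with Ctrl-A d; read screenlog.0 afterwards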



Re: [OMPI users] redirecting output

2012-03-30 Thread Marc Cozzi
Does Pcrystal | tee ./outputfile.txt work?


  --marc


Re: [OMPI users] redirecting output

2012-03-30 Thread François Tessier

Hello!

Did you also try redirecting the error output? Maybe your application 
writes its output on stderr.


François
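For example, something like this should capture both streams (a sketch
reusing the command from the original post):

   mpirun -np nprocs -machinefile machines.arch Pcrystal > output.log 2>&1
   # or watch the run and log it at the same time:
   mpirun -np nprocs -machinefile machines.arch Pcrystal 2>&1 | tee output.log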



[OMPI users] redirecting output

2012-03-30 Thread tyler.bal...@huskers.unl.edu
Hello all,

I am using the command mpirun -np nprocs -machinefile machines.arch Pcrystal, 
and the output just scrolls across my terminal. I would like to send this 
output to a file and I cannot figure out how to do so... I have tried the 
usual > FILENAME and > log &; these generate files, however they are 
empty... any help would be appreciated.

Thank you for reading

Tyler


Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Trent
Try "yum search openmpi" instead.



Or as someone else suggested you download, compile, and install the source
and you could have already been on your way to using OpenMPI in a few
moments.
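A minimal sketch of the source route (version number and install prefix are
only examples):

   tar xjf openmpi-1.4.5.tar.bz2
   cd openmpi-1.4.5
   ./configure --prefix=/usr/local
   make all && sudo make install
   /usr/local/bin/mpicc hello.c -o hello   # verify the wrapper works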










Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Constantinos Makassikis
On Fri, Mar 30, 2012 at 2:39 PM, Rohan Deshpande  wrote:

> Hi,
>
> I do not know how to use ortecc.
>
The same way as mpicc. Actually on my machine they both are symlinks to
"opal_wrapper".
Your second screenshot suggests orte* commands have been installed.


> [...]
> What could be the possible solution?
>

1) If ortecc is indeed present, you can test it.
If it works, you may create some symlinks of your own (see the sketch after
this list):
  ln -s /path1/ortecc /path2/mpicc
  ln -s /path1/orterun /path2/mpirun
where path2 is in your PATH.
Maybe the fastest ... but not the cleanest :-)

2) Fix the Red Hat package ...
May take some time ...

3) As Amit suggested earlier, you can also download Open MPI's source,
then compile and install it!
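Concretely, option 1 might look like this (a sketch assuming the orte*
wrappers really are in /usr/lib/openmpi/bin, which is already in your PATH;
needs root):

   sudo ln -s /usr/lib/openmpi/bin/ortecc  /usr/lib/openmpi/bin/mpicc
   sudo ln -s /usr/lib/openmpi/bin/orterun /usr/lib/openmpi/bin/mpirun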

--
Constantinos






Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Rohan Deshpande
Hi,

I do not know how to use ortecc.

After looking at the details I found that yum install did not install the
openmpi-devel package.

yum cannot find it either - yum search openmpi-devel says no match found.

I am using Red Hat 6.2 and i686 processors.

which mpicc shows -

which: no mpicc in
(/usr/lib/qt-3.3/bin:/usr/local/ns-allinone/bin:/usr/local/ns-allinone/tcl8.4.18/unix:/usr/local/ns-allinone/tk8.4.18/unix:/usr/local/cuda/cuda/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib/openmpi/bin)

rpmquery -l openmpi-devel   says package not installed

What could be the possible solution?


On Fri, Mar 30, 2012 at 2:05 AM, Amit Ghadge  wrote:

> You can try the source package. Extract it and run ./configure
> --prefix=/usr/local, make all, make install; after that you can compile any
> MPI program using mpicc.
> On 29-Mar-2012 7:26 PM, "Jeffrey Squyres"  wrote:
>
>> I don't know exactly how Fedora packages Open MPI, but I've seen some
>> distributions separate Open MPI into a base package and a "devel" package.
>>  And mpicc (and some friends) are split off into that "devel" package.
>>
>> The rationale is that you don't need mpicc (and friends) to *run* Open
>> MPI applications -- you only need mpicc (etc.) to *develop* Open MPI
>> applications.
>>
>> Poke around and see if you can find a devel-like Open MPI package in
>> Fedora.
>>
>>
>> On Mar 29, 2012, at 7:45 AM, Rohan Deshpande wrote:
>>
>> > Hi,
>> >
>> > I have installed mpi successfully on fedora using yum install openmpi
>> openmpi-devel openmpi-libs
>> >
>> > I have also added /usr/lib/openmpi/bin to PATH and LD_LIBRARY_PATH
>> variable.
>> >
>> > But when I try to complie my program using mpicc hello.c or
>> /usr/lib/openmpi/bin/mpicc hello.c I get error saying mpicc: command not
>> found
>> >
>> > I checked the contents of /usr/lib/openmpi/bin and there is no
>> > mpicc... here is the screenshot (image not preserved in the archive)
>> >
>> > The add/remove programs show the installation details (likewise a
>> > screenshot, not preserved)
>> >
>> > I have tried re installing but same thing happened.
>> >
>> > Can someone help me to solve this issue?
>> >
>> > Thanks
>> > --
>> >
>> > Best Regards,
>> >
>> > ROHAN



-- 

Best Regards,

ROHAN DESHPANDE


Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ralph Castain
FWIW: 1.5.5 still doesn't support binding to NUMA regions, for example - and 
the script doesn't really do anything more than bind to cores. I believe only 
the trunk provides a more comprehensive set of binding options.

Given the described NUMA layout, I suspect bind-to-NUMA is going to make the 
biggest difference.
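For reference, the core/socket binding that 1.5.5 does provide looks roughly
like this (--report-bindings prints the resulting map so you can verify it):

   mpirun -np 32 --bind-to-core --report-bindings ./your_app
   mpirun -np 32 --bysocket --bind-to-socket --report-bindings ./your_app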




Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Pavel Mezentsev
You can try running using this script:

#!/bin/bash
# each process reads its rank on the node from Open MPI's environment...
s=$(($OMPI_COMM_WORLD_NODE_RANK))
# ...and pins itself to that physical CPU, allocating memory locally
numactl --physcpubind=$((s)) --localalloc ./YOUR_PROG

instead of 'mpirun ... ./YOUR_PROG' run 'mpirun ... ./SCRIPT'.

I tried this with openmpi-1.5.4 and it helped.

Best regards, Pavel Mezentsev

P.S. openmpi-1.5.5 binds processes correctly, so you can try it as well.
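A usage sketch, assuming the script above is saved as bind.sh next to the
binary:

   chmod +x ./bind.sh
   mpirun -np 32 ./bind.sh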



Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ralph Castain
I think you'd have much better luck using the developer's trunk as the binding 
there is much better - e.g., you can bind to NUMA instead of just cores. The 
1.4 binding is pretty limited.

http://www.open-mpi.org/nightly/trunk/





[OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ricardo Fonseca
Hi guys

I'm benchmarking our (well tested) parallel code on an AMD-based system, 
featuring 2x AMD Opteron(TM) Processor 6276 with 16 cores each, for a total of 
32 cores. The system is running Scientific Linux 6.1 and OpenMPI 1.4.5.

When I run a single core job the performance is as expected. However, when I 
run with 32 processes the performance drops to about 60% (when compared with 
other systems running the exact same problem, so this is not a code scaling 
issue). I think this may have to do with core binding / NUMA, but I haven't 
been able to get any improvement out of the bind-* mpirun options.

Any suggestions?

Thanks in advance,
Ricardo

P.S: Here's the output of lscpu

Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):32
On-line CPU(s) list:   0-31
Thread(s) per core:2
Core(s) per socket:8
CPU socket(s): 2
NUMA node(s):  4
Vendor ID: AuthenticAMD
CPU family:21
Model: 1
Stepping:  2
CPU MHz:   2300.045
BogoMIPS:  4599.38
Virtualization:AMD-V
L1d cache: 16K
L1i cache: 64K
L2 cache:  2048K
L3 cache:  6144K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 16,18,20,22,24,26,28,30
NUMA node2 CPU(s): 1,3,5,7,9,11,13,15
NUMA node3 CPU(s): 17,19,21,23,25,27,29,31

---
Ricardo Fonseca

Associate Professor
GoLP - Grupo de Lasers e Plasmas
Instituto de Plasmas e Fusão Nuclear
Instituto Superior Técnico
Av. Rovisco Pais
1049-001 Lisboa
Portugal

tel: +351 21 8419202
fax: +351 21 8464455
web: http://golp.ist.utl.pt/




[OMPI users] Communication/Computation Overlap with Infiniband

2012-03-30 Thread Steffen Christgau
Hi everybody,

in our group, we are currently working with a 2D CFD application that is
based on the simple von Neumann neighborhood. The 2D data grid is
partitioned into horizontal stripes such that each process computes one
such stripe. After each iteration, a process exchanges the upper and
lower boundary with its neighbor processes.

The application is optimized to calculate the boundaries first, exchange
them with the neighbors, and then compute the inner parts of the block.
We use one-sided communication to transfer the boundary data. In
pseudo code:

for each time step:
   compute boundary
   (A) use MPI_Win_post, MPI_Win_start, MPI_Put to transfer/receive
 boundary data to/from the neighbor processes
   (B) compute inner parts
   (C) call MPI_Win_complete and MPI_Win_wait to finish the access/exposure
 epochs
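In actual MPI calls the loop is roughly the following (a sketch; the window
win, the neighbor group nbrs, the target ranks up/down and the compute_*
routines are all assumed):

   for (int step = 0; step < nsteps; step++) {
       compute_boundary();
       /* (A) open exposure and access epochs, start the transfers */
       MPI_Win_post(nbrs, 0, win);
       MPI_Win_start(nbrs, 0, win);
       MPI_Put(top_row, n, MPI_DOUBLE, up,   0, n, MPI_DOUBLE, win);
       MPI_Put(bot_row, n, MPI_DOUBLE, down, 0, n, MPI_DOUBLE, win);
       /* (B) compute inner parts; overlap happens here only if the
          implementation progresses RMA asynchronously */
       compute_inner();
       /* (C) close the access epoch, then wait out the exposure epoch */
       MPI_Win_complete(win);
       MPI_Win_wait(win);
   }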

We found out that the default behavior of MPICH2's CH3 channel implementation
is to queue RMA operations until the closing synchronization call
(wait/complete in our case), so there is no opportunity to overlap
communication (A) with computation (B).

Now my beginner's question is: how can we achieve (if possible) an
overlap of communication (A) and computation (B) with OpenMPI? Do we need
to tune any btl or osc parameters of OpenMPI? Or is this overlap
possible by design/implementation, so we really don't have to care?

We use OpenMPI 1.4.3, Mellanox MHGH19-XTC ConnectX cards and a Mellanox
MTS3600R-1UNC switch. IPoIB is not activated. The output of "ompi_info
--param all all" is attached.

Thanks for replies!


Steffen


openmpi_info.tar.gz
Description: GNU Zip compressed data