[OMPI users] MPI_Unpublish_name and MPI_Close_port

2012-03-30 Thread Mateus Augusto
Hello, is there a correct order in which to call the functions MPI_Unpublish_name and MPI_Close_port? May we call MPI_Unpublish_name then MPI_Close_port, or MPI_Close_port then MPI_Unpublish_name? Thank you.
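A minimal teardown sketch in C, not taken from the thread: it assumes the port came from MPI_Open_port and was published under the hypothetical service name "my_service". Unpublishing before closing is the natural order, since after MPI_Unpublish_name no client can look up a name that would resolve to an already-closed port.

    #include <mpi.h>

    /* Hedged sketch: unpublish the name first, then release the port.
     * "my_service" and shutdown_service() are illustrative names only. */
    void shutdown_service(char *port_name)
    {
        /* Remove the (service name, port) pair from the name service. */
        MPI_Unpublish_name("my_service", MPI_INFO_NULL, port_name);
        /* Release the port obtained from MPI_Open_port. */
        MPI_Close_port(port_name);
    }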

Re: [OMPI users] redirecting output

2012-03-30 Thread Gus Correa
Have you tried the --output-filename switch to mpirun? man mpirun may help. If you are running under a resource manager, such as Torque, the stdout may be retained on the execution node until the war is over ... well ... until the job finishes. Gus Correa On 03/30/2012 11:44 AM, Ralph Castain
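A minimal sketch of the switch Gus mentions (the per-rank file naming varies across Open MPI versions, so check the man page of your release; the program name here is a placeholder):

    mpirun -np 4 --output-filename joblog ./a.out

Each rank's output then lands in its own joblog.* file rather than on the terminal.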

Re: [OMPI users] redirecting output

2012-03-30 Thread Ralph Castain
Have you looked at "mpirun -h"? There are several options available for redirecting output, including redirecting it to files by rank so it is separated by application process. In general, mpirun will send the output to stdout or stderr, based on what your process does. The provided options

Re: [OMPI users] redirecting output

2012-03-30 Thread tyler.bal...@huskers.unl.edu
I am using openmpi-1.4.5 and I just tried | tee ~/outputfile.txt and it generated the file named outputfile.txt, but again it was empty. From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Marc Cozzi [co...@nd.edu] Sent: Friday, March 30, 2012

Re: [OMPI users] redirecting output

2012-03-30 Thread Tim Prince
On 03/30/2012 10:41 AM, tyler.bal...@huskers.unl.edu wrote: I am using the command mpirun -np nprocs -machinefile machines.arch Pcrystal and my output scrolls across my terminal. I would like to send this output to a file and I cannot figure out how to do so. I have tried the general >

Re: [OMPI users] redirecting output

2012-03-30 Thread Marc Cozzi
Does Pcrystal | tee ./outputfile.txt work? --marc From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of François Tessier Sent: Friday, March 30, 2012 10:56 AM To: Open MPI Users Subject: Re: [OMPI users] redirecting output Hello! Did you try to redirect also

Re: [OMPI users] redirecting output

2012-03-30 Thread François Tessier
Hello! Did you also try redirecting the error output? Maybe your application writes its output to stderr. François On 30/03/2012 16:41, tyler.bal...@huskers.unl.edu wrote: Hello all, I am using the command mpirun -np nprocs -machinefile machines.arch Pcrystal and my output scrolls across my
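Putting François's stderr point together with the tee suggestion above, a hedged illustration against the original command (the redirection has to wrap the whole mpirun invocation, since mpirun forwards each rank's stdout and stderr to its own):

    mpirun -np nprocs -machinefile machines.arch Pcrystal > output.log 2>&1
    mpirun -np nprocs -machinefile machines.arch Pcrystal 2>&1 | tee output.log

The 2>&1 is what captures stderr, which may be where Pcrystal is writing.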

[OMPI users] redirecting output

2012-03-30 Thread tyler.bal...@huskers.unl.edu
Hello all, I am using the command mpirun -np nprocs -machinefile machines.arch Pcrystal and my output scrolls across my terminal. I would like to send this output to a file and I cannot figure out how to do so. I have tried the general > FILENAME and > log & these generate files

Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Trent
Try "yum search openmpi" instead. Or as someone else suggested you download, compile, and install the source and you could have already been on your way to using OpenMPI in a few moments. From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Rohan Deshpande Sent:

Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Constantinos Makassikis
On Fri, Mar 30, 2012 at 2:39 PM, Rohan Deshpande wrote: > Hi, > > I do not know how to use *ortecc*. The same way as mpicc: actually, on my machine both are symlinks to "opal_wrapper". Your second screenshot suggests the orte* commands have been installed. > After looking

Re: [OMPI users] mpicc command not found - Fedora

2012-03-30 Thread Rohan Deshpande
Hi, I do not know how to use *ortecc*. After looking at the details I found that yum install did not install the *openmpi-devel* package. yum cannot find it either - *yum search openmpi-devel* says no match found. I am using Red Hat 6.2 and i686 processors. *which mpicc* shows - *which: no
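A hedged sketch of the sequence being suggested in this thread, using the usual Red Hat/Fedora package and module names (yours may differ, e.g. openmpi-i386 on an i686 install):

    yum search openmpi                 # list the openmpi packages your repositories actually carry
    yum install openmpi openmpi-devel  # the -devel package provides the mpicc wrapper
    module load openmpi-x86_64         # RHEL-family builds put mpicc on the PATH via an environment module
    which mpicc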

Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ralph Castain
FWIW: 1.5.5 still doesn't support binding to NUMA regions, for example - and the script doesn't really do anything more than bind to cores. I believe only the trunk provides a more comprehensive set of binding options. Given the described NUMA layout, I suspect bind-to-NUMA is going to make the

Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Pavel Mezentsev
You can try running using this script:

    #!/bin/bash
    s=$(($OMPI_COMM_WORLD_NODE_RANK))
    numactl --physcpubind=$((s)) --localalloc ./YOUR_PROG

Instead of 'mpirun ... ./YOUR_PROG', run 'mpirun ... ./SCRIPT'. I tried this with openmpi-1.5.4 and it helped. Best regards, Pavel Mezentsev P.S

Re: [OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ralph Castain
I think you'd have much better luck using the developer's trunk, as the binding support there is far more capable - e.g., you can bind to NUMA instead of just cores. The 1.4 binding is pretty limited. http://www.open-mpi.org/nightly/trunk/ On Mar 30, 2012, at 5:02 AM, Ricardo Fonseca wrote: > Hi guys > >
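Limited as it is, the 1.4/1.5 mpirun does offer core- and socket-level binding; a hedged example (flag spellings as in those releases' man pages; --report-bindings prints the resulting map so you can verify the layout):

    mpirun -np 32 --bysocket --bind-to-socket --report-bindings ./YOUR_PROG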

[OMPI users] Help with multicore AMD machine performance

2012-03-30 Thread Ricardo Fonseca
Hi guys I'm benchmarking our (well tested) parallel code on an AMD-based system, featuring 2x AMD Opteron(TM) Processor 6276, with 16 cores each for a total of 32 cores. The system is running Scientific Linux 6.1 and OpenMPI 1.4.5. When I run a single core job the performance is as expected.

[OMPI users] Communication/Computation Overlap with Infiniband

2012-03-30 Thread Steffen Christgau
Hi everybody, in our group we are currently working with a 2D CFD application based on the simple von Neumann neighborhood. The 2D data grid is partitioned into horizontal stripes such that each process computes one such stripe. After each iteration, a process exchanges the upper and
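The classic shape of such an overlap attempt, as a hedged C sketch rather than Steffen's actual code: post nonblocking halo transfers, update the interior rows that need no remote data, then wait and finish the boundary rows. Whether the transfer really progresses during the interior update depends on the MPI library and interconnect, which is presumably the point of the question. compute_interior and compute_boundary are hypothetical helpers.

    #include <mpi.h>

    void compute_interior(double *g, int nx, int ny);  /* hypothetical: rows 2 .. ny-3 */
    void compute_boundary(double *g, int nx, int ny);  /* hypothetical: rows 1 and ny-2 */

    /* One iteration of a striped halo exchange with attempted overlap.
     * grid is ny rows of nx doubles; rows 0 and ny-1 are ghost rows;
     * up/down are neighbor ranks (MPI_PROC_NULL at the domain edges). */
    void iterate(double *grid, int nx, int ny, int up, int down, MPI_Comm comm)
    {
        MPI_Request req[4];
        /* Receive the neighbors' outermost computed rows into our ghost rows. */
        MPI_Irecv(&grid[0],             nx, MPI_DOUBLE, up,   0, comm, &req[0]);
        MPI_Irecv(&grid[(ny - 1) * nx], nx, MPI_DOUBLE, down, 1, comm, &req[1]);
        /* Send our outermost computed rows (1 and ny-2) to the neighbors. */
        MPI_Isend(&grid[nx],            nx, MPI_DOUBLE, up,   1, comm, &req[2]);
        MPI_Isend(&grid[(ny - 2) * nx], nx, MPI_DOUBLE, down, 0, comm, &req[3]);

        compute_interior(grid, nx, ny);   /* needs no halo data */

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        compute_boundary(grid, nx, ny);   /* halo rows now valid */
    }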