Hello,
For several years I have successfully used MPIIO in a Fortran global
atmospheric ensemble data assimilation system. However, I always
wondered if I was fully exploiting the power of MPIIO, specifically by
using derived data types to better describe memory and file data
layouts. All of my
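As a rough sketch of the derived-datatype approach the question above is asking about (the array sizes, the 1-D decomposition, and the file name below are illustrative assumptions, not taken from the original post), each rank can describe its slice of a global 2-D field with MPI_TYPE_CREATE_SUBARRAY and write it collectively through MPI_FILE_SET_VIEW:

  ! Minimal sketch (hypothetical sizes/names): each rank writes its block of a
  ! 2-D global array, with a subarray datatype describing the file layout.
  program mpiio_subarray
    use mpi
    implicit none
    integer, parameter :: ng = 8            ! global extent per dimension (assumed)
    integer :: ierr, rank, nprocs, fh, ftype
    integer :: gsizes(2), lsizes(2), starts(2)
    integer(kind=MPI_OFFSET_KIND) :: disp
    double precision, allocatable :: local(:,:)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

    ! 1-D decomposition along the second dimension (assumes ng divisible by nprocs)
    gsizes = (/ ng, ng /)
    lsizes = (/ ng, ng / nprocs /)
    starts = (/ 0, rank * (ng / nprocs) /)   ! starts are 0-based
    allocate(local(lsizes(1), lsizes(2)))
    local = dble(rank)

    ! Derived datatype describing this rank's block inside the global file layout
    call MPI_TYPE_CREATE_SUBARRAY(2, gsizes, lsizes, starts, &
         MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, ftype, ierr)
    call MPI_TYPE_COMMIT(ftype, ierr)

    call MPI_FILE_OPEN(MPI_COMM_WORLD, 'field.dat', &
         MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
    disp = 0
    call MPI_FILE_SET_VIEW(fh, disp, MPI_DOUBLE_PRECISION, ftype, &
         'native', MPI_INFO_NULL, ierr)
    call MPI_FILE_WRITE_ALL(fh, local, size(local), MPI_DOUBLE_PRECISION, &
         MPI_STATUS_IGNORE, ierr)
    call MPI_FILE_CLOSE(fh, ierr)

    call MPI_TYPE_FREE(ftype, ierr)
    call MPI_FINALIZE(ierr)
  end program mpiio_subarray

With the file view in place, MPI_FILE_WRITE_ALL lets the MPI-IO layer aggregate the per-rank blocks into larger contiguous file accesses, which is usually the main benefit over writing each block at an explicit offset.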
Yeah, we aren't connecting back - is there a firewall running? You need to
leave the "--debug-daemons --mca plm_base_verbose 5" on there as well to see
the entire problem.
What you can see here is that mpirun is listening on several interfaces:
> [access1:24264] [[55095,0],0] oob:tcp:init addin
Forwarded message
From: Timur Ismagilov
To: Ralph Castain
Date: Sun, 20 Jul 2014 21:58:41 +0400
Subject: Re[2]: [OMPI users] Fwd: Re[4]: Salloc and mpirun problem
Here it is:
$ salloc -N2 --exclusive -p test -J ompi
salloc: Granted job allocation 647049
$ mpirun -mc
I found no option in 1.6.5 and 1.8.1...
On 7/20/2014 6:29 PM, Ralph Castain wrote:
What version of OMPI are you talking about?
On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel wrote:
Hello everyone,
I am trying to get the maximum performance out of my two node testing setup.
Each node consi
I'm unaware of any CentOS-OMPI bug, and I've been using CentOS throughout the
6.x series running OMPI 1.6.x and above.
I can't speak to the older versions of CentOS and/or the older versions of OMPI.
On Jul 19, 2014, at 8:14 PM, Lane, William wrote:
> Yes there is a second HPC Sun Grid Engine
What version of OMPI are you talking about?
On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel wrote:
> Hello everyone,
>
> I am trying to get the maximum performance out of my two node testing setup.
> Each node consists of 4 Sandy Bridge CPUs and each CPU has one directly
> attached Mellanox QDR
Try adding -mca oob_base_verbose 10 -mca rml_base_verbose 10 to your cmd line.
It looks to me like we are unable to connect back to the node where you are
running mpirun for some reason.
On Jul 20, 2014, at 9:16 AM, Timur Ismagilov wrote:
> I have the same problem in openmpi 1.8.1(Apr 23, 201
I have the same problem in openmpi 1.8.1 (Apr 23, 2014).
Does the srun command have a --map-by parameter like mpirun, or can I change
it from the bash environment?
Forwarded message
From: Timur Ismagilov
To: Mike Dubman
Cc: Open MPI Users
Date: Thu, 17 Jul 2014 16:42:24
Hello everyone,
I am trying to get the maximum performance out of my two node testing
setup. Each node consists of 4 Sandy Bridge CPUs and each CPU has one
directly attached Mellanox QDR card. Both nodes are connected via an
8-port Mellanox switch.
So far I found no option that allows binding m
On Jul 20, 2014, at 7:11 AM, Diego Avesani wrote:
> Dear all,
> I have a question about mpi_finalize.
>
> After mpi_finalize the program returns to a single core, have I understood
> correctly?
No - we don't kill any processes. We just tear down the MPI system. All your
processes continue to execu
Dear all,
I have a question about mpi_finalize.
After mpi_finalize the program returns to a single core, have I understood
correctly?
In this case I do not understand something:
In my program I have something like:
call MPI_FINALIZE(rc)
write(*,*) "hello world"
However, my program writes it many times.
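A minimal sketch of the behaviour described in the reply above (not Diego's actual program): MPI_FINALIZE tears down MPI but does not terminate the ranks, so a write placed after it is executed by every process and the line appears once per rank:

  ! Minimal sketch: MPI_FINALIZE does not kill the ranks, so the final write
  ! runs in every process and appears once per rank.
  program finalize_demo
    use mpi
    implicit none
    integer :: ierr, rank

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_FINALIZE(ierr)

    ! Still executing on every rank after finalize
    write(*,*) 'hello world (rank was ', rank, ')'
  end program finalize_demo

Launched with, e.g., mpirun -np 4, the final line is printed four times, once by each process.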