I tried with 1.10.0, and it is still failing. I will need to check whether it
works with later releases.
Thanks
Udayanga
On Wed, Dec 16, 2015 at 5:24 PM, Nathan Hjelm wrote:
>
> I think this is fixed in the 1.10 series. We will not be making any more
> updates to the 1.8 series
The v1.10 series was fixed starting with 1.10.1.
Cheers,
Gilles
$ git log --grep=57d3b832972a9d914a7c2067a526dfa3df1dbb34
commit e1ceb4e5f9dadb44edb77662a13058c9b3746505
Author: Nathan Hjelm
Date: Fri Oct 2 10:35:21 2015 -0600
op: allow user
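For anyone wanting to check this themselves, a generic way to find out which release already carries a fix is to ask git which tags contain the commit. The hashes below are the ones from the log above; the tag name is an assumption for illustration:

```shell
# From a clone of the ompi repository: list the release tags whose
# history contains the fix commit shown in the log above.
git tag --contains e1ceb4e5f9dadb44edb77662a13058c9b3746505

# Or search a specific release's history for the cherry-picked hash
# referenced in the commit message.
git log v1.10.1 --oneline --grep=57d3b832972a9d914a7c2067a526dfa3df1dbb34
```

If the first command prints v1.10.1 (or later), the fix is in that release.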
Hi Gilles,
On 2015-10-21 20:31, Gilles Gouaillardet wrote:
#3 is a difficult question ...
First, keep in mind there is currently no progress thread in Open MPI.
That means messages can be received only when MPI_Wait* or MPI_Test* is
invoked. You might hope messages are received when doing some
On Dec 17, 2015, at 8:57 AM, Eric Chamberland
wrote:
>
> But I would like to know if the MPI I am using is able to do message
> progression or not: so how can an end-user like me know that? Does it
> rely on hardware? Is there a #define by Open MPI that
On 2015-12-17 12:45, Jeff Squyres (jsquyres) wrote:
On Dec 17, 2015, at 8:57 AM, Eric Chamberland
wrote:
But I would like to know if the MPI I am using is able to do message
progression or not: so how can an end-user like me know that? Does it rely
On Dec 17, 2015, at 1:39 PM, Eric Chamberland
wrote:
>
> Just to be clear: we *always* call MPI_Wait. Now the question was about
> *when* to do it.
Ok. Remember that the various flavors of MPI_Test are acceptable, too. And
it's ok to call
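Jeff's point can be sketched in C. This is a minimal illustration (not code from the thread), assuming a nonblocking receive that the application polls between compute chunks; `compute_chunk()` and the buffer size are hypothetical placeholders:

```c
/* Sketch: without a progress thread, messages advance only inside MPI
 * calls, so interleave computation with MPI_Test to keep the pending
 * receive moving instead of waiting until the very end. */
#include <mpi.h>

static void compute_chunk(void) { /* application work goes here */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[1024];
    MPI_Request req = MPI_REQUEST_NULL;

    if (rank == 0) {
        /* Post a nonblocking receive, then poll it while computing.
         * Each MPI_Test call gives the library a chance to progress
         * the pending message. */
        MPI_Irecv(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        int done = 0;
        while (!done) {
            compute_chunk();
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }
    } else if (rank == 1) {
        MPI_Send(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

A final MPI_Wait (or a terminating MPI_Test loop as above) is still required; the polling only affects *when* progression happens, not whether completion must eventually be checked.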
Hi,
I have a system of AMD blades on which I am trying to run MCNP6 using
Open MPI. I installed openmpi-1.6.5. I also have the Intel Fortran and C
compilers installed. I compiled MCNP6 using FC="mpif90" CC="mpicc" ... It runs
just fine when I run a 1-hour test case on just one blade. I need
You might want to check the permissions on the MLX device directory - according
to that error message, the permissions are preventing you from accessing the
device. Without getting access, we don’t have a way to communicate across nodes
- you can run on one node using shared memory, but not
The "mpirun --hetero-nodes -bind-to core -map-by core" resolves the performance
issue!
I reran my test in the *same* job.
SLURM resource request:
#!/bin/sh
#SBATCH -N 4
#SBATCH -n 64
#SBATCH --mem=2g
#SBATCH --time=02:00:00
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
env | grep
Glad you resolved it! The following MCA param has changed its name:
> rmaps_base_bycore=1
should now be
rmaps_base_mapping_policy=core
HTH
Ralph
> On Dec 17, 2015, at 5:01 PM, Jingchao Zhang wrote:
>
> The "mpirun --hetero-nodes -bind-to core -map-by core" resolves the
>
I'm no expert, but this one is pretty obvious. The error message says exactly
what you should change:
Equivalent MCA parameter:
Deprecated: rmaps_base_bycore
Replacement: rmaps_base_mapping_policy=core
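For reference, the replacement parameter can also be passed directly on the mpirun command line; the executable name and process count below are illustrative placeholders, not values from the thread:

```shell
# Deprecated spelling (pre-rename):
#   mpirun --mca rmaps_base_bycore 1 ...
# Current spelling; ./my_app and -np 64 are placeholders:
mpirun --mca rmaps_base_mapping_policy core -np 64 ./my_app
```

The same value can instead be set in the environment (OMPI_MCA_rmaps_base_mapping_policy=core) or in an MCA parameters file.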
Thank you all. That's my oversight. I got a similar error with
"hwloc_base_binding_policy=core" so I thought it was the same.
Cheers,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400