Hi,
MPI_Allreduce works for me with MPI_INTEGER8 for all OpenMPI
versions up to 1.4.3. However, with OpenMPI 1.5.1 I get a
failure at runtime:
[proton:23642] *** An error occurred in MPI_Allreduce: the reduction
operation MPI_SUM is not defined on the MPI_INTEGER8 datatype
[proton:23642] *** on
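(For reference, a minimal C sketch of the failing call. The original code isn't shown in the thread, so this reproducer is an assumption; MPI_INTEGER8 is the Fortran 8-byte integer handle, used from C here only to exercise the same MPI_SUM reduction path.)

  /* Assumed reproducer: sums one 8-byte integer across all ranks.
   * On Open MPI <= 1.4.3 this reportedly works; on 1.5.1 it aborts
   * with "MPI_SUM is not defined on the MPI_INTEGER8 datatype". */
  #include <mpi.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int64_t local = 1, global = 0;

      MPI_Init(&argc, &argv);
      MPI_Allreduce(&local, &global, 1, MPI_INTEGER8, MPI_SUM,
                    MPI_COMM_WORLD);
      printf("sum = %lld\n", (long long)global);
      MPI_Finalize();
      return 0;
  }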
Is that what it reports on the remote node?
I am guessing you are just using ssh to launch remotely -- try this:
ssh othernode env | grep PATH
Ensure that the answer you get back is what you expect. Sometimes shell startup
files do different things if they're invoked interactively vs.
non-interactively.
The exact contents of the environment variables as reported by 'env' are:
PATH=/usr/lib/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/lib/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib/openmpi/bin:/home/mpiuser/bin
LD_LIBRARY_PATH=/usr/lib/openmpi/lib
Am I missing something?
I am seeing similar issues on our slurm clusters. We are looking into the issue.
-Nathan
HPC-3, LANL
On Tue, 11 Jan 2011, Michael Di Domenico wrote:
Any ideas on what might be causing this one? Or at least what
additional debug information someone might need?
On Fri, Jan 7, 2011 at 4:03 PM, M
Hello,
Please don't worry about this for now; the problem stems from
iptables rules. But I still think putting usb0 into the reject
list should disable the IP address associated with it.
Regards,
> "Avinash" == Avinash Malik writes:
Avinash> Hello,
Avinash
Hello,
Forgot to mention that I am running Open MPI v1.5.1.
Regards,
-----Original Message-----
From: Avinash Malik
Sender: users-boun...@open-mpi.org
Date: Mon, 24 Jan 2011 15:22:39
To: Open MPI Users
Hello,
I have two machines, each having 3 live interfaces: lo, eth0
(intranet) and usb0 (internet). eth0 cannot access usb0 on the
other machine (and vice versa). Now, when I try to run the MPI
program with these two hosts I cannot get any output, even --mca
Hello,
I am working on a distributed array data structure built on top of the one-sided
communication primitives MPI_Get and MPI_Put in MPI-2. I use passive target
synchronization (MPI_Win_{lock,unlock}) in order to access remote entries in the array
without requiring the involvement of the target process.
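(A minimal sketch of that access pattern, assuming a window created over each
rank's local slice; the poster's actual data structure isn't shown, and the
names local_slice, n_local, etc. are made up here.)

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      const int n_local = 1024;        /* entries owned by this rank */
      long *local_slice;
      long value;
      MPI_Win win;

      MPI_Init(&argc, &argv);
      local_slice = calloc(n_local, sizeof(long));

      /* Expose this rank's slice; disp_unit = sizeof(long) lets
       * target displacements be plain array indices. */
      MPI_Win_create(local_slice, n_local * sizeof(long), sizeof(long),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);

      /* Passive target synchronization: only the origin calls
       * lock/unlock; the target process is not involved. */
      MPI_Win_lock(MPI_LOCK_SHARED, /* target rank */ 0, 0, win);
      MPI_Get(&value, 1, MPI_LONG, 0, /* index */ 42, 1, MPI_LONG, win);
      MPI_Win_unlock(0, win);          /* unlock completes the MPI_Get */

      MPI_Win_free(&win);
      free(local_slice);
      MPI_Finalize();
      return 0;
  }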
On 24.01.2011, at 11:47, Kedar Soparkar wrote:
> I'm trying to setup a small cluster of 2 nodes.
>
> Both nodes are running Fedora 11 Kernel 2.6.29.4, have the same user
> mpiuser with the same password. Both of them have their env vars set
> as follows in /etc/profile itself:
This is syntax for
hi folks,
Sorry to insist; I'll be glad if someone could point out the relevant
piece of info I missed.
This huge memory consumption cannot be explained easily on the user
side, especially since it's only seen on the master process node...
thanks,
éloi
On 01/03/2011 09:57 AM, Eloi Gau
I'm trying to setup a small cluster of 2 nodes.
Both nodes are running Fedora 11 Kernel 2.6.29.4, have the same user
mpiuser with the same password. Both of them have their env vars set
as follows in /etc/profile itself:
PATH=/usr/lib/openmpi/bin
LD_LIBRARY_PATH=/usr/lib/openmpi/lib
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Gus Correa
Sent: Friday, January 21, 2011 20:37
To: Open MPI Users
Subject: Re: [OMPI users] Help with some fundamentals
Hi Olivier
I hope this helps,
Gus Correa
It sure does help