On 11/24/2012 4:02 PM, Ralph Castain wrote:
Try limiting the interfaces we use to see if that's really the problem. I forget if
cygwin has "ifconfig" or not, but use a tool to report the networks, and then
start excluding them by adding
-mca oob_tcp_if_exclude foo,bar
to your cmd line until you find the interface that is causing the problem.
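For example, if the tool reports interfaces named "virbr0" and "eth1" (those names, and "./my_app", are only placeholders for whatever your machine actually shows), you would try something like

  mpirun -np 2 -mca oob_tcp_if_exclude virbr0 ./my_app

and keep adding interfaces to the exclude list, one at a time, until the behavior changes.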
Sounds like self btl does not support CUDA IPC.
Pavel (Pasha) Shamis
---
Computer Science Research Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Dec 13, 2012, at 11:46 AM, Justin Luitjens wrote:
Thank you everyone for your responses. I believe I have narrowed down the
cause of this. But first I'll respond to some of the points raised.
The problem is not related to receiving more than sending. The MPI spec states
that the receive size can be larger than the send size, and you can use
MPI_Get_count on the returned status to find out how much was actually received.
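To make that concrete, here is a minimal two-rank sketch of the pattern being described (written for illustration only; the sizes, tag, and datatype are arbitrary choices, not taken from the original code):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      double sendbuf[1024], recvbuf[4096];  /* receive buffer deliberately larger */
      MPI_Status status;
      int rank, count, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          for (i = 0; i < 1024; ++i) sendbuf[i] = i;
          MPI_Send(sendbuf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* Posting a 4096-element receive for a 1024-element message is legal. */
          MPI_Recv(recvbuf, 4096, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
          MPI_Get_count(&status, MPI_DOUBLE, &count);
          printf("actually received %d doubles\n", count);  /* prints 1024 */
      }

      MPI_Finalize();
      return 0;
  }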
The name "Beowulf Cluster" typically refers to systems that used the bproc
environment, which kinda fell out of common use a few years ago.
I'm guessing that your problem is that you have some kind of firewalling
enabled on your nodes. Try disabling all firewalling (e.g., disable the
iptables service on every node) and see whether the problem goes away.
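(To be concrete: on the RHEL/CentOS-style systems common at the time, a temporary test would be to run "service iptables stop" as root on every node, re-run the MPI job, and then re-enable the firewall or open the required ports afterwards. The exact command depends on your distribution, so treat this as a sketch rather than a recipe.)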
I am using openmpi-1.6.3. What do you mean by "We stopped supporting bproc
after the 1.2 series, though you could always launch via ssh."?
Best Regards, Shi Wei.
From: r...@open-mpi.org
Date: Thu, 13 Dec 2012 06:37:56 -0800
To: us...@open-mpi.org
Subject: Re
What version of OMPI are you running? We stopped supporting bproc after the 1.2
series, though you could always launch via ssh.
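(Roughly, an ssh-based launch just needs passwordless ssh to each node plus a host list; the file name and host names below are placeholders:

  $ cat myhosts
  node01 slots=4
  node02 slots=4
  $ mpirun -np 8 --hostfile myhosts ./my_app

mpirun then starts the remote processes over ssh instead of relying on bproc.)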
On Dec 12, 2012, at 10:25 PM, Ng Shi Wei wrote:
> Dear all,
>
> I am new in Linux and clustering. I am setting up a Beowulf Cluster using
> several PCs according to this guide http://www.tldp.org/HOWTO/html_single/Beowulf-HOWTO/.
Hi Justin:
I assume you are running on a single node. In that case, Open MPI is supposed
to take advantage of the CUDA IPC support. This will be used only when
messages are larger than 4K, which yours are. Given that, I would have
expected that the library would exchange some messages and then switch over
to CUDA IPC for the actual transfers.
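For reference, the kind of single-node, device-to-device transfer under discussion boils down to something like the sketch below (purely illustrative, assuming a CUDA-aware Open MPI build and two ranks on the node; the 1 MB message size is an arbitrary value above the 4K threshold, not Justin's actual size):

  #include <mpi.h>
  #include <cuda_runtime.h>

  int main(int argc, char **argv)
  {
      const size_t nbytes = 1 << 20;   /* 1 MB, well above the 4K threshold */
      char *dbuf;
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      cudaMalloc((void **)&dbuf, nbytes);   /* device pointer handed straight to MPI */

      if (rank == 0)
          MPI_Send(dbuf, (int)nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
      else if (rank == 1)
          MPI_Recv(dbuf, (int)nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      cudaFree(dbuf);
      MPI_Finalize();
      return 0;
  }

With CUDA IPC working, the receiving process should be able to map the sender's device buffer directly rather than staging the data through host memory.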
On 13/12/2012 10:45, Shikha Maheshwari wrote:
> Hi,
>
> We are trying to build 'hwloc 1.4.2' on Linux on System Z. To build hwloc
Hello,
If you are really talking about hwloc, you should contact the Hardware locality
users mailing list instead (Open MPI and hwloc are different software packages,
even though hwloc is developed within the Open MPI community).
Hi,
We are trying to build 'hwloc 1.4.2' on Linux on System Z. To build hwloc, we
need to perform the following steps:
- ./configure
- gmake all install
We are getting an error while performing the first step, i.e. running the
configure script. The error message is:
configure: error: No atomic primitives available
Hi,
> Can you send the config.log for the platform where it failed?
>
> I'd like to see the specific compiler error that occurred.
I found the error with your hint. For Open MPI 1.6.x I must also
specify "F77" and "FFLAGS" for the Fortran 77 compiler. Otherwise
it uses "gfortran" from the GNU package instead of the intended compiler.
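(For reference, a configure invocation along these lines is what I mean; the compiler names, flags, and prefix below are placeholders for the ones used on my platform:

  ./configure CC=cc CXX=CC F77=f77 FC=f90 \
              CFLAGS="-m64" FFLAGS="-m64" FCFLAGS="-m64" \
              --prefix=/usr/local/openmpi-1.6.3

Spelling out F77/FFLAGS, and FC/FCFLAGS for Fortran 90, keeps configure from silently picking up gfortran.)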
Dear all,
I am new to Linux and clustering. I am setting up a Beowulf Cluster using
several PCs according to this guide
http://www.tldp.org/HOWTO/html_single/Beowulf-HOWTO/.
I have set up and configured everything accordingly except for the NFS part,
because I do not require it for my application. I have set