Ok, great.
I've opened up https://github.com/open-mpi/ompi/pull/1814 to track the issue.
This workaround certainly isn't going to ship in an Open MPI production
tarball; we should probably do something more formal / correct.
> On Jun 24, 2016, at 10:31 AM, kna...@gmail.com wrote:
>
> Jeff,
Jeff, it works now! Thank you so much!
[user@ct110 hello]$ /opt/openmpi/1.10.3-1/bin/mpirun --mca btl self,tcp --mca btl_tcp_if_include
venet0:0 --mca oob_tcp_if_include venet0:0 -npernode 1 -np 2 --hostfile mpi_hosts.txt hostname
ct110
ct111
[user@ct110 hello]$
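The mpi_hosts.txt file passed to --hostfile above is not shown in the thread; a minimal sketch of what it might contain, assuming one slot per container (the ct110/ct111 hostnames are taken from the output above, and the slot counts are an assumption):

```
# mpi_hosts.txt -- hypothetical contents, one line per OpenVZ container
ct110 slots=1
ct111 slots=1
```

With -npernode 1 and -np 2, mpirun launches exactly one process on each of the two hosts, which matches the hostname output shown.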
On Jun 24, 2016, at 7:26 AM, kna...@gmail.com wrote:
>
>> mpirun --mca btl_tcp_if_include venet0:0 --mca oob_tcp_if_include
>> venet0:0 ...
> > See if that works.
> Jeff, thanks a lot for such a prompt reply, detailed explanation, and
> suggestion! But unfortunately the error is still the same.
Jeff Squyres (jsquyres) wrote on 24/06/16 13:43:
Nikolay --
Thanks for all the detail! That helps a tremendous amount.
Open MPI actually uses IP networks in *two* ways:
1. for command and control
2. for MPI communications
Your use of btl_tcp_if_include regulates #2, but not #1 -- you need
to set oob_tcp_if_include to cover #1 as well.
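Both settings can also be made permanent so they need not be repeated on every mpirun invocation; a minimal sketch using Open MPI's standard per-user MCA parameter file (the interface name venet0:0 is taken from this thread):

```
# $HOME/.openmpi/mca-params.conf -- read by Open MPI at startup.
# Restrict MPI point-to-point traffic (#2) to the container interface:
btl = self,tcp
btl_tcp_if_include = venet0:0
# Restrict command/control traffic (#1) to the same interface:
oob_tcp_if_include = venet0:0
```

Command-line --mca arguments override values from this file, so individual runs can still deviate from it.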
Hi all!
I am trying to build a cluster for MPI jobs using OpenVZ containers
(https://openvz.org/Main_Page).
I've been successfully using OpenVZ + Open MPI for many years, but I can't make it work with Open MPI
1.10.x.
So I have a server with OpenVZ support enabled. The output of its ifconfig: