Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-10-01 Thread Charles Doland via users
Thanks Ralph,

The problem was with the configuration of my OpenMPI installation: the btl tcp 
component was not being found, which is why "ompi_info --param btl tcp --level 9" 
showed nothing.
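
For anyone who hits the same symptom: a quick sanity check (assuming a standard 
build) is to confirm that the tcp BTL component is present at all before querying 
its parameters:

    ompi_info | grep btl

If tcp does not show up in that component list, the btl_tcp_* parameters will not 
be reported either.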

Charles



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-30 Thread Ralph Castain via users
I'm not sure where you are looking, but those params are indeed present in the 
opal/mca/btl/tcp component:

/*
 *  Called by MCA framework to open the component, registers
 *  component parameters.
 */

static int mca_btl_tcp_component_register(void)
{
    char* message;

    /* register TCP component parameters */
    mca_btl_tcp_param_register_string("if_include",
        "Comma-delimited list of devices and/or CIDR notation of networks "
        "to use for MPI communication (e.g., \"eth0,192.168.0.0/16\").  "
        "Mutually exclusive with btl_tcp_if_exclude.",
        "", OPAL_INFO_LVL_1,
        &mca_btl_tcp_component.tcp_if_include);

    mca_btl_tcp_param_register_string("if_exclude",
        "Comma-delimited list of devices and/or CIDR notation of networks "
        "to NOT use for MPI communication -- all devices not matching "
        "these specifications will be used (e.g., \"eth0,192.168.0.0/16\").  "
        "If set to a non-default value, it is mutually exclusive with "
        "btl_tcp_if_include.",
        "127.0.0.1/8,sppp", OPAL_INFO_LVL_1,
        &mca_btl_tcp_component.tcp_if_exclude);


I added a little padding to make them clearer. This was from the v3.1.x branch, 
but those params have been there for a very long time. The 
"mca_btl_tcp_param_register_string" function adds the "btl_tcp_" prefix to the 
param.
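
For reference, setting these at run time would look something like the following 
(the interface and network names here are only illustrations, not taken from your 
setup):

    mpirun --mca btl_tcp_if_include eth0 -np 4 ./my_app
    mpirun --mca btl_tcp_if_include 192.168.0.0/16 -np 4 ./my_app
    mpirun --mca btl_tcp_if_exclude 127.0.0.1/8,docker0 -np 4 ./my_app

Per the help strings above, only one of the two should be set for any given run.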



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-21 Thread Charles Doland via users
Joseph,

There is no specific case. We are working on supporting the use of OpenMPI with 
our software, in addition to Intel MPI. With Intel MPI, we find that using the 
I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables is useful in many 
cases in which the job hosts have multiple network interfaces.

I tried to use btl_tcp_if_include and btl_tcp_if_exclude, but neither seemed to 
have any effect. I also noticed that these options do not appear to be present 
in the source code. Although there were similar options for ptl in the source, 
my understanding is that ptl has been replaced by btl. I tested with version 
3.1.2; the source that I examined was also version 3.1.2.

Charles Doland
charles.dol...@ansys.com
(408) 627-6621  [x6621]



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread Joseph Schuchart via users

Charles,

What is the machine configuration you're running on? It seems that there 
are two MCA parameters for the tcp btl: btl_tcp_if_include and 
btl_tcp_if_exclude (see ompi_info for details). There may be other knobs 
I'm not aware of. If you're using UCX then my guess is that UCX has its 
own way to choose the network interface to be used...
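
If UCX does turn out to be in the picture, its usual selector is the 
UCX_NET_DEVICES environment variable, e.g. (device name purely illustrative):

    mpirun -x UCX_NET_DEVICES=mlx5_0:1 -np 4 ./my_app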


Cheers
Joseph



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread Jeff Squyres (jsquyres) via users
3.1.2 was a long time ago, but I'm pretty sure that Open MPI v3.1.2 has 
btl_tcp_if_include / btl_tcp_if_exclude.

Try running: "ompi_info --all --parsable | grep btl_tcp_if_"

I believe that those options will both take a CIDR notation of which network(s) 
to use/not use.  Note: the _if_include option is mutually exclusive with the 
_if_exclude option.
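
The same settings can also be exported through the environment rather than on 
the mpirun command line (network and program name below are just examples):

    export OMPI_MCA_btl_tcp_if_include=192.168.0.0/16
    mpirun -np 4 ./my_app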



--
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread Charles Doland via users
Yes. It is not unusual to have multiple network interfaces on each host of a 
cluster. Usually there is a preference to use only one network interface on 
each host due to higher speed or throughput, or other considerations. It would 
be useful to be able to explicitly specify the interface to use for cases in 
which the MPI code does not select the preferred interface.

Charles Doland
charles.dol...@ansys.com
(408) 627-6621  [x6621]



Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread John Hearns via users
Charles, I recall using the I_MPI_NETMASK to choose which interface for MPI
to use.
I guess you are asking the same question for OpenMPI?



[OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread Charles Doland via users
Is there a way to limit the IP addresses or network interfaces used for 
communication by OpenMPI? I am looking for something similar to the 
I_MPI_TCP_NETMASK or I_MPI_NETMASK environment variables for Intel MPI.

The OpenMPI documentation mentions the btl_tcp_if_include and 
btl_tcp_if_exclude MCA options. These do not appear to be present, at least in 
OpenMPI v3.1.2. Is there another way to do this? Or are these options supported 
in a different version?

Charles Doland
charles.dol...@ansys.com
(408) 627-6621  [x6621]