Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Max Mellette
Thanks everyone for all your assistance. The problem seems to be resolved
now, although I'm not entirely sure why these changes made a difference.
There were two things I changed:

(1) I had some additional `export ...` lines in .bashrc before the `export
PATH=...` and `export LD_LIBRARY_PATH=...` lines. When I removed those
lines (and later added them back in below the PATH and LD_LIBRARY_PATH
lines), mpirun worked -- but only b09-30 was able to execute code on
b09-32, not the other way around.

(2) I passed IP addresses to mpirun instead of the hostnames (this didn't
work previously), and now mpirun works in both directions (b09-30 -> b09-32
and b09-32 -> b09-30). I added a 3rd host in the rack and mpirun still
works when passing IP addresses. For some reason using the hostname
doesn't work, despite the fact that I can use it to ssh.

Also FWIW I wasn't using a debugger.

Thanks again,
Max


On Mon, May 14, 2018 at 4:39 PM, Gilles Gouaillardet wrote:

> In the initial report, the /usr/bin/ssh process was in the 'T' state
> (which generally hints that the process is being traced by a debugger)
>
> /usr/bin/ssh -x b09-32 orted
>
> did behave as expected (e.g. orted was executed, exited with an error
> since the command line is invalid, and the error message was received)
>
>
> can you try to run
>
> /home/user/openmpi_install/bin/mpirun --host b09-30,b09-32 hostname
>
> and see how things go? (since you simply 'ssh orted', a different orted
> might be used)
>
> If you are still facing the same hang with ssh in the 'T' state, can you
> check the logs on b09-32 and see
> if the sshd server was even contacted? I can hardly make sense of this
> error FWIW.
>
>
> Cheers,
>
> Gilles
>
> On 5/15/2018 5:27 AM, r...@open-mpi.org wrote:
>
>> You got that error because the orted is looking for its rank on the cmd
>> line and not finding it.
>>
>>
>> On May 14, 2018, at 12:37 PM, Max Mellette <wmell...@ucsd.edu> wrote:
>>>
>>> Hi Gus,
>>>
>>> Thanks for the suggestions. The correct version of openmpi seems to be
>>> getting picked up; I also prepended .bashrc with the installation path like
>>> you suggested, but it didn't seem to help:
>>>
>>> user@b09-30:~$ cat .bashrc
>>> export PATH=/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
>>> export LD_LIBRARY_PATH=/home/user/openmpi_install/lib
>>> user@b09-30:~$ which mpicc
>>> /home/user/openmpi_install/bin/mpicc
>>> user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
>>> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
>>> ess_env_module.c at line 147
>>> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in
>>> file util/session_dir.c at line 106
>>> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in
>>> file util/session_dir.c at line 345
>>> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in
>>> file base/ess_base_std_orted.c at line 270
>>> 
>>> --
>>> It looks like orte_init failed for some reason; your parallel process is
>>> likely to abort.  There are many reasons that a parallel process can
>>> fail during orte_init; some of which are due to configuration or
>>> environment problems.  This failure appears to be an internal failure;
>>> here's some additional information (which may only be relevant to an
>>> Open MPI developer):
>>>
>>>   orte_session_dir failed
>>>   --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
>>> 
>>> --
>>>
>>> Thanks,
>>> Max
>>>
>>>
>>> On Mon, May 14, 2018 at 11:41 AM, Gus Correa wrote:
>>>
>>> Hi Max
>>>
>>> Just in case, as environment mix often happens.
>>> Could it be that you are inadvertently picking another
>>> installation of OpenMPI, perhaps installed from packages
>>> in /usr , or /usr/local?
>>> That's easy to check with 'which mpiexec' or
>>> 'which mpicc', for instance.
>>>
>>> Have you tried to prepend (as opposed to append) OpenMPI
>>> to your PATH? Say:
>>>
>>> export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
>>>
>>> I hope this helps,
>>> Gus Correa
>>>
>>>

Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread Gilles Gouaillardet

Xie Bin,


According to the man page, -N is equivalent to -npernode, which is
equivalent to --map-by ppr:N:node.


This is *not* equivalent to -map-by node:

The former packs tasks onto the same node, and the latter scatters tasks
across the nodes.



[gilles@login ~]$ mpirun --host n0:2,n1:2 -N 2 --tag-output hostname | sort
[1,0]:n0
[1,1]:n0
[1,2]:n1
[1,3]:n1


[gilles@login ~]$ mpirun --host n0:2,n1:2 -np 4 --tag-output -map-by node hostname | sort

[1,0]:n0
[1,1]:n1
[1,2]:n0
[1,3]:n1


I am pretty sure a subnet manager was run at some point in time (so your
HCAs could get their identifiers).


/* feel free to reboot your nodes and see if ibstat still shows the 
adapters as active */



Note you might also use --mca pml ob1 in order to make sure neither mxm
nor ucx is used.



Cheers,


Gilles



On 5/15/2018 10:45 AM, Blade Shieh wrote:

Hi, George:
My command lines are:
1) single node
mpirun --allow-run-as-root -mca btl self,tcp(or openib) -mca 
btl_tcp_if_include eth2 -mca btl_openib_if_include mlx5_0 -x 
OMP_NUM_THREADS=2 -n 32 myapp

2) 2-node cluster
mpirun --allow-run-as-root -mca btl ^tcp(or ^openib) -mca 
btl_tcp_if_include eth2 -mca btl_openib_if_include mlx5_0 -x 
OMP_NUM_THREADS=4 -N 16 myapp


In the 2nd condition, I used -N, which is equal to --map-by node.

Best regards,
Xie Bin


George Bosilca wrote on Tuesday, May 15, 2018 at 02:07:


Shared memory communication is important for multi-core platforms,
especially when you have multiple processes per node. But this is
only part of your issue here.

You haven't specified how your processes will be mapped on your
resources. As a result rank 0 and 1 will be on the same node, so
you are testing the shared memory support of whatever BTL you
allow. In this case the performance will be much better for TCP
than for IB, simply because you are not using your network, but
its capacity to move data across memory banks. In such an
environment, TCP translated to a memcpy plus a system call, which
is much faster than IB. That being said, it should not matter
because shared memory is there to cover this case.

Add "--map-by node" to your mpirun command to measure the
bandwidth between nodes.

  George.



On Mon, May 14, 2018 at 5:04 AM, Blade Shieh wrote:


Hi, Nathan:
    Thanks for your reply.
1) It was my mistake not to notice the usage of osu_latency. Now
it worked well, but still poorer with openib.
2) I did not use sm or vader because I wanted to compare the
performance of tcp and openib. Besides, I will run the
application in a cluster, so vader is not so important.
3) Of course, I tried your suggestions. I used ^tcp/^openib and
set btl_openib_if_include to mlx5_0 in a two-node cluster (IB
direct-connected). The result did not change -- IB is still
better in the MPI benchmark but poorer in my application.

Best Regards,
Xie Bin


Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread Blade Shieh
Hi, George:
My command lines are:
1) single node
mpirun --allow-run-as-root -mca btl self,tcp(or openib) -mca
btl_tcp_if_include eth2 -mca btl_openib_if_include mlx5_0 -x
OMP_NUM_THREADS=2 -n 32 myapp
2) 2-node cluster
mpirun --allow-run-as-root -mca btl ^tcp(or ^openib) -mca
btl_tcp_if_include eth2 -mca btl_openib_if_include mlx5_0 -x
OMP_NUM_THREADS=4 -N 16 myapp

In the 2nd condition, I used -N, which is equal to --map-by node.

Best regards,
Xie Bin


George Bosilca wrote on Tuesday, May 15, 2018 at 02:07:

> Shared memory communication is important for multi-core platforms,
> especially when you have multiple processes per node. But this is only part
> of your issue here.
>
> You haven't specified how your processes will be mapped on your resources.
> As a result rank 0 and 1 will be on the same node, so you are testing the
> shared memory support of whatever BTL you allow. In this case the
> performance will be much better for TCP than for IB, simply because you are
> not using your network, but its capacity to move data across memory banks.
> In such an environment, TCP translated to a memcpy plus a system call,
> which is much faster than IB. That being said, it should not matter because
> shared memory is there to cover this case.
>
> Add "--map-by node" to your mpirun command to measure the bandwidth
> between nodes.
>
>   George.
>
>
>
> On Mon, May 14, 2018 at 5:04 AM, Blade Shieh  wrote:
>
>>
>> Hi, Nathan:
>> Thanks for your reply.
>> 1) It was my mistake not to notice the usage of osu_latency. Now it worked
>> well, but still poorer with openib.
>> 2) I did not use sm or vader because I wanted to compare the performance
>> of tcp and openib. Besides, I will run the application in a cluster, so
>> vader is not so important.
>> 3) Of course, I tried your suggestions. I used ^tcp/^openib and set
>> btl_openib_if_include to mlx5_0 in a two-node cluster (IB
>> direct-connected).  The result did not change -- IB is still better in the MPI
>> benchmark but poorer in my application.
>>
>> Best Regards,
>> Xie Bin
>>

Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread Blade Shieh
Hi, John:

You are right about the network setup. I have no IB switch and just
connect the servers with an IB cable. I did not even start the opensmd
service because it seemed unnecessary in this situation. Could this be the
reason why IB performs more poorly?

Interconnection details are in the attachment.



Best Regards,

Xie Bin


John Hearns via users wrote on Monday, May 14, 2018 at 17:45:

> Xie Bin,  I do hate to ask this.  You say  "in a two-node cluster (IB
> direct-connected). "
> Does that mean that you have no IB switch, and that there is a single IB
> cable joining up these two servers?
> If so please run: ibstatus, ibhosts, ibdiagnet
> I am trying to check if the IB fabric is functioning properly in that
> situation.
> (Also need to check if there is a Subnet Manager - so run sminfo)
>
> But you do say that the IMB test gives good results for IB, so you must
> have IB working properly.
> Therefore I am an idiot...
>
>
>
> On 14 May 2018 at 11:04, Blade Shieh  wrote:
>
>>
>> Hi, Nathan:
>> Thanks for your reply.
>> 1) It was my mistake not to notice the usage of osu_latency. Now it worked
>> well, but still poorer with openib.
>> 2) I did not use sm or vader because I wanted to compare the performance
>> of tcp and openib. Besides, I will run the application in a cluster, so
>> vader is not so important.
>> 3) Of course, I tried your suggestions. I used ^tcp/^openib and set
>> btl_openib_if_include to mlx5_0 in a two-node cluster (IB
>> direct-connected).  The result did not change -- IB is still better in the MPI
>> benchmark but poorer in my application.
>>
>> Best Regards,
>> Xie Bin
>>
[Attachment: IB-direct-connect.tgz (application/gzip)]

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Gilles Gouaillardet

In the initial report, the /usr/bin/ssh process was in the 'T' state
(which generally hints that the process is being traced by a debugger)

/usr/bin/ssh -x b09-32 orted

did behave as expected (e.g. orted was executed, exited with an error
since the command line is invalid, and the error message was received)



can you try to run

/home/user/openmpi_install/bin/mpirun --host b09-30,b09-32 hostname

and see how things go? (since you simply 'ssh orted', a different orted
might be used)


If you are still facing the same hang with ssh in the 'T' state, can you
check the logs on b09-32 and see
if the sshd server was even contacted? I can hardly make sense of this
error FWIW.



Cheers,

Gilles

On 5/15/2018 5:27 AM, r...@open-mpi.org wrote:
You got that error because the orted is looking for its rank on the 
cmd line and not finding it.



On May 14, 2018, at 12:37 PM, Max Mellette wrote:


Hi Gus,

Thanks for the suggestions. The correct version of openmpi seems to
be getting picked up; I also prepended .bashrc with the installation
path like you suggested, but it didn't seem to help:


user@b09-30:~$ cat .bashrc
export 
PATH=/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

export LD_LIBRARY_PATH=/home/user/openmpi_install/lib
user@b09-30:~$ which mpicc
/home/user/openmpi_install/bin/mpicc
user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file 
ess_env_module.c at line 147
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in 
file util/session_dir.c at line 106
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in 
file util/session_dir.c at line 345
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in 
file base/ess_base_std_orted.c at line 270

--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_session_dir failed
  --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
--

Thanks,
Max


On Mon, May 14, 2018 at 11:41 AM, Gus Correa wrote:


Hi Max

Just in case, as environment mix often happens.
Could it be that you are inadvertently picking another
installation of OpenMPI, perhaps installed from packages
in /usr , or /usr/local?
That's easy to check with 'which mpiexec' or
'which mpicc', for instance.

Have you tried to prepend (as opposed to append) OpenMPI
to your PATH? Say:

export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'

I hope this helps,
Gus Correa




Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Jeff Squyres (jsquyres)
Yes, that "T" state is quite puzzling.  You didn't attach a debugger or hit the 
ssh with a signal, did you?

(we had a similar situation on the devel list recently, but it only happened 
with a very old version of Slurm.  We concluded that it was a SLURM bug that 
has since been fixed.  And just to be sure, I just double checked: the srun 
that hangs in that case is *not* in the "T" state -- it's in the "S" state, 
which appears to be a normal state)


> On May 12, 2018, at 4:56 AM, Gilles Gouaillardet 
>  wrote:
> 
> Max,
> 
> the 'T' state of the ssh process is very puzzling.
> 
> can you try to run
> /usr/bin/ssh -x b09-32 orted
> on b09-30 and see what happens ?
> (it should fail with an error message, instead of hanging)
> 
> In order to check there is no firewall, can you run instead
> iptables -L
> Also, is 'selinux' enabled ? there could be some rules that prevent
> 'ssh' from working as expected
> 
> 
> Cheers,
> 
> Gilles
> 
> On Sat, May 12, 2018 at 7:38 AM, Max Mellette  wrote:
>> Hi Jeff,
>> 
>> Thanks for the reply. FYI since I originally posted this, I uninstalled
>> OpenMPI 3.0.1 and installed 3.1.0, but I'm still experiencing the same
>> problem.
>> 
>> When I run the command without the `--mca plm_base_verbose 100` flag, it
>> hangs indefinitely with no output.
>> 
>> As far as I can tell, these are the additional processes running on each
>> machine while mpirun is hanging (printed using `ps -aux | less`):
>> 
>> On executing host b09-30:
>> 
>> user 361714  0.4  0.0 293016  8444 pts/0Sl+  15:10   0:00 mpirun
>> --host b09-30,b09-32 hostname
>> user 361719  0.0  0.0  37092  5112 pts/0T15:10   0:00
>> /usr/bin/ssh -x b09-32  orted -mca ess "env" -mca ess_base_jobid "638517248"
>> -mca ess_base_vpid 1 -mca ess_base_num_procs "2" -mca orte_node_regex
>> "b[2:9]-30,b[2:9]-32@0(2)" -mca orte_hnp_uri
>> "638517248.0;tcp://169.228.66.102,10.1.100.30:55090" -mca plm "rsh" -mca
>> pmix "^s1,s2,cray,isolated"
>> 
>> On remote host b09-32:
>> 
>> root 175273  0.0  0.0  61752  5824 ?Ss   15:10   0:00 sshd:
>> [accepted]
>> sshd 175274  0.0  0.0  61752   708 ?S15:10   0:00 sshd:
>> [net]
>> 
>> I only see orted showing up in the ssh flags on b09-30. Any ideas what I
>> should try next?
>> 
>> Thanks,
>> Max
>> 
>> 
>> 


-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread r...@open-mpi.org
You got that error because the orted is looking for its rank on the cmd line 
and not finding it.


> On May 14, 2018, at 12:37 PM, Max Mellette  wrote:
> 
> Hi Gus,
> 
> Thanks for the suggestions. The correct version of openmpi seems to be 
> getting picked up; I also prepended .bashrc with the installation path like 
> you suggested, but it didn't seem to help:
> 
> user@b09-30:~$ cat .bashrc
> export 
> PATH=/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
> export LD_LIBRARY_PATH=/home/user/openmpi_install/lib
> user@b09-30:~$ which mpicc
> /home/user/openmpi_install/bin/mpicc
> user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file 
> ess_env_module.c at line 147
> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file 
> util/session_dir.c at line 106
> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file 
> util/session_dir.c at line 345
> [b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file 
> base/ess_base_std_orted.c at line 270
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_session_dir failed
>   --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
> --
> 
> Thanks,
> Max
> 
> 
On Mon, May 14, 2018 at 11:41 AM, Gus Correa wrote:
> Hi Max
> 
> Just in case, as environment mix often happens.
> Could it be that you are inadvertently picking another
> installation of OpenMPI, perhaps installed from packages
> in /usr , or /usr/local?
> That's easy to check with 'which mpiexec' or
> 'which mpicc', for instance.
> 
> Have you tried to prepend (as opposed to append) OpenMPI
> to your PATH? Say:
> 
> export 
> PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
> 
> I hope this helps,
> Gus Correa
> 

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Max Mellette
Hi Gus,

Thanks for the suggestions. The correct version of openmpi seems to be
getting picked up; I also prepended .bashrc with the installation path like
you suggested, but it didn't seem to help:

user@b09-30:~$ cat .bashrc
export
PATH=/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
export LD_LIBRARY_PATH=/home/user/openmpi_install/lib
user@b09-30:~$ which mpicc
/home/user/openmpi_install/bin/mpicc
user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
ess_env_module.c at line 147
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
util/session_dir.c at line 106
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
util/session_dir.c at line 345
[b09-32:204536] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
base/ess_base_std_orted.c at line 270
--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_session_dir failed
  --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
--

Thanks,
Max


On Mon, May 14, 2018 at 11:41 AM, Gus Correa  wrote:

> Hi Max
>
> Just in case, as environment mix often happens.
> Could it be that you are inadvertently picking another
> installation of OpenMPI, perhaps installed from packages
> in /usr , or /usr/local?
> That's easy to check with 'which mpiexec' or
> 'which mpicc', for instance.
>
> Have you tried to prepend (as opposed to append) OpenMPI
> to your PATH? Say:
>
> export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
>
> I hope this helps,
> Gus Correa
>

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Gus Correa

Hi Max

Just in case, as environment mix often happens.
Could it be that you are inadvertently picking another
installation of OpenMPI, perhaps installed from packages
in /usr , or /usr/local?
That's easy to check with 'which mpiexec' or
'which mpicc', for instance.

Have you tried to prepend (as opposed to append) OpenMPI
to your PATH? Say:

export 
PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'


I hope this helps,
Gus Correa

On 05/14/2018 12:40 PM, Max Mellette wrote:

John,

Thanks for the suggestions. In this case there is no cluster manager / 
job scheduler; these are just a couple of individual hosts in a rack. 
The reason for the generic names is that I anonymized the full network 
address in the previous posts, truncating to just the host name.


My home directory is network-mounted to both hosts. In fact, I 
uninstalled OpenMPI 3.0.1 from /usr/local on both hosts, and installed 
OpenMPI 3.1.0 into my home directory at `/home/user/openmpi_install`, 
also updating .bashrc appropriately:


user@b09-30:~$ cat .bashrc
export 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/user/openmpi_install/bin

export LD_LIBRARY_PATH=/home/user/openmpi_install/lib

So the environment should be the same on both hosts.

Thanks,
Max

On Mon, May 14, 2018 at 12:29 AM, John Hearns via users wrote:


One very, very stupid question here. This arose over on the Slurm
list actually.
Those hostnames look like quite generic names, ie they are part of
an HPC cluster?
Do they happen to have independent home directories for your userid?
Could that possibly make a difference to the MPI launcher?

On 14 May 2018 at 06:44, Max Mellette wrote:

Hi Gilles,

Thanks for the suggestions; the results are below. Any ideas
where to go from here?

- Seems that selinux is not installed:

user@b09-30:~$ sestatus
The program 'sestatus' is currently not installed. You can
install it by typing:
sudo apt install policycoreutils

- Output from orted:

user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
[b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in
file ess_env_module.c at line 147
[b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad
parameter in file util/session_dir.c at line 106
[b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad
parameter in file util/session_dir.c at line 345
[b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad
parameter in file base/ess_base_std_orted.c at line 270

--
It looks like orte_init failed for some reason; your parallel
process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal
failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

   orte_session_dir failed
   --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS

--

- iptables rules:

user@b09-30:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ufw-before-logging-input  all  --  anywhere             anywhere
ufw-before-input  all  --  anywhere             anywhere
ufw-after-input  all  --  anywhere             anywhere
ufw-after-logging-input  all  --  anywhere             anywhere
ufw-reject-input  all  --  anywhere             anywhere
ufw-track-input  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ufw-before-logging-forward  all  --  anywhere             anywhere
ufw-before-forward  all  --  anywhere             anywhere
ufw-after-forward  all  --  anywhere             anywhere
ufw-after-logging-forward  all  --  anywhere             anywhere
ufw-reject-forward  all  --  anywhere             anywhere
ufw-track-forward  all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ufw-before-logging-output  all  --  anywhere             anywhere
ufw-before-output  all  --  anywhere             anywhere
ufw-after-output  all  --  anywhere             anywhere
ufw-after-logging-output  all  

Re: [OMPI users] MPI cartesian grid : cumulate a scalar value through the procs of a given axis of the grid

2018-05-14 Thread Nathan Hjelm
Still looks to me like MPI_Scan is what you want. You just need three additional
communicators (one for each direction). With a recursive-doubling MPI_Scan
implementation it is O(log n) in time, compared to O(n) for a point-to-point chain.
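
For reference, here is a minimal sketch of that approach (hedged: it assumes
a 3-D Cartesian communicator, and the names cart_comm, k_comm and scal are
illustrative, not taken from your code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 3, dims);

    /* 3-D Cartesian grid of processes */
    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart_comm);

    /* sub-communicators that keep only the K direction (I and J fixed) */
    int remain_dims[3] = {0, 0, 1};
    MPI_Comm k_comm;
    MPI_Cart_sub(cart_comm, remain_dims, &k_comm);

    double scal = 1.0, scal_sum = 0.0;   /* per-rank contribution */

    /* inclusive prefix sum: rank k along the axis gets the sum over 0..k */
    MPI_Scan(&scal, &scal_sum, 1, MPI_DOUBLE, MPI_SUM, k_comm);

    int krank;
    MPI_Comm_rank(k_comm, &krank);
    printf("k-rank %d: cumulative scal = %f\n", krank, scal_sum);

    MPI_Comm_free(&k_comm);
    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}

The last rank along each K line ends up with the total over that line, and
every intermediate rank holds its partial sum, which is what was asked for.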



> On May 14, 2018, at 8:42 AM, Pierre Gubernatis wrote:
> 
> Thank you to all of you for your answers (I was off up to now).
> 
> Actually my question wasn't well posed. I stated it more clearly in this post, 
> with the answer: 
> 
> https://stackoverflow.com/questions/50130688/mpi-cartesian-grid-cumulate-a-scalar-value-through-the-procs-of-a-given-axis-o?noredirect=1#comment87286983_50130688
> 
> Thanks again.
> 
> 
> 
> 
> 2018-05-02 13:56 GMT+02:00 Peter Kjellström :
>> On Wed, 2 May 2018 11:15:09 +0200
>> Pierre Gubernatis  wrote:
>> 
>> > Hello all...
>> > 
>> > I am using a *cartesian grid* of processors which represents a spatial
>> > domain (a cubic geometrical domain split into several smaller
>> > cubes...), and I have communicators to address the procs, as for
>> > example a comm along each of the 3 axes I,J,K, or along a plane
>> > IK,JK,IJ, etc..).
>> > 
>> > *I need to cumulate a scalar value (SCAL) through the procs which
>> > belong to a given axis* (let's say the K axis, defined by I=J=0).
>> > 
>> > Precisely, the origin proc 0-0-0 has a given value for SCAL (say
>> > SCAL000). I need to update the 'following' proc (0-0-1) by doing SCAL
>> > = SCAL + SCAL000, and I need to *propagate* this updating along the K
>> > axis. At the end, the last proc of the axis should have the total sum
>> > of SCAL over the axis. (and of course, at a given rank k along the
>> > axis, the SCAL value = sum over 0,1,   K of SCAL)
>> > 
>> > Please, do you see a way to do this ? I have tried many things (with
>> > MPI_SENDRECV and by looping over the procs of the axis, but I get
>> > deadlocks that prove I don't handle this correctly...)
>> > Thank you in any case.
>> 
>> Why did you try SENDRECV? As far as I understand your description above
>> data only flows one direction (along K)?
>> 
>> There is no MPI collective to support the kind of reduction you
>> describe but it should not be hard to do using normal SEND and RECV.
>> Something like (simplified pseudo code):
>> 
>> if (not_first_along_K)
>>  MPI_RECV(SCAL_tmp, previous)
>>  SCAL += SCAL_tmp
>> 
>> if (not_last_along_K)
>>  MPI_SEND(SCAL, next)
>> 
>> /Peter K
> 

Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread George Bosilca
Shared memory communication is important for multi-core platforms,
especially when you have multiple processes per node. But this is only part
of your issue here.

You haven't specified how your processes will be mapped on your resources.
As a result rank 0 and 1 will be on the same node, so you are testing the
shared memory support of whatever BTL you allow. In this case the
performance will be much better for TCP than for IB, simply because you are
not using your network, but its capacity to move data across memory banks.
In such an environment, TCP translated to a memcpy plus a system call,
which is much faster than IB. That being said, it should not matter because
shared memory is there to cover this case.

Add "--map-by node" to your mpirun command to measure the bandwidth between
nodes.

  George.



On Mon, May 14, 2018 at 5:04 AM, Blade Shieh  wrote:

>
> Hi, Nathan:
> Thanks for your reply.
> 1) It was my mistake not to notice the usage of osu_latency. Now it worked
> well, but still poorer with openib.
> 2) I did not use sm or vader because I wanted to compare the performance of
> tcp and openib. Besides, I will run the application in a cluster, so vader is
> not so important.
> 3) Of course, I tried your suggestions. I used ^tcp/^openib and set
> btl_openib_if_include to mlx5_0 in a two-node cluster (IB
> direct-connected).  The result did not change -- IB is still better in the MPI
> benchmark but poorer in my application.
>
> Best Regards,
> Xie Bin
>

Re: [OMPI users] MPI cartesian grid : cumulate a scalar value through the procs of a given axis of the grid

2018-05-14 Thread Pierre Gubernatis
Thank you to all of you for your answers (I was off up to now).

Actually my question wasn't well posed. I stated it more clearly in this
post, with the answer:

https://stackoverflow.com/questions/50130688/mpi-cartesian-grid-cumulate-a-scalar-value-through-the-procs-of-a-given-axis-o?noredirect=1#comment87286983_50130688

Thanks again.



2018-05-02 13:56 GMT+02:00 Peter Kjellström :

> On Wed, 2 May 2018 11:15:09 +0200
> Pierre Gubernatis  wrote:
>
> > Hello all...
> >
> > I am using a *cartesian grid* of processors which represents a spatial
> > domain (a cubic geometrical domain split into several smaller
> > cubes...), and I have communicators to address the procs, as for
> > example a comm along each of the 3 axes I,J,K, or along a plane
> > IK,JK,IJ, etc..).
> >
> > *I need to cumulate a scalar value (SCAL) through the procs which
> > belong to a given axis* (let's say the K axis, defined by I=J=0).
> >
> > Precisely, the origin proc 0-0-0 has a given value for SCAL (say
> > SCAL000). I need to update the 'following' proc (0-0-1) by doing SCAL
> > = SCAL + SCAL000, and I need to *propagate* this updating along the K
> > axis. At the end, the last proc of the axis should have the total sum
> > of SCAL over the axis. (and of course, at a given rank k along the
> > axis, the SCAL value = sum over 0,1,   K of SCAL)
> >
> > Please, do you see a way to do this ? I have tried many things (with
> > MPI_SENDRECV and by looping over the procs of the axis, but I get
> > deadlocks that prove I don't handle this correctly...)
> > Thank you in any case.
>
> Why did you try SENDRECV? As far as I understand your description above
> data only flows one direction (along K)?
>
> There is no MPI collective to support the kind of reduction you
> describe but it should not be hard to do using normal SEND and RECV.
> > Something like (simplified pseudo code):
>
> if (not_first_along_K)
>  MPI_RECV(SCAL_tmp, previous)
>  SCAL += SCAL_tmp
>
> if (not_last_along_K)
>  MPI_SEND(SCAL, next)
>
> /Peter K
>
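
For completeness, here is a hedged C rendering of Peter's pseudo code above
(it assumes the K-axis communicator k_comm has already been obtained, e.g.
with MPI_Cart_sub; the names are illustrative):

#include <mpi.h>

/* Chain the running sum down the K axis with plain point-to-point calls.
 * Returns the prefix sum at this rank; the last rank ends up with the total. */
double accumulate_along_k(double scal, MPI_Comm k_comm)
{
    int krank, ksize;
    MPI_Comm_rank(k_comm, &krank);
    MPI_Comm_size(k_comm, &ksize);

    if (krank > 0) {            /* not first along K: receive the upstream sum */
        double scal_tmp;
        MPI_Recv(&scal_tmp, 1, MPI_DOUBLE, krank - 1, 0, k_comm,
                 MPI_STATUS_IGNORE);
        scal += scal_tmp;
    }
    if (krank < ksize - 1) {    /* not last along K: forward the running sum */
        MPI_Send(&scal, 1, MPI_DOUBLE, krank + 1, 0, k_comm);
    }
    return scal;
}

As Nathan noted in his reply, MPI_Scan on the same communicator does this in
O(log n) steps instead of the O(n) chain shown here.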

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Max Mellette
John,

Thanks for the suggestions. In this case there is no cluster manager / job
scheduler; these are just a couple of individual hosts in a rack. The
reason for the generic names is that I anonymized the full network address
in the previous posts, truncating to just the host name.

My home directory is network-mounted to both hosts. In fact, I uninstalled
OpenMPI 3.0.1 from /usr/local on both hosts, and installed OpenMPI 3.1.0
into my home directory at `/home/user/openmpi_install`, also updating
.bashrc appropriately:

user@b09-30:~$ cat .bashrc
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/user/openmpi_install/bin
export LD_LIBRARY_PATH=/home/user/openmpi_install/lib

So the environment should be the same on both hosts.

Thanks,
Max

On Mon, May 14, 2018 at 12:29 AM, John Hearns via users <
users@lists.open-mpi.org> wrote:

> One very, very stupid question here. This arose over on the Slurm list
> actually.
> Those hostnames look like quite generic names, ie they are part of an HPC
> cluster?
> Do they happen to have independent home directories for your userid?
> Could that possibly make a difference to the MPI launcher?
>
> On 14 May 2018 at 06:44, Max Mellette  wrote:
>
>> Hi Gilles,
>>
>> Thanks for the suggestions; the results are below. Any ideas where to go
>> from here?
>>
>> - Seems that selinux is not installed:
>>
>> user@b09-30:~$ sestatus
>> The program 'sestatus' is currently not installed. You can install it by
>> typing:
>> sudo apt install policycoreutils
>>
>> - Output from orted:
>>
>> user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
>> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
>> ess_env_module.c at line 147
>> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
>> util/session_dir.c at line 106
>> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
>> util/session_dir.c at line 345
>> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
>> base/ess_base_std_orted.c at line 270
>> 
>> --
>> It looks like orte_init failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during orte_init; some of which are due to configuration or
>> environment problems.  This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>>
>>   orte_session_dir failed
>>   --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
>> 
>> --
>>
>> - iptables rules:
>>
>> user@b09-30:~$ sudo iptables -L
>> Chain INPUT (policy ACCEPT)
>> target prot opt source   destination
>> ufw-before-logging-input  all  --  anywhere anywhere
>> ufw-before-input  all  --  anywhere anywhere
>> ufw-after-input  all  --  anywhere anywhere
>> ufw-after-logging-input  all  --  anywhere anywhere
>> ufw-reject-input  all  --  anywhere anywhere
>> ufw-track-input  all  --  anywhere anywhere
>>
>> Chain FORWARD (policy ACCEPT)
>> target prot opt source   destination
>> ufw-before-logging-forward  all  --  anywhere anywhere
>> ufw-before-forward  all  --  anywhere anywhere
>> ufw-after-forward  all  --  anywhere anywhere
>> ufw-after-logging-forward  all  --  anywhere anywhere
>> ufw-reject-forward  all  --  anywhere anywhere
>> ufw-track-forward  all  --  anywhere anywhere
>>
>> Chain OUTPUT (policy ACCEPT)
>> target prot opt source   destination
>> ufw-before-logging-output  all  --  anywhere anywhere
>> ufw-before-output  all  --  anywhere anywhere
>> ufw-after-output  all  --  anywhere anywhere
>> ufw-after-logging-output  all  --  anywhere anywhere
>> ufw-reject-output  all  --  anywhere anywhere
>> ufw-track-output  all  --  anywhere anywhere
>>
>> Chain ufw-after-forward (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-after-input (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-after-logging-forward (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-after-logging-input (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-after-logging-output (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-after-output (1 references)
>> target prot opt source   destination
>>
>> Chain ufw-before-forward (1 references)
>> target prot opt source   destination
>>
>> Chain 

Re: [OMPI users] Problem running with UCX/oshmem on single node?

2018-05-14 Thread Michael Di Domenico
On Wed, May 9, 2018 at 9:45 PM, Howard Pritchard  wrote:
>
> You either need to go and buy a connectx4/5 HCA from mellanox (and maybe a
> switch), and install that
> on your system, or else install xpmem (https://github.com/hjelmn/xpmem).
> Note there is a bug right now
> in UCX that you may hit if you try to go the xpmem-only route:

How stringent is the ConnectX-4/5 requirement?  I have ConnectX-3
cards; will they work?  During the configure step it seems to yell at
me that mlx5 won't compile because I don't have Mellanox OFED v3.1
installed. Is that also a requirement? (I'm using the RHEL 7.4 bundled
version of OFED, not the vendor version.)


Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread John Hearns via users
Xie Bin,  I do hate to ask this.  You say  "in a two-node cluster (IB
direct-connected). "
Does that mean that you have no IB switch, and that there is a single IB
cable joining up these two servers?
If so please run: ibstatus, ibhosts, ibdiagnet
I am trying to check if the IB fabric is functioning properly in that
situation.
(Also need to check if there is a Subnet Manager - so run sminfo)

But you do say that the IMB test gives good results for IB, so you must
have IB working properly.
Therefore I am an idiot...



On 14 May 2018 at 11:04, Blade Shieh  wrote:

>
> Hi, Nathan:
> Thanks for your reply.
> 1) It was my mistake not to notice the usage of osu_latency. Now it worked
> well, but still poorer with openib.
> 2) I did not use sm or vader because I wanted to compare the performance of
> tcp and openib. Besides, I will run the application in a cluster, so vader is
> not so important.
> 3) Of course, I tried your suggestions. I used ^tcp/^openib and set
> btl_openib_if_include to mlx5_0 in a two-node cluster (IB
> direct-connected).  The result did not change -- IB is still better in the MPI
> benchmark but poorer in my application.
>
> Best Regards,
> Xie Bin
>

Re: [OMPI users] peformance abnormality with openib and tcp framework

2018-05-14 Thread Blade Shieh
Hi, Nathan:
Thanks for your reply.
1) It was my mistake not to notice the usage of osu_latency. Now it worked
well, but still poorer with openib.
2) I did not use sm or vader because I wanted to compare the performance of
tcp and openib. Besides, I will run the application in a cluster, so vader is
not so important.
3) Of course, I tried your suggestions. I used ^tcp/^openib and set
btl_openib_if_include to mlx5_0 in a two-node cluster (IB
direct-connected).  The result did not change -- IB is still better in the MPI
benchmark but poorer in my application.

Best Regards,
Xie Bin

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread John Hearns via users
One very, very stupid question here. This arose over on the Slurm list
actually.
Those hostnames look like quite generic names, ie they are part of an HPC
cluster?
Do they happen to have independent home directories for your userid?
Could that possibly make a difference to the MPI launcher?

On 14 May 2018 at 06:44, Max Mellette  wrote:

> Hi Gilles,
>
> Thanks for the suggestions; the results are below. Any ideas where to go
> from here?
>
> - Seems that selinux is not installed:
>
> user@b09-30:~$ sestatus
> The program 'sestatus' is currently not installed. You can install it by
> typing:
> sudo apt install policycoreutils
>
> - Output from orted:
>
> user@b09-30:~$ /usr/bin/ssh -x b09-32 orted
> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
> ess_env_module.c at line 147
> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
> util/session_dir.c at line 106
> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
> util/session_dir.c at line 345
> [b09-32:197698] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in file
> base/ess_base_std_orted.c at line 270
> --
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   orte_session_dir failed
>   --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
> --
>
> - iptables rules:
>
> user@b09-30:~$ sudo iptables -L
> Chain INPUT (policy ACCEPT)
> target prot opt source   destination
> ufw-before-logging-input  all  --  anywhere anywhere
> ufw-before-input  all  --  anywhere anywhere
> ufw-after-input  all  --  anywhere anywhere
> ufw-after-logging-input  all  --  anywhere anywhere
> ufw-reject-input  all  --  anywhere anywhere
> ufw-track-input  all  --  anywhere anywhere
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source   destination
> ufw-before-logging-forward  all  --  anywhere anywhere
> ufw-before-forward  all  --  anywhere anywhere
> ufw-after-forward  all  --  anywhere anywhere
> ufw-after-logging-forward  all  --  anywhere anywhere
> ufw-reject-forward  all  --  anywhere anywhere
> ufw-track-forward  all  --  anywhere anywhere
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source   destination
> ufw-before-logging-output  all  --  anywhere anywhere
> ufw-before-output  all  --  anywhere anywhere
> ufw-after-output  all  --  anywhere anywhere
> ufw-after-logging-output  all  --  anywhere anywhere
> ufw-reject-output  all  --  anywhere anywhere
> ufw-track-output  all  --  anywhere anywhere
>
> Chain ufw-after-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-after-input (1 references)
> target prot opt source   destination
>
> Chain ufw-after-logging-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-after-logging-input (1 references)
> target prot opt source   destination
>
> Chain ufw-after-logging-output (1 references)
> target prot opt source   destination
>
> Chain ufw-after-output (1 references)
> target prot opt source   destination
>
> Chain ufw-before-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-before-input (1 references)
> target prot opt source   destination
>
> Chain ufw-before-logging-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-before-logging-input (1 references)
> target prot opt source   destination
>
> Chain ufw-before-logging-output (1 references)
> target prot opt source   destination
>
> Chain ufw-before-output (1 references)
> target prot opt source   destination
>
> Chain ufw-reject-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-reject-input (1 references)
> target prot opt source   destination
>
> Chain ufw-reject-output (1 references)
> target prot opt source   destination
>
> Chain ufw-track-forward (1 references)
> target prot opt source   destination
>
> Chain ufw-track-input (1 references)
> target prot opt source   destination
>
> Chain ufw-track-output (1 references)
>