Re: [tipc-discussion] [net-next v2] tipc: improve throughput between nodes in netns

2019-10-29 Thread David Miller
From: Hoang Le 
Date: Tue, 29 Oct 2019 07:51:21 +0700

> Currently, TIPC transports intra-node user data messages directly
> socket to socket, hence shortcutting all the lower layers of the
> communication stack. This gives TIPC very good intra node performance,
> both regarding throughput and latency.
> 
> We now introduce a similar mechanism for TIPC data traffic across
> network namespaces located in the same kernel. On the send path, the
> call chain is as always accompanied by the sending node's network name
> space pointer. However, once we have reliably established that the
> receiving node is represented by a namespace on the same host, we just
> replace the namespace pointer with the receiving node/namespace's
> ditto, and follow the regular socket receive path through the receiving
> node. This technique gives us a throughput similar to the node internal
> throughput, several times larger than if we let the traffic go through
> the full network stacks. As a comparison, max throughput for 64k
> messages is four times larger than TCP throughput for the same type of
> traffic.
> 
> To meet any security concerns, the following should be noted.
 ...
> Regarding traceability, we should notice that since commit 6c9081a3915d
> ("tipc: add loopback device tracking") it is possible to follow the node
> internal packet flow by just activating tcpdump on the loopback
> interface. This will be true even for this mechanism; by activating
> tcpdump on the involved nodes' loopback interfaces, their inter-namespace
> messaging can easily be tracked.
> 
> v2:
> - update 'net' pointer when node left/rejoined
> v3:
> - grab read/write lock when using node ref obj
> v4:
> - clone traffic between netns to loopback
> 
> Suggested-by: Jon Maloy 
> Acked-by: Jon Maloy 
> Signed-off-by: Hoang Le 

Applied to net-next.




[tipc-discussion] [net-next v2] tipc: improve throughput between nodes in netns

2019-10-28 Thread Hoang Le
Currently, TIPC transports intra-node user data messages directly
socket to socket, hence shortcutting all the lower layers of the
communication stack. This gives TIPC very good intra node performance,
both regarding throughput and latency.

We now introduce a similar mechanism for TIPC data traffic across
network namespaces located in the same kernel. On the send path, the
call chain is as always accompanied by the sending node's network name
space pointer. However, once we have reliably established that the
receiving node is represented by a namespace on the same host, we just
replace the namespace pointer with the receiving node/namespace's
ditto, and follow the regular socket receive path through the receiving
node. This technique gives us a throughput similar to the node internal
throughput, several times larger than if we let the traffic go through
the full network stacks. As a comparison, max throughput for 64k
messages is four times larger than TCP throughput for the same type of
traffic.
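
In rough pseudo code, the send-path decision looks like this. This is a
sketch only, not part of the patch below: tipc_node_find(), tipc_node_put()
and tipc_sk_rcv() exist in net/tipc today, while the peer_net field and
tipc_node_xmit_regular() are illustrative names (the first for what the
patch caches on the node struct, the second a placeholder for the existing
link-level path).

static int tipc_node_xmit_sketch(struct net *net, struct sk_buff_head *list,
				 u32 dnode, int selector)
{
	struct tipc_node *n = tipc_node_find(net, dnode);

	if (n && n->peer_net) {
		/* "wormhole": hand the buffers straight to the receiving
		 * namespace's regular socket receive path, skipping the
		 * bearer and link layers (locking omitted, see the v3 note
		 * about the node read/write lock)
		 */
		tipc_sk_rcv(n->peer_net, list);
		tipc_node_put(n);
		return 0;
	}
	if (n)
		tipc_node_put(n);
	/* regular path: queue the buffers on the link towards dnode */
	return tipc_node_xmit_regular(net, list, dnode, selector);
}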

To meet any security concerns, the following should be noted.

- All nodes joining a cluster are supposed to have been certified
and authenticated by mechanisms outside TIPC. This is no different for
nodes/namespaces on the same host; they have to auto discover each
other using the attached interfaces, and establish links which are
supervised via the regular link monitoring mechanism. Hence, a kernel
local node has no other way to join a cluster than any other node, and
has to obey the policies set in the IP or device layers of the stack.

- Only when a sender has established with 100% certainty that the peer
node is located in a kernel local namespace does it choose to let user
data messages, and only those, take the crossover path to the receiving
node/namespace.

- If the receiving node/namespace is removed, its namespace pointer
is invalidated at all peer nodes, and their neighbor link monitoring
will eventually note that this node is gone (this cleanup is sketched
after this list).

- To ensure the "100% certainty" criterion, and prevent any possible
spoofing, received discovery messages must contain a proof that the
sender knows a common secret. We use the hash mix of the sending
node/namespace for this purpose, since it can be accessed directly by
all other namespaces in the kernel. Upon reception of a discovery
message, the receiver checks this proof against all the local
namespaces' hash_mix:es. If it finds a match, this, along with a
matching node id and cluster id, is deemed sufficient proof that the
peer node in question is in a local namespace, and a wormhole can be
opened (the check is sketched after this list).

- We should also consider that TIPC is intended to be a cluster local
IPC mechanism (just like e.g. UNIX sockets) rather than a network
protocol, and hence we think it can be justified to allow it to shortcut the
lower protocol layers.
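
A simplified sketch of the receiving side of this check (illustrative
only: for_each_net_rcu(), net_hash_mix(), tipc_net() and the per-netns
fields random, net_id and node_id already exist, while the function name
and its call site are not taken from the patch itself):

static struct net *tipc_find_peer_net_sketch(u32 proof, u32 peer_net_id,
					     u8 *peer_id)
{
	struct tipc_net *tn;
	struct net *tmp;
	u32 expect;

	rcu_read_lock();
	for_each_net_rcu(tmp) {
		tn = tipc_net(tmp);
		if (!tn || tn->net_id != peer_net_id)
			continue;	/* cluster id must match */
		if (memcmp(peer_id, tn->node_id, NODE_ID_LEN))
			continue;	/* node id must match */
		/* same mix the sender put into its discovery message */
		expect = tn->random ^ net_hash_mix(&init_net) ^
			 net_hash_mix(tmp);
		if (expect != proof)
			continue;	/* common secret must match */
		rcu_read_unlock();
		return tmp;	/* wormhole may be opened towards this netns */
	}
	rcu_read_unlock();
	return NULL;
}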
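
Likewise, a sketch of the invalidation step referred to above, i.e.
roughly what the pre_exit hook registered in the diff below has to take
care of (the peer_net and peer_hash_mix field names are illustrative and
all locking is omitted):

static void tipc_node_pre_cleanup_net_sketch(struct net *exit_net)
{
	struct tipc_node *n;
	struct tipc_net *tn;
	struct net *tmp;

	rcu_read_lock();
	for_each_net_rcu(tmp) {
		if (tmp == exit_net)
			continue;
		tn = tipc_net(tmp);
		if (!tn)
			continue;
		list_for_each_entry_rcu(n, &tn->node_list, list) {
			if (n->peer_net != exit_net)
				continue;
			/* drop the stale pointer; neighbor link monitoring
			 * will eventually notice that the node is gone
			 */
			n->peer_net = NULL;
			n->peer_hash_mix = 0;
		}
	}
	rcu_read_unlock();
}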

Regarding traceability, we should notice that since commit 6c9081a3915d
("tipc: add loopback device tracking") it is possible to follow the node
internal packet flow by just activating tcpdump on the loopback
interface. This will be true even for this mechanism; by activating
tcpdump on the involved nodes' loopback interfaces, their inter-namespace
messaging can easily be tracked.

v2:
- update 'net' pointer when node left/rejoined
v3:
- grab read/write lock when using node ref obj
v4:
- clone traffic between netns to loopback

Suggested-by: Jon Maloy 
Acked-by: Jon Maloy 
Signed-off-by: Hoang Le 
---
 net/tipc/core.c   |  16 +
 net/tipc/core.h   |   6 ++
 net/tipc/discover.c   |   4 +-
 net/tipc/msg.h|  14 
 net/tipc/name_distr.c |   2 +-
 net/tipc/node.c   | 155 --
 net/tipc/node.h   |   5 +-
 net/tipc/socket.c |   6 +-
 8 files changed, 197 insertions(+), 11 deletions(-)

diff --git a/net/tipc/core.c b/net/tipc/core.c
index 23cb379a93d6..ab648dd150ee 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -105,6 +105,15 @@ static void __net_exit tipc_exit_net(struct net *net)
tipc_sk_rht_destroy(net);
 }
 
+static void __net_exit tipc_pernet_pre_exit(struct net *net)
+{
+   tipc_node_pre_cleanup_net(net);
+}
+
+static struct pernet_operations tipc_pernet_pre_exit_ops = {
+   .pre_exit = tipc_pernet_pre_exit,
+};
+
 static struct pernet_operations tipc_net_ops = {
.init = tipc_init_net,
.exit = tipc_exit_net,
@@ -151,6 +160,10 @@ static int __init tipc_init(void)
if (err)
goto out_pernet_topsrv;
 
+   err = register_pernet_subsys(&tipc_pernet_pre_exit_ops);
+   if (err)
+   goto out_register_pernet_subsys;
+
err = tipc_bearer_setup();
if (err)
goto out_bearer;
@@ -158,6 +171,8 @@ static int __init tipc_init(void)
pr_info("Started in single node mode\n");
return 0;
 out_bearer:
+   unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
+out_register_pernet_subsys:
unregister_pernet_device(&tipc_topsrv_net_ops);
 out_pernet_topsrv:

[tipc-discussion] [net-next v2] tipc: improve throughput between nodes in netns

2019-10-24 Thread Hoang Le
Currently, TIPC transports intra-node user data messages directly
socket to socket, hence shortcutting all the lower layers of the
communication stack. This gives TIPC very good intra node performance,
both regarding throughput and latency.

We now introduce a similar mechanism for TIPC data traffic across
network namespaces located in the same kernel. On the send path, the
call chain is as always accompanied by the sending node's network name
space pointer. However, once we have reliably established that the
receiving node is represented by a namespace on the same host, we just
replace the namespace pointer with the receiving node/namespace's
ditto, and follow the regular socket receive path through the receiving
node. This technique gives us a throughput similar to the node internal
throughput, several times larger than if we let the traffic go through
the full network stacks. As a comparison, max throughput for 64k
messages is four times larger than TCP throughput for the same type of
traffic.

To meet any security concerns, the following should be noted.

- All nodes joining a cluster are supposed to have been certified
and authenticated by mechanisms outside TIPC. This is no different for
nodes/namespaces on the same host; they have to auto discover each
other using the attached interfaces, and establish links which are
supervised via the regular link monitoring mechanism. Hence, a kernel
local node has no other way to join a cluster than any other node, and
has to obey the policies set in the IP or device layers of the stack.

- Only when a sender has established with 100% certainty that the peer
node is located in a kernel local namespace does it choose to let user
data messages, and only those, take the crossover path to the receiving
node/namespace.

- If the receiving node/namespace is removed, its namespace pointer
is invalidated at all peer nodes, and their neighbor link monitoring
will eventually note that this node is gone.

- To ensure the "100% certainty" criterion, and prevent any possible
spoofing, received discovery messages must contain a proof that the
sender knows a common secret. We use the hash mix of the sending
node/namespace for this purpose, since it can be accessed directly by
all other namespaces in the kernel. Upon reception of a discovery
message, the receiver checks this proof against all the local
namespaces' hash_mix:es. If it finds a match, this, along with a
matching node id and cluster id, is deemed sufficient proof that
the peer node in question is in a local namespace, and a wormhole can
be opened.

- We should also consider that TIPC is intended to be a cluster local
IPC mechanism (just like e.g. UNIX sockets) rather than a network
protocol, and hence we think it can be justified to allow it to shortcut the
lower protocol layers.

Regarding traceability, we should notice that since commit 6c9081a3915d
("tipc: add loopback device tracking") it is possible to follow the node
internal packet flow by just activating tcpdump on the loopback
interface. This will be true even for this mechanism; by activating
tcpdump on the involved nodes' loopback interfaces, their inter-namespace
messaging can easily be tracked.

v2:
- update 'net' pointer when node left/rejoined

Suggested-by: Jon Maloy 
Acked-by: Jon Maloy 
Signed-off-by: Hoang Le 
---
 net/tipc/core.c   |  16 +
 net/tipc/core.h   |   6 ++
 net/tipc/discover.c   |   4 +-
 net/tipc/msg.h|  14 
 net/tipc/name_distr.c |   2 +-
 net/tipc/node.c   | 148 --
 net/tipc/node.h   |   5 +-
 net/tipc/socket.c |   6 +-
 8 files changed, 190 insertions(+), 11 deletions(-)

diff --git a/net/tipc/core.c b/net/tipc/core.c
index 23cb379a93d6..ab648dd150ee 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -105,6 +105,15 @@ static void __net_exit tipc_exit_net(struct net *net)
tipc_sk_rht_destroy(net);
 }
 
+static void __net_exit tipc_pernet_pre_exit(struct net *net)
+{
+   tipc_node_pre_cleanup_net(net);
+}
+
+static struct pernet_operations tipc_pernet_pre_exit_ops = {
+   .pre_exit = tipc_pernet_pre_exit,
+};
+
 static struct pernet_operations tipc_net_ops = {
.init = tipc_init_net,
.exit = tipc_exit_net,
@@ -151,6 +160,10 @@ static int __init tipc_init(void)
if (err)
goto out_pernet_topsrv;
 
+   err = register_pernet_subsys(&tipc_pernet_pre_exit_ops);
+   if (err)
+   goto out_register_pernet_subsys;
+
err = tipc_bearer_setup();
if (err)
goto out_bearer;
@@ -158,6 +171,8 @@ static int __init tipc_init(void)
pr_info("Started in single node mode\n");
return 0;
 out_bearer:
+   unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
+out_register_pernet_subsys:
unregister_pernet_device(&tipc_topsrv_net_ops);
 out_pernet_topsrv:
tipc_socket_stop();
@@ -177,6 +192,7 @@ static int __init tipc_init(void)
 static void __exit 

Re: [tipc-discussion] [net-next v2] tipc: improve throughput between nodes in netns

2019-10-21 Thread Jon Maloy via tipc-discussion
Hi Hoang,
Just some improvements to (my own) log message text below.  Then you can go 
ahead and add "acked-by" from me.

///jon


> -Original Message-
> From: Hoang Le 
> Sent: 21-Oct-19 00:16
> To: tipc-discussion@lists.sourceforge.net; Jon Maloy
> ; ma...@donjonn.com; ying@windriver.com;
> l...@redhat.com
> Subject: [net-next v2] tipc: improve throughput between nodes in netns
> 
> Currently, TIPC transports intra-node user data messages directly socket to
> socket, hence shortcutting all the lower layers of the communication stack.
> This gives TIPC very good intra node performance, both regarding throughput
> and latency.
> 
> We now introduce a similar mechanism for TIPC data traffic across network
> name spaces located in the same kernel. On the send path, the call chain is as
> always accompanied by the sending node's network name space pointer.
> However, once we have reliably established that the receiving node is
> represented by a name space on the same host, we just replace the name
> space pointer with the receiving node/name space's ditto, and follow the
> regular socket receive path through the receiving node. This technique gives
> us a throughput similar to the node internal throughput, several times larger
> than if we let the traffic go through the full network stack. As a comparison,
> max throughput for 64k messages is four times larger than TCP throughput for
> the same type of traffic in a similar environment.
> 
> To meet any security concerns, the following should be noted.
> 
> - All nodes joining a cluster are supposed to have been certified and
> authenticated by mechanisms outside TIPC. This is no different for
> nodes/name spaces on the same host; they have to auto discover each other
> using the attached interfaces, and establish links which are supervised via 
> the
> regular link monitoring mechanism. Hence, a kernel local node has no other
> way to join a cluster than any other node, and has to obey the policies set in
> the IP or device layers of the stack.
> 
> - Only when a sender has established with 100% certainty that the peer node
> is located in a kernel local name space does it choose to let user data 
> messages,
> and only those, take the crossover path to the receiving node/name space.
> 
> - If the receiving node/name space  is removed, its name space pointer is
> invalidated at all peer nodes, and their neighbor link monitoring will 
> eventually
> note that this node is gone.
> 
> - To ensure the "100% certainty" criterion, and prevent any possible spoofing,
> received discovery messages must contain a proof that 

s/they know a common secret./the sender knows a common secret./g

> We use the hash_mix of the sending node/name space for this
> purpose, since it can be accessed directly by all other name spaces in the
> kernel. Upon reception of a discovery message, the receiver checks this proof
> against all the local name spaces'
> hash_mix:es.  If it finds a match, this, along with a matching node id and
> cluster id, is deemed sufficient proof that the peer node in question is 
> in a
> local name space, and a wormhole can be opened.
> 
> - We should also consider that TIPC is intended to be a cluster local IPC
> mechanism (just like e.g. UNIX sockets)  rather than a network protocol, and
> hence 

s/should be given more freedom to shortcut the lower protocol than other 
protocols/ 
   we think it can be justified to allow it to shortcut the lower protocol 
layers./g
> 
> Regarding traceability, we should notice that since commit 6c9081a3915d
> ("tipc: add loopback device tracking") it is possible to follow the node 
> internal
> packet flow by just activating tcpdump on the loopback interface. This will be
> true even for this mechanism; by activating tcpdump on the involved nodes'
> loopback interfaces their inter-name space messaging can easily be tracked.
> 
> Suggested-by: Jon Maloy 
> Signed-off-by: Hoang Le 
> ---
>  net/tipc/discover.c   |  10 -
>  net/tipc/msg.h|  10 +
>  net/tipc/name_distr.c |   2 +-
>  net/tipc/node.c   | 100
> --
>  net/tipc/node.h   |   4 +-
>  net/tipc/socket.c |   6 +--
>  6 files changed, 121 insertions(+), 11 deletions(-)
> 
> diff --git a/net/tipc/discover.c b/net/tipc/discover.c index
> c138d68e8a69..338d402fcf39 100644
> --- a/net/tipc/discover.c
> +++ b/net/tipc/discover.c
> @@ -38,6 +38,8 @@
>  #include "node.h"
>  #include "discover.h"
> 
> +#include <net/netns/hash.h>
> +
>  /* min delay during bearer start up */
>  #define TIPC_DISC_INIT   msecs_to_jiffies(125)
>  /* max delay if bearer has no links */
> @@ -83,6 +85,7 @@ static void tipc_disc_init_msg(struct net *net, struct
> sk_buff *skb,
>   struct tipc_net *tn = tipc_net(net);
>   u32 dest_domain = b->domain;
>   struct tipc_msg *hdr;
> + u32 hash;
> 
>   hdr = buf_msg(skb);
>   tipc_msg_init(tn->trial_addr, hdr, LINK_CONFIG, mtyp, @@ -94,6
> +97,10 @@ static void 

[tipc-discussion] [net-next v2] tipc: improve throughput between nodes in netns

2019-10-20 Thread Hoang Le
Currently, TIPC transports intra-node user data messages directly
socket to socket, hence shortcutting all the lower layers of the
communication stack. This gives TIPC very good intra node performance,
both regarding throughput and latency.

We now introduce a similar mechanism for TIPC data traffic across
network name spaces located in the same kernel. On the send path, the
call chain is as always accompanied by the sending node's network name
space pointer. However, once we have reliably established that the
receiving node is represented by a name space on the same host, we just
replace the name space pointer with the receiving node/name space's
ditto, and follow the regular socket receive path through the receiving
node. This technique gives us a throughput similar to the node internal
throughput, several times larger than if we let the traffic go through
the full network stack. As a comparison, max throughput for 64k
messages is four times larger than TCP throughput for the same type of
traffic.

To meet any security concerns, the following should be noted.

- All nodes joining a cluster are supposed to have been certified
and authenticated by mechanisms outside TIPC. This is no different for
nodes/name spaces on the same host; they have to auto discover each
other using the attached interfaces, and establish links which are
supervised via the regular link monitoring mechanism. Hence, a kernel
local node has no other way to join a cluster than any other node, and
has to obey the policies set in the IP or device layers of the stack.

- Only when a sender has established with 100% certainty that the peer
node is located in a kernel local name space does it choose to let user
data messages, and only those, take the crossover path to the receiving
node/name space.

- If the receiving node/name space  is removed, its name space pointer
is invalidated at all peer nodes, and their neighbor link monitoring
will eventually note that this node is gone.

- To ensure the "100% certainty" criterion, and prevent any possible
spoofing, received discovery messages must contain a proof that they
know a common secret. We use the hash_mix of the sending node/name
space for this purpose, since it can be accessed directly by all other
name spaces in the kernel. Upon reception of a discovery message, the
receiver checks this proof against all the local name spaces'
hash_mix:es.  If it finds a match, this, along with a matching node id
and cluster id, is deemed sufficient proof that the peer node in
question is in a local name space, and a wormhole can be opened.

- We should also consider that TIPC is intended to be a cluster local
IPC mechanism (just like e.g. UNIX sockets)  rather than a network
protocol, and hence should be given more freedom to shortcut the lower
protocol than other protocols.

Regarding traceability, we should notice that since commit 6c9081a3915d
("tipc: add loopback device tracking") it is possible to follow the node
internal packet flow by just activating tcpdump on the loopback
interface. This will be true even for this mechanism; by activating
tcpdump on the involved nodes' loopback interfaces, their inter-name
space messaging can easily be tracked.

Suggested-by: Jon Maloy 
Signed-off-by: Hoang Le 
---
 net/tipc/discover.c   |  10 -
 net/tipc/msg.h|  10 +
 net/tipc/name_distr.c |   2 +-
 net/tipc/node.c   | 100 --
 net/tipc/node.h   |   4 +-
 net/tipc/socket.c |   6 +--
 6 files changed, 121 insertions(+), 11 deletions(-)

diff --git a/net/tipc/discover.c b/net/tipc/discover.c
index c138d68e8a69..338d402fcf39 100644
--- a/net/tipc/discover.c
+++ b/net/tipc/discover.c
@@ -38,6 +38,8 @@
 #include "node.h"
 #include "discover.h"
 
+#include <net/netns/hash.h>
+
 /* min delay during bearer start up */
 #define TIPC_DISC_INIT msecs_to_jiffies(125)
 /* max delay if bearer has no links */
@@ -83,6 +85,7 @@ static void tipc_disc_init_msg(struct net *net, struct sk_buff *skb,
struct tipc_net *tn = tipc_net(net);
u32 dest_domain = b->domain;
struct tipc_msg *hdr;
+   u32 hash;
 
hdr = buf_msg(skb);
tipc_msg_init(tn->trial_addr, hdr, LINK_CONFIG, mtyp,
@@ -94,6 +97,10 @@ static void tipc_disc_init_msg(struct net *net, struct sk_buff *skb,
msg_set_dest_domain(hdr, dest_domain);
msg_set_bc_netid(hdr, tn->net_id);
b->media->addr2msg(msg_media_addr(hdr), &b->addr);
+   hash = tn->random;
+   hash ^= net_hash_mix(&init_net);
+   hash ^= net_hash_mix(net);
+   msg_set_peer_net_hash(hdr, hash);
msg_set_node_id(hdr, tipc_own_id(net));
 }
 
@@ -242,7 +249,8 @@ void tipc_disc_rcv(struct net *net, struct sk_buff *skb,
if (!tipc_in_scope(legacy, b->domain, src))
return;
tipc_node_check_dest(net, src, peer_id, b, caps, signature,
-			     &maddr, &respond, &dupl_addr);
+			     msg_peer_net_hash(hdr), &maddr, &respond,
+