Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-19 Thread Alexander Alemayhu
On Wed, Mar 15, 2017 at 05:06:58PM -0700, Eric Dumazet wrote:
> 
> Nat are good, Nat are good.
> 
> I can't find this hilarious video we watched in Copenhagen ;)
> 
Maybe 'Oops, I did it: IPv6 NAT by Patrick McHardy'[0]. Starts around
19:10.

[0]: http://video.dkuug.dk/media/oops-i-did-it-ipv6-nat-by-patrick-mchardy

-- 
Kind regards

Alexander Alemayhu


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-16 Thread Lutz Vieweg

On 03/16/2017 04:40 PM, Neal Cardwell wrote:

I currently wonder: What is the correct advice to an operator who needs
to run one server instance that is meant to accept thousands of new,
short-lived TCP connections per minute?


Note that for this to be a problem there would have to be thousands of
new, short-lived TCP connections per minute from a single source IP
address to a single destination IP address. Normal client software
should not be doing this. AFAIK this is pretty rare, unless someone is
running a load test or has an overly-aggressive monitoring system.


Indeed, I meanwhile found that a load/regression test scenario had
been the rationale for the tcp_tw_recycle = 1 setting - when a
recorded log of hundreds of thousands of connections (each placing
one or a few requests) was replayed, it failed due to an excessive
number of connections in TIME_WAIT state.

Do I understand correctly that "tcp_tw_recycle = 1" is fine
in such a scenario, since one can be sure that both client and server
are at fixed, non-NATed IP addresses?

I wonder whether it might be possible to limit the use
of "tcp_tw_recycle = 1" to a certain address or listen-port range?

If not, I guess our best option at this time is to advise
enabling "tcp_tw_recycle = 1" only while explicitly performing
local load/regression tests, and to disable it otherwise.
(This, however, means that running both automated continuous integration
tests and any services for remote clients on the same system
would not mix well, as the setting could be "right" for only one
of them.)
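
(For what it's worth, a minimal sketch of flipping the knob only around the
test run - assuming root privileges and a kernel that still ships this sysctl;
the helper name is made up and error handling is kept to the bare minimum:)

#include <stdio.h>

/* Toggle net.ipv4.tcp_tw_recycle: 1 just before a local load/regression
 * test, 0 again afterwards. Writing the proc file requires root and only
 * works on kernels that still provide the sysctl.
 */
static int set_tw_recycle(int on)
{
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_tw_recycle", "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", on);
	fclose(f);
	return 0;
}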



(1) use longer connections from the client side


Sure, in cases where that is under our control, we do exactly that.


(2) have the client do the close(), so the client is the side to carry the
 TIME_WAIT state


In the load/regression test scenario, we are both server and client,
so I guess this would not help.


(3) have the server use SO_LINGER with a timeout of 0, so that
 the connection is closed with a RST and the server carries no
 TIME_WAIT state


Potentially losing the end of some conversation is not really
an option for most protocols / use cases.

Regards,

Lutz Vieweg




Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-16 Thread Willy Tarreau
Hi Neal,

On Thu, Mar 16, 2017 at 11:40:52AM -0400, Neal Cardwell wrote:
> On Thu, Mar 16, 2017 at 7:31 AM, Lutz Vieweg  wrote:
> >
> > On 03/15/2017 11:55 PM, Willy Tarreau wrote:
> >>
> >> At least I can say I've seen many people enable it without understanding
> >> its impact, confusing it with tcp_tw_reuse, and copy-pasting it from random
> >> blogs and complaining about issues in production.
> >
> >
> > I currently wonder: What is the correct advice to an operator who needs
> > to run one server instance that is meant to accept thousands of new,
> > short-lived TCP connections per minute?
> 
> Note that for this to be a problem there would have to be thousands of
> new, short-lived TCP connections per minute from a single source IP
> address to a single destination IP address. Normal client software
> should not be doing this. AFAIK this is pretty rare, unless someone is
> running a load test or has an overly-aggressive monitoring system. NAT
> boxes or proxies with that kind of traffic should be running with
> multiple public source IPs.

In fact it's the regular situation with reverse-proxies: you can cycle through
the whole source port range every second. But once timestamps are enabled, you
benefit from PAWS and you don't have any problems anymore; everything
works pretty well.

> But if/when the problem occurs, then the feasible solutions I'm aware
> of, in approximate descending order of preference, are:
> 
> (1) use longer connections from the client side (browsers and RPC libraries
> are usually pretty good about keeping connections open for a long time, so
> this is usually sufficient)
> 
> (2) have the client do the close(), so the client is the side to carry the
> TIME_WAIT state

That's impossible for proxies, as you can't connect again from the same
source port, causing performance to be divided by more than 100. What
proxies have to do when they're forced to close an outgoing connection first
is to set SO_LINGER to (0, 0) so that an RST is used and the source port can
be reused. But as you can guess, if that RST gets lost, then the next opening
is not that beautiful: either [SYN, ACK, RST, pause, SYN, SYN-ACK, ACK] or
[SYN, RST, pause, SYN, SYN-ACK, ACK] depending on whether the SYN appears
in the previous window or not.

> (3) have the server use SO_LINGER with a timeout of 0, so that
> the connection is closed with a RST and the server carries no
> TIME_WAIT state

The problem is that it also kills the tail data.

Quite frankly, the only issues I'm used to seeing are with clients closing
first and with reusing source connections. As soon as timestamps are
enabled on both sides and people don't blindly play with tcp_tw_recycle,
I really never face any connection issues.

Willy


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-16 Thread Neal Cardwell
On Thu, Mar 16, 2017 at 7:31 AM, Lutz Vieweg  wrote:
>
> On 03/15/2017 11:55 PM, Willy Tarreau wrote:
>>
>> At least I can say I've seen many people enable it without understanding its
>> impact, confusing it with tcp_tw_reuse, and copy-pasting it from random blogs
>> and complaining about issues in production.
>
>
> I currently wonder: What is the correct advice to an operator who needs
> to run one server instance that is meant to accept thousands of new,
> short-lived TCP connections per minute?

Note that for this to be a problem there would have to be thousands of
new, short-lived TCP connections per minute from a single source IP
address to a single destination IP address. Normal client software
should not be doing this. AFAIK this is pretty rare, unless someone is
running a load test or has an overly-aggressive monitoring system. NAT
boxes or proxies with that kind of traffic should be running with
multiple public source IPs.

But if/when the problem occurs, then the feasible solutions I'm aware
of, in approximate descending order of preference, are:

(1) use longer connections from the client side (browsers and RPC libraries are
usually pretty good about keeping connections open for a long time, so this
is usually sufficient)

(2) have the client do the close(), so the client is the side to carry the
TIME_WAIT state

(3) have the server use SO_LINGER with a timeout of 0, so that
the connection is closed with a RST and the server carries no
TIME_WAIT state
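
For concreteness, a minimal sketch of option (3) from application code - the
helper name is just illustrative, and note that an abortive close discards any
data still queued for transmission:

#include <sys/socket.h>
#include <unistd.h>

/* Abortive close: with SO_LINGER enabled and a 0-second timeout, close()
 * sends an RST instead of a FIN, so this side leaves no TIME_WAIT entry.
 * Any unsent data in the socket buffer is discarded.
 */
static void close_with_rst(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
	close(fd);
}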

neal


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-16 Thread Lutz Vieweg

On 03/15/2017 11:55 PM, Willy Tarreau wrote:

At least I can say I've seen many people enable it without understanding its
impact, confusing it with tcp_tw_reuse, and copy-pasting it from random blogs
and complaining about issues in production.


I currently wonder: What is the correct advice to an operator who needs
to run one server instance that is meant to accept thousands of new,
short-lived TCP connections per minute?

The explanation text at
https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux
seems to provide comprehensive advice, but its summary is, after all,
somewhat disappointing:


The universal solution is to increase the number of possible quadruplets by
using, for example, more server ports. This will allow you to not exhaust the
possible connections with TIME-WAIT entries.


Assuming an operator has to deal with a given server executable, which does not
provide a feature to "open many listening ports for the same purpose in
parallel", this is hardly an option. (Of course, if you can start N server
instances instead of 1, this becomes an option, but not everything is a simple,
stateless web server.)
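
(Just to make the blog's suggestion concrete: for a server that does support
it, "more server ports" boils down to something like the sketch below - the
base port and count are arbitrary, and error handling is minimal:)

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Listen on n consecutive ports so that clients can spread connections over
 * more (daddr, dport) pairs, multiplying the number of usable 4-tuples
 * before TIME_WAIT exhaustion becomes a problem.
 */
static int listen_on_ports(int *fds, int n, uint16_t base_port)
{
	for (int i = 0; i < n; i++) {
		struct sockaddr_in addr;
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0)
			return -1;
		memset(&addr, 0, sizeof(addr));
		addr.sin_family = AF_INET;
		addr.sin_addr.s_addr = htonl(INADDR_ANY);
		addr.sin_port = htons(base_port + i);
		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    listen(fd, 128) < 0)
			return -1;
		fds[i] = fd;
	}
	return 0;
}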


On the server side, do not enable net.ipv4.tcp_tw_recycle unless you are pretty
sure you will never have NAT devices in the mix. Enabling net.ipv4.tcp_tw_reuse
is useless for incoming connections.


So basically both options won't help the server operator.


On the client side, enabling net.ipv4.tcp_tw_reuse is another almost-safe
solution. Enabling net.ipv4.tcp_tw_recycle in addition to net.ipv4.tcp_tw_reuse
is mostly useless.


If you just operate the server, but not the (remote) clients, this is not
relevant.


Is the final verdict that unless the server software itself offers to open
N listen ports for the same purpose, there is no solution?

Regards,

Lutz Vieweg




Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread Eric Dumazet
On Wed, 2017-03-15 at 16:45 -0700, David Miller wrote:

> Ok, I guess we can remove it in that case.  I'm still a bit disappointed
> as I was always hoping someone would find a way to make this work even
> in the presence of NAT.

Nat are good, Nat are good.

I can't find this hilarious video we watched in Copenhagen ;)






Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread David Miller
From: Eric Dumazet 
Date: Wed, 15 Mar 2017 15:59:01 -0700

> On Wed, 2017-03-15 at 15:40 -0700, David Miller wrote:
>> From: Soheil Hassas Yeganeh 
>> Date: Wed, 15 Mar 2017 16:30:45 -0400
>> 
>> > Note that this cache was already broken for caching timestamps of
>> > multiple machines behind a NAT sharing the same address.
>> 
>> That's the documented, well established, limitation of time-wait
>> recycling.
>> 
>> People who enable it need to consider this issue.
>> 
>> This limitation of the feature does not give us a reason to break the
>> feature even further as a matter of convenience, or to remove it
>> altogether for the same reason.
>> 
>> Please, instead, fix the bug that was introduced.
>> 
>> Thank you.
> 
> You mean revert Florian's nice patches?
> 
> This would kill timestamp randomization, and thus prevent some
> organizations from turning TCP timestamps on.
> 
> TCP timestamps are more useful than this obscure tw_recycle thing that
> is hurting innocent users.

Ok, I guess we can remove it in that case.  I'm still a bit disappointed
as I was always hoping someone would find a way to make this work even
in the presence of NAT.

I must be too optimistic.


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread David Miller
From: Florian Westphal 
Date: Wed, 15 Mar 2017 23:57:26 +0100

> AFAIU we only have two alternatives: removal of the randomization feature
> or switching to an offset computed via hash(saddr, daddr, secret).
> 
> Unless there are more comments I'll look into doing the latter tomorrow.

No, I'll apply the removal.


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread Eric Dumazet
On Wed, 2017-03-15 at 15:40 -0700, David Miller wrote:
> From: Soheil Hassas Yeganeh 
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well established, limitation of time-wait
> recycling.
> 
> People who enable it need to consider this issue.
> 
> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.
> 
> Thank you.

You mean revert Florian's nice patches?

This would kill timestamp randomization, and thus prevent some
organizations from turning TCP timestamps on.

TCP timestamps are more useful than this obscure tw_recycle thing that
is hurting innocent users.




Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread Florian Westphal
David Miller  wrote:
> From: Soheil Hassas Yeganeh 
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well established, limitation of time-wait
> recycling.

Sigh.

"don't enable this if you connect your machine to the internet".
We're not in the 1990s anymore. Even I am behind IPv4 CG-NAT nowadays.

So I disagree and would remove this thing.

> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.

AFAIU we only have two alternatives: removal of the randomization feature
or switching to an offset computed via hash(saddr, daddr, secret).

Unless there are more comments I'll look into doing the latter tomorrow.
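
Roughly, such an offset would look like the sketch below (userspace
illustration only; mix32() is just a stand-in for a proper keyed hash such as
SipHash, and the secret would be generated at boot). All connections to the
same (saddr, daddr) pair share one offset, so per-destination monotonicity is
preserved while the peer still cannot infer uptime:

#include <stdint.h>

/* Stand-in mixing function; a real implementation would use a keyed hash
 * (e.g. SipHash) with a random boot-time secret.
 */
static uint32_t mix32(uint32_t x)
{
	x ^= x >> 16;
	x *= 0x7feb352dU;
	x ^= x >> 15;
	x *= 0x846ca68bU;
	x ^= x >> 16;
	return x;
}

/* Per-path timestamp offset: stable for a given (saddr, daddr, secret),
 * different across address pairs, so timestamps received from one source
 * still appear monotonically increasing across its connections.
 */
static uint32_t ts_offset(uint32_t saddr, uint32_t daddr, uint32_t secret)
{
	return mix32(saddr ^ mix32(daddr ^ secret));
}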


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread Willy Tarreau
Hi David,

On Wed, Mar 15, 2017 at 03:40:44PM -0700, David Miller wrote:
> From: Soheil Hassas Yeganeh 
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well established, limitation of time-wait
> recycling.
> 
> People who enable it need to consider this issue.
> 
> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.

At least I can say I've seen many people enable it without understanding
its impact, confusing it with tcp_tw_reuse, and copy-pasting it from
random blogs and complaining about issues in production.

I agree that it's hard to arbitrate between stupidity and flexibility
though :-/

Regards,
Willy


Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread David Miller
From: Soheil Hassas Yeganeh 
Date: Wed, 15 Mar 2017 16:30:45 -0400

> Note that this cache was already broken for caching timestamps of
> multiple machines behind a NAT sharing the same address.

That's the documented, well established, limitation of time-wait
recycling.

People who enable it need to consider this issue.

This limitation of the feature does not give us a reason to break the
feature even further as a matter of convenience, or to remove it
altogether for the same reason.

Please, instead, fix the bug that was introduced.

Thank you.


[PATCH net-next 1/2] tcp: remove per-destination timestamp cache

2017-03-15 Thread Soheil Hassas Yeganeh
From: Soheil Hassas Yeganeh 

Commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection)
randomizes TCP timestamps per connection. After this commit,
there is no guarantee that the timestamps received from the
same destination are monotonically increasing. As a result,
the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts
in struct tcp_metrics_block) is broken and cannot be relied upon.

Remove the per-destination timestamp cache and all related code
paths.

Note that this cache was already broken for caching timestamps of
multiple machines behind a NAT sharing the same address.

Signed-off-by: Soheil Hassas Yeganeh 
Signed-off-by: Eric Dumazet 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Cc: Lutz Vieweg 
Cc: Florian Westphal 
---
 include/net/tcp.h|   6 +-
 net/ipv4/tcp_input.c |   6 +-
 net/ipv4/tcp_ipv4.c  |   4 --
 net/ipv4/tcp_metrics.c   | 147 ++-
 net/ipv4/tcp_minisocks.c |  22 ++-
 net/ipv6/tcp_ipv6.c  |   5 --
 6 files changed, 11 insertions(+), 179 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index bede8f7fa742..c81f3b958d44 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -406,11 +406,7 @@ void tcp_clear_retrans(struct tcp_sock *tp);
 void tcp_update_metrics(struct sock *sk);
 void tcp_init_metrics(struct sock *sk);
 void tcp_metrics_init(void);
-bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst,
-   bool paws_check, bool timestamps);
-bool tcp_remember_stamp(struct sock *sk);
-bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw);
-void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst);
+bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst);
 void tcp_disable_fack(struct tcp_sock *tp);
 void tcp_close(struct sock *sk, long timeout);
 void tcp_init_sock(struct sock *sk);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 96b67a8b18c3..aafec0676d3e 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -6342,8 +6342,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
	dst = af_ops->route_req(sk, &fl, req, &strict);
 
	if (dst && strict &&
-	    !tcp_peer_is_proven(req, dst, true,
-				tmp_opt.saw_tstamp)) {
+	    !tcp_peer_is_proven(req, dst)) {
		NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
		goto drop_and_release;
	}
@@ -6352,8 +6351,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
	else if (!net->ipv4.sysctl_tcp_syncookies &&
		 (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) <
		  (net->ipv4.sysctl_max_syn_backlog >> 2)) &&
-		 !tcp_peer_is_proven(req, dst, false,
-				     tmp_opt.saw_tstamp)) {
+		 !tcp_peer_is_proven(req, dst)) {
/* Without syncookies last quarter of
 * backlog is filled with destinations,
 * proven to be alive.
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 08d870e45658..d8b401fff9fe 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -198,10 +198,6 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
tp->write_seq  = 0;
}
 
-	if (tcp_death_row->sysctl_tw_recycle &&
-	    !tp->rx_opt.ts_recent_stamp && fl4->daddr == daddr)
-		tcp_fetch_timewait_stamp(sk, &rt->dst);
-
inet->inet_dport = usin->sin_port;
sk_daddr_set(sk, daddr);
 
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
index 0f46e5fe31ad..9d0d4f39e42b 100644
--- a/net/ipv4/tcp_metrics.c
+++ b/net/ipv4/tcp_metrics.c
@@ -45,8 +45,6 @@ struct tcp_metrics_block {
	struct inetpeer_addr	tcpm_saddr;
	struct inetpeer_addr	tcpm_daddr;
unsigned long   tcpm_stamp;
-   u32 tcpm_ts;
-   u32 tcpm_ts_stamp;
u32 tcpm_lock;
u32 tcpm_vals[TCP_METRIC_MAX_KERNEL + 1];
struct tcp_fastopen_metrics tcpm_fastopen;
@@ -123,8 +121,6 @@ static void tcpm_suck_dst(struct tcp_metrics_block *tm,
tm->tcpm_vals[TCP_METRIC_SSTHRESH] = dst_metric_raw(dst, RTAX_SSTHRESH);
tm->tcpm_vals[TCP_METRIC_CWND] = dst_metric_raw(dst, RTAX_CWND);
	tm->tcpm_vals[TCP_METRIC_REORDERING] = dst_metric_raw(dst, RTAX_REORDERING);
-   tm->tcpm_ts = 0;
-   tm->tcpm_ts_stamp =