Mark Handley sent this thought-provoking note after our meeting this
morning. Some ideas here that stretch the architecture, I think. At
least we need to consider extensibility of the API. Forwarding with
permission.
--aaron
Forwarded message:
From: Mark Handley <[email protected]>
To: Colin Perkins <[email protected]>, [email protected]
Subject: Possible TAPS challenges
Date: Wed, 21 Mar 2018 14:27:40 +0000
Hi Aaron, Colin,
As I mentioned, I'm playing with ideas for a new TCP-replacement
transport protocol at the moment. It doesn't really have a name right
now, but let's call it NeoTCP. Likely it won't go anywhere, but some
of the ideas differ a bit from other transport protocols, so may serve
as a useful test for the TAPS API.
1. Pre-authentication. I want to put the ssh server on my home
machine on the Internet without it being constantly attacked with
password guessing attacks. NeoTCP implements very simple
pre-authentication, where my laptop and my home machine share a
secret. When connecting, the client sends the pair (nonce,
Hash(nonce, secret)). The server won't send a syn/ack equivalent
unless the secret matches one of the secrets in its cache.
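As a concrete illustration, a minimal sketch of that check in Python. The text says Hash(nonce, secret); this sketch uses an HMAC, the usual keyed construction, and all names here are illustrative rather than NeoTCP's actual wire format:

```python
import hashlib
import hmac
import os

def make_preauth(secret: bytes) -> tuple[bytes, bytes]:
    # Client side: a fresh nonce bound to the shared secret.
    nonce = os.urandom(16)
    tag = hmac.new(secret, nonce, hashlib.sha256).digest()
    return nonce, tag

def server_accepts(nonce: bytes, tag: bytes, secret_cache: list[bytes]) -> bool:
    # Server side: only answer with a syn/ack equivalent when the tag
    # matches one of the cached secrets; otherwise stay silent.
    return any(
        hmac.compare_digest(tag, hmac.new(s, nonce, hashlib.sha256).digest())
        for s in secret_cache
    )
```

A client without a cached secret never even gets a response, so password guessing against the ssh daemon behind it can't start.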
2. Source-spoofing protection. NeoTCP supports the equivalent of SYN
cookies, but with a 4-way handshake to avoid the potential deadlock
associated with SYN cookies. I'm also playing with the idea of
allowing middleboxes to interpose a challenge in response to the SYN,
and when the challenge response arrives from the client, only then is
the packet passed through to the actual server, and the handshake
continues. I'm not clear yet if including middleboxes in some way
impacts the API.
3. Encryption. Goal is to do tcpcrypt-like encryption, but provide
hooks to higher layers so they can do whatever full authentication
that particular application needs.
4. Redirect. Redirecting connections is a very common
application-layer function, but really you would prefer to redirect
before you've set up a connection. Such redirection obviously needs
some form of authentication (maybe this contradicts 3 - not clear
where I'm going on this yet). What would the API be for a redirect
server? It's not a full server; it listens, but simply sends
stateless syn/ack redirects, rather than accepting connections.
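One way to picture the API question: a redirect server's handler never yields a connection object at all, only a redirect answer. Everything below is an invented sketch, not a real TAPS type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConnectionRequest:
    # What a redirect listener sees: just the initial handshake packet.
    client_addr: str
    requested_service: str

def redirect_target(req: ConnectionRequest,
                    table: dict[str, str]) -> Optional[str]:
    # Stateless policy: map the requested service to the host that
    # actually serves it; None means "no redirect, accept locally".
    return table.get(req.requested_service)
```

The listener would then emit a stateless syn/ack redirect carrying the returned address (signed somehow, per the authentication caveat above) and forget the request, rather than instantiating a connection.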
5. Acking. NeoTCP supports multi-path using two sequence spaces -
subflow packet sequence numbers and data sequence numbers like MPTCP.
Unlike MPTCP, the data sequence number ack indicates the receiving
application received the packet (with MPTCP, it only indicates
reception by the receiving stack).
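The distinction might be modelled like this (field names are illustrative): the stack-level ack advances on arrival, while the data-sequence ack advances only when the application actually reads.

```python
from dataclasses import dataclass

@dataclass
class ReceiverState:
    stack_acked: int = 0  # subflow space: advances when a packet arrives
    app_acked: int = 0    # data sequence space: advances on app read
    buffer: bytes = b""

    def on_packet(self, subflow_seq: int, data: bytes) -> None:
        # The receiving stack acks immediately, as MPTCP also would.
        self.stack_acked = subflow_seq + len(data)
        self.buffer += data

    def app_read(self, n: int) -> bytes:
        # Only now, once the application has the bytes, does the
        # NeoTCP-style data sequence ack move forward.
        data, self.buffer = self.buffer[:n], self.buffer[n:]
        self.app_acked += len(data)
        return data
```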
6. Pulling. NeoTCP takes some lessons from NDP, and is a
receiver-driven protocol. When several senders are sending to one
receiver, this allows the receiver to choose precisely which senders
to pull packets from at any time. This generalizes the QUIC/HTTP2
priorities to now support multiple different senders. You can use
this to do aggregate congestion control for incoming traffic, avoiding
self-congestion. It's up to the receiving application to determine
priorities, and to the transport to decide how to use those
priorities.
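The receiver-side choice reduces to splitting an aggregate pull budget across senders by application priority. A toy version, where the strict-priority policy is one possible transport decision rather than NeoTCP's fixed behaviour:

```python
def grant_pulls(credits: int,
                senders: dict[str, tuple[int, int]]) -> dict[str, int]:
    # senders maps name -> (priority, queued_packets); lower number is
    # higher priority. The shared credit pool is what lets the receiver
    # do aggregate congestion control over all incoming traffic.
    grants: dict[str, int] = {}
    for name, (_prio, queued) in sorted(senders.items(),
                                        key=lambda kv: kv[1][0]):
        take = min(credits, queued)
        if take:
            grants[name] = take
            credits -= take
        if credits == 0:
            break
    return grants
```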
7. Close is a total mess with TCP. It's even worse with a user-space
protocol - your application may quit when close returns; data may not
have been received yet and needs retransmitting, but there's no-one
left to do it. By default, NeoTCP's close won't return until the
receiving application has received all the sent data. Obviously you
need some way to avoid deadlock when the receiver has died; that
timeout is
application specific.
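A sketch of that close semantic, assuming a connection object exposing the application-level data-sequence ack from point 5 (both names are hypothetical):

```python
import time

def close_blocking(conn, timeout: float) -> bool:
    # Block until the peer *application* has consumed everything we
    # sent, or give up after an application-chosen timeout so a dead
    # receiver can't deadlock us. Returns False on timeout.
    deadline = time.monotonic() + timeout
    while conn.peer_app_acked() < conn.bytes_sent:
        if time.monotonic() >= deadline:
            return False
        time.sleep(0.01)
    return True
```

A user-space stack can then safely tear itself down when this returns True: there is no unreceived data left that anyone would need to retransmit.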
8. Finally, we've done a lot of work over the last couple of years on
understanding MPTCP for web traffic. It's ugly. You have a whole
load of short objects, and the application has a priority order for
them. But the paths may have very different latency. If you send any
packet from the highest priority object on the high latency path, you
can delay the object significantly, and it can really hurt the overall
page load time, as other requests get stalled waiting for that object.
The best you can do is to run a per-packet scheduler, and ask the
question "if I send the next packet from this object on the higher
latency path, will it arrive after the rest of the object sent on the
lower latency path?" If the answer is yes, you shouldn't send that
packet on the higher latency path. Next you have to consider the
second highest priority object, and ask if you should instead send a
packet from that object on the higher latency path. You should, if it
will arrive before both the highest priority object and all the rest
of the second highest priority object. If not, you shouldn't, and you
should consider the third highest priority object, and so on.
Obviously, this is complicated. At the very least, the transport
protocol needs to know which packets are from which object, and what
those object priorities are. At the receiver, objects may arrive out
of order, but packets within an object must be in-order.
There are probably more things, but that's what comes to mind right
now.
Mark
_______________________________________________
Taps mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/taps