Tom,
I disagree; discussions about deployment and implementation are in
scope. The primary argument for necessity that this draft makes is
that MPTCP is being deployed too slowly.
The main argument of the draft is that deployment on clients and servers does
not proceed at the same pace. This is the motivation for having a converter.
There *is* significant deployment of Multipath TCP today on clients and in
some specific use cases.
If we ignore the very large providers like Google or Facebook, deployment of
Olivier,
Why ignore the large content providers? If you want MPTCP to be a
success, these (e.g. FANG) are exactly who you should be engaging.
I don't ignore them. Apple uses MPTCP on iOS 11 for all applications.
Once they're on board that covers a lot of user experience and others
are likely to follow their lead.
With the availability of MPTCP on iOS 11 for any application, we'll see
a growth in MPTCP usage on iPhones.
new TCP features on servers takes more time. This does not depend only on
the availability of a particular feature in the mainline Linux kernel but on
many other factors. Server administrators are usually rather conservative
and they favor stability and only deploy new features when they are strictly
required. They also tend to run stable, well-tested operating system
releases and rarely deploy cutting-edge features.
Table 2 in
https://irtf.org/anrw/2017/anrw17-final16.pdf
shows that TFO has deployment issues similar to MPTCP's. It has been enabled
on client devices (Linux, iOS/macOS, soon Windows), but Brian Trammell and
his colleagues could only find 578 servers (IP addresses) in Oct 2016 and 866
in Jan 2017 that negotiated TFO. Most of them were in a single AS.
That says nothing about MPTCP. I think you're making broad
generalizations about deployment based on a few select data points.
I'm simply showing that the deployment of a recent TCP extension takes
time. Another datapoint is
https://link.springer.com/chapter/10.1007/978-3-642-20305-3_3
Figure 7 shows the evolution of the negotiations of SACK, WSCALE
and TIMESTAMP options over a decade of passive measurements. SACK,
despite its clear benefits, took almost ten years to be widely deployed.
In 2010, WSCALE and Timestamp were only supported by half of the
observed TCP connections.
Not all TCP features take a long time to get significant deployment
(e.g., ICW=10, congestion avoidance improvements, retrans.
improvements).
These are features that reside on a single host and do not require
cooperation between client and server to be used.
And more than that, QUIC is an entirely new transport
protocol that has gotten significant deployment in a relatively short
amount of time. The particulars of each protocol or feature need to be
considered if a perceived deployment issue is to be addressed.
QUIC is a different beast. Coming back to TCP, extensions that provide
benefits with a change on the sender or the receiver side only are
easier to deploy because they provide immediate benefit on the stack
that implements them. All the examples that you mention above are in
this category. Congestion control algorithms are another example.
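To make the sender-side-only case concrete: on Linux, an application can pick its congestion control algorithm per socket with a single `setsockopt()` call, and nothing is negotiated with the peer. A minimal sketch (the algorithm name is just an illustrative choice):

```python
import socket

# A congestion-control algorithm is a purely local decision: the peer
# is not involved, so one host can deploy it unilaterally.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")

# Read back the algorithm actually in use for this socket.
algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(algo.split(b"\x00")[0].decode())
s.close()
```

This is exactly why such mechanisms spread quickly: the stack that enables them gets the benefit immediately, with no dependency on the remote side.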
Extensions that require a negotiation (SACK, WSCALE, TFO, MPTCP, ...)
require support on both sender and receiver which takes much more time.
I see that as a generic trend looking at the measurement papers that
have studied this problem. I'd be interested in any measurement study
that shows a new significant TCP option being quickly deployed on both
clients and servers.
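The TFO case illustrates the extra server-side step that negotiated extensions typically require: even on a kernel that implements the option, the listening application must opt in explicitly before the option is ever negotiated. A minimal Linux sketch (the queue length of 16 is an arbitrary example value):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
# Without this one setsockopt() call, a TFO-capable kernel still
# answers SYNs without the Fast Open cookie option: the feature must
# be enabled per listening socket by the server application itself.
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
srv.listen()
```

So kernel support alone is not enough; every server application has to be touched, which is part of why negotiated extensions lag behind sender-only ones.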
TFO and MPTCP differ in some critical ways. TFO has good kernel
support, but the server applications need to be fixed to support it.
This dependency makes deployment longer. Also, TFO is a nice feature,
but is only relevant at the beginning of a connection, and until
zero-RTT TLS gets deployed it is of limited value. MPTCP does not
currently have kernel support
The MPTCP patch released on http://www.multipath-tcp.org is well tested
and stable. It is used in various deployments.
(i.e. not accepted by Linux), however
shouldn't require server application-layer changes-- the latter
characteristic is an important simplification. This means if there
were good kernel support then when the servers are updated (in cycles
of 2-3 yrs. for a major content provider) they will get support and
the value of MPTCP without additional work or action. Honestly, had
the support gotten into Linux four years ago when that was proposed
we probably wouldn't be having this conversation!
That's another story which mainly depends on the engineering resources
that the core MPTCP developers could dedicate to refactoring the MPTCP
patch for inclusion in the official Linux kernel.
If we are to consider that
argument then we need to understand _why_ deployment of MPTCP is slow,
This is true for any TCP extension. Client and servers migrate at their own
pace and it takes many years to deploy any TCP extension. MPTCP is no
different from RFC 1323, SACK or TFO.
As I said above, the deployment and implementation characteristics of
all of these is different and need to be considered for each feature.
A key concern is whether the extension must be negotiated during the
three-way handshake. If yes, then deployment will be much more difficult
than if there is no negotiation.
so the details about current deployment state and implementation are
pertinent. Also, while there is significant engineering needed to get
MPTCP into Linux or other systems, we cannot ignore the significant
(possibly more) engineering effort to define, standardize, develop and
deploy an interim solution on clients and converters.
This cost will be on the devices that will benefit from the extension. The
deployment of MPTCP in Korea to bond WiFi and LTE shows that there is a
benefit for this type of service.
Please elaborate on the cost of this solution. Specifically, please
explain why the cost of doing this interim solution is less than the
cost of figuring out how to get servers to support MPTCP.
There are different actors that have different incentives in the
deployment of MPTCP. Let's consider the MPTCP deployment in Korea as an
example. Their current solution is roughly to install a SOCKS client on
MPTCP-enabled smartphones. Those smartphones interact with an
MPTCP-enabled SOCKS server to reach remote servers.
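To make the relay step concrete, here is a sketch of the SOCKS5 CONNECT message (RFC 1928) that such a smartphone-side client sends over its MPTCP connection to the operator's SOCKS server; the host and port are illustrative:

```python
import struct

# SOCKS5 greeting: version 5, one auth method offered, 0x00 = no auth.
GREETING = b"\x05\x01\x00"

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request: VER=5, CMD=1 (CONNECT),
    RSV=0, ATYP=3 (domain name), then length-prefixed name and port."""
    name = host.encode("ascii")
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack("!H", port)

print(socks5_connect_request("example.com", 443).hex())
```

Note that the remote server never sees MPTCP at all: the proxy terminates it and opens a plain TCP connection onward, which is precisely why the operator has to run and maintain the SOCKS servers itself.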
From the viewpoint of the end user, there is a benefit in using MPTCP on
her smartphone because she can combine LTE and WiFi to achieve higher
bandwidth or have seamless handovers. From the viewpoint of the network
operator, this is an added value service that they provide to their
customers. If all Internet servers supported MPTCP, the operator would
not have to deploy and support SOCKS servers in its network.
Also, the
cost analysis should take into account any negative effects on nodes
outside of the devices being touched-- this solution is not
transparent to the outside world (similar to how NAT isn't really
transparent to end hosts).
The negative effect is that the smartphone needs to include MPTCP and
the SOCKS client. The SOCKS server is a single point of failure inside
the ISP network. The solution is not totally transparent, since the
smartphone needs to be configured with a SOCKS server, but the benefits
are apparently compelling, since this kind of solution is deployed by
several ISPs in several countries.
Olivier