In a case like ours, where the overlay network uses
tunneling, transparently adding the option is not a critical
problem to be solved.
It is already the case that the overlay entry point must
advertise a reduced MSS in order to accommodate the tunnel
overhead. The amount of space consumed by the option will
always be smaller than the tunnel overhead, and the option
can be added at OVRLY_OUT, so the two are not additive. That
said, I can see that an overlay network that does not use tunnels
internally, or one that does apply the option at OVRLY_IN, would
have a bigger problem.
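The MSS arithmetic above can be sketched numerically. All of the sizes below (link MTU, tunnel overhead, option length) are illustrative assumptions, not values from the draft; the point is only that the option and the tunnel overhead never apply to the same hop.

```python
# Sketch of the "not additive" argument: the option is added at
# OVRLY_OUT, after the tunnel overhead is gone. All sizes are
# assumed values for illustration (IPv4, no IP options).
LINK_MTU = 1500
IP_TCP_HEADERS = 40        # IPv4 (20) + TCP (20) headers
TUNNEL_OVERHEAD = 40       # e.g. an IP-in-IP style outer header plus shim
OPTION_LEN = 20            # assumed host-address option size, padded

# MSS the overlay entry point advertises to fit the tunnel:
tunneled_mss = LINK_MTU - IP_TCP_HEADERS - TUNNEL_OVERHEAD   # 1420

# At OVRLY_OUT the tunnel overhead is gone, so a tunneled_mss-byte
# segment plus the option still fits the link MTU, as long as the
# option is no larger than the tunnel overhead:
egress_size = IP_TCP_HEADERS + OPTION_LEN + tunneled_mss     # 1480
assert OPTION_LEN <= TUNNEL_OVERHEAD
assert egress_size <= LINK_MTU
```

The invariant holds for any values where the option is smaller than the tunnel overhead, which is the claim made above.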
We had not considered the issue of the proposed fast-open scheme,
but I don't think it adds any problems for the TCP option that
aren't already problems for tunneled connectivity in general. I
will have to spend some time with that proposal and think about
how the two interrelate.
--Brandon
On 12/21/2012 08:34 AM, Scharf, Michael (Michael) wrote:
Brandon,
If there were tunnels between the OVRLY_IN and OVRLY_OUT boxes,
then the inner IP headers would have the HOST_X and SERVER
addresses, and the outer ones in the tunnel would have the
overlay headers. Since the inner packets would ultimately be
delivered after egressing the tunnels, the HOST_X addresses are
fully visible to the server, and vice versa.
There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and the
inner IP headers will typically use either the client-side
addresses or the server-side addresses. However, neither OVRLY_IN
nor OVRLY_OUT can be assumed to be reliably in-path between HOST
and SERVER, which means that internet routing cannot be relied
upon to cause packets to arrive at the overlay ingress. Instead,
HOST_1 must directly address OVRLY_IN_1 in order to send its
packets into the tunnel, and SERVER must directly address
OVRLY_OUT in order to send the return traffic into the tunnel.
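As a toy model of the addressing just described (all addresses are illustrative documentation addresses, not real ones), the tunnel can be pictured as an outer header that steers the packet between the overlay boxes and an inner header that preserves the end-to-end addresses:

```python
# Toy model of the overlay tunnel: the outer header carries the
# overlay-node addresses that actually steer the packet (since the
# overlay boxes are not reliably on the routed path), while the inner
# header keeps the end-to-end addresses. Addresses are illustrative.
HOST_1, SERVER = "198.51.100.10", "203.0.113.20"
OVRLY_IN_1, OVRLY_OUT = "192.0.2.1", "192.0.2.2"

inner = {"src": HOST_1, "dst": SERVER}               # end-to-end header
tunneled = {
    "outer": {"src": OVRLY_IN_1, "dst": OVRLY_OUT},  # overlay hop
    "inner": inner,
}

# After decapsulation at OVRLY_OUT only the inner header remains, so
# SERVER sees HOST_1's address (and vice versa on the return path).
delivered = tunneled["inner"]
assert delivered["src"] == HOST_1 and delivered["dst"] == SERVER
```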
Thanks for this explanation - it indeed helps me understand the
architecture. But I still don't fully understand the motivation
for bypassing Internet routing this way. As a non-expert on
routing, it looks to me like reinventing source routing - but
this is outside my core expertise.
Regarding TCPM's business: If I correctly understand the
approach, OVRLY_IN will "transparently" add and remove TCP
options. This is kind of dangerous from an end-to-end
perspective... Sorry if this has been answered before, but I
really wonder what OVRLY_IN should do if it can't add this
option, either because of a lack of TCP option space, or because
the resulting IP packet would exceed the path MTU. (In fact, I
think this problem is not limited to TCP options.)
Unless I am missing something, the latter case could soon become
much more relevant: TCPM is currently working on the fast-open
scheme that adds data to SYNs. With that, I think it is possible
that all data packets from a sender to a receiver are either full
sized or large enough that the proposed option does not fit.
Given that this option can include full-sized IPv6 addresses,
this likelihood is much larger than for other existing TCP
options, right?
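Some back-of-the-envelope option-space arithmetic makes the concern concrete. The sizes assumed for the host-address option and the fast-open cookie below are illustrations, not values taken from either draft (a TFO cookie option can be up to kind + length + 16 cookie bytes):

```python
# Rough SYN option-space arithmetic. The host-address option size is
# an assumption for illustration; the other sizes are the usual
# encodings of common TCP options.
TCP_OPTION_SPACE = 40                    # max bytes of TCP options

typical_syn_options = {
    "MSS": 4,
    "Window-Scale": 3,
    "SACK-Permitted": 2,
    "Timestamps": 10,
}
used = sum(typical_syn_options.values())      # 19 bytes
remaining = TCP_OPTION_SPACE - used           # 21 bytes

# An option carrying one full IPv6 address needs at least
# kind (1) + length (1) + address (16) = 18 bytes:
host_addr_option = 1 + 1 + 16

# A maximum-size fast-open cookie option: kind + length + 16 bytes.
tfo_option = 2 + 16

fits_alone = remaining >= host_addr_option                  # True, barely
fits_with_tfo = remaining >= host_addr_option + tfo_option  # False
```

Under these assumptions the IPv6-sized option is already a tight fit in an ordinary SYN, and no longer fits once a large fast-open cookie competes for the same 40 bytes.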
In some cases, I believe that the proposed TCP option cannot be
added in the overlay without either IP fragmentation, which is
unlikely to be a good idea with NATs, or TCP segment splitting,
which can probably cause harm as well. For instance, what would
OVRLY_IN do if it receives an IP packet with a TCP SYN segment
that already sums to 1500 bytes? And, to make the scenario
nastier, what if the same applies to the first data segments as
well?
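The decision OVRLY_IN faces in that scenario can be sketched as a small classifier. This is not from any draft; the path MTU and the 18-byte option size are illustrative assumptions:

```python
# Sketch (not from any draft) of the choice OVRLY_IN faces when an
# arriving segment leaves no headroom for the option.
PATH_MTU = 1500

def classify(packet_len: int, option_len: int, mtu: int = PATH_MTU) -> str:
    """Report what adding option_len bytes to a packet_len-byte
    IP packet would require on a path with the given MTU."""
    if packet_len + option_len <= mtu:
        return "add-in-place"       # enough headroom: the easy case
    # No headroom: every remaining choice has a cost.
    #  - IP fragmentation interacts badly with NATs (and DF-set packets)
    #  - splitting the TCP segment rewrites the segment stream
    #  - forwarding without the option loses the host address
    return "fragment-or-split-or-omit"

print(classify(1500, 18))   # the nasty case: an already full-sized SYN
print(classify(1400, 18))   # headroom available: option added in place
```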
Thanks
Michael
--
Brandon Williams; Principal Software Engineer
Cloud Engineering; Akamai Technologies Inc.