Brandon, 

> You are correct that the option could be problematic if 
> added to a full-sized packet, or even a nearly full one. I 
> can see that the document should have some discussion of this issue.

Yes.

> In a case like ours, where the overlay network uses 
> tunneling, transparently adding the option is not a critical 
> problem to be solved. 
> It is already the case that the overlay entry point must 
> advertise a reduced MSS in order to accommodate the tunnel 
> overhead. The amount of space consumed by the option will 
> always be smaller than the tunnel overhead, and the option 
> can be added at OVRLY_OUT, so the two are not additive. That 
> said, I can see that an overlay network that does not use 
> tunnels internally, or one that in fact does apply the option 
> on OVRLY_IN, would have a bigger problem, though.
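
Just to make the size budget concrete for myself, here is a rough back-of-the-envelope sketch in Python. The GRE-over-IPv4 tunnel, the option layout (one IPv6 address plus a port) and all of the numbers are purely my own assumptions, not taken from the draft:

PATH_MTU = 1500
IPV4_HDR = 20                    # IPv4 header without IP options
TCP_HDR  = 20                    # TCP header without TCP options
TUNNEL_OVERHEAD = IPV4_HDR + 4   # assumed: outer IPv4 + basic GRE header

# Hypothetical host-address option: kind(1) + len(1) + IPv6(16) + port(2),
# rounded up to a multiple of 4 bytes.
option_len = (1 + 1 + 16 + 2 + 3) & ~3                            # 20 bytes

# MSS that OVRLY_IN has to advertise anyway, so that a full-sized inner
# segment plus the tunnel overhead still fits the path MTU:
advertised_mss = PATH_MTU - TUNNEL_OVERHEAD - IPV4_HDR - TCP_HDR  # 1436

# Packet size after OVRLY_OUT strips the tunnel and inserts the option
# on the un-tunneled leg towards the server:
egress_pkt = IPV4_HDR + TCP_HDR + option_len + advertised_mss     # 1496

assert egress_pkt <= PATH_MTU   # holds whenever option_len <= TUNNEL_OVERHEAD

If my numbers are right, that is exactly the "not additive" argument: the MSS reduction that the tunnel forces anyway leaves enough headroom for the option at OVRLY_OUT, whereas an overlay without internal tunnels (or one that adds the option at OVRLY_IN) would have to shrink the MSS by the option length on top of everything else.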

So the new TCP option is basically only needed between OVRLY_OUT and the 
receiver/server, because inside the overlay the relevant information is 
already carried by the tunnel encapsulation, right?

This raises another question (sorry if it is naive): Why can't the overlay 
tunnel just be extended to the server? That would more or less imply that 
OVRLY_OUT is co-located with the server - obviously, there could still be 
further routers/overlay nodes in between.

I am asking this because processing the information contained in the TCP option 
will in any case require a modified TCP stack in the server, i.e., the server 
will not be fully backward compatible if it has to process the proposed option. 
But if the TCP/IP stack has to be modified anyway, I could imagine that one 
could just add whatever encap/decap the overlay transport requires to the 
server as well. In that case, I have the impression that the proposed TCP 
option would not be needed at all.
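
To illustrate what I mean by "modified anyway", here is a minimal sketch of the two kinds of server-side change being compared. The option kind (253, from the shared experimental space) and the plain IPv4-in-IPv4 encapsulation are again nothing but my own illustrative assumptions:

HOST_ADDR_OPTION_KIND = 253   # assumed experimental kind, not from the draft

def parse_tcp_options(opts: bytes) -> dict:
    """(a) Option variant: the server's stack has to walk the TCP option
    list and recognize the new kind."""
    found, i = {}, 0
    while i < len(opts):
        kind = opts[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # NOP padding
            i += 1
            continue
        length = opts[i + 1]     # length covers the kind and length bytes
        if length < 2:           # malformed option list, give up
            break
        found[kind] = opts[i + 2:i + length]
        i += length
    return found

def decapsulate_ipip(outer: bytes) -> bytes:
    """(b) Tunnel variant: the server strips an outer IPv4 header carrying
    protocol 4 (IP-in-IP) and hands the inner packet, which already has the
    HOST_X/SERVER addresses, to its unmodified stack."""
    ihl = (outer[0] & 0x0F) * 4  # outer header length in bytes
    assert outer[9] == 4, "not IPv4-in-IPv4"
    return outer[ihl:]

Both sketches are of course grossly simplified, but the amount and the location of new code on the server do not look fundamentally different to me in the two cases, which is why I wonder what the option buys if the server has to be touched either way.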

I don't want to dig into the overlay design, because this is not really in 
scope for TCPM. But if there is a system architecture that avoids adding TCP 
options in middleboxes, and thus does not affect TCP's end-to-end semantics, it 
would really be important to understand why such an architecture cannot be used.

Thanks

Michael


 
> The issue of the proposed fast-open scheme is one that we 
> have not considered, but I don't think it adds any problems 
> for the TCP option that aren't already a problem for tunneled 
> connectivity in general. I will have to spend some time with 
> that proposal and think about how they interrelate.
> 
> --Brandon
> 
> On 12/21/2012 08:34 AM, Scharf, Michael (Michael) wrote:
> > Brandon,
> >
> >>> If there were tunnels between the OVRLY_IN and OVERLY_OUT boxes,
> >>> then the inner IP headers would have the HOST_X and SERVER
> >>> addresses, and the outer ones in the tunnel would have the overlay
> >>> headers.  Since the inner packets would be delivered ultimately
> >>> after egressing the tunnels, the HOST_X addresses are totally
> >>> visible to the server, and vice versa.
> >>
> >> There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and the
> >> inner IP headers will typically use either the client-side addresses
> >> or the server-side addresses. However, neither OVRLY_IN nor OVRLY_OUT
> >> can be assumed to be reliably in-path between HOST and SERVER, which
> >> means that internet routing cannot be relied upon to cause packets to
> >> arrive at the overlay ingress. Instead, HOST_1 must directly address
> >> OVRLY_IN_1 in order to send its packets into the tunnel, and SERVER
> >> must directly address OVRLY_OUT in order to send the return traffic
> >> into the tunnel.
> >
> > Thanks for this explanation - this indeed helps to understand the
> > architecture. But actually I still don't fully understand the
> > motivation of bypassing Internet routing this way. As a non-expert on
> > routing, it indeed looks to me like reinventing source routing - but
> > this is outside my core expertise.
> >
> > Regarding TCPM's business: If I correctly understand the approach,
> > OVRLY_IN will "transparently" add and remove TCP options. This is
> > kind of dangerous from an end-to-end perspective... Sorry if that has
> > been answered before, but I really wonder what to do if OVRLY_IN
> > can't add this option, either because of lack of TCP option space, or
> > because the path MTU is exceeded by the resulting IP packet. (In
> > fact, I think that this problem does not apply to TCP options only.)
> >
> > Unless I miss something, the latter case could become much more
> > relevant soon: TCPM currently works on the fast-open scheme that adds
> > data to SYNs. With that, I think it is possible that all data packets
> > from a sender to a receiver are either full sized or large enough
> > that the proposed option does not fit in. Given that this option can
> > include full-sized IPv6 addresses, this likelihood is much larger
> > than for other existing TCP options, right?
> >
> > In some cases, I believe that the proposed TCP option cannot be
> > added in the overlay without either IP fragmentation, which is
> > unlikely to be a good idea with NATs, or TCP segment splitting, which
> > probably can cause harm as well. For instance, what would OVRLY_IN do
> > if it receives an IP packet with a TCP SYN segment that already sums
> > up to 1500 bytes? And, to make the scenario more nasty, what if the
> > same applies to the first data segments as well?
> >
> > Thanks
> >
> > Michael
> >
> 
> --
> Brandon Williams; Principal Software Engineer
> Cloud Engineering; Akamai Technologies Inc.
> 
