Hi Toerless,

see below.

> On 01.02.2018 at 19:22, Toerless Eckert <t...@cs.fau.de> wrote:
> Inline
> On Thu, Feb 01, 2018 at 02:42:29PM +0100, Mirja Kuehlewind (IETF) wrote:
>> HI Toerless,
>> thanks so much for these edits. Unfortunately I have one more question. And 
>> sorry again for the delay.
>> Why does your case 2 need an MPTCP connection instead of just opening a 
>> second separate TCP data plane connection (that of course fails when it 
>> fails..)?
> How do the peers know each other's data plane address?
> => draft text:
>  (Only) the GACP address of GACP devices could be put into DNS.
>  When MPTCP builds the first subflow across the GACP, it exchanges the
>  data-plane address.
> Without MPTCP i need to reinvent a good part of it and i can not imagine how
> i would get anyone interested to do that signaling part in a standardized
> fashion given how MPTCP does exist.

No. For whatever you want to do, you at least have to change something in the 
MPTCP interface. And you can just expose the address you’ve got from MPTCP and 
open a new TCP connection for the bulk transfer.
> -> Build TCP connection to responder GACP address.
> -> Signal with responder the data-plane address - in a fashion that would
>   be backward compatible with a legacy responder/initiator that doesn't
>   speak this extension. I wouldn't even know how i could do this via
>   TCP payload. Must be at the transport level.

This is not a transport function, so you would need to build a minimal app 
protocol on top of TCP. However, it could be as minimal as: if you open a TCP 
connection to a certain port, the other end sends you an address in the 
first payload packet and closes the connection again.
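To illustrate, here is a minimal sketch of such an address-exchange service (the port number and ASCII encoding are hypothetical, not taken from any spec):

```python
import socket

ADDR_PORT = 4950  # hypothetical port for this address-exchange service

def serve_address(dataplane_addr: str, host: str = "127.0.0.1") -> None:
    """Responder: accept one TCP connection (arriving over the GACP),
    send our data-plane address as the only payload, then close."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, ADDR_PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(dataplane_addr.encode("ascii"))
        # closing the connection marks the end of the address

def learn_address(host: str = "127.0.0.1") -> str:
    """Initiator: connect over the GACP, read until the responder
    closes, and return the learned data-plane address for a separate
    plain TCP bulk-transfer connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect((host, ADDR_PORT))
        chunks = []
        while data := c.recv(1024):
            chunks.append(data)
    return b"".join(chunks).decode("ascii")
```

A real version would of course run inside the GACP and authenticate the peer, but the point is just how small the signaling is compared to reinventing MPTCP address exchange.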

Changing the MPTCP interface is also not compatible with legacy systems that 
do not support that yet.

> -> Build TCP connection via data-plane addresses.
> -> Use policy and/or new signaling to decide to prefer using that connection.
> …

This is the part you also need with MPTCP, as you need some kind of policy 
framework on top of MPTCP that decides when to build which subflow.

> -> Data-plane connection fails. 
> -> need some scheme to figure out how to rebuild.

However, my reading is that this is what you want in the case of bulk transfers: 
fail instead of fail-over. In MPTCP, if one of your subflows fails it will also not 
be rebuilt automatically; it is again part of the policy framework to decide what 
to do in such a case and when and how to try to build up another subflow.
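To make the policy-framework point concrete, the per-connection choice could be as small as a lookup like the following sketch (all names are invented for illustration, not an existing MPTCP API):

```python
import enum

class SubflowPolicy(enum.Enum):
    """Per-connection policies along the lines of the draft's three cases."""
    FALLBACK = "fallback"  # keep running over the GACP subflow
    FAIL = "fail"          # bulk transfer: fail instead of fail-over
    SUSPEND = "suspend"    # pause until a data-plane subflow is rebuilt

def on_dataplane_loss(policy: SubflowPolicy, retry: int) -> tuple:
    """Decide what the framework does when the data-plane subflow fails.
    Returns (action, seconds_until_next_rebuild_attempt); rebuild
    attempts use exponential backoff capped at 60 s."""
    backoff = min(2.0 ** retry, 60.0)
    if policy is SubflowPolicy.FAIL:
        return ("abort-connection", 0.0)
    if policy is SubflowPolicy.SUSPEND:
        return ("suspend-transfer", backoff)
    # FALLBACK: traffic already moved to the GACP subflow; keep trying
    # to rebuild the data-plane subflow in the background
    return ("retry-in-background", backoff)
```

Whether this lives in a shim library or in the MPTCP path manager itself is exactly the open spec question.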

> if - if and when
>   available. Including some exponential backoff trying to reconnect. Or
>   more likely in our case the data-plane address was unconfigured and gets
>   reconfigured maybe differently.  (some SDN controller doing some change
>   of addressing to the data-plane), So when one side signals to the other
>   side that the data-plane address is gone, then no connection probing is
>   necessary.  And when it is available again, maybe different one, gets
>   signalled, and we're building the connection again and using it.

Again, basically what I’m proposing is that you use MPTCP for connections where 
the fallback makes sense, and at the same time expose the address you’ve 
learned over MPTCP and build up additional TCP connections for transfers where 
no fallback is wanted. Using only one subflow of MPTCP is not MPTCP anymore and 
therefore a functionality we probably don’t want to provide with MPTCP. Also 
see my next mail on this point, please.
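On a stack that exposes MPTCP through the ordinary socket API (for example Linux's IPPROTO_MPTCP, where available), that split could be a one-line choice per connection. A sketch, assuming the data-plane address has already been learned:

```python
import socket

def open_connection(addr: str, port: int, want_fallback: bool,
                    family: int = socket.AF_INET6) -> socket.socket:
    """Use MPTCP (so a failing subflow fails over transparently) for
    connections where fallback to the GACP is wanted; use a plain TCP
    socket where the transfer should fail with the data-plane instead."""
    if want_fallback and hasattr(socket, "IPPROTO_MPTCP"):
        try:
            s = socket.socket(family, socket.SOCK_STREAM,
                              socket.IPPROTO_MPTCP)
        except OSError:  # kernel without MPTCP support: plain TCP
            s = socket.socket(family, socket.SOCK_STREAM)
    else:
        s = socket.socket(family, socket.SOCK_STREAM)
    s.connect((addr, port))
    return s
```

The caller (or a shim library) picks `want_fallback` per connection, which matches the per-connection policy granularity in the draft text.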


> Cheers
>    Toerless
>> Mirja
>>> On 27.01.2018 at 00:06, Toerless Eckert <t...@cs.fau.de> wrote:
>>> On Fri, Jan 26, 2018 at 11:43:08AM +0100, Mirja Kuehlewind (IETF) wrote:
>>>> Hi Toerless,
>>>> sorry for my late reply but I spent only limited time on AD stuff as the 
>>>> telechat agenda was crowded this week.
>>> Thanks for all the hard work!
>>>> Okay that is fine. I mainly wanted to double-check as that was still not 
>>>> fully clear from the provided text. Can you maybe propose a slight 
>>>> rewording that explicitly says that there is this backup, while using the 
>>>> word “only” above in the cited text seems to indicate differently.
>>> I wanted to take the full extend of your feedback into account, so
>>> rewrote the paragraph in question maybe a bit more than slightly
>>> (couldn't figure out a sane "slightly" ;-):
>>> - Moved the last sentence about per-connection policy up front,
>>> so the examples are more logically scoped.
>>> - Made the examples into three separate bullet points
>>> and expanded them to hopefully eliminate the misunderstanding
>>> wrt backup.
>>> - Not only mentioning the fall back (backup), but also added
>>> desirable behavior under recovery of preferred subflow path. 
>>> (important for both 1) and 2).
>>> - explaining in bullet point 3) how this one is actually a
>>> simple TCP connection (as you pointed out).
>>> (whereas 1) and 2) rely on the MPTCP benefit of two subflows and address 
>>> signaling).
>>> -09 uploaded with this fix as well as fix for Yoshi's comment.
>>> Diff:
>>> http://tools.ietf.org/tools/rfcdiff/rfcdiff.pyht?url1=https://tools.ietf.org/id/draft-ietf-anima-stable-connectivity-08.txt&url2=https://tools.ietf.org/id/draft-ietf-anima-stable-connectivity-09.txt
>>> Here is the changed paragraph:
>>> The above described behavior/policy for MPTCP must be controllable by       
>>> applications or libraries acting on behalf of applications.  APIs
>>> and/or data models for this control need to be defined. It should be
>>> sufficient to make these policies work on a per connection  
>>> basis, and not change during the lifetime of a connection for       
>>> different data items:
>>> 1) The policy for likely most connections would be to use the data-plane
>>>  subflow and fall back using the ACP subflow when the data-plane fails.
>>>  This could be the default. It reduces load on the ACP but would continue
>>>  to run connection traffic at likely reduced throughput when the
>>>  data-plane fails.  Ideally such connections would also revert back to
>>>  using a data-plane subflow once its connectivity recovers.
>>> 2) Connections for non-urgent bulk transfers (for example most routine 
>>>  firmware updates or cached log collection) may use a policy where
>>>  the connection is made to fail when the data-plane fails or have
>>>  transfers suspend until another data-plane subflow can successfully
>>>  be built. This avoids over-taxing the ACP when the data plane fails. 
>>> 3) Connections for critical network configuration change operations 
>>>  known to impact the data-plane might want to only use the ACP and
>>>  could therefore map to a (non-MP)TCP connection.
>>> Cheers
>>>   Toerless
>>> On 22.01.2018 at 19:42, Toerless Eckert <t...@cs.fau.de> wrote:
>>>>> Thanks, Mirja. Inline.
>>>>> On Mon, Jan 22, 2018 at 01:37:58PM +0100, Mirja Kuehlewind (IETF) wrote:
>>>>>> Hi Toerless,
>>>>>> thanks a lot for the updated text. I still have one little point I would 
>>>>>> like to discuss a bit further on this text:
>>>>>> "The above described behavior/policy for MPTCP must be controllable by   
>>>>>> applications or libraries acting on behalf of applications.  APIs        
>>>>>> and/or data models for this control need to be defined.  As outlined     
>>>>>> above, applications for example may choose to only perform transfers     
>>>>>> if the data-plane is actually available because of performance   
>>>>>> limitations of the GACP, so the application needs to be made aware if    
>>>>>> the setup of the data-plane subflow fails.  Or transfers may want to     
>>>>>> only use the GACP because the connection performs configuration  
>>>>>> changes that are likely known to bring down the data-plane.  It  
>>>>>> should be sufficient to make these policies work on a per connection     
>>>>>> basis, and not change during the lifetime of a connection for    
>>>>>> different data items.”
>>>>>> When you say that there is a need for an application to send data only 
>>>>>> on one path, today that’s not possible with MPTCP as you may fall back 
>>>>>> to the other path silently during the transmission. Yes, of course this 
>>>>>> could be changed and an extended interface could indicate this. 
>>>>>> So my question is still what the “only” really means in this text. 
>>>>>> If you’d like to just indicate a preference, that might be okay. If you 
>>>>>> really want to rule out the possibility to fall back to the other path, 
>>>>>> then I don’t think you need MPTCP and two separate TCP connections 
>>>>>> would be the better option.
>>>>>> Does that make sense to you?
>>>>> Not really, the goal is always to leverage existing MPTCP signaling of
>>>>> addresses, indicating backup for flows, etc. I think two TCP flows would
>>>>> never be better:
>>>> Okay that is fine. I mainly wanted to double-check as that was still not 
>>>> fully clear from the provided text. Can you maybe propose a slight 
>>>> rewording that explicitly says that there is this backup, while using the 
>>>> word “only” above in the cited text seems to indicate differently.
>>>> Thanks!
>>>>> The GACP address is the primary identifier of a device known in DNS.
>>>>> Data-Plane addresses can change over time subject to operator 
>>>>> configuration
>>>>> and could also pose more of a security issue to be in DNS (GACP is 
>>>>> isolated
>>>>> network).
>>>>> GACP performance may be quite low because some network device may 
>>>>> route/forward
>>>>> it in software. data-plane routing/forwarding is expected to be 
>>>>> fast/HW-accelerated.
>>>>> With MPTCP as suggested this gives for example:
>>>>> Initial subflow must use GACP because that's the only "known/stable 
>>>>> address" in DNS
>>>>> Then MPTCP signaling is used to indicate mutually the data-plane addresses
>>>>> and make GACP subflow backup.
>>>>> Then you transfer, let's say, a few Gigabyte of data (e.g.: firmware update) 
>>>>> which
>>>>> goes over data-plane.
>>>>> Then some uncorrelated error happens and data-plane fails. This 
>>>>> particular app
>>>>> should then fail because it would just slow down to insufferable slow 
>>>>> across
>>>>> GACP and would overload software forwarding in GACP.
>>>>> In another application connection, fallback to GACP subflow for data if 
>>>>> data-plane
>>>>> subflow fails is fine, but you would only prefer data-plane for 
>>>>> performance.
>>>>> You could consider the GACP potential software only forwarding like a 
>>>>> very high
>>>>> cost of using it while data-plane is free.
>>>>> If i would just use 2 * TCP instead of MPTCP i would have to reinvent a 
>>>>> lot of
>>>>> MPTCP to get the address signaling etc. Right ?
>>>>> Cheers
>>>>>  Toerless
>>>>>> Mirja
>>>>>>> On 17.01.2018 at 03:36, Toerless Eckert <t...@cs.fau.de> wrote:
>>>>>>> Mirja, Yoshifumi
>>>>>>> I just posted -08: 
>>>>>>> https://tools.ietf.org/id/draft-ietf-anima-stable-connectivity-08.txt
>>>>>>> I have reworked the MPTCP text based on your threads feedback with
>>>>>>> the intention to fix errors and have it answer the questions/concerns 
>>>>>>> raised,
>>>>>>> but without otherwise changing the scope of it:
>>>>>>> - What are they key features making MPTCP interesting 
>>>>>>> - How could it be used to solve the stable-connectivity issue
>>>>>>> beyond the scope of this document
>>>>>>> - Describe the areas of specification work required
>>>>>>> - API/policy, control by apps
>>>>>>> - dealing with dual VRF addresses
>>>>>>> Please check diff in this URL (not complete diff of -08, just fix for 
>>>>>>> your discuss/comment):
>>>>>>> http://tools.ietf.org/tools/rfcdiff/rfcdiff.pyht?url1=https://raw.githubusercontent.com/anima-wg/autonomic-control-plane/14d5f9b66b8318bc160cee74ad152c0b926b4042/draft-ietf-anima-stable-connectivity/draft-ietf-anima-stable-connectivity-08.txt&url2=https://raw.githubusercontent.com/anima-wg/autonomic-control-plane/c02252710fbd7aea15aff550fb393eb36584658b/draft-ietf-anima-stable-connectivity/draft-ietf-anima-stable-connectivity-08.txt
>>>>>>> I hope this resolves your DISCUSS/comments. 
>>>>>>> Note that the term ACP was changed in the doc to GACP based on 
>>>>>>> resolving Alvaros discus. See:
>>>>>>> https://raw.githubusercontent.com/anima-wg/autonomic-control-plane/master/draft-ietf-anima-stable-connectivity/08-01-alvaro-retana.txt
>>>>>>> Thank you
>>>>>>> Toerless
>>>>>>> On Thu, Jan 11, 2018 at 03:42:53PM +0100, Mirja Kühlewind wrote:
>>>>>>>> Hi Toerless,
>>>>>>>> the point I'm wondering about is your point (b) below. Yes, you can
>>>>>>>> set the ACP subflow to backup but that would still mean that if the
>>>>>>>> other link fails, it would automatically switch over to the ACP
>>>>>>>> subflow (without exposing this to the upper layer). Is that what you
>>>>>>>> want? Because my understanding was rather that there are cases where
>>>>>>>> you'd probably would like to know over which link the OAM packets
>>>>>>>> where actually sent...?
>>>>>>>> Mirja
>>>>>>>> On 08.01.2018 22:06, Toerless Eckert wrote:
>>>>>>>>> Thanks, Mirja
>>>>>>>>> (a) If the systems socket API does not transparently make TCP sockets
>>>>>>>>> to use MPTCP, then you would want a shim library. According to
>>>>>>>>> draft-hesmans-mptcp-socket, this is the case on Apple (iOS).
>>>>>>>>> (b) Making the ACP subflow not carry traffic after establishing the
>>>>>>>>> data plane subflow should easily be possible by setting its 
>>>>>>>>> MP_PRIO
>>>>>>>>> to backup.
>>>>>>>>> (c) How exactly to specify or implement the desired policy of only
>>>>>>>>> establishing a subflow between the ACP addresses and the data plane 
>>>>>>>>> addresses
>>>>>>>>> (but not full-mesh) seems to be a subject for further spec work. It 
>>>>>>>>> could
>>>>>>>>> be defined as a specific in-MPTCP policy, or it could be done via a 
>>>>>>>>> shim
>>>>>>>>> library (orthogonal to (a)). draft-hesmans-mptcp-socket might be 
>>>>>>>>> sufficient.
>>>>>>>>> But in general: i would like for this (informational!) draft to just 
>>>>>>>>> motivate
>>>>>>>>> the concept and not specify the solution: MPTCP would be a simple way 
>>>>>>>>> to
>>>>>>>>> make "TCP" applications automatically "upgrade" from ACP to a 
>>>>>>>>> data-plane path
>>>>>>>>> and switch back when data-plane fails... because MP-TCP can signal the
>>>>>>>>> additional data-plane addresses, establish transparently another 
>>>>>>>>> subflow and switch
>>>>>>>>> traffic between the subflows - all by using the right 
>>>>>>>>> shim-library+API or
>>>>>>>>> in-MP-TCP address/subflow policies - and the ability to establish 
>>>>>>>>> subflows
>>>>>>>>> across two VRFs.
>>>>>>>>> Cheers
>>>>>>>>> Toerless
>>>>>>>>> On Mon, Jan 08, 2018 at 05:18:54PM +0100, Mirja Kuehlewind (IETF) 
>>>>>>>>> wrote:
>>>>>>>>>>>>> Suggested replacement text last two paragraphs of 2.1.5:
>>>>>>>>>>> A (shim) library for applications maps TCP connections to MPTCP 
>>>>>>>>>>> without the applications having to be aware of it.
>>>>>>>>>> It’s not the (shim) library/policy framework that opens/maps the 
>>>>>>>>>> TCP connection. MPTCP itself opens eventually multiple connections 
>>>>>>>>>> but exposes only one connection to the layer above. That means 
>>>>>>>>>> everything above MPTCP does not have any real control over which 
>>>>>>>>>> data is sent over which connection.
>>>>>>>>>> A policy shim layer could only implement rules about which new 
>>>>>>>>>> subflows should be established when and what the priority is over 
>>>>>>>>>> which subflow data should be sent, but it generally does not control 
>>>>>>>>>> which data is sent over which flow. You can’t say I want this data 
>>>>>>>>>> to be sent over subflow one and this data to be sent over subflow 
>>>>>>>>>> two.
>>>>>>>>>> I think what you want is actually a view on the different TCP 
>>>>>>>>>> connections and you try to use MPTCP only for announcing the other 
>>>>>>>>>> IP address but that is not what MPTCP meant to be.
>>>>>>>>>> Mirja
>>>>>>>>>>> Names in DNS use only the ACP IPv6 addresses of network devices. 
>>>>>>>>>>> Therefore, the initial MPTCP subflow will use the ACP. After it 
>>>>>>>>>>> is operating, the shim libraries on both ends add their data-plane 
>>>>>>>>>>> address (MPTCP ADD_ADDR) and attempt to build a new subflow between 
>>>>>>>>>>> those addresses. If that (data plane) subflow is successfully 
>>>>>>>>>>> built, the shim libraries could shift all traffic over this subflow 
>>>>>>>>>>> which should be forwarded hardware accelerated by the network - and 
>>>>>>>>>>> use the ACP subflow across the ACP solely for signaling - because it 
>>>>>>>>>>> is most resilient against failure.
>>>>>>>>>>> This MPTCP approach is only an outline and would need to be fully 
>>>>>>>>>>> specified for interoperable implementations. It may also require 
>>>>>>>>>>> extensions to MP-TCP. This mechanism must not be used without 
>>>>>>>>>>> providing for encryption of subflows not running across the ACP.
>>>>>>>>>>>>> Brian: ... can not use capital MUST NOT in an informational draft 
>>>>>>>>>>>>> (i think)
>>>>>>>>>>> Cheers
>>>>>>>>>>> Toerless
>>>>>>>>>>> On Mon, Jan 08, 2018 at 11:30:30AM +0100, Mirja Kuehlewind (IETF) 
>>>>>>>>>>> wrote:
>>>>>>>>>>>> Hi Michael,
>>>>>>>>>>>> to clarify one part below:
>>>>>>>>>>>>> On 05.01.2018 at 23:30, Michael Richardson 
>>>>>>>>>>>>> <mcr+i...@sandelman.ca> wrote:
>>>>>>>>>>>>> Mirja Kühlewind <i...@kuehlewind.net> wrote:
>>>>>>>>>>>>>> "DNS naming is set up to provide the ACP IPv6 address of network
>>>>>>>>>>>>>> devices.  Unbeknownst to the application, MPTCP is used.  MPTCP
>>>>>>>>>>>>>> mutually discovers between the NOC and network device the 
>>>>>>>>>>>>>> data-plane
>>>>>>>>>>>>>> address and carries all traffic across it when that MPTCP subflow
>>>>>>>>>>>>>> across the data-plane can be built."
>>>>>>>>>>>>> Section 2.1.5 is discussion, it discusses ways in which the
>>>>>>>>>>>>> anticipated low performance (compared to what the box might do 
>>>>>>>>>>>>> with its
>>>>>>>>>>>>> hardware accelerated forwarding).
>>>>>>>>>>>>> If we have an application that needs the bandwidth of the native 
>>>>>>>>>>>>> hardware,
>>>>>>>>>>>>> the connection can be initated over the ACP (that's what would be 
>>>>>>>>>>>>> in DNS).
>>>>>>>>>>>>> One presumes that an MPTCP layer could then enumerate the 
>>>>>>>>>>>>> available IPs at
>>>>>>>>>>>>> each end and then start off additional flows on the other 
>>>>>>>>>>>>> destinations.
>>>>>>>>>>>> MPTCP adds an additional TCP flow but for the application that 
>>>>>>>>>>>> still looks like one flow. As I said, I’m not sure if that is 
>>>>>>>>>>>> what you want.
>>>>>>>>>>>> Mirja
>>>>>>>>>>>>> The application would have to include application security, since 
>>>>>>>>>>>>> it would
>>>>>>>>>>>>> not be protected by the ACP.
>>>>>>>>>>>>> Perhaps MPTCP doesn't work this way.
>>>>>>>>>>>>>> However, I'm actually uncertain how this is supposed to work and 
>>>>>>>>>>>>>> what
>>>>>>>>>>>>>> "Unbeknownst to the application" should mean. If another address 
>>>>>>>>>>>>>> should be
>>>>>>>>>>>>>> signaled to the other host, this needs to be indicated by the 
>>>>>>>>>>>>>> application or at
>>>>>>>>>>>>>> least some kind of policy framework above MPTCP. Also MPTCP will 
>>>>>>>>>>>>>> by default use
>>>>>>>>>>>>>> both paths simultaneously while still looking like one 
>>>>>>>>>>>>>> connection to the
>>>>>>>>>>>>>> application, meaning the application has no control which path 
>>>>>>>>>>>>>> is used for
>>>>>>>>>>>>>> which traffic. I guess you can open a second subflow and then 
>>>>>>>>>>>>>> configure the
>>>>>>>>>>>>>> first subflow as backup path but I'm not sure if that's what you 
>>>>>>>>>>>>>> want (given
>>>>>>>>>>>>>> the application/policy framework will still not know which path 
>>>>>>>>>>>>>> is used)..?
>>>>>>>>>>>>>> Please provide more information about what the expected usage 
>>>>>>>>>>>>>> scenario is here.
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Michael Richardson <mcr+i...@sandelman.ca>, Sandelman Software 
>>>>>>>>>>>>> Works
>>>>>>>>>>>>> -= IPv6 IoT consulting =-
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Anima mailing list
>>>>>>>>>>>> Anima@ietf.org
>>>>>>>>>>>> https://www.ietf.org/mailman/listinfo/anima
>>>>>>>>>>> -- 
>>>>>>>>>>> ---
>>>>>>>>>>> t...@cs.fau.de
>>>>>>> -- 
>>>>>>> ---
>>>>>>> t...@cs.fau.de
>>> -- 
>>> ---
>>> t...@cs.fau.de
> -- 
> ---
> t...@cs.fau.de
