On Thu, Feb 01, 2018 at 03:21:18PM -0500, Michael Richardson wrote:
> I also don't know why it would be bad.
> My other question is, what happens in MPTCP if one path is significantly
> faster (or less lossy) than the other path? Won't the window open up
> significantly on that path and simply attract more traffic?
I am changing the subject line because I am hoping/claiming that we
do not need this discussion in order to proceed with the
stable-connectivity draft, because it explicitly excludes that option.
But of course I am very interested in this discussion. It might be
better had on the MPTCP mailing lists...
RFC 6824 does not seem to mandate any particular congestion control;
it just suggests using coupled congestion control according to
RFC 6356 if the policy goal is to maximize utilization of both paths.
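For reference, RFC 6356's "Linked Increases" coupling works roughly as sketched below: the per-ACK window increase on each subflow is scaled by a factor alpha computed across all subflows, and capped at what standard TCP would do. This is a minimal illustration in Python; the data structures and variable names are mine, not the RFC's, and a real stack would of course do this in the kernel.

```python
# Sketch of the RFC 6356 coupled increase (not a real implementation).
# cwnd values are in bytes, rtt in seconds; names are illustrative.

def lia_alpha(subflows):
    """alpha = cwnd_total * max_i(cwnd_i/rtt_i^2) / (sum_i(cwnd_i/rtt_i))^2"""
    cwnd_total = sum(f["cwnd"] for f in subflows)
    best = max(f["cwnd"] / f["rtt"] ** 2 for f in subflows)
    denom = sum(f["cwnd"] / f["rtt"] for f in subflows) ** 2
    return cwnd_total * best / denom

def lia_increase(subflows, i, bytes_acked, mss):
    """Per-ACK cwnd increase for subflow i, capped at the uncoupled
    (standard TCP) increase, so coupling only ever slows a subflow down."""
    alpha = lia_alpha(subflows)
    cwnd_total = sum(f["cwnd"] for f in subflows)
    coupled = alpha * bytes_acked * mss / cwnd_total
    uncoupled = bytes_acked * mss / subflows[i]["cwnd"]
    return min(coupled, uncoupled)

# Two subflows, one fast/clean path and one slow path:
flows = [{"cwnd": 100_000, "rtt": 0.01}, {"cwnd": 20_000, "rtt": 0.10}]
print(lia_increase(flows, 0, 1460, 1460))  # less than plain TCP's increase
```

This is exactly the mechanism that answers Michael's question above: the fast path still attracts more traffic, but the coupled alpha keeps the aggregate from growing faster than one regular TCP flow.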
If I understand it correctly though, MPTCP is not really
fair, so if you have, let's say, 5 flows and they all happen to
share a single bottleneck link, then I think MPTCP still gives
you as much as 5 separate flows. If, for example, ACP and data plane
were both hardware accelerated, then this would be a possible
discussion point if we wanted to have a policy of load-splitting
traffic across both ACP and data plane. SCTP should be "fairer"
in such a case if I remember correctly (I may be wrong).
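As a back-of-envelope check of the fairness question (my own arithmetic, assuming the RFC 6356 alpha formula): for n identical subflows sharing one bottleneck (equal cwnd, equal RTT), alpha works out to exactly 1/n, so the *coupled* aggregate grows like one TCP flow; it is only uncoupled per-subflow congestion control that grabs n flows' worth.

```python
# With n identical subflows (equal cwnd w, equal rtt r), the RFC 6356
# alpha simplifies: (n*w)*(w/r^2) / (n*w/r)^2 = 1/n.
# rtt=0.5 is chosen so the floating-point results are exact.

def alpha_equal_subflows(n, w, rtt):
    # alpha = cwnd_total * max(cwnd_i/rtt_i^2) / (sum(cwnd_i/rtt_i))^2
    return (n * w) * (w / rtt**2) / (n * w / rtt) ** 2

for n in (1, 2, 5):
    print(n, alpha_equal_subflows(n, w=10_000, rtt=0.5))
# prints: 1 1.0 / 2 0.5 / 5 0.2
```

So if 5 subflows over one bottleneck really do get 5 flows' worth, that points at uncoupled congestion control being in use, not at a flaw in the coupled algorithm.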
In any case, I didn't think we needed to investigate these options and
their possible difficulties for the stable-connectivity use case.
That draft is solely focused on resilience and on avoiding overload of
devices due to possibly short-term-limited ACP implementations, hence
the simple policy of only ever transferring data across one subflow.
Your point about looking into the best methods to speed up switchover
between subflows is of course valid for stable connectivity,
and maybe one of the enhanced policies to achieve this is to not
wait for a connection reset of the data-plane flow but to start
sharing load on the ACP much faster, e.g. just measuring the
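One possible shape for such an enhanced policy (purely my own toy sketch, not from any draft): instead of waiting for a connection reset of the data-plane subflow, which can take many RTO doublings, shift traffic onto the ACP subflow as soon as the primary has gone some small multiple of its smoothed RTT without an ACK. The function name and the threshold k=3 below are assumptions for illustration only.

```python
# Toy failover policy for a two-subflow setup: data-plane primary,
# ACP backup. Declare the primary stalled after k smoothed RTTs with
# no ACK, rather than waiting for a full connection reset.
# Threshold and names are illustrative assumptions, not from any spec.

def pick_subflow(now, last_ack_primary, srtt_primary, k=3):
    """Return 'primary' normally, 'backup' once the primary looks stalled."""
    if now - last_ack_primary > k * srtt_primary:
        return "backup"   # primary stalled: start sending on the ACP
    return "primary"      # primary healthy: keep traffic off the ACP

print(pick_subflow(now=1.00, last_ack_primary=0.95, srtt_primary=0.02))  # primary
print(pick_subflow(now=1.00, last_ack_primary=0.90, srtt_primary=0.02))  # backup
```

The trade-off is classic: a small k switches over fast but risks flapping onto the ACP on transient loss, which is exactly the kind of ACP load the draft wants to avoid.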
> Michael Richardson <mcr+i...@sandelman.ca>, Sandelman Software Works
> -= IPv6 IoT consulting =-
Anima mailing list