On Wed, Oct 25, 2023 at 14:58, Martin Thomson <[email protected]> wrote:

> Can you include the connection ID (index) that you received along with the
> address?  Then you don't need to worry about what path the indication is
> sent on.
>

Exactly. What we need to do is borrow the ACK_MP Destination Connection ID
Sequence Number field from the multipath draft.


> On Wed, Oct 25, 2023, at 16:06, Marten Seemann wrote:
> > I like Igor's idea, as it reduces the number of frames we need to
> > define. However, sending the address automatically has an annoying race
> > condition when used with multipath: A path might only work in one
> > direction, but not in the return direction. The NEW_OBSERVED_ADDRESS
> > therefore needs to identify the path it applies to, and since multipath
> > identifies paths by connection IDs, it would need to contain the
> > connection ID. However, the connection ID might already have been
> > retired by the initiator of the path when the NEW_OBSERVED_ADDRESS
> > frame is received. This is yet another case where not having explicit
> > path IDs makes the protocol more difficult to reason about... It
> > probably doesn't matter too much in practice, but it's certainly
> > annoying that there's a racy corner case.
> >
> > If we want to stick with request-response, here's an easy way to
> > mitigate Kazuho's attack. We could limit the number of outstanding
> > requests to active_connection_id_limit. This would make sure that an
> > endpoint can't send an unbounded number of requests, while still
> > allowing an endpoint to request the address of every path in use
> > concurrently.
> >
> > On Wed, 25 Oct 2023 at 11:22, Kazuho Oku <[email protected]> wrote:
> >>
> >>
> >> On Wed, Oct 25, 2023 at 12:42, Martin Thomson <[email protected]> wrote:
> >>> On Wed, Oct 25, 2023, at 13:52, Kazuho Oku wrote:
> >>> > FWIW my complaint against the original approach was that it needs yet
> >>> > another mechanism to limit concurrency (as without one there would be
> >>> > concern of state exhaustion) and that I do not like it.
> >>>
> >>> I don't think that this has concurrency issues.  You receive a frame,
> >>> you send a frame.
> >>
> >> The problem is that clients can send an arbitrary number of requests
> >> (as identified by request IDs) and that servers have to track the
> >> responses that they send for each request.
> >>
> >>> That said, Igor's idea is attractive.  With some allowance for
> >>> endpoints throttling updates on frequent changes,
> >>
> >> With Igor's approach, this protection already exists thanks to the
> >> active_connection_id_limit Transport Parameter. An endpoint can send
> >> only as many updates as the number of paths that it can open at a time.
> >>
> >>> that would reduce the number of frames we need and improve
> >>> reliability.  Endpoint advertises support in transport parameter; peer
> >>> sends frame when it adds a path.
> >>
> >>
> >> --
> >> Kazuho Oku
>


-- 
Kazuho Oku
