Christian Hopps writes:
> >> It might be obvious to you, but it might not be obvious to the person
> >> doing the actual implementation. I always consider it a good idea to
> >> point out pitfalls and cases where the implementor should be wary,
> >> and not to assume that the implementor actually realizes this.
> > 
> > I agree with that sentiment.
> 
> This is the specific case here:

No it is not.

> “Given an ordered packet stream, A, B, C, if you send B before A you
> will be sending packets in a different order”

The issue is that it is very hard to see from the text in the
current draft section 2.5 that it can cause up to a reorder window's
worth (32-1000 packets) of extra buffering for every single lost packet.

And the current text does not allow sending packets in a different
order, as it does not allow processing packets in any order other
than strictly in-order.

So there are multiple choices here, which affect how the
implementation behaves:

  1) Always process packets strictly in-order, i.e., do not allow any
  outer packet to be processed until you are sure it is in order. This
  causes extra buffering/latency whenever a packet is lost, because
  you must wait for the missing packet to age out of the reorder
  window before you know it is lost and can continue processing. This
  will not cause any reordering of packets.

  2) Process incoming outer packets as they arrive, without reordering
  them first. In that case you need to process outer packets
  partially, i.e., send out only those inner packets which have been
  fully received, and buffer the pieces of inner packets which are
  still missing fragments because outer packets were lost or
  reordered. In this case any reordering of the outer packets will
  cause the same reordering of the inner packets.
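A toy model of option 2 (entirely hypothetical: inner packets are
reduced to ids with a known fragment count, which is not how the
draft encodes them) shows how outer reordering leaks through to the
inner stream:

```python
# Hypothetical sketch of option 2: no reorder buffer at all. Each
# outer packet carries fragments of inner packets; an inner packet is
# sent the moment its last fragment arrives, in arrival order.

def process_as_received(outer_packets, frags_needed):
    """outer_packets: [(outer_seq, [inner_id, ...]), ...] in arrival
    order; frags_needed: inner_id -> total fragment count.
    Returns inner ids in the order they complete (and are sent)."""
    got = {}    # inner_id -> fragments received so far
    sent = []
    for _seq, frags in outer_packets:
        for inner in frags:
            got[inner] = got.get(inner, 0) + 1
            if got[inner] == frags_needed[inner]:
                sent.append(inner)  # fully received: send immediately
    return sent
```

If inner packet A spans outer packets 0 and 1, and outer packet 1
arrives first, then inner packet B (carried whole in outer packet 1)
goes out before A: the outer reordering has become inner reordering.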

  3) Use a hybrid version: when you notice a missing outer packet,
  postpone processing for a short duration (for example, wait for just
  the next outer packet) to see whether the reordering was only very
  small. If the outer packet stream can be reordered within this small
  window, you do so and process and send the packets in order,
  limiting the added latency to, for example, only one packet. If
  larger reordering is happening, you still wait for the full reorder
  window before deeming an inner packet unprocessable because it was
  not completely received.
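The hybrid could be sketched like this (again hypothetical names and
API; the separate full-window buffering of the torn inner packet is
deliberately not modelled):

```python
# Hypothetical sketch of option 3: on a gap, hold at most `short_wait`
# later outer packets hoping the reordering is small; if the missing
# packet shows up in time, release everything in order. Otherwise give
# up on in-order delivery and process the held packets immediately,
# option-2 style.

class HybridBuffer:
    def __init__(self, short_wait=1):
        self.short_wait = short_wait    # packets to hold before giving up
        self.next_seq = 0
        self.pending = {}

    def push(self, seq, packet):
        released = []
        self.pending[seq] = packet
        # Small reordering: the gap filled in, release in order.
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        if len(self.pending) > self.short_wait:
            # Larger reordering: stop waiting and process what we have
            # in sequence order. The inner packet spanning the missing
            # outer packet would be buffered separately until the full
            # reorder window expires (not modelled here).
            for s in sorted(self.pending):
                released.append(self.pending.pop(s))
            self.next_seq = seq + 1
        return released
```

With a one-packet swap the output stays in order; with an actual loss
the latency penalty is bounded to `short_wait` packets instead of the
full reorder window.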

The current text only allows option 1. I would like to allow options
2 and 3, and perhaps others, but I would also like to have some text
explaining the trade-offs of the different options. This does not
affect interoperability as such, since two implementations using
different methods will interoperate, but it might cause very bad
performance issues.

Actually I think option 1 (the only one allowed now) can and will
cause large round-trip-time jitter for every single lost frame. I am
not sure what large round-trip-time jitter does to the different
protocols running inside the tunnel, but I would assume that any kind
of audio conferencing system would perform really badly when run over
such a tunnel.

> Again I’ll put this text to unblock this document, but really,
> sometimes things *are* obvious. 

I had to parse section 2.5 several times before I realized that it
really does require me to process packets in-order, i.e., it forbids
options 2 and 3.

It might be obvious to you, but it was not obvious to me, and I think
that restriction makes the performance really bad.
-- 
kivi...@iki.fi

_______________________________________________
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec