[Int-area] Stateless IPv4-in-IPv6 experiments

2011-04-06 Thread xiaohong.deng
Dear all,
 
An I-D about stateless IPv4-in-IPv6 experiments has been submitted. A
URL for this Internet-Draft is:
http://tools.ietf.org/html/draft-deng-aplusp-experiment-results-00
 
A website introducing the details of the experiment results is also
accessible via: http://opensourceaplusp.weebly.com/
 
BR,
Xiaohong
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Scott Brim
On Tue, Apr 5, 2011 at 16:44, Iljitsch van Beijnum iljit...@muada.com wrote:

 On 5 apr 2011, at 18:51, Joe Touch wrote:

  This is the idea that an ID value can't be repeated for 120 (or 240)
  seconds for the same 3-tuple. But why 120 seconds? Is there any
  reasonable way that a packet gets fragmented, and then one of the
  fragments is delayed so long that 65535 other packets are received in
  the interim? I can't think of anything.

  There are plenty of systems with path asymmetries with delays of tens of
 seconds. E.g., see Jim Gettys' presentation this past IETF in TSVAREA.

 Can't find that real quick.


You would enjoy a visit to http://www.bufferbloat.net

Scott
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Iljitsch van Beijnum
On 6 apr 2011, at 17:33, Scott Brim wrote:

 Jim Gettys' presentation this past IETF in TSVAREA.

 Can't find that real quick.

 You would enjoy a visit to http://www.bufferbloat.net

Ah, it starts at 25 minutes into the recording for Wednesday morning on 
channel 4, with the slides here:

http://mirrors.bufferbloat.net/Talks/PragueIETF/IETFBloat7.pdf

BTW, I'm not seeing the disasters he's describing; I only get 6 or 12 ms of 
extra delay doing 100 Mbps transfers locally.

But... none of this has ANYTHING to do with the issue at hand, unless someone 
can show me how packet 50 sits in the buffer for 100 or 1000 or what have you 
milliseconds and then packet 51 magically flies through the queue and arrives 
much earlier than packet 50.

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Mikael Abrahamsson

On Tue, 5 Apr 2011, Iljitsch van Beijnum wrote:

- even if it happens, the timing differences between reordered packets 
are counted in microseconds, not even whole milliseconds


I have to disagree with this. I always discourage people from using 
per-packet load balancing, but if it is done on a 2 meg connection (as an 
example), just clocking out a 1500-byte packet takes 6 milliseconds, so 
two packets can easily arrive several milliseconds out of order in this 
use case.
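
To spell out the serialization arithmetic (the only assumption is the 
1500-byte packet size):

for rate_bps in (2e6, 10e6, 100e6):
    delay_ms = 1500 * 8 / rate_bps * 1000   # clock-out time for one full-size packet
    print("%5.0f Mb/s -> %.2f ms per 1500-byte packet" % (rate_bps / 1e6, delay_ms))
# 2 Mb/s -> 6.00 ms, 10 Mb/s -> 1.20 ms, 100 Mb/s -> 0.12 ms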


In addition to that, I don't see multiple second delays AND hundreds or 
thousands of packets per second happening at the same time. Slow 
networks could potentially do the former, fast networks the latter, but 
I can't think of anything that can do both.


I have to agree with that, though.

--
Mikael Abrahamsson    email: swm...@swm.pp.se
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Iljitsch van Beijnum
On 6 apr 2011, at 18:57, Mikael Abrahamsson wrote:

 - even if it happens, the timing differences between reordered packets are 
 counted in microseconds, not even whole milliseconds

 I have to disagree with this. I always discourage people from using 
 per-packet load balancing, but if it is done on a 2 meg connection (as an 
 example), just clocking out a 1500-byte packet takes 6 milliseconds, so two 
 packets can easily arrive several milliseconds out of order in this use case.

I've been working on 100 Mbps links; obviously this is different if you run 
much slower, which can happen at the edge of the network, but not really in 
the core these days, I would say.

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Joe Touch



On 4/5/2011 1:44 PM, Iljitsch van Beijnum wrote:

On 5 apr 2011, at 18:51, Joe Touch wrote:


This is the idea that an ID value can't be repeated for 120 (or
240) seconds for the same 3-tuple. But why 120 seconds? Is there
any reasonable way that a packet gets fragmented, and then one of
the fragments is delayed so long that 65535 other packets are
received in the interim? I can't think of anything.



There are plenty of systems with path asymmetries with delays of
tens of seconds. E.g., see Jim Gettys' presentation this past IETF
in TSVAREA.


Can't find that real quick.

But I'm doing some measurements currently, and what I'm seeing
doesn't support that at all, for two reasons:


The Internet is a very big place, and it's intended to work in most
places, rather than just in the places any individual currently sees.


- reordering (= the packets must be sent down different paths) is
very rare for packets that belong to the same session, think single
digit percentages


As noted in my other post, that could be increasing with multiple
disparate connections becoming the norm.


- even if it happens, the timing differences between reordered
packets are counted in microseconds, not even whole milliseconds


You should measure 3G, and include a satellite path too. Both have
longer latencies even without buffer bloat.


In addition to that, I don't see multiple second delays AND hundreds
or thousands of packets per second happening at the same time. Slow
networks could potentially do the former, fast networks the latter, but
I can't think of anything that can do both.


That's common over satellite paths, and also common in the buffer bloat 
case.



As is the note that upper layers should not rely on packets being
reassembled correctly - but instead should include upper layer
checksums that are robust in the presence of misfragmentation.


If the ID rollover time is shorter than the reassembly timeout you
WILL get an incorrect reassembly for _every_ lost packet, so a
checksum would have to be very strong to protect against that. The
current 16-bit checksum isn't strong enough for this. But upgrading
TCP and UDP to use larger checksums seems like an uphill struggle.


This doc isn't proposing that. It just warns that rolling over the ID in 
the absence of stronger checksums is dangerous.
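
To put numbers on the rollover concern, a quick sketch (assuming 1500-byte 
full-size packets and the commonly cited 30-120 s reassembly timeouts; 
neither figure comes from the draft):

ID_SPACE = 65536                      # 16-bit IPv4 Identification field
for rate_bps in (6e6, 100e6, 1e9):
    pps = rate_bps / (1500 * 8)       # packets per second at full-size packets
    wrap_s = ID_SPACE / pps           # time until an ID value repeats
    print("%6.0f Mb/s: %8.0f pkt/s, ID space wraps in %6.1f s" % (rate_bps / 1e6, pps, wrap_s))
# ~131 s at 6 Mb/s, ~7.9 s at 100 Mb/s, ~0.8 s at 1 Gb/s -- well inside a
# typical reassembly window once the rate goes much above a few Mb/s, which
# is when a lost fragment can pair up with a later packet reusing its ID.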



Especially because the lower layer checksums are typically strong
enough to protect against transmission errors and the weak internet
checksum allows us to do stuff cheap that would otherwise be
problematic, such as NAT, which would get a big performance hit if
incremental checksum updates weren't possible.


Stronger object checksums can also exist at the application layer, as 
with BitTorrent, and at the session layer, as with SSL (that's for 
crypto, but it still protects against incorrect reassembly).
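
For reference, the incremental update that keeps NAT cheap is the RFC 1624 
arithmetic on the one's-complement sum; a minimal sketch with made-up header 
words (the addresses and the rewritten value below are purely illustrative):

def add1c(a, b):
    # one's-complement addition of two 16-bit words
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def cksum(words):
    # RFC 1071 Internet checksum over a list of 16-bit words
    s = 0
    for w in words:
        s = add1c(s, w)
    return ~s & 0xFFFF

def cksum_update(old_cksum, old_word, new_word):
    # RFC 1624: HC' = ~(~HC + ~m + m')
    s = add1c(~old_cksum & 0xFFFF, ~old_word & 0xFFFF)
    s = add1c(s, new_word)
    return ~s & 0xFFFF

hdr = [0x4500, 0x0054, 0x1C46, 0x4000, 0x4001, 0x0000,  # checksum field zeroed
       0xC0A8, 0x0001, 0xC0A8, 0x00C7]                  # 192.168.0.1 -> 192.168.0.199
hc = cksum(hdr)
# A NAT rewriting the low half of the source address (0x0001 -> 0x0063) can
# patch the checksum without re-summing the other nine words:
assert cksum_update(hc, 0x0001, 0x0063) == cksum(hdr[:7] + [0x0063] + hdr[8:])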


Joe

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Mikael Abrahamsson

On Wed, 6 Apr 2011, Joe Touch wrote:

You should measure 3G, and include a satellite path too. Both have 
longer latencies even without buffer bloat.


My personal record is 180 seconds of delay over GSM. However, the RAN ensures 
packet ordering, so to get re-ordering on top of this, the packets would have 
to be per-packet load-shared between multiple connections.


That's common over satellite paths, and also common in the buffer bloat 
case.


I dislike the term buffer bloat. Large buffers have a point; it's just 
that large FIFO buffers are bad. As an example, a single TCP session 
between two Linux boxes over a 100 meg link between proper routers will 
easily drive the latency up to 100 ms. Turn on Cisco fair-queue (which 
does require keeping flow state and thus uses a lot of CPU), and one flow 
will no longer impose 100 ms of latency on another flow; instead the 
other flow sees very little difference from an unloaded link.


Anyhow, my point is that re-ordering happens in real life networks 
(although I'd say they're badly engineered ones). I've seen it in real 
life, and I've talked to application programmers who designed applications 
that relied on packets being ordered and who had to re-design with jitter 
buffers to handle out-of-order packets (instead of counting skips in 
sequence numbers as drops). It happens in slow networks, and it happens in 
fast networks.


It's not hugely common, but it's out there.

--
Mikael Abrahamsson    email: swm...@swm.pp.se
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Recycle IPv4 bits

2011-04-06 Thread Templin, Fred L
Hi Iljitsch, 

 -Original Message-
 From: int-area-boun...@ietf.org 
 [mailto:int-area-boun...@ietf.org] On Behalf Of Iljitsch van Beijnum
 Sent: Tuesday, April 05, 2011 1:33 PM
 To: Joe Touch
 Cc: Internet Area
 Subject: Re: [Int-area] Recycle IPv4 bits
 
 Hi Joe,
 
 On 5 apr 2011, at 20:47, Joe Touch wrote:
 
  I assume this is in the spirit of the date of your message.
 
  I'll remember to avoid email altogether next Apr 1.
 
 :-)
 
  Don't forget that with IPv4,
  unlike with IPv6, fragmentation at the source based on too 
 big messages
  is not performed, only resegmentation in the case of TCP.
 
  RFC792 says that fragmentation needed ICMPs are sent back 
 to the originating source, and cites RFC791. Note - it does 
 NOT cite RFC793.
 
 You are reading this with today's notion of PMTUD in mind. 
 For IPv4, too big is a subtype of destination unreachable, 
 which typically doesn't carry the connotation that a 
 retransmission is useful. (For IPv6 too big is no longer 
 grouped under unreachable.)
 
 Because IPv4 never had ICMP feedback related fragmentation at 
 the source (just whatever fits in the MTU) de facto IPv4 
 PMTUD only works with TCP because unlike most UDP protocols, 
 TCP can arbitrarily adjust its packet size without any higher 
 layer implications.

While it is true that some UDP applications have no way
to reduce the size of packets they send (and therefore
set DF=0), I don't think it is possible to say that
PMTUD only works with TCP. Do you know of any surveys
showing that PMTUD is broken for other than TCP?

Thanks - Fred
fred.l.temp...@boeing.com 

  I'd have to do some checking on what actually happens when 
 an ICMP too-big is received. I'd be surprised if that isn't 
 handled in some way, though.
 
 Prepare to be surprised. Perhaps this is brought into line 
 with IPv6 in newer implementations recently, but I've never 
 seen it happen.
 
  I'm saying that once a packet is sent with DF=0, changing 
 it to DF=1 en-route is dangerous and serves no useful purpose AFAICT.
 
 Ok, then we agree on that part.
 
 Iljitsch
 
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


[Int-area] The virtues of large buffers, was: Re: draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Iljitsch van Beijnum
On 6 apr 2011, at 19:46, Mikael Abrahamsson wrote:

 I dislike the term buffer bloat. Large buffers have a point; it's just that 
 large FIFO buffers are bad. As an example, a single TCP session between two 
 Linux boxes over a 100 meg link between proper routers will easily drive the 
 latency up to 100 ms.

I really don't understand why people defend large buffers. There is really no 
upside to them save for the case where the system is otherwise too slow to fill 
up the pipe.

In fact, in big routers they cost a lot of money and use significant energy. 
And still C and J put a gigabyte of buffering on some of their interfaces.

In theory some buffering will smooth out bursts, but in practice TCPs will 
just build up standing queues that use up all the buffering, so bursts 
tail-drop anyway.

Of course all of this applies to FIFO, but that's what's used in the vast 
majority of cases as there are no cheap set-and-forget first-do-no-harm AQMs 
AFAIK.

 Anyhow, my point is that re-ordering happens in real life networks

Only a few percent of destinations have a non-flow-conserving (per-packet) 
load balancer on the path, and even then you only trigger the reordering if 
you send fast enough.

But even then, the timing differences between the reordered packets won't add 
up to 10+ seconds while the bandwidth is 100s or 1000s of packets per second. 
For this to happen you'd need load balancing over links with 120 second jitter. 
Not sure how jitter and queue length are related off the top of my head, but 
that probably means a standing queue that's even larger than 120 seconds. At 
the 500 pps that you need to fill up 6 Mbps at 1500 bytes per packet, that's an 
average queue length of 100k packets or 143 MB to reach a queuing delay of say 
200 seconds. That translates into a utilization of 99.999%.
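
Spelling that out (same assumptions as above: 1500-byte packets, 6 Mb/s, 
roughly 200 s of standing queue):

pps = 6e6 / (1500 * 8)            # ~500 packets per second at 6 Mb/s
queue_pkts = pps * 200            # packets queued to produce 200 s of delay
queue_mib = queue_pkts * 1500 / 2**20
print(pps, queue_pkts, round(queue_mib))   # 500.0 100000.0 143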

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Recycle IPv4 bits

2011-04-06 Thread Iljitsch van Beijnum
Hi Fred,

On 6 apr 2011, at 19:47, Templin, Fred L wrote:

 de facto IPv4 
 PMTUD only works with TCP

 While it is true that some UDP applications have no way
 to reduce the size of packets they send (and therefore
 set DF=0), I don't think it is possible to say that
 PMTUD only works with TCP. Do you know of any surveys
 showing that PMTUD is broken for other than TCP?

Only what I've seen myself.

You're right of course that there is no reason why IPv4 can't handle this at 
the IP layer like IPv6 does, but it just doesn't happen in practice. Hence the 
de facto above.

It's such a shame that we never got this right. Maybe with IPvA...

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Recycle IPv4 bits

2011-04-06 Thread Templin, Fred L
Hi Iljitsch, 

 -Original Message-
 From: int-area-boun...@ietf.org 
 [mailto:int-area-boun...@ietf.org] On Behalf Of Iljitsch van Beijnum
 Sent: Wednesday, April 06, 2011 1:54 PM
 To: Templin, Fred L
 Cc: Internet Area
 Subject: Re: [Int-area] Recycle IPv4 bits
 
 Hi Fred,
 
 On 6 apr 2011, at 19:47, Templin, Fred L wrote:
 
  de facto IPv4 
  PMTUD only works with TCP
 
  While it is true that some UDP applications have no way
  to reduce the size of packets they send (and therefore
  set DF=0), I don't think it is possible to say that
  PMTUD only works with TCP. Do you know of any surveys
  showing that PMTUD is broken for other than TCP?
 
 Only what I've seen myself.
 
 You're right of course that there is no reason why IPv4 can't 
 handle this at the IP layer like IPv6 does, but it just 
 doesn't happen in practice. Hence the de facto above.
 
 It's such a shame that we never got this right. Maybe with IPvA...

Any ideas on how we are going to get to 9KB jumbograms
everywhere then, or are we just going to punt on that?

Thanks - Fred
fred.l.temp...@boeing.com

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Recycle IPv4 bits

2011-04-06 Thread Joel Jaeggli


Joel's widget number 2

On Apr 6, 2011, at 17:23, Templin, Fred L fred.l.temp...@boeing.com wrote:

 Hi Iljitsch, 
 
 -Original Message-
 From: int-area-boun...@ietf.org 
 [mailto:int-area-boun...@ietf.org] On Behalf Of Iljitsch van Beijnum
 Sent: Wednesday, April 06, 2011 1:54 PM
 To: Templin, Fred L
 Cc: Internet Area
 Subject: Re: [Int-area] Recycle IPv4 bits
 
 Hi Fred,
 
 On 6 apr 2011, at 19:47, Templin, Fred L wrote:
 
 de facto IPv4 
 PMTUD only works with TCP
 
 While it is true that some UDP applications have no way
 to reduce the size of packets they send (and therefore
 set DF=0), I don't think it is possible to say that
 PMTUD only works with TCP. Do you know of any surveys
 showing that PMTUD is broken for other than TCP?
 
 Only what I've seen myself.
 
 You're right of course that there is no reason why IPv4 can't 
 handle this at the IP layer like IPv6 does, but it just 
 doesn't happen in practice. Hence the de facto above.
 
 It's such a shame that we never got this right. Maybe with IPvA...
 
 Any ideas on how we are going to get to 9KB jumbograms
 everywhere then, or are we just going to punt on that?

You still need PMTUD even with jumbo frame support, because you don't know 
how many encapsulations there are on the wire.
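
For what it's worth, a UDP application can opt into the same DF-driven 
discovery that TCP gets; a rough Linux-only sketch (the numeric option 
values come from linux/in.h, and 192.0.2.1 is just a documentation address 
standing in for a real peer):

import socket

IP_MTU_DISCOVER = 10   # Linux-specific; not always exported by the socket module
IP_PMTUDISC_DO  = 2    # always set DF, never fragment locally
IP_MTU          = 14   # read back the kernel's current path-MTU estimate

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(("192.0.2.1", 9))            # placeholder destination (TEST-NET-1, discard port)
try:
    s.send(b"x" * 1400)                # goes out with DF set; a too-big hop answers with ICMP
except socket.error as err:            # later sends fail with EMSGSIZE once a smaller PMTU is cached
    print("send failed:", err)
print("path MTU estimate:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))
s.close()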

 Thanks - Fred
 fred.l.temp...@boeing.com
 
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Stateless IPv4-in-IPv6 experiments DS-Lite/NAT64 experiments

2011-04-06 Thread xiaohong.deng
We have tested how applications behave when the host has both DS-Lite IPv4 
and NAT64 IPv6 (but not native IPv6). Firefox, IE, Skype, Google Earth 
v5.2.1, Live Messenger, uTorrent, and BitComet were on our test list.

 

Després, regarding your question (maybe not answering it exactly, as we 
tested NAT64 IPv6 rather than native IPv6): uTorrent v2.2 used IPv6 for 
login/authentication but IPv4 for the data exchange, since an IPv6 peer was 
not able to talk to an IPv4 peer. Although BitComet v1.23 issued both A and 
AAAA queries, it completely ignored the AAAA RRs and only used IPv4 for 
communication.
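
For comparison, a dual-stack-aware client is expected to ask for both A and 
AAAA and prefer whatever IPv6 answers come back; roughly like this, with 
example.org standing in for a real host:

import socket

infos = socket.getaddrinfo("example.org", 80, socket.AF_UNSPEC, socket.SOCK_STREAM)
v6 = [ai for ai in infos if ai[0] == socket.AF_INET6]   # from AAAA answers
v4 = [ai for ai in infos if ai[0] == socket.AF_INET]    # from A answers
for family, socktype, proto, canonname, sockaddr in (v6 or v4):
    print("would try", sockaddr)                        # IPv6 first if any AAAA RR came back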

 

Please also see some of the other test results below. The DS-Lite/NAT64 
experiments could be documented if there is interest.

 

Tested Apps                    IPv4/IPv6 portion

Firefox (non-video) v3.6.12    All IPv6
  video websites               Half/half
IE (non-video) v6.0            All IPv6
Skype v5.0                     Mostly IPv6
Google Earth v5.2.1            Mostly IPv6
Live Messenger 2009            Mostly IPv4
uTorrent v2.2                  Mostly IPv4
BitComet v1.23                 All IPv4

 

BR,

Xiaohong

opensource A+P: http://opensourceaplusp.weebly.com/

 




From: Rémi Després [mailto:remi.desp...@free.fr] 
Sent: April 7, 2011 0:21
To: DENG Xiaohong ESP/PEK
Cc: int-area@ietf.org
Subject: Re: [Int-area] Stateless IPv4-in-IPv6 experiments


Thank you, Xiaohong, for sharing this interesting information. 

In the case of BitComet and uTorrent, I wonder what happens if the host 
has both A+P IPv4 and IPv6, as opposed to just A+P IPv4.
Did you try that?

Regards,
RD

On 6 Apr 2011, at 10:57, xiaohong.d...@orange-ftgroup.com wrote:


Dear all,
 
An I-D about stateless IPv4-in-IPv6 experiments has been submitted.
A URL for this Internet-Draft is:
http://tools.ietf.org/html/draft-deng-aplusp-experiment-results-00
 
A website introducing the details of the experiment results is also
accessible via: http://opensourceaplusp.weebly.com/
 
BR,
Xiaohong



___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] The virtues of large buffers, was: Re: draft-ietf-intarea-ipv4-id-update-02.txt

2011-04-06 Thread Mikael Abrahamsson

On Wed, 6 Apr 2011, Iljitsch van Beijnum wrote:

I really don't understand why people defend large buffers. There is 
really no upside to them save for the case where the system is otherwise 
too slow to fill up the pipe.


Yes there is, if you have long fat pipes. It can be argued that it's much 
less needed now that core routers have 10G and up, which is what router 
vendors are responding to: newer routers have 50 ms of buffer instead of the 
600 ms they had 10 years ago.


TCP (file transfer) responds better to delayed packets than to lost 
packets.


In fact, in big routers they cost a lot of money and use significant 
energy. And still C and J put a gigabyte of buffering on some of their 
interfaces.


At 40G, a gigabyte works out to roughly 200 ms of buffering. There are 
applications that would rather have their packet delayed 100 ms but 
delivered, and there are others that would rather take packet loss than 
jitter.
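
Working the buffer depth through (bytes * 8 / line rate, with 1 GB taken as 
10^9 bytes):

for rate_gbps in (10, 40, 100):
    ms = 1e9 * 8 / (rate_gbps * 1e9) * 1000
    print("1 GB of buffer at %3d Gb/s -> %3.0f ms" % (rate_gbps, ms))
# 800 ms at 10G, 200 ms at 40G, 80 ms at 100G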


The hard part is to make a network that caters well to both without having 
to keep flow state.


 In theory some buffering will smooth out bursts, but in practice TCPs will 
 just build up standing queues that use up all the buffering, so bursts 
 tail-drop anyway.


Depends on speed, but you're basically right.

Of course all of this applies to FIFO, but that's what's used in the 
vast majority of cases as there are no cheap set-and-forget 
first-do-no-harm AQMs AFAIK.


I agree.


Anyhow, my point is that re-ordering happens in real life networks


 Only a few percent of destinations have a non-flow-conserving (per-packet) 
 load balancer on the path, and even then you only trigger the reordering if 
 you send fast enough.


I've seen it on 5 megabit/s video streams.

But even then, the timing differences between the reordered packets 
won't add up to 10+ seconds while the bandwidth is 100s or 1000s of 
packets per second.


It's definitely a corner case, I agree.

--
Mikael Abrahamsson    email: swm...@swm.pp.se
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area