Hi All,

I have taken over as editor of draft-ietf-ipngwg-icmp-v3-02.txt
and am working through the issues raised by Thomas.

One of the issues raised concerns the rate-limiting methods
suggested by the draft. The draft suggests three methods for
limiting the rate of ICMP messages: Timer-based, Bandwidth-based,
and Token-bucket based.

After going through the discussion about this in the archive
and thinking about it a little, I propose that we remove the
Timer-based and Bandwidth-based methods and keep only the
Token-bucket based method in the draft. (See the arguments at
the end of this mail.)

I would like to hear from people who would like to keep the
Timer-based and Bandwidth-based methods, along with their reasons.

Regards
Mukesh
PS: Here are the snippets from the archive, just to refresh
everyone's memory.

Thomas Narten:
=========
T= 0.5 seems fairly high. Why not .1 or .01 (on today's links).

It might also be useful to point out that having too much rate
limiting (in routers) causes problems for traceroute, since this has
been experienced in practice. Indeed, if routers rate limit to one
message every .5 seconds, today's traceroute would seem to become
unusable in practice. Indeed, even at .1 seconds, traceroute will
hardly work anymore. I.e., the first probe will trigger an ICMP
error, the second one (some 20-30 ms later) will not, due to rate
limiting, etc.

Would it be better to move away completely from fixed time intervals
and just say as a percentage of the link traffic? at least in the case
of routers?
=========
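To make Thomas's traceroute point concrete, here is a small sketch of my
own (not from the draft; the 30 ms probe spacing and function name are
just illustrative) of what a fixed-interval, timer-based limiter does to
traceroute-style probes:

```python
# Hypothetical sketch: a timer-based limiter that allows at most one
# ICMP error per interval T, applied to probes arriving ~30 ms apart.

def timer_limited(probe_times_s, T=0.5):
    """Return, per probe, whether it draws an ICMP error under a
    fixed-interval (timer-based) rate limiter."""
    allowed = []
    last_sent = None
    for t in probe_times_s:
        if last_sent is None or t - last_sent >= T:
            allowed.append(True)
            last_sent = t
        else:
            allowed.append(False)  # suppressed: still inside interval T
    return allowed

# Three probes per hop, ~30 ms apart, as classic traceroute sends them.
probes = [i * 0.030 for i in range(9)]  # 9 probes = 3 hops' worth
print(timer_limited(probes, T=0.5))
# Only the very first probe draws an ICMP error; the other 8 fall
# inside the 0.5 s window and are silently dropped.
```

With T=0.5 only one probe in nine gets a reply, which is exactly the
"unusable in practice" behaviour described above.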

Pekka Savola:
=========
Well, I brought up the issue with traceroute when I noticed some major 
router vendors had implemented timer-based mechanisms, and proposed the 
token-bucket mechanism (which is used quite often).

Personally, I'm a bit unsatisfied with how the examples are portrayed; 
IMO, timer-based should definitely go away, be put last, be printed 
with huge disclaimers, or whatever.

Bandwidth-based does not really scale: the same implementation could be 
used over 64 kbit/s or 10 Gbit/s links.  What's the percentage there?  
Assuming sending ICMP errors requires processor cycles, the latter might 
still use up too much CPU even with 0.01% ;-) 

Also personally, token bucket is IMO the only sensible way to handle this.
=========

Robert Elz:
=========
It should be revised, but should be specified as token bucket type
specifications - that is, it should be possible to send a burst of
ICMPs if they're needed, just as long as the long term rate doesn't
get above N (N can be pkts/sec, or percentage of packets processed).

The link traffic percentage model (alone) breaks down if there has been
no traffic (N% of 0 is 0, no matter what N is).

Make all rate limits be specified with a rate/second (and 2/sec, or T=0.5)
as a nice conservative default (just default) I can handle, but with a
burst count of about 20 (as default) (10 seconds with none, and 20 can
be sent very quickly again).
=========
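A minimal sketch of the token-bucket behaviour kre describes, using his
suggested defaults (long-term rate 2/sec, burst 20); the class and
parameter names are mine, not the draft's:

```python
# Hypothetical sketch: a token bucket refilled at `rate` tokens/sec,
# capped at `burst`; each ICMP error sent spends one token.

class TokenBucket:
    def __init__(self, rate=2.0, burst=20):
        self.rate = rate            # tokens added per second
        self.burst = burst          # bucket capacity (max burst size)
        self.tokens = float(burst)  # start full
        self.last = 0.0             # time of last update

    def allow(self, now):
        """Refill for elapsed time, then spend one token if available."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=2.0, burst=20)
# A burst of 20 back-to-back ICMP errors is allowed...
burst = [tb.allow(0.0) for _ in range(20)]
# ...the 21st at the same instant is not, and afterwards the
# long-term rate of ~2/sec applies (one token back every 0.5 s).
later = (tb.allow(0.0), tb.allow(0.5))
```

This is exactly the property that fixes traceroute: a burst of closely
spaced probes all draw errors, while the long-term rate never exceeds N.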

Francis Dupont:
=========
I agree, because T should be in milliseconds (f.1) and guards
against back-to-back erroneous packets. I propose 20 ms (50 Hz).
=========

--------------------------------------------------------------------
IETF IPv6 working group mailing list
[EMAIL PROTECTED]
Administrative Requests: https://www1.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------
