We have been experiencing problems with the TFRC/CCID 3 packet scheduling
mechanism, discussed on the implementation mailing list; these can be
summarised as follows.
The packet scheduling mechanism of [RFC 3448, 4.6] is in principle a simple
Token Bucket Filter (TBF): tokens are placed into the bucket at a rate of
1/t_ipi, and each time the sender finds at least one token in the bucket, a
packet can be sent. Under `normal' conditions, the bucket size of the TBF is 1;
in this case, a continuous stream of packets is always scheduled exactly at the
precalculated nominal sending times t_nom.
The following conditions result in a bucket size different from 1 (the last two
were pointed out by Ian in the previous email):
(a) tardiness due to scheduling granularity (as per 4.6 in RFC 3448)
(b) the application is idle for an extended period
(c) the application emits packets at a rate which is small compared to X/s
In these `non-normal' cases, the current time t_now can be several multiples of
t_ipi after the scheduled nominal sending time t_nom. This accrues a burst size
of

    beta = floor( (t_now - t_nom) / t_ipi ) - 1

packets which the sender is permitted to send immediately.
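To make the credit computation above concrete, here is a minimal sketch (not
the actual CCID 3 code; the helper name is made up, and all times are assumed
to be in microseconds, with t_now/t_nom/t_ipi as in the text):

```c
#include <stdint.h>

/* Sketch: compute the burst credit
 *     beta = floor((t_now - t_nom) / t_ipi) - 1
 * as in the formula above. A negative result (sender not yet a full
 * t_ipi past due) is clamped to 0, i.e. no extra packets allowed. */
static long burst_credit(uint64_t t_now, uint64_t t_nom, uint64_t t_ipi)
{
	long beta;

	if (t_now <= t_nom || t_ipi == 0)
		return 0;	/* not past the nominal send time */

	beta = (long)((t_now - t_nom) / t_ipi) - 1;
	return beta > 0 ? beta : 0;
}
```

For example, with t_ipi = 100 ms and a sender that wakes up 1 s after t_nom,
this yields a credit of 9 packets on top of the one that is due anyway.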
The problem that we are experiencing is that beta grows unbounded.
A previous attempt to fix this problem was to reset the nominal sending time
t_nom whenever such a credit had accumulated; this is equivalent to enforcing a
bucket size of 1. There was disagreement with this solution, since RFC 3448
explicitly permits bursts. But we were then experiencing almost arbitrarily
large burst sizes (note the cases stated in Ian's earlier email); without an
upper bound for beta there is no regulation of the sender behaviour any more.
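For illustration, the earlier fix amounts to something like the following
sketch (hypothetical helper, times assumed in microseconds): whenever t_now has
overtaken t_nom by more than one t_ipi, t_nom is pulled forward, which forfeits
any accumulated credit and so enforces a bucket size of 1.

```c
#include <stdint.h>

/* Sketch of the previous fix: if more than one inter-packet interval
 * has elapsed since t_nom, reset t_nom to the current time so that at
 * most one packet is due, discarding the accumulated burst credit. */
static uint64_t reset_nominal_time(uint64_t t_now, uint64_t t_nom,
				   uint64_t t_ipi)
{
	if (t_now > t_nom + t_ipi)
		t_nom = t_now;	/* forfeit accumulated credit */
	return t_nom;
}
```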
PROBLEM: What is a reasonable upper bound for the bucket/burst size? Is, for
         instance, floor(MSS/s) (where `s' is the packet size and MSS is the
         path MTU minus the size of the IP and DCCP headers) a conservative
         enough upper bound?
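If such a bound were adopted, applying it would be straightforward; the sketch
below only illustrates the clamping step (whether floor(MSS/s) is the right cap
is exactly the open question, and the function name is made up):

```c
#include <stdint.h>

/* Sketch: clamp the accumulated burst credit beta to floor(MSS/s)
 * packets, the candidate upper bound asked about above. */
static long capped_credit(long beta, uint32_t mss, uint32_t s)
{
	long cap = mss / s;	/* floor(MSS/s); s > 0 assumed */

	return beta > cap ? cap : beta;
}
```

E.g. with MSS = 1460 and s = 256, the cap would be 5 packets, so even a long
idle period could never translate into more than a 5-packet burst.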