For what it's worth (not much) this seems to me like a very nice
implementation choice.
Eddie
Gerrit Renker wrote:
The problem with Ack Vectors is that
i) their length is variable and can in principle grow quite large,
ii) it is hard to predict exactly how large they will be.
Due to
One slight comment. NDP Count is not required by CCID 3's *specification*.
Without NDP Count, the receiver will simply treat ack losses like data losses;
this can lead to lower send rates when acks are lost, but does not harm
interoperability. However, it is totally reasonable for an
Gerrit Renker wrote:
| - The DCCP specification does not require RTT measurements on every packet
| exchange.
|
| - It should be possible to use coarse grained timestamps (even jiffies) for
| most packets, with finer grained timestamps used on occasion, to improve the
| current
- The DCCP specification does not require RTT measurements on every packet
exchange.
- It should be possible to use coarse grained timestamps (even jiffies) for
most packets, with finer grained timestamps used on occasion, to improve the
current estimate; for example, getting the time of day.
Quoting Eddie Kohler:
| That is one of the problems here - in the RFC such problems do not arise,
| but the implementation needs
| to address these correctly.
|
| The RFC's solution to this problem, which involves t_gran, EXACTLY addresses this
|
| | Your token bucket math
, but still on LANs where RTT < timer_granularity this would reduce
burstiness. (All assuming CCID3 doesn't
do this already.)
Eddie
David Miller wrote:
From: Eddie Kohler [EMAIL PROTECTED]
Date: Fri, 13 Apr 2007 13:37:57 -0700
Gerrit, I know the implementation is broken for high rates
I can't resist:
David Miller wrote:
I am really not sure that CCID3 can be implemented well without a lot of
real-time and system load requirements - if you have any suggestions or
know of similar problem areas, input would be very welcome.
I wonder what a DCCP implementation on old BSD would
Putting on my Sally hat:
David Miller wrote:
Eddie, this is an interesting idea, but would you be amenable to the
suggestion I made in another email? Basically if RTT is extremely
low, don't do any of this limiting.
What sense is there to doing any of this for very low RTTs? It is
a very
Gerrit Renker wrote:
Quoting Ian McDonald:
| Will have to read more about rate limiting though as I'm
| not convinced normally about rate limiting schemes etc as they
| invariably end up causing problems when doing things like running a
| huge server with lots of connections.
We pictured
This is a question, not a comment:
Sally and I are actually discussing whether, as a result of the RFC3448bis
work, we should update RFC4342 so that the Receive Rate option in CCID 3
*always* reports the receive rate over the last RTT, rather than the max of (RTT,
time since last RR option).
If,
I'm surprised that this case occurs:
Gerrit Renker wrote:
[CCID 3]: Fix bug in the calculation of very low sending rates
This fixes an error in the calculation of t_ipi when X converges towards
very low sending rates (between 1 and 64 bytes per second).
Although this case may not sound
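The computation in question is t_ipi = s/X; at rates of a few bytes per second the microsecond result needs 64-bit intermediate arithmetic. A sketch under that assumption (function name is mine, not the kernel's fixed-point code):

```c
#include <stdint.h>

/* t_ipi = s / X, expressed in microseconds. The product s * 1000000
 * can exceed 32 bits (e.g. for s = 65535), so it is computed in 64
 * bits before the division. Illustrative sketch, not kernel code. */
static uint32_t t_ipi_usec(uint32_t s, uint32_t x_bps)
{
	return (uint32_t)(((uint64_t)s * 1000000) / x_bps);
}
```

At X = 1 byte/s and s = 1460 this gives a t_ipi of 1460 seconds' worth of microseconds, which is why this low-rate corner needs care.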
Hi Ian,
Sorry for the delay in responding.
I agree that the t_ipi implementation sketched in RFC3448 Section 4.6 is
incomplete with respect to slow applications, idle periods, and the like. :(
What follows is a first cut at a solution. Any thoughts from others??
If t_ipi is used to
I have some minor thoughts relating to this.
- In what units is t_nom kept? I would hope microseconds at least, not
milliseconds. You say dccps_xmit_timer is reset to expire in t_now + rc
milliseconds; I assume you mean that the value t_now + rc is cast to
milliseconds. Clearly high rates
Hi Gerrit,
I don't actually completely understand where you're coming from. The initial
send rate is not 1 packet per second, it is 2-4 packets per RTT, as per
RFC4342. So the initial t_ipi is going to be s/X, where X = 2-4 packets per RTT.
If you follow the logic through RFC3448 4.2-4.4,
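If one does take the initial rate to be up to four packets per RTT, the initial interval falls out independently of s, since t_ipi = s/X = s/(4s/RTT) = RTT/4. A trivial sketch (function name is mine):

```c
/* Initial inter-packet interval when X = 4 * s / rtt: the packet
 * size s cancels and t_ipi is simply a quarter of the RTT.
 * Illustrative only. */
static unsigned initial_t_ipi_usec(unsigned rtt_us)
{
	return rtt_us / 4;
}
```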
- ccid3_hc_tx_send_packet should return a value that is measured in
MICROSECONDS not milliseconds. It also sounds like there is a
rounding error
in step 3a); it should probably return (delay + 500)/1000 at least.
This is used to set a timer to know when to wake up again which is
valid to be
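The rounding fix suggested for step 3a) is just nearest-integer conversion from microseconds to milliseconds; a minimal sketch (name is mine):

```c
/* Convert a delay in microseconds to milliseconds, rounding to
 * nearest rather than truncating, as in (delay + 500) / 1000. */
static long usecs_to_msecs_rounded(long delay_us)
{
	return (delay_us + 500) / 1000;
}
```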
WHOOPSY! I wrote t_ipi when I meant t_nominal, or whatever symbol you choose
for the time the next packet is allowed to be sent. t_ipi should not be
changed; it depends on X_inst.
Eddie Kohler wrote:
If t_ipi is used to schedule transmissions, then the following equation
should be applied
| For what it's worth, it's as close to in the RFC as it can get without a
| revision. The authors of the RFC agree that we meant the initial
| Request-Response RTT to be usable as an initial RTT estimate; the
| working group agreed; errata has been sent.
So we are RFC-compliant for the
is even
weirder), then that might be useful.
Eddie
Gerrit Renker wrote:
Quoting Eddie Kohler:
| The problem I see is that
|* scheduling granularity is at best 1ms
|* hence t_gran/2 is at best 500usec
|* and so when t_ipi < 1ms, packets will always be sent in bursts
|
| So we
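The burst arithmetic behind those bullets can be sketched as follows (illustrative units and names, assuming the sender is woken at most once per scheduling tick):

```c
/* Packets that must be sent back-to-back per scheduler tick to keep
 * the average rate when t_ipi < t_gran: ceil(t_gran / t_ipi),
 * computed here as a ceiling division. */
static unsigned burst_per_tick(unsigned t_gran_us, unsigned t_ipi_us)
{
	return (t_gran_us + t_ipi_us - 1) / t_ipi_us;
}
```

With a 1ms tick and t_ipi = 100us this comes out at bursts of 10 packets.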
Gerrit, this summary is not right.
RFC 3448 says that I_0 represents the number of packets received since the
last loss event. (Section 5.5) In the Linux implementation, this number is
NOT stored in the li_entry list. It must be calculated. This is what Ian's
nonloss manipulations do.
works. Maybe I'm
missing something but I don't think so...
Eddie
Ian McDonald wrote:
On 1/5/07, Eddie Kohler [EMAIL PROTECTED] wrote:
Ian (catching up slowly slowly), here is a nit as nitty as they come.
This diff seems strange to me, since ~ actually does the same thing on
integers
Ian (catching up slowly slowly), here is a nit as nitty as they come.
This diff seems strange to me, since ~ actually does the same thing on
integers and unsigned integers. (This code:
printf("%u %u\n", ~0, ~0U);
will print the same thing twice.)
Perhaps dccplih_interval is a 64-bit
Guys,
I can't follow this code 100%, but here is an analysis of what I can
understand. Summary: Ian is correct (I think).
From Ian's patch, it appears that the OLD code DID NOT include the most
recent loss interval (i.e., the incomplete loss interval, the one that has no
losses) in its
The reason for this is that if you are recalculating i_mean based on
non-loss, you should check after every packet received. However this
involves quite a lot of calculations on linked lists which are CPU
intensive and also stall other processes potentially with locks being
taken. So what I've done is
You shouldn't need to iterate through the list, since i_mean is just
i_tot/w_tot, and w_tot is a constant. You do need to divide, though.
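For reference, a minimal sketch of the RFC 3448 Section 5.4 weighted mean that i_tot/w_tot refers to; weights are scaled by 10 to stay in integer arithmetic (an illustration, not the kernel's fixed-point scheme):

```c
/* RFC 3448 5.4 loss-interval mean: i_mean = i_tot / w_tot, where
 * w_tot is the constant sum of the weights 1,1,1,1,0.8,0.6,0.4,0.2.
 * interval[0] is the most recent of the eight loss intervals. */
static const long w[8] = { 10, 10, 10, 10, 8, 6, 4, 2 };
#define W_TOT 60	/* sum of the scaled weights */

static long tfrc_i_mean(const long interval[8])
{
	long i_tot = 0;
	int i;

	for (i = 0; i < 8; i++)
		i_tot += w[i] * interval[i];
	return i_tot / W_TOT;	/* scaling cancels: mean interval length */
}
```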
If it makes no difference to you I'd recommend going with the simpler version
-- the logic in dccp_li_hist_recalc_recalcloss is difficult to follow; I
Gerrit,
Subtracting two 32-bit numbers which are 2^31 apart will have the same results.
(int32_t) ((uint32_t) 0 - (uint32_t) 0x80000000) == -0x80000000
(int32_t) ((uint32_t) 0x80000000 - (uint32_t) 0) == -0x80000000
The RFC is not in error and your delta_seqno patch should not be accepted.
As
-bit trick recommended by RFC4340.
Eddie
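Eddie's identity is easy to check directly; a sketch (helper name is mine, and the unsigned-to-signed cast relies on the usual wrapping conversion of mainstream compilers):

```c
#include <stdint.h>

/* Signed difference of two unsigned 32-bit values: the subtraction
 * wraps modulo 2^32, so operands exactly 2^31 apart give the same
 * result (INT32_MIN) in either order. */
static int32_t delta_seqno(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b);
}
```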
Eddie Kohler wrote:
Gerrit,
Subtracting two 32-bit numbers which are 2^31 apart will have the same
results.
(int32_t) ((uint32_t) 0 - (uint32_t) 0x80000000) == -0x80000000
(int32_t) ((uint32_t) 0x80000000 - (uint32_t) 0) == -0x80000000
The RFC
Gerrit,
This is cool, but it would be nice to still have the option to use a nominal
packet size 's' and do packet-based congestion control.
Eddie
Gerrit Renker wrote:
[CCID 3]: Track RX/TX packet size `s' using moving-average
Problem:
Currently, the receiver/sender packet size
Gerrit, everyone,
Again, the INTENTION I think was for the 2-4 packets per RTT to apply
IMMEDIATELY, because the Request-Response exchange gave you an initial RTT
estimate. But this is just not what RFC 4342 says. When I figure this out
with Sally we will get back to you, and perhaps
Same comment (it's probably safe to use the Req-Response exchange for an
initial RTT estimate).
E
Gerrit Renker wrote:
[CCID 3]: Avoid `division by zero' errors
Several places of the code divide by the current RTT value. A division-by-zero
error results if this value is 0.
To protect
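The usual guard here is to clamp the divisor to a nonzero minimum; a minimal sketch (name is mine):

```c
/* Clamp an RTT sample to at least 1 microsecond so that divisions
 * by it are always defined. */
static inline unsigned rtt_nonzero(unsigned rtt_us)
{
	return rtt_us ? rtt_us : 1;
}
```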
Hi, a bit of re-clarification: CCIDs 2 and 3 are *not* meant for apps that
NEVER vary their packet size. Rather, they are meant for apps that vary
packet size *for application reasons* (such as codec output), but *not* in
response to congestion. CCIDs 2 and 3 expect to reduce application
Hi, a short note;
Andrea Bittau wrote:
On Thu, Sep 21, 2006 at 09:30:21AM +0100, Gerrit Renker wrote:
1/ TX Buffering: set size of TX ring buffer via socket option.
The size of the TX buffer is interesting in applications which want to do their
own queue management. That is, real-time
Well, here's what I think about service codes. I think I speak for the
authors here. This has all been said previously on the list FWIW.
* I think service codes should be part of the sockaddr for DCCP. The decision
to make them a setsockopt() I think has made things harder. From 9/9/05: