Jonathan Morton writes:
> Just to add - I think the biggest impediment to experimentation in
> asynchronous logic is the complete absence of convenient Muller
> C-element gates in the 74-series logic family. If you want to build
> some, I recommend using NAND and OR gates as inputs to
On 2018-11-23, 08:33, "Dave Taht" wrote:
Back in the day, I was a huge fan of async logic, which I first
encountered via caltech's cpu and later the amulet.
https://en.wikipedia.org/wiki/Asynchronous_circuit#Asynchronous_CPU
...
I've never really understood why it
On Tue, Nov 27, 2018 at 10:44 AM Kathleen Nichols wrote:
>
>
> I have been kind of blown away by this discussion. Jim Gettys kind of
> kicked off the current wave of dealing with full queues, dubbing it
> "bufferbloat". He wanted to write up how it happened so that people
> could start on a
On 2018-11-27, 10:31, "Stephen Hemminger" wrote:
With asynchronous circuits there is too much unpredictability and
instability.
I seem to remember there are even cases where two inputs arrive at once and
the output is non-deterministic.
IIRC they talked about that some too. I think maybe
I have been kind of blown away by this discussion. Jim Gettys kind of
kicked off the current wave of dealing with full queues, dubbing it
"bufferbloat". He wanted to write up how it happened so that people
could start on a solution and I was enlisted to get an article written.
We tried to draw on
> On 27 Nov, 2018, at 3:19 pm, Michael Richardson wrote:
>
> If the drops are due to noise, then I don't think it will help.
> The congestion signals should already getting made.
If they are drops due to noise, then they are not congestion signals at all, as
they occur independently of whether
On Tue, Nov 27, 2018 at 10:31 AM Stephen Hemminger
wrote:
>
> On Tue, 27 Nov 2018 18:14:01 +
> "Holland, Jake" wrote:
>
> > On 2018-11-23, 08:33, "Dave Taht" wrote:
> > Back in the day, I was a huge fan of async logic, which I first
> > encountered via caltech's cpu and later the
On Tue, 27 Nov 2018 18:14:01 +
"Holland, Jake" wrote:
> On 2018-11-23, 08:33, "Dave Taht" wrote:
> Back in the day, I was a huge fan of async logic, which I first
> encountered via caltech's cpu and later the amulet.
>
>
On Mon, Nov 26, 2018 at 1:30 PM Jonathan Morton wrote:
>
> > On 26 Nov, 2018, at 9:08 pm, Pete Heist wrote:
> >
> > So I just thought to continue the discussion- when does the CoDel part of
> > fq_codel actually help in the real world?
>
> Fundamentally, without Codel the only limits on the
On Mon, Nov 26, 2018 at 1:56 PM Michael Welzl wrote:
>
> Hi folks,
>
> That “Michael” dude was me :)
>
> About the stuff below, a few comments. First, an impressive effort to dig all
> of this up - I also thought that this was an interesting conversation to have!
>
> However, I would like to
Dave Taht writes:
> I've done things like measure induced latency on wireguard streams of
> late and codel keeps it sane. still, wireguard internally is optimized
> for single flow "dragster" performance, and I'd like it to gain the
> same fq_codel optimization that did such nice things for
On Tue, Nov 27, 2018 at 12:54 PM Toke Høiland-Jørgensen wrote:
>
> Dave Taht writes:
>
> > I've done things like measure induced latency on wireguard streams of
> > late and codel keeps it sane. still, wireguard internally is optimized
> > for single flow "dragster" performance, and I'd like it
Dave Taht writes:
> On Tue, Nov 27, 2018 at 12:54 PM Toke Høiland-Jørgensen wrote:
>>
>> Dave Taht writes:
>>
>> > I've done things like measure induced latency on wireguard streams of
>> > late and codel keeps it sane. still, wireguard internally is optimized
>> > for single flow "dragster"
OK, wow, this conversation got long. and I'm still 20 messages behind.
Two points, and I'm going to go back to work, and maybe I'll try to
summarize a table
of the competing viewpoints, as there's far more than BDP of
discussion here, and what
we need is sqrt(bdp) to deal with all the different
Hi Kathie,
[long time, no see :-)]
I'm well aware of the CoDel paper and it really does a nice job
of explaining the good queue and bad queue properties. What we
found is that loss-based TCP CCs systematically build standing
queues. Their positive function is to keep up the link utilization,
Just a small clarification:
>> To me the switch to head dropping essentially killed the tail loss RTO
>> problem, eliminated most of the need for ecn.
>
> I doubt that: TCP will need to retransmit that packet at the head, and that
> takes an RTT - all the packets after it will need to wait in
On Mon, Nov 26, 2018 at 11:28 AM Neal Cardwell wrote:
>
> I believe Dave Taht has pointed out, essentially, that the "codel" part of
> fq_codel can be useful in cases where the definition of "flow" is not visible
> to fq_codel, so that "fq" part is inactive. For example, if there is VPN
>
Hi,
On 27.11.18 at 23:19 Luca Muscariello wrote:
> I suggest re-reading this
>
> https://queue.acm.org/detail.cfm?id=3022184
Probably not without this afterwards:
https://ieeexplore.ieee.org/document/8117540
(especially sections II and III).
Regards,
Roland
Just to add - I think the biggest impediment to experimentation in asynchronous
logic is the complete absence of convenient Muller C-element gates in the
74-series logic family. If you want to build some, I recommend using NAND and
OR gates as inputs to active-low SR flipflops.
- Jonathan
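The C-element Jonathan describes is a state-holding gate: the output follows the inputs only when they agree, and holds its previous value otherwise. A minimal behavioral sketch in Python (illustrative names only, not a gate-level model of the NAND/OR + SR-flipflop construction he suggests):

```python
# Behavioral model of a two-input Muller C-element (illustrative sketch,
# not a netlist of the NAND/OR + active-low SR-flipflop construction).

def c_element(a: int, b: int, prev: int) -> int:
    """Output follows the inputs when they agree, else holds prev state."""
    return a if a == b else prev

# Drive it through a sample input sequence and record the output.
state = 0
trace = []
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    state = c_element(a, b, state)
    trace.append(state)

print(trace)  # [0, 0, 1, 1, 0]
```

The hold behavior while the inputs disagree is what makes the C-element useful for handshake completion detection in asynchronous pipelines: the output only transitions once every input has transitioned.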
On Tue, Nov 27, 2018 at 2:30 PM Roland Bless wrote:
>
> Hi,
>
> On 27.11.18 at 23:19 Luca Muscariello wrote:
> > I suggest re-reading this
> >
> > https://queue.acm.org/detail.cfm?id=3022184
> Probably not without this afterwards:
> https://ieeexplore.ieee.org/document/8117540
>
> (especially
I suggest re-reading this
https://queue.acm.org/detail.cfm?id=3022184
On Tue 27 Nov 2018 at 21:58, Dave Taht wrote:
> OK, wow, this conversation got long. and I'm still 20 messages behind.
>
> Two points, and I'm going to go back to work, and maybe I'll try to
> summarize a table
> of the
>> I wish I knew of a mailing list where I could get a definitive answer
>> on "modern problems with async circuits", or an update on the kind of
>> techniques the new AI chips were using to keep their power consumption
>> so low. I'll keep googling.
>
> I’d be interested in knowing this as well.
> On Nov 27, 2018, at 8:09 PM, Dave Taht wrote:
>
> I wish I knew of a mailing list where I could get a definitive answer
> on "modern problems with async circuits", or an update on the kind of
> techniques the new AI chips were using to keep their power consumption
> so low. I'll keep
> On Nov 27, 2018, at 9:10 PM, Dave Taht wrote:
>
> EVEN with http 2.0, I would be extremely surprised to learn that many
> websites fit it all into one tcp transaction.
>
> There are very few other examples of TCP traffic requiring a low
> latency response.
This is the crux of what I was
Toke Høiland-Jørgensen writes:
> Luca Muscariello writes:
>
>> This procedure would allow to size FQ_codel but also SFQ.
>> It would be interesting to compare the two under this buffer sizing.
>> It would also be interesting to compare another mechanism that we have
>> mentioned during the
On 11/27/18 3:17 PM, Dave Taht wrote:
...
>
> but now that we all have bedtime reading, I'm going to go back to
> hacking on libcuckoo. :)
>
>
Geez, louise. As if everyone doesn't have enough to do! I apologize. I
did not mean for anyone to completely read the links I sent, just look
at the
anybody have an old ultrasparc T1 they are not using I could buy or
borrow (there's a few cheap ones on ebay)? or a sparc T-series in the
cloud?
I'd like to be able to check some assumptions about atomic memory models
--
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
Pete Heist writes:
> On Nov 27, 2018, at 9:10 PM, Dave Taht
> wrote:
>
> EVEN with http 2.0, I would be extremely surprised to learn that
> many
> websites fit it all into one tcp transaction.
>
> There are very few other examples of TCP traffic requiring a low
Thank you all for the responses!
I was asked a related question by my local WISP, who wanted to know if there
would be any reason that fq_codel or Cake would be an improvement over sfq
specifically for some "noisy links” (loose translation from Czech) in a
backhaul that have some loss but also
Hi,
Am 27.11.18 um 11:29 schrieb Luca Muscariello:
> I have never said that you need to fill the buffer to the max size to
> get full capacity, which is an absurdity.
Yes, it's absurd, but that's what today's loss-based CC algorithms do.
> I said you need at least the BDP so that the queue
I think that this is a very good comment on the discussion at the defense
about the comparison between
SFQ with longest queue drop and FQ_Codel.
A congestion controlled protocol such as TCP or others, including QUIC,
LEDBAT and so on
need at least the BDP in the transmission queue to get full
Hi Luca,
Am 27.11.18 um 10:24 schrieb Luca Muscariello:
> A congestion controlled protocol such as TCP or others, including QUIC,
> LEDBAT and so on
> need at least the BDP in the transmission queue to get full link
> efficiency, i.e. the queue never empties out.
This is not true. There are
On Tue, 27 Nov 2018, Luca Muscariello wrote:
link fully utilized is defined as Q>0 unless you don't include the
packet currently being transmitted. I do, so the transmitter is never idle.
But that's a detail.
As someone who works with moving packets, it's perplexing to me to
interact with
OK. We agree.
That's correct, you need *at least* the BDP in flight so that the
bottleneck queue never empties out.
This can be easily proven using fluid models for any congestion controlled
source no matter if it is
loss-based, delay-based, rate-based, formula-based etc.
A highly paced source
> On 27 Nov, 2018, at 10:54 am, Pete Heist wrote:
>
> …any reason that fq_codel or Cake would be an improvement over sfq
> specifically for some "noisy links” (loose translation from Czech) in a
> backhaul that have some loss but also experience saturation.
If the random loss is low enough
I have never said that you need to fill the buffer to the max size to get
full capacity, which is an absurdity.
I said you need at least the BDP so that the queue never empties out.
The link is fully utilized IFF the queue is never emptied.
On Tue 27 Nov 2018 at 11:26, Bless, Roland (TM)
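The bandwidth-delay product being argued over here is plain arithmetic; a minimal sketch with made-up link figures (the rate and RTT below are illustrative assumptions, not numbers from the thread):

```python
# Bandwidth-delay product arithmetic; the link figures below are made-up
# examples, not numbers taken from the thread.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to keep a bottleneck of the given
    bit rate busy across one round-trip time."""
    return rate_bps * rtt_s / 8  # bits -> bytes

# A 100 Mbit/s bottleneck with a 20 ms round-trip time:
print(bdp_bytes(100e6, 0.020))  # 250000.0 bytes, roughly 167 x 1500-byte packets
```

Luca's point then reads: keep at least this much data in flight and the bottleneck queue never drains; keep less and the link goes idle part of each RTT.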
A BDP is not a large buffer. I'm not unveiling a secret.
And it is just a rule of thumb to get an idea of the working point at which
the protocol is operating.
In practice the protocol is usually working below or above that value.
This is where AQM and ECN help also. So most of the time the protocol is
> On 27 Nov, 2018, at 12:50 pm, Mikael Abrahamsson wrote:
>
> Could someone perhaps comment on the thinking in the transport protocol
> design "crowd" when it comes to this?
BBR purports to aim for the optimum of maximum throughput at minimum latency;
there is a sharp knee in the
On Tue, 27 Nov 2018, Luca Muscariello wrote:
A BDP is not a large buffer. I'm not unveiling a secret.
It's complicated. I've had people throw in my face that I need 2xBDP in
buffer size to smooth things out. Personally I don't want more than 10ms
buffer (max), and I don't see why I should
Luca Muscariello writes:
> This procedure would allow to size FQ_codel but also SFQ.
> It would be interesting to compare the two under this buffer sizing.
> It would also be interesting to compare another mechanism that we have
> mentioned during the defense
> which is AFD + a sparse flow
Hi Luca,
Am 27.11.18 um 12:01 schrieb Luca Muscariello:
> A BDP is not a large buffer. I'm not unveiling a secret.
That depends on speed and RTT (note that typically there are
several flows with different RTTs sharing the same buffer).
The essential point is not how much buffer capacity is
A buffer in a router is sized once. RTT varies.
So BDP varies. That’s as simple as that.
So you just cannot be always at optimum because you don’t know what RTT you
have at any time.
LoLa is not solving that. No protocol could, BTW.
BTW I don’t see any formal proof about queue occupancy in the
Hi,
Am 27.11.18 um 12:40 schrieb Bless, Roland (TM):
> Hi Luca,
>
> Am 27.11.18 um 11:40 schrieb Luca Muscariello:
>> OK. We agree.
>> That's correct, you need *at least* the BDP in flight so that the
>> bottleneck queue never empties out.
>
> No, that's not what I meant, but it's quite simple.
Hi,
Am 27.11.18 um 12:58 schrieb Luca Muscariello:
> A buffer in a router is sized once. RTT varies.
> So BDP varies. That’s as simple as that.
> So you just cannot be always at optimum because you don’t know what RTT
> you have at any time.
The endpoints can measure the RTT. Yes, it's probably
Well, I'm concerned about the delay experienced by people when they surf the
web... flow completion time, which relates not only to the delay of packets as
they are sent from A to B, but also to the utilization.
Cheers,
Michael
> On 27 Nov 2018, at 11:50, Mikael Abrahamsson wrote:
>
> On Tue,
Hi Luca,
Am 27.11.18 um 11:40 schrieb Luca Muscariello:
> OK. We agree.
> That's correct, you need *at least* the BDP in flight so that the
> bottleneck queue never empties out.
No, that's not what I meant, but it's quite simple.
You need: min_inflight = 2 * RTTmin * bottleneck_rate of data in flight to fully
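Roland's formula aside, the steady-state claim both sides seem to share (a fixed window of at least one BDP keeps the bottleneck busy; anything beyond it becomes standing queue) can be sketched as a toy fluid model. Everything below, function name, rates, RTT, is an illustrative assumption, not code or numbers from the thread:

```python
# Toy fluid model of one flow with a fixed window over a bottleneck link.
# All names and figures here are illustrative assumptions.

def steady_state(cwnd_bytes: float, rate_Bps: float, rtt_s: float):
    """Return (utilization, standing_queue_bytes) for a fixed window."""
    bdp = rate_Bps * rtt_s
    if cwnd_bytes < bdp:
        # Window below one BDP: the pipe drains before the next window
        # arrives, so the link sits partly idle and no queue builds.
        return cwnd_bytes / bdp, 0.0
    # At or above one BDP: the link stays busy; the surplus sits in the
    # bottleneck buffer as a standing queue.
    return 1.0, cwnd_bytes - bdp

bdp = 12.5e6 * 0.020  # 12.5 MB/s * 20 ms = 250 kB
for factor in (0.5, 1.0, 2.0):
    print(factor, steady_state(factor * bdp, 12.5e6, 0.020))
```

The one-BDP window is the operating point the thread keeps circling: full utilization with no standing queue in this idealized model; every byte of window beyond it only adds queueing delay.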
> On 27 Nov, 2018, at 1:21 pm, Mikael Abrahamsson wrote:
>
> It's complicated. I've had people throw in my face that I need 2xBDP in
> buffer size to smooth things out. Personally I don't want more than 10ms
> buffer (max), and I don't see why I should need more than that even if
> transfers
Hi Michael,
Am 27.11.18 um 12:04 schrieb Michael Welzl:
> I'm lost in this conversation: I thought it started with a statement saying
> that the queue length must be at least a BDP such that full utilization is
> attained because the queue never drains.
I think it helps to distinguish between
Pete Heist wrote:
> I was asked a related question by my local WISP, who wanted to know if
> there would be any reason that fq_codel or Cake would be an improvement
> over sfq specifically for some "noisy links” (loose translation from
> Czech) in a backhaul that have some loss
Another bit to this.
A router queue is supposed to serve packets no matter what is running at
the controlled end-point, BBR, Cubic or else.
So, delay-based congestion controllers still get hurt in today's Internet
unless they can get their portion of buffer at the line card.
FQ creates incentives for
On Tue, 27 Nov 2018, Luca Muscariello wrote:
If you, Mikael, don't want more than 10ms buffer, how do you achieve that?
class class-default
random-detect 10 ms 2000 ms
That's the only thing available to me on the platforms I have. If you
would like this improved, please reach out to the
On Tue, Nov 27, 2018 at 2:49 PM Mikael Abrahamsson wrote:
> On Tue, 27 Nov 2018, Luca Muscariello wrote:
>
> > If you, Mikael don't want more than 10ms buffer, how do you achieve that?
>
> class class-default
> random-detect 10 ms 2000 ms
>
> That's the only thing available to me on the
On Tue, 27 Nov 2018, Luca Muscariello wrote:
This is a whole different discussion but if you want to have a per-user
context at the BNG level + TM + FQ I'm not sure that kind of beast will
ever exist. Unless you have a very small user fan-out the hardware
clocks could loop over several