> [ 1] 6.0002-6.0511 sec 40.0 KBytes 6.44 Mbits/sec 50.895 ms (5.1%)
> [ 1] 7.0002-7.0501 sec 40.0 KBytes 6.57 Mbits/sec 49.889 ms (5%)
>> [ 1] 8.00-9.00 sec 40.0 KBytes 328 Kbits/sec 10/0 0 14K/5329 us 8
>> [ 1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1
>> (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0)
On Sat, 2016-06-04 at 22:55 +0300, Jonathan Morton wrote:
> > On 4 Jun, 2016, at 20:49, Eric Dumazet <eric.duma...@gmail.com> wrote:
> >
> > ECN (as in RFC 3168) is well known to be trivially exploited by peers
> > pretending to be ECN ready, but not reac
On Mon, 2016-05-16 at 01:34 +0300, Roman Yeryomin wrote:
> qdisc fq_codel 8003: parent :3 limit 1024p flows 16 quantum 1514
> target 80.0ms ce_threshold 32us interval 100.0ms ecn
> Sent 1601271168 bytes 1057706 pkt (dropped 1422304, overlimits 0 requeues 17)
> backlog 1541252b 1018p requeues 17
On Fri, 2016-05-06 at 17:25 +0200, moeller0 wrote:
> Hi Eric,
>
> > On May 6, 2016, at 15:25 , Eric Dumazet <eric.duma...@gmail.com> wrote:
> > Angles of attack :
> >
> > 1) I will provide a per device /sys/class/net/eth0/gro_max_frags so that
> > we
On Fri, 2016-05-06 at 13:46 +0200, moeller0 wrote:
> Hi Jesper,
>
> > On May 6, 2016, at 13:33 , Jesper Dangaard Brouer wrote:
> >
> >
> > On Fri, 6 May 2016 10:41:53 +0200 moeller0 wrote:
> >
> >> Speaking out of total ignorance, I ask why not
On Thu, 2016-05-05 at 19:25 +0300, Roman Yeryomin wrote:
> On 5 May 2016 at 19:12, Eric Dumazet <eric.duma...@gmail.com> wrote:
> > On Thu, 2016-05-05 at 17:53 +0300, Roman Yeryomin wrote:
> >
> >>
> >> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p fl
On Thu, 2016-05-05 at 17:53 +0300, Roman Yeryomin wrote:
>
> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
> quantum 1514 target 5.0ms interval 100.0ms ecn
> Sent 12306 bytes 128 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 0
On Tue, 2016-05-03 at 10:37 -0700, Dave Taht wrote:
> Thus far this batch drop patch is testing out beautifully. Under a
> 900Mbit flood going into 100Mbit on the pcengines apu2, cpu usage for
> ksoftirqd now doesn't crack 10%, where before (under
> pie,pfifo,fq_codel,cake & the prior fq_codel)
On Mon, 2016-05-02 at 18:43 +0300, Roman Yeryomin wrote:
> On 2 May 2016 at 18:07, Eric Dumazet <eric.duma...@gmail.com> wrote:
> > On Mon, 2016-05-02 at 17:18 +0300, Roman Yeryomin wrote:
> >
> >> Imagine you are a video operator, have MacBook Pro, gigabit LAN and
On Mon, 2016-05-02 at 17:09 +0300, Roman Yeryomin wrote:
> So if I run some UDP download you will just kill me? Sounds broken.
>
Seriously guys, I never suggested killing a _task_, only the _flow_.
Meaning dropping packets. See?
If you do not want to drop packets, do not use fq_codel and simply
On Mon, 2016-05-02 at 16:47 +0300, Roman Yeryomin wrote:
> So it looks to me that fq_codel is just broken if it needs half of my
> resources.
Agreed.
When I wrote fq_codel, I was not expecting that one UDP socket could
fill fq_codel with packets, since we have standard backpressure.
SO_SNDBUF
On Sun, 2016-05-01 at 11:26 -0700, Dave Taht wrote:
> On Sun, May 1, 2016 at 10:59 AM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> >
> > Well, just _kill_ the offender, instead of trying to be gentle.
>
> I like it. :) Killing off a malfunctioning program flooding th
On Sun, 2016-05-01 at 23:35 +0300, Jonathan Morton wrote:
> > On 1 May, 2016, at 21:46, Eric Dumazet <eric.duma...@gmail.com> wrote:
> >
> > Optimizing the search function is not possible, unless you slow down the
> > fast path. This was my design choice.
>
>
On Sun, 2016-05-01 at 11:46 -0700, Eric Dumazet wrote:
> Just drop half the backlog packets instead of 1 (maybe add a cap of 64
> packets to avoid too-big bursts of kfree_skb(), which might add cpu
> spikes) and the problem is gone.
>
I used the following patch and it indeed solved the issue
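Eric's batch-drop idea above can be sketched in a few lines. This is an illustrative Python model of the policy (drop up to half of the backlog in one invocation, capped at 64 packets), not the kernel patch itself; the function name and deque-based queue are made up for the sketch.

```python
from collections import deque

DROP_BATCH_CAP = 64  # cap suggested above, to bound kfree_skb() bursts

def fq_codel_drop_batch(backlog: deque) -> int:
    """Drop up to half of the backlog in one pass, capped at
    DROP_BATCH_CAP packets.  Returns the number dropped."""
    to_drop = min(len(backlog) // 2, DROP_BATCH_CAP)
    for _ in range(to_drop):
        backlog.popleft()  # head-drop; the kernel frees the skb here
    return to_drop

# A 1000-packet backlog sheds only 64 packets per invocation, so the
# cpu cost per enqueue stays bounded even under a UDP flood.
q = deque(range(1000))
dropped = fq_codel_drop_batch(q)
```

The cap is what keeps the fix from trading one cpu spike (a drop per enqueue) for another (freeing hundreds of skbs at once).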
On Sat, 2016-04-30 at 20:41 -0700, Dave Taht wrote:
> >>>
> >>> 45.78% [kernel] [k] fq_codel_drop
> >>> 3.05% [kernel] [k] ag71xx_poll
> >>> 2.18% [kernel] [k] skb_release_data
> >>> 2.01% [kernel] [k] r4k_dma_cache_inv
>
> The udp flood behavior is
On Mon, 2016-04-18 at 07:16 +0200, Michal Kazior wrote:
>
> I guess a .h file can give the compiler an opportunity for more
> optimizations. With .c you would need LTO, which I'm not sure is
> available everywhere.
>
This makes little sense really. Otherwise everything would be in .h
files.
On Wed, 2015-06-17 at 08:49 +0300, Jonathan Morton wrote:
2) The active flow counter is now an atomic-access variable. This is really
just an abundance of caution.
Certainly not needed.
Qdisc enqueue() dequeue() are done under qdisc spinlock protection.
Oh well. Today really was a bad day for me.
Fortunately, tomorrow is almost here.
On Wed, 2015-04-15 at 21:16 -0700, Kathleen Nichols wrote:
...otherwise known as limiting the size of buffers?
Go ahead, Dave, surely liquor doesn't keep you from laughing.
Gee, I thought the code was
On Tue, 2014-05-20 at 20:16 +1000, Andrew McGregor wrote:
That's about what constitutes a flow. fq_codel as implemented in
linux works per (source ip, dest ip, protocol, source port, dest port)
5-tuple. Linux should probably support multiple flow hashing
algorithms in the kernel.
Right.
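The 5-tuple classification described above can be modeled with a short sketch. Python's tuple hash plus a random perturbation stands in for the kernel's Jenkins hash and per-qdisc random seed; the bucket count matches fq_codel's default of 1024 flows. Names here are illustrative, not kernel identifiers.

```python
import random

FLOWS = 1024  # fq_codel's default number of sub-queues
SEED = random.getrandbits(32)  # stand-in for the kernel's random hash seed

def flow_bucket(src_ip, dst_ip, proto, src_port, dst_port):
    """Map a (source ip, dest ip, protocol, source port, dest port)
    5-tuple to one of FLOWS buckets, as fq_codel does conceptually."""
    return hash((src_ip, dst_ip, proto, src_port, dst_port, SEED)) % FLOWS

# The same 5-tuple always maps to the same bucket, so packets of one
# flow stay in order; distinct flows usually land in distinct buckets.
b = flow_bucket("10.0.0.1", "10.0.0.2", 6, 12345, 80)
```

Supporting multiple hashing algorithms, as suggested above, would amount to making this function pluggable per qdisc.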
On Tue, 2014-05-20 at 08:31 -0700, Eric Dumazet wrote:
On Sat, 2013-08-31 at 13:47 -0700, Dave Taht wrote:
Eric Dumazet just posted a pure fq scheduler (using the highly
optimized red/black trees in the kernel)
http://marc.info/?l=linux-netdev&m=137740009008261&w=2
which scales to millions of concurrent flows per qdisc. Jon Corbet
wrote
On Mon, 2013-07-15 at 06:57 -0700, Eric Dumazet wrote:
By the way, tcp_cong.c has a race in its list handling: list_move() is
not RCU compatible.
Oh well, list_move() is fine, ignore this false statement.
On Fri, 2013-07-12 at 11:34 +0200, Jesper Dangaard Brouer wrote:
I also think of fq_codel as a good replacement for pfifo_fast, as the
3-PRIO bands in pfifo_fast are replaced with something smarter in
fq_codel. (IMHO please don't try to add a prio_fq_codel, just because
pfifo_fast had prio
On Fri, 2013-07-12 at 18:36 +0200, Sebastian Moeller wrote:
Question, what stops the same attacker to also fudge the TOS bits (say
to land in priority band 0)? Just asking...
This kind of thing is filtered before those packets arrive to the tx
queue where pfifo_fast is plugged ;)
TOS
On Fri, 2013-07-12 at 12:54 -0400, Dave Taht wrote:
My point was that the same program would be just as damaging against
pfifo_fast.
Or just think of SYN flood attack.
For which other defenses exist.
If someone uses pfifo_fast, it needs no particular protection right now
to be able to log
On Fri, 2013-07-12 at 13:35 -0400, Dave Taht wrote:
Against a syn flood attack?
Yes. SYNACK messages are in band 1. SYNACK messages might be
dropped, but your precious management traffic will not.
That's the point you absolutely missed. It's kind of incredible.
I guess I'm still
On Thu, 2013-07-11 at 10:09 -0700, Dave Taht wrote:
In my default environments (wifi, mainly) the hardware queues have
very different properties.
I'm under the impression that in at least a few ethernet devices they
are essentially the same. That said, in the sch_mq case, an entirely
On Thu, 2013-07-11 at 14:18 -0700, Dave Taht wrote:
I have incidentally long thought that you are also tweaking target and
interval for your environment?
Google data centers are spread all over the world, and speed of light
was not increased by wonderful Google engineers,
that's a real pity I
On Tue, 2013-07-09 at 09:57 +0200, Toke Høiland-Jørgensen wrote:
Mikael Abrahamsson swm...@swm.pp.se writes:
For me, it shows that FQ_CODEL indeed affects TCP performance
negatively for long links, however it looks like the impact is only
about 20-30%.
As far as I can tell, fq_codel's
On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
Eric Dumazet eric.duma...@gmail.com writes:
What do you mean ? This makes little sense to me.
The data from my previous post
(http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
shows fq_codel achieving
On Tue, 2013-05-14 at 03:24 -0700, Dave Taht wrote:
As for dealing with incoming vs outgoing traffic, it might be possible
to use connection tracking to successfully re-mark traffic on incoming
to match the outgoing.
Indeed, we had a discussion about that during Netfilter workshop 2013 in
On Wed, 2013-05-08 at 15:25 -0700, Dave Taht wrote:
Heh. I am hoping you are providing this as a negative proof!? as the
strict prioritization of this particular linux scheduler means that a
single full rate TCP flow in class 1:1 will completely starve classes
1:2 and 1:3.
Some level of
On Tue, 2013-05-07 at 14:56 -0500, Wes Felter wrote:
Is it time for prio_fq_codel or wfq_codel? That's what comes to mind
when seeing the BitTorrent vs. VoIP results.
Sure !
eth=eth0
tc qdisc del dev $eth root 2>/dev/null
tc -batch <<EOF
qdisc add dev $eth root handle 1: prio bands 3
qdisc
On Thu, 2012-08-30 at 16:19 -0700, Dave Taht wrote:
In that case it will deliver 3 acks in a row from
stream A, and then 3 acks in stream B, in the linux 3.5 version, and
> push the 1500 byte packet from my example to the old flows queue -
Nope, the 1500 byte packet will be sent as normal
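The new-flows / old-flows behavior being debated above can be sketched compactly. This is an illustrative model of fq_codel's deficit round robin scheduling (a flow starts on the new-flows list with one quantum of credit, and once it exhausts its deficit it is refilled and demoted to the old-flows tail); the dict-based flows and function name are not the kernel source.

```python
from collections import deque

QUANTUM = 1514  # fq_codel's default quantum: one full-size packet

def dequeue(new_flows, old_flows):
    """DRR pick: new flows get priority for their first quantum; a flow
    that exhausts its deficit is refilled and moved to the old-flows tail."""
    while new_flows or old_flows:
        lst = new_flows if new_flows else old_flows
        flow = lst[0]
        if flow["deficit"] <= 0:
            flow["deficit"] += QUANTUM
            old_flows.append(lst.popleft())  # demote to old-flows tail
            continue
        if not flow["queue"]:
            lst.popleft()  # drained flow leaves the schedule
            continue
        pkt = flow["queue"].popleft()
        flow["deficit"] -= len(pkt)
        return pkt
    return None

# Two flows: A has two 1000-byte packets, B has one 100-byte packet.
f_a = {"deficit": QUANTUM, "queue": deque(["A" * 1000, "A" * 1000])}
f_b = {"deficit": QUANTUM, "queue": deque(["B" * 100])}
new_flows, old_flows = deque([f_a, f_b]), deque()
order = []
while (pkt := dequeue(new_flows, old_flows)) is not None:
    order.append(pkt[0])
# A sends until its deficit goes negative, then B gets its turn.
```

Note that a flow may overrun its quantum by one packet (the deficit goes negative and is repaid on the next round), which is standard DRR behavior.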
On Sun, 2012-08-26 at 12:02 -0700, Dave Täht wrote:
From: Dave Taht dave.t...@bufferbloat.net
This updates the codel algorithm to more closely match the current
experimental ns2 code. Not presently recommended for mainline.
1) It shortens the search for the minimum by reducing the window
On Sun, 2012-08-05 at 11:14 -0700, Yuchung Cheng wrote:
On Sun, Aug 5, 2012 at 10:35 AM, Eric Dumazet eric.duma...@gmail.com wrote:
On Sun, 2012-08-05 at 19:26 +0200, Eric Dumazet wrote:
It could be a flaw in linux implementation, I admit we had so many bugs
that it could very well
On Sat, 2012-08-04 at 20:06 -0700, Andrew McGregor wrote:
Well, thanks Eric for trying it.
Hmm. How was I that wrong? Because I was supporting that idea.
Time to think.
No problem Andrew ;)
It seems ECN is not well enough understood.
ECN marking a packet has the same effect for the
On Tue, 2012-07-10 at 17:13 +0200, Eric Dumazet wrote:
This introduce TSQ (TCP Small Queues)
TSQ goal is to reduce number of TCP packets in xmit queues (qdisc
device queues), to reduce RTT and cwnd bias, part of the bufferbloat
problem.
sk->sk_wmem_alloc is not allowed to grow above a given limit, or else a
single TCP session can still fill the whole NIC TX ring, since TSQ will
have no effect.
Signed-off-by: Eric Dumazet eduma...@google.com
Cc: Dave Taht dave.t...@bufferbloat.net
Cc: Tom Herbert therb...@google.com
Cc: Matt Mathis mattmat...@google.com
Cc: Yuchung Cheng ych...@google.com
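The TSQ mechanism in the commit message above can be modeled conceptually: a per-socket byte budget bounds how much data sits in the qdisc and NIC queues at any moment, and the skb destructor returns budget as packets leave. The 128KB figure and all names below are illustrative for the sketch, not the kernel code.

```python
TSQ_LIMIT_BYTES = 128 * 1024  # illustrative per-socket cap on queued bytes

class Socket:
    def __init__(self):
        self.wmem_alloc = 0  # bytes currently sitting in qdisc/NIC queues

def tsq_can_send(sk: Socket, pkt_len: int) -> bool:
    """TSQ idea: refuse to push more packets once this socket already has
    TSQ_LIMIT_BYTES below the TCP stack; in the kernel, the skb destructor
    decrements the counter and re-arms transmission as packets drain."""
    return sk.wmem_alloc + pkt_len <= TSQ_LIMIT_BYTES

sk = Socket()
while tsq_can_send(sk, 1500):
    sk.wmem_alloc += 1500  # "queue" another full-size packet
# The socket never has more than the cap queued, regardless of cwnd,
# which removes the artificial RTT bias from deep local queues.
```

This is exactly the backpressure that breaks the "single flow fills the TX ring" failure mode described above.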
On Wed, 2012-07-11 at 08:43 -0700, Ben Greear wrote:
On 07/11/2012 08:25 AM, Eric Dumazet wrote:
On Wed, 2012-07-11 at 08:16 -0700, Ben Greear wrote:
I haven't read your patch in detail, but I was wondering if this feature
would cause trouble for applications that are servicing many
On Wed, 2012-07-11 at 11:38 -0700, Andi Kleen wrote:
Eric Dumazet eric.duma...@gmail.com writes:
+
+ if (!sock_owned_by_user(sk)) {
+ if ((1 << sk->sk_state) &
+     (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 |
+      TCPF_CLOSING
On Mon, 2012-07-09 at 00:08 -0700, David Miller wrote:
I'm suspicious and anticipate that 10G will need more queueing than
you are able to get away with tg3 at 1G speeds. But it is an exciting
idea nonetheless :-)
There is a fundamental problem calling any xmit function from skb
destructor.
On Fri, 2012-06-29 at 06:51 +0200, Eric Dumazet wrote:
My long term plan is to reduce number of skbs queued in Qdisc for TCP
stack, to reduce RTT (removing the artificial RTT bias because of local
queues)
a preliminary patch to give the rough idea:
sk->sk_wmem_alloc not allowed to grow above
From: Eric Dumazet eduma...@google.com
At enqueue time, check sojourn time of packet at head of the queue,
and return NET_XMIT_CN instead of NET_XMIT_SUCCESS if this sojourn
time is above codel @target.
This permits local TCP stack to call tcp_enter_cwr() and reduce its cwnd
without drops
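The enqueue-time check described in the patch above can be sketched as follows. Timestamps are passed explicitly for determinism, the constants mirror codel's 5 ms default target, and the function shape is an illustration of the idea, not the kernel diff.

```python
import time

CODEL_TARGET = 0.005  # codel's default 5 ms target
NET_XMIT_SUCCESS, NET_XMIT_CN = 0, 1

def enqueue(queue, pkt, now=None):
    """Append (timestamp, pkt); return NET_XMIT_CN if the packet at the
    head of the queue has already waited longer than the codel target,
    so the local TCP stack can call tcp_enter_cwr() and reduce cwnd
    without an actual drop."""
    now = time.monotonic() if now is None else now
    queue.append((now, pkt))
    head_ts, _ = queue[0]
    if now - head_ts > CODEL_TARGET:
        return NET_XMIT_CN
    return NET_XMIT_SUCCESS

q = []
r1 = enqueue(q, "p1", now=0.000)  # empty queue: no standing delay
r2 = enqueue(q, "p2", now=0.010)  # head has waited 10 ms > 5 ms target
```

Nandita's concern below (cwnd reduced twice in one RTT, once via NET_XMIT_CN at enqueue and once via ECN at dequeue) applies exactly to the boundary this sketch sits on.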
On Thu, 2012-06-28 at 16:52 -0700, Nandita Dukkipati wrote:
As you know I really like this idea. My main concern is that the same
packet could cause TCP to reduce cwnd twice within an RTT - first on
enqueue and then if this packet is ECN marked on dequeue. I don't
think this is the desired
On Thu, 2012-06-28 at 19:07 +0200, Eric Dumazet wrote:
From: Eric Dumazet eduma...@google.com
At enqueue time, check sojourn time of packet at head of the queue,
and return NET_XMIT_CN instead of NET_XMIT_SUCCESS if this sojourn
time is above codel @target.
This permits local TCP stack
On Mon, 2012-06-25 at 11:25 -0700, Dave Taht wrote:
On Mon, Jun 25, 2012 at 10:48 AM, Jonathan Morton chromati...@gmail.com wrote:
My impression is that ECN ignorant flows do exist, because of stupid
routers rather than malice, and that Dave considers this a problem worth
tackling in