Hi Eric
Sorry for coming late to the discussion.
On Thu, Apr 16, 2015 at 05:42:16AM -0700, Eric Dumazet wrote:
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails in which two
of us guessed what he was trying to get at.
On Tue, 2015-06-02 at 10:52 +0100, Wei Liu wrote:
Hi Eric
Sorry for coming late to the discussion.
On Thu, Apr 16, 2015 at 05:42:16AM -0700, Eric Dumazet wrote:
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails in
On Thu, Apr 16, 2015 at 1:42 PM, Eric Dumazet eric.duma...@gmail.com wrote:
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails in which two
of us guessed what he was trying to get at. That's what I was
complaining about.
My
On 04/16/2015 10:20 AM, Daniel Borkmann wrote:
So, mid-term, it would be much more beneficial if you attempt to fix the
underlying driver issues that actually cause high tx completion delays,
instead of reintroducing bufferbloat, so that we all can move forward
and not backwards in time.
Yes, I
On 04/15/2015 07:19 PM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 19:04 +0100, George Dunlap wrote:
Maybe you should stop wasting all of our time and just tell us what
you're thinking.
I think you are making me waste my time.
I already gave all the hints in prior discussions.
Right, and I
On 04/16/2015 10:56 AM, George Dunlap wrote:
On 04/15/2015 07:19 PM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 19:04 +0100, George Dunlap wrote:
Maybe you should stop wasting all of our time and just tell us what
you're thinking.
I think you are making me waste my time.
I already gave all the
From: George Dunlap
Sent: 16 April 2015 09:56
On 04/15/2015 07:19 PM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 19:04 +0100, George Dunlap wrote:
Maybe you should stop wasting all of our time and just tell us what
you're thinking.
I think you are making me waste my time.
I already
On Thu, 2015-04-16 at 12:39 +0100, George Dunlap wrote:
On 04/15/2015 07:17 PM, Eric Dumazet wrote:
Do not expect me to fight bufferbloat alone. Be part of the challenge,
instead of trying to get back to proven bad solutions.
I tried that. I wrote a description of what I thought the
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails in which two
of us guessed what he was trying to get at. That's what I was
complaining about.
My big complaint is that I suggested testing with the sysctl doubled, which
gave good
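For anyone trying to reproduce that test: the knob being referred to appears to be net.ipv4.tcp_limit_output_bytes, whose default at the time was, if memory serves, 131072 bytes, so doubling it means writing 262144. A minimal sketch, assuming that path and that default:

#include <stdio.h>

int main(void)
{
	/* Equivalent to: sysctl -w net.ipv4.tcp_limit_output_bytes=262144
	 * (2 * the assumed 131072-byte default); needs root. */
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_limit_output_bytes", "w");

	if (!f) {
		perror("tcp_limit_output_bytes");
		return 1;
	}
	fprintf(f, "%d\n", 2 * 131072);
	return fclose(f) == 0 ? 0 : 1;
}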
On Thu, Apr 16, 2015 at 10:22 AM, David Laight david.lai...@aculab.com wrote:
ISTM that you are changing the wrong knob.
You need to change something that affects the global amount of pending tx data,
not the amount that can be buffered by a single connection.
Well it seems like the problem
On 04/15/2015 07:17 PM, Eric Dumazet wrote:
Do not expect me to fight bufferbloat alone. Be part of the challenge,
instead of trying to get back to proven bad solutions.
I tried that. I wrote a description of what I thought the situation
was, so that you could correct me if my understanding
At 12:39 +0100 on 16 Apr (1429187952), George Dunlap wrote:
Your comment lists three benefits:
1. better RTT estimation
2. faster recovery
3. high rates
#3 is just marketing fluff; it's also contradicted by the statement that
immediately follows it -- i.e., there are drivers for which the
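For context, the comment whose benefit list is being picked apart here reads, as best it can be reconstructed from kernels of that era (wording may differ slightly), roughly as follows; the statement that immediately follows item 3 is the "Alas" caveat:

/* TCP Small Queues :
 * Control number of packets in qdisc/devices to two packets / or ~1 ms.
 * This allows for :
 *  - better RTT estimation and ACK scheduling
 *  - faster recovery
 *  - high rates
 * Alas, some drivers / subsystems require a fair amount
 * of queued bytes to ensure line rate.
 * One example is wifi aggregation (802.11 AMPDU)
 */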
On Mon, Apr 13, 2015 at 2:49 PM, Eric Dumazet eric.duma...@gmail.com wrote:
On Mon, 2015-04-13 at 11:56 +0100, George Dunlap wrote:
Is the problem perhaps that netback/netfront delays TX completion?
Would it be better to see if that can be addressed properly, so that
the original purpose of
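A rough way to see why delayed TX completion matters to TSQ: the bytes an skb charges to the socket are only given back when the driver completes the transmit, so the TSQ budget divided by the completion delay bounds throughput. A toy calculation, with the cap and the delay both assumed values rather than anything measured on netback/netfront:

#include <stdio.h>

int main(void)
{
	const double limit_bytes = 131072;	/* assumed static TSQ cap */
	const double completion_delay = 0.001;	/* hypothetical 1 ms TX-completion delay */

	/* At most limit_bytes sit below the socket, and each byte stays
	 * charged to the socket until the driver completes the transmit,
	 * so throughput is bounded by limit / delay regardless of link speed. */
	double ceiling_bps = limit_bytes / completion_delay * 8;

	printf("~%.0f Mbit/s ceiling with a %.1f ms completion delay\n",
	       ceiling_bps / 1e6, completion_delay * 1e3);
	return 0;
}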
On Wed, 2015-04-15 at 14:43 +0100, George Dunlap wrote:
On Mon, Apr 13, 2015 at 2:49 PM, Eric Dumazet eric.duma...@gmail.com wrote:
On Mon, 2015-04-13 at 11:56 +0100, George Dunlap wrote:
Is the problem perhaps that netback/netfront delays TX completion?
Would it be better to see if that
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
On 04/15/2015 05:38 PM, Eric Dumazet wrote:
My thought is that instead of these long talks you guys should read the
code:
/* TCP Small Queues :
* Control number of packets in qdisc/devices to two
On Wed, 2015-04-15 at 19:04 +0100, George Dunlap wrote:
Maybe you should stop wasting all of our time and just tell us what
you're thinking.
I think you are making me waste my time.
I already gave all the hints in prior discussions.
Rome was not built in one day.
On Wed, 2015-04-15 at 11:19 -0700, Rick Jones wrote:
Well, I'm not sure that it is George and Jonathan themselves who don't
want to change a sysctl, but the customers who would have to tweak that
in their VMs?
Keep in mind some VM users install custom qdisc, or even custom TCP
sysctls.
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
Which means that max(2*skb->truesize, sk->sk_pacing_rate >> 10) is
*already* larger for Xen; that calculation mentioned in the comment is
*already* doing the right thing.
Sigh.
1ms of traffic at 40Gbit is 5 MBytes
The reason for the cap to
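To make the arithmetic in this sub-thread concrete: sk_pacing_rate is in bytes per second, and shifting it right by 10 divides by 1024, i.e. roughly one millisecond of traffic at the pacing rate; at 40Gbit/s that is about 5 MB, which is why the static cap is what actually bites. A standalone restatement of the quoted formula (the real code lives in net/ipv4/tcp_output.c; the 131072-byte cap and the 66000-byte TSO-skb truesize are assumptions for illustration):

#include <stdio.h>
#include <stdint.h>

static uint64_t tsq_limit(uint64_t skb_truesize, uint64_t pacing_rate,
			  uint64_t cap_bytes)
{
	uint64_t limit = 2 * skb_truesize;		/* at least two packets */

	if (pacing_rate >> 10 > limit)			/* ~1 ms of bytes at the pacing rate */
		limit = pacing_rate >> 10;
	return limit < cap_bytes ? limit : cap_bytes;	/* static sysctl cap */
}

int main(void)
{
	uint64_t rate = 40ull * 1000 * 1000 * 1000 / 8;	/* 40Gbit/s in bytes per second */

	printf("1 ms of traffic at 40Gbit/s = %llu bytes (~5 MB)\n",
	       (unsigned long long)(rate / 1000));
	printf("capped TSQ limit for that flow = %llu bytes\n",
	       (unsigned long long)tsq_limit(66000, rate, 131072));
	return 0;
}

With those numbers the static cap, not the pacing-based term, ends up determining the limit.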
On 04/15/2015 05:38 PM, Eric Dumazet wrote:
My thought is that instead of these long talks you guys should read the
code:
/* TCP Small Queues :
* Control number of packets in qdisc/devices to two packets / or ~1 ms.
* This allows for :
On Wed, 2015-04-15 at 10:55 -0700, Rick Jones wrote:
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address that problem?
Last time I checked, default
On Wed, 2015-04-15 at 18:58 +0100, Stefano Stabellini wrote:
On Wed, 15 Apr 2015, Eric Dumazet wrote:
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
Which means that max(2*skb->truesize, sk->sk_pacing_rate >> 10) is
*already* larger for Xen; that calculation mentioned in the
On 04/15/2015 06:29 PM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
On 04/15/2015 05:38 PM, Eric Dumazet wrote:
My thought is that instead of these long talks you guys should read the
code:
/* TCP Small Queues :
* Control
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address that problem?
rick
On Wed, 15 Apr 2015, Eric Dumazet wrote:
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
Which means that max(2*skb->truesize, sk->sk_pacing_rate >> 10) is
*already* larger for Xen; that calculation mentioned in the comment is
*already* doing the right thing.
Sigh.
1ms of traffic
On 04/15/2015 06:52 PM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 18:41 +0100, George Dunlap wrote:
So you'd be OK with a patch like this? (With perhaps a better changelog?)
-George
---
TSQ: Raise default static TSQ limit
A new dynamic TSQ limit was introduced in c/s 605ad7f18 based
On 04/15/2015 11:08 AM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 10:55 -0700, Rick Jones wrote:
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address that
On 04/15/2015 11:32 AM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 11:19 -0700, Rick Jones wrote:
Well, I'm not sure that it is George and Jonathan themselves who don't
want to change a sysctl, but the customers who would have to tweak that
in their VMs?
Keep in mind some VM users install
On Thu, 2015-04-16 at 12:20 +0800, Herbert Xu wrote:
Eric Dumazet eric.duma...@gmail.com wrote:
We already have netdev->gso_max_size and netdev->gso_max_segs
which are cached into sk->sk_gso_max_size and sk->sk_gso_max_segs
It is quite dangerous to attempt tricks like this because a
tc
Eric Dumazet eric.duma...@gmail.com wrote:
We already have netdev->gso_max_size and netdev->gso_max_segs
which are cached into sk->sk_gso_max_size and sk->sk_gso_max_segs
It is quite dangerous to attempt tricks like this because a
tc redirection or netfilter nat could change the destination
device
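For readers less familiar with that path: the caching happens when a route is attached to the socket (sk_setup_caps() in net/core/sock.c), which is why a later tc redirect or NAT to a different device is dangerous to rely on. A simplified, self-contained restatement; the struct layouts and the example gso_max_segs values are made up for illustration:

#include <stdio.h>

struct net_device { unsigned int gso_max_size, gso_max_segs; };
struct sock { unsigned int sk_gso_max_size, sk_gso_max_segs; };

/* Rough shape of the caching: copied once when the route is set up,
 * not re-checked for every transmitted packet. */
static void setup_caps(struct sock *sk, const struct net_device *dev)
{
	sk->sk_gso_max_size = dev->gso_max_size;
	sk->sk_gso_max_segs = dev->gso_max_segs;
}

int main(void)
{
	struct net_device eth = { 65536, 65535 };	/* hypothetical physical NIC */
	struct net_device vif = { 65536, 8 };		/* hypothetical limited device */
	struct sock sk;

	setup_caps(&sk, &eth);
	/* If traffic is later steered to vif (tc redirect, NAT rewrite),
	 * the socket still carries eth's cached limits. */
	printf("socket caches %u segs; the new device would want %u\n",
	       sk.sk_gso_max_segs, vif.gso_max_segs);
	return 0;
}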