It may look like a Xen issue, though the t1.micro instances were using
some 3.4 versions, too. One had some chaos in the version string; the
other looked more like the common variant. If Amazon does not play
tricks and apply different patches to the same versions of Xen on
different
Hi Stefan,
OK, I think I've figured out why you were unable to reproduce the
slowness. As I mentioned earlier, we use the m2 instance type, which
runs on underlying Xen 3.4, whereas the t1.micro is probably running on
newer infrastructure. So I decided to give m3 a try, and Xen there is
actually newer (4.2).
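Assuming the guest kernel exposes the usual Xen sysfs nodes, the
hypervisor version can be double-checked from inside an instance with
something like:

  # Hypervisor version as logged at boot:
  dmesg | grep -i 'xen version'
  # Same information via the standard Xen guest sysfs interface:
  cat /sys/hypervisor/version/major /sys/hypervisor/version/minor
  cat /sys/hypervisor/version/extra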
I think we need you to give us a good test case (good in the sense that
it reproduces the issue and also is easy to set up for us), because I
tried again this morning with iperf, but on two t1.micro (64-bit) Trusty
daily AMIs, and with that I seem to get identical performance with or
without GRO.
This is what I run on the client side; the server (receiver) just
starts iperf -s.
** Attachment added: "Client side (sender) script"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1391339/+attachment/4259734/+files/iperf-send-test
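For anyone who cannot fetch the attachment, a minimal sender script of
roughly that shape looks like the following sketch; the server address,
run time, and repeat count are placeholders, not the actual attachment
contents:

  #!/bin/sh
  # Sender side: assumes the receiver is already running "iperf -s".
  # SERVER is a placeholder address, not the host used in the bug report.
  SERVER=${1:-192.0.2.10}
  for run in 1 2 3; do
      iperf -c "$SERVER" -t 30 -f m    # 30-second TCP run, report in Mbit/s
  done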
Hi Rodrigo, there is a possibility that the problem is not a regression
in handling GRO but in actually supporting GRO at all. I can find the
following commit in 3.13-rc1:
commit 99d3d587b2b4314ccc8ea066cb327dfb523d598e
Author: Wei Liu <wei.l...@citrix.com>
Date: Mon Sep 30 13:46:34 2013 +0100
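In a mainline git tree that commit can be inspected, and its first
release tag confirmed, with:

  git show --stat 99d3d587b2b4314ccc8ea066cb327dfb523d598e
  git describe --contains 99d3d587b2b4    # expected to report a v3.13-rc1 tag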
I meant that I am not sure how to generate incoming traffic in a way
that makes GRO actually slow things down instead of lowering the
processing impact.
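The GRO on/off comparison itself is straightforward to drive on the
receiver with ethtool (eth0 here is an assumption; substitute whatever
interface the instance uses):

  ethtool -k eth0 | grep generic-receive-offload   # show current GRO state
  ethtool -K eth0 gro off                          # disable GRO, re-run test
  ethtool -K eth0 gro on                           # re-enable GRO, re-run test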
** Changed in: linux (Ubuntu)
Importance: Undecided => Medium
** Tags added: kernel-da-key
Hi Stefan,
Interesting finding indeed; it looks like the author has only tested on
Xen 4.4, while the EC2 instance type that we use reports Xen 3.4, as
the log line below shows:
Xen version: 3.4.3.amazon (preserve-AD)
I'm using a common S3 URL in my test cases and the results are the same
either using
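A throughput check of that shape could look like the following sketch;
the URL is a placeholder, not the object actually being tested:

  # Fetch once, discard the body, and report the average download speed.
  curl -o /dev/null -sS -w 'speed: %{speed_download} bytes/sec\n' \
      https://s3.amazonaws.com/example-bucket/large-test-object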
This recent mainline commit might be relevant:
73d3fe6d1c6d840763ceafa9afae0aaafa18c4b5 (gro: fix aggregation for skb
using frag_list)
That commit notes that it fixes a regression introduced by 8a29111c7ca6
(net: gro: allow to build full sized skb), which first appeared in
Trusty. That fix is
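One way to check whether a given kernel tree already carries both the
fix and the regression it repairs (assuming a git checkout of the
Ubuntu Trusty kernel) is to search the commit messages:

  # Should match if the fix has been applied:
  git log --oneline --grep='gro: fix aggregation for skb using frag_list'
  # Should match the commit that introduced the regression:
  git log --oneline --grep='net: gro: allow to build full sized skb'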
Hi Kamal, thanks for putting together a PPA with the test kernel, but
unfortunately I had the same results:
Linux runtime-common.4 3.13.0-40-generic #68+g73d3fe6-Ubuntu SMP Tue Nov
11 16:39:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
GRO enabled:
root@runtime-common.23 ~# for i in {1..3}; do ab -n
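The command line above is truncated; a loop of that shape would look
like the following, where the request count, concurrency, and URL are
placeholders rather than the values actually used:

  for i in {1..3}; do
      ab -n 1000 -c 10 http://192.0.2.10/testfile | grep 'Transfer rate'
  done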
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: linux (Ubuntu)
Status: New => Confirmed
FWIW, here are the speeds for the Lucid instances with a custom 3.8.11
kernel, which I've used as a baseline:
Transfer rate: 93501.64 [Kbytes/sec] received
Transfer rate: 84949.88 [Kbytes/sec] received
Transfer rate: 84795.65 [Kbytes/sec] received
Rodrigo.
** Tags added: regression-release trusty
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1391339
Title:
Trusty kernel inbound network performance regression when GRO is
enabled