On 06/22/2016 03:56 PM, Alexander Duyck wrote:
On Wed, Jun 22, 2016 at 3:47 PM, Eric Dumazet <eric.duma...@gmail.com> wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
Had the bnx2x-driven NICs' firmware not had that rather unfortunate
assumption about MSSes I probably would
On 06/22/2016 03:47 PM, Eric Dumazet wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
On 06/22/2016 11:22 AM, Yuval Mintz wrote:
But seriously, this isn't really anything new but rather a step forward in
the direction we've already taken - bnx2x/qede are already performing
the same
d never have noticed.
happy benchmarking,
rick jones
, as would a comparison of the service demands of the different
single-stream results.
CPU and NIC models would provide excellent context for the numbers.
happy benchmarking,
rick jones
g MTUs to 65535 bytes?
rick jones
On 06/02/2016 10:06 AM, Aaron Conole wrote:
Rick Jones <rick.jon...@hpe.com> writes:
One of the things I've been doing has been setting-up a cluster
(OpenStack) with JumboFrames, and then setting MTUs on instance vNICs
by hand to measure different MTU sizes. It would be a shame if such a
onable proxy for aggregate small packet
performance.
happy benchmarking,
rick jones
On 05/04/2016 10:34 AM, Eric Dumazet wrote:
On Wed, 2016-05-04 at 10:24 -0700, Rick Jones wrote:
Dropping the connection attempt makes sense, but is entering/claiming
synflood really indicated in the case of a zero-length accept queue?
This is a one-time message.
This is how people can
On 05/03/2016 05:25 PM, Eric Dumazet wrote:
On Tue, 2016-05-03 at 23:54 +0200, Peter Wu wrote:
When applications use listen() with a backlog of 0, the kernel would
set the maximum connection request queue to zero. This causes false
reports of SYN flooding (if tcp_syncookies is enabled) or
to the driver, which then either
queued them all, or none of them.
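The zero-length-backlog listen() case reported earlier in this thread can be probed with a short sketch. This is a hedged Python stand-in for the syscall-level behaviour under discussion (the thread itself concerns the kernel's C path); depending on kernel version and the tcp_syncookies setting, the first handshake may still complete (possibly via syncookies, with the false "SYN flooding" report) or the SYN may simply be dropped:

```python
import socket

# Hedged sketch, not from the patch: probe what happens to the first
# connection against a listener created with listen(fd, 0), the case
# Peter Wu reported. Depending on kernel version and tcp_syncookies,
# the handshake may still complete (possibly via syncookies, with the
# false "SYN flooding" report) or the SYN may simply be dropped.
def zero_backlog_probe():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # ephemeral port
    srv.listen(0)                # backlog of zero: listen() itself succeeds
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(1.0)
    try:
        cli.connect(srv.getsockname())
        outcome = "completed"
    except OSError:
        outcome = "dropped"
    finally:
        cli.close()
        srv.close()
    return outcome
```

Either outcome is "correct" in the sense debated here; the point of contention is only whether the drop should be reported as a SYN flood.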
I don't recall seeing similar poor behaviour in Linux; I would have
assumed that the intra-stack flow-control "took care" of it. Perhaps
there is something specific to wpan which precludes that?
happy benchmarking,
rick jones
els perform identically - just as you would
expect.
Running in a VM will likely change things massively and could I suppose
mask other behaviour changes.
happy benchmarking,
rick jones
raj@tardy:~$ cat signatures/toppost
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
size of one byte) doesn't really care about stateless
offloads or MTUs and could show how much difference there is in basic
path length (or I suppose in interrupt coalescing behaviour if the NIC
in question has a mildly dodgy heuristic for such things).
happy benchmarking,
rick jones
Trinity and other fuzzers can hit this WARN far too easily,
resulting in a tainted kernel that hinders automated fuzzing.
Replace it with a rate-limited printk.
Signed-off-by: Dave Jones <da...@codemonkey.org.uk>
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 1ecfa7
inates ambiguity
when analyzing TCP traces.
Yes, our team (including Van Jacobson ;) ) would be sad to not have
sequential IP ID (but then we don't have them for IPv6 ;) )
Your team would not be the only one sad to see that go away.
rick jones
Since the cost of generating them is pretty small (inet-&g
On 03/28/2016 01:01 PM, Eric Dumazet wrote:
Note : file structures got RCU freeing back in 2.6.14, and I do not
think named users ever complained about added cost ;)
Couldn't see the tree for the forest I guess :)
rick
On 03/28/2016 11:55 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 11:44 -0700, Rick Jones wrote:
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
If you mean that a busy DNS resolver spends _most_ of its time doing :
fd = socket()
bind(fd port=0)
< send and receive one frame >
close(fd)
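Eric's sequence above, sketched in Python as a stand-in for the syscall-level pattern; the loopback "peer" is a hypothetical stand-in for the remote DNS server, and the echo protocol is illustrative only:

```python
import socket

# One short-lived UDP socket per query: socket(), bind(port=0), one
# send/receive exchange, close(). The loopback "peer" stands in for the
# remote DNS server; names and the echo protocol are illustrative only.
def one_shot_udp_exchange(payload=b"query"):
    peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    peer.bind(("127.0.0.1", 0))                            # stand-in server
    fd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # fd = socket()
    fd.bind(("127.0.0.1", 0))                              # bind(fd, port=0)
    fd.sendto(payload, peer.getsockname())                 # send one frame
    data, src = peer.recvfrom(2048)
    peer.sendto(data.upper(), src)
    reply, _ = fd.recvfrom(2048)                           # receive one frame
    fd.close()                                             # close(fd)
    peer.close()
    return reply
```

In this pattern the socket lives for exactly one transaction, which is why the per-socket setup/teardown cost (including any call_rcu() on close) matters to a busy resolver.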
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 09:15 -0700, Rick Jones wrote:
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
If you
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
rick jones
commit 911362c70d ("net: add dst_cache support") added a new
kconfig option that gets selected by other networking options.
It seems the intent wasn't to offer this as a user-selectable
option given the lack of help text, so this patch converts it
to a silent option.
Signed-off-by: Dave
n inflight at one time.
And unless one uses the test-specific -e option to provide a very crude
retransmission mechanism based on a socket read timeout, neither does
UDP_RR recover from lost datagrams.
happy benchmarking,
rick jones
http://www.netperf.org/
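The crude -e mechanism described above amounts to a socket read timeout plus resend. A minimal single-process sketch (loss is simulated, and the 0.2 s timeout is an illustrative value, not netperf's):

```python
import socket

# A sketch of the crude timeout-based retransmission the test-specific
# -e option provides: if no response arrives before the socket read
# timeout, resend the request. Loss is simulated here by having the
# responder deliberately ignore the first request; the 0.2 s timeout is
# an illustrative value, not netperf's.
def rr_with_retransmit(timeout=0.2):
    responder = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    responder.bind(("127.0.0.1", 0))
    addr = responder.getsockname()
    requester = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    requester.settimeout(timeout)        # the -e style read timeout
    retransmits = 0
    requester.sendto(b"req", addr)
    responder.recvfrom(2048)             # "lose" the first transaction
    while True:
        try:
            reply, _ = requester.recvfrom(2048)
            break
        except socket.timeout:           # no response in time: resend
            retransmits += 1
            requester.sendto(b"req", addr)
            _, src = responder.recvfrom(2048)
            responder.sendto(b"resp", src)
    requester.close()
    responder.close()
    return reply, retransmits
```

This is "crude" in exactly the sense meant above: a fixed timeout with no RTT estimation or backoff, enough to keep a _RR test going across occasional loss but not a real reliability mechanism.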
may add more thorough
error handling.
How do you see this interacting with VMs getting MTU settings via DHCP?
rick jones
v2:
* Whitespace and code style cleanups from Sergei Shtylyov and Paolo Abeni
* Additional test before printing a warning
Aaron Conole (2):
virtio: Start feature MTU
com>
Note that RDB probably should get some SNMP counters,
so that we get an idea of how many times a loss could be repaired.
And some idea of the duplication seen by receivers, assuming there isn't
already a counter for such a thing in Linux.
happy benchmarking,
rick jones
Ideally, if th
ing a non-zero IP ID on fragments with
DF set?
rick jones
We need to increment the IP identifier in UFO, but I only see one
device (neterion) that advertises NETIF_F_UFO -- honestly, removing
that feature might be another good simplification!
Tom
--
-Ed
ne can try to craft
things so there is no storage I/O of note, it would still be better to
use a network-specific tool such as netperf or iperf. Minimize the
number of variables.
happy benchmarking,
rick jones
k local multicast */
#define BR_GROUPFWD_DEFAULT 0
/* Don't allow forwarding of control protocols like STP, MAC PAUSE and LACP */
If you are going to 9000, why not just go ahead and use the maximum size
of an IP datagram?
rick jones
Or Gerlitz <ogerl...@mellanox.com>
Reviewed-By: Rick Jones <rick.jon...@hpe.com>
rick
s->rx_crc_errors = be32_to_cpu(mlx4_en_stats->RCRC);
stats->rx_frame_errors = 0;
stats->rx_fifo_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw);
happy benchmarking,
rick jones
1-comp0001-mgmt:~$ grep TCPOFO xps_off_* | awk '{sum += $NF}END{print "sum",sum/NR}'
sum 173.9
happy benchmarking,
rick jones
raw results at ftp://ftp.netperf.org/xps_4.4.0-1_ixgbe.tgz
1
stack@fcperf-cp1-comp0001-mgmt:~$ grep "1 1" xps_tcp_rr_off_* | awk '{t+=$6;r+=$9;s+=$10}END{print "throughput",t/NR,"recv sd",r/NR,"send sd",s/NR}'
throughput 20883.6 recv sd 19.6255 send sd 20.0178
So that is 12% on TCP_RR throughput.
Looks like XPS shouldn't be enabled by default for ixgbe.
happy benchmarking,
rick jones
On Tue, Feb 02, 2016 at 02:28:58AM +, Linux Kernel wrote:
> Web:
> https://git.kernel.org/torvalds/c/ce87fc6ce3f9f4488546187e3757cf666d9d4a2a
> Commit: ce87fc6ce3f9f4488546187e3757cf666d9d4a2a
> Parent: 5f2f3cad8b878b23f17a11dd5af4f4a2cc41c797
> Refname:
using some packet re-ordering. That
could I suppose explain the TCP_STREAM difference, but not the TCP_RR
since that has just a single segment in flight at one time.
I can try to get perf/whatnot installed on the systems - suggestions as
to what metrics to look at are welcome.
happy bench
On 02/04/2016 04:47 AM, Michael S. Tsirkin wrote:
On Wed, Feb 03, 2016 at 03:49:04PM -0800, Rick Jones wrote:
And even for not-quite-virtual devices - such as a VC/FlexNIC in an HPE
blade server there can be just about any speed set. I think we went down a
path of patching some things
On 02/04/2016 11:38 AM, Tom Herbert wrote:
On Thu, Feb 4, 2016 at 11:13 AM, Rick Jones <rick.jon...@hpe.com> wrote:
The Intel folks suggested something about the process scheduler moving the
sender around and ultimately causing some packet re-ordering. That could I
suppose e
On 02/04/2016 12:13 PM, Tom Herbert wrote:
On Thu, Feb 4, 2016 at 11:57 AM, Rick Jones <rick.jon...@hpe.com> wrote:
On 02/04/2016 11:38 AM, Tom Herbert wrote:
XPS has OOO avoidance for TCP, that should not be a problem.
What/how much should I read into:
With XPSTCPOFOQueue:
On 02/03/2016 03:32 PM, Stephen Hemminger wrote:
But why check for a valid value at all? At some point in the
future, there will be yet another speed adopted by some standard body
and the switch statement would need another value.
Why not accept any value? This is a virtual device.
And even
===
[ INFO: suspicious RCU usage. ]
4.5.0-rc2-think+ #2 Tainted: GW
---
net/ipv6/ip6_flowlabel.c:543 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks =
netdev_rss_key is written to once and thereafter is read by
drivers when they are initialising. The fact that it is mostly
read and not written to makes it a candidate for a __read_mostly
declaration.
Signed-off-by: Kim Jones <kim-marie.jo...@intel.com>
Signed-off-by: Alan Carey &l
through an interface is significantly
greater than the reported link speed. I have to wonder how unique it is
in that regard.
Doesn't mean there can't be a default, but does suggest it should be
rather high.
rick jones
On Sun, Jan 17, 2016 at 12:06:58PM -0500, Dave Jones wrote:
> I've managed to trigger this a few times the last few days, on Linus' tree.
>
> ==
> BUG: KASAN: slab-out-of-bounds in pptp_connect+0xb7b/0xc70 [p
===
[ INFO: suspicious RCU usage. ]
4.4.0-rc8-firewall+ #1 Not tainted
---
net/ipv6/tcp_ipv6.c:465 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks = 1
1 lock held by
On Wed, Dec 30, 2015 at 10:38:56AM +0100, Daniel Borkmann wrote:
> Given that this drop doesn't strictly need to be caused by filter code,
> it would be nice if you could pin the location down where the packet gets
> dropped exactly. Perhaps dropwatch or perf with '-e skb:kfree_skb -a -g
>
On Tue, Dec 22, 2015 at 04:50:20PM -0500, David Miller wrote:
> > > > Simple fix is below. Though, I don't understand the history of the
> > > > multiple locks in this structure to be sure it's correct. I'll send
> > > > it as a formal patch. Please reject if it's not the right
On Tue, Dec 22, 2015 at 04:42:25PM -0500, David Miller wrote:
> From: Craig Gallek
> Date: Tue, 22 Dec 2015 16:38:32 -0500
>
> > On Tue, Dec 22, 2015 at 4:28 PM, David Miller wrote:
> >> From: Craig Gallek
> >> Date: Tue,
===
[ INFO: suspicious RCU usage. ]
4.4.0-rc6-think+ #1 Not tainted
---
lib/rhashtable.c:522 suspicious rcu_dereference_protected() usage!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks = 0
2 locks held by
per-core "horsepower" on either side and so why LRO on/off could have
also affected the TCP_STREAM results. (When LRO was off, it was off on
both sides, and when it was on, it was on on both, yes?)
happy benchmarking,
rick jones
--
To unsubscribe from this list: send the line "unsubs
try doing is turning on LRO support via ethtool -k to see if that is the
issue you are seeing.
Hi Alex,
enabling LRO resolved the problem.
So you had the same NIC and CPUs and whatnot on both sides?
rick jones
===
[ INFO: suspicious RCU usage. ]
4.4.0-rc3-think+ #8 Tainted: GW
---
net/sctp/ipv6.c:331 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks = 0
1 lock
On Sat, Dec 05, 2015 at 05:13:06PM -0800, Eric Dumazet wrote:
> > diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
> > index acb45b8c2a9d..7081183f4d9f 100644
> > --- a/net/sctp/ipv6.c
> > +++ b/net/sctp/ipv6.c
> > @@ -328,7 +328,9 @@ static void sctp_v6_get_dst(struct sctp_transport *t,
>
s
created. If you want to see what they became by the end of the test,
you need to use the appropriate output selectors (or, IIRC, invoking the
tests as "omni" rather than tcp_stream/tcp_maerts will report the end
values rather than the start ones).
happy benchmarking,
rick jones
ver side. That is pure "effective" path-length increase.
happy benchmarking,
rick jones
PS - the netperf commands were variations on this theme:
./netperf -P 0 -T 0 -H 10.12.49.1 -c -C -l 30 -i 30,3 -- -O
throughput,local_cpu_util,local_sd,local_cpu_peak_util,remote_cpu_uti
keeping the per-byte roughly the same.
You could also compare the likes of a single-byte netperf TCP_RR test
between ipsec enabled and not to get an idea of the basic path length
differences without TSO/GSO/whatnot muddying the waters.
happy benchmarking,
rick jones
On 12/01/2015 10:45 AM, Sowmini Varadhan wrote:
On (12/01/15 10:17), Rick Jones wrote:
What do the perf profiles show? Presumably, loss of TSO/GSO means
an increase in the per-packet costs, but if the ipsec path
significantly increases the per-byte costs...
For ESP-null, there's actually
My router fell off the internet. When I got home, I found a few hundred
of these traces in the logs, and it was refusing to route packets.
Oddly, it only prints a stack trace, and no clue as to why it printed that
trace.
There was also nothing in the log prior to this that indicates how it got that
jones
I've been trying to figure this one out for a while. It smells like a race,
but I can't figure out any more than the clues below, and I've not really
got the time to dig into it.
After running Trinity for a while, I saw the machine just suddenly reboot.
I managed to capture a partial trace over
On Fri, Nov 13, 2015 at 02:37:00PM -0700, Jens Axboe wrote:
> Hi,
>
> Tried to connect to sw vpn today, and it isn't working. Running git
> as-of yesterday. In dmesg:
>
> [23703.921542] vpn0: set_features() failed (-1); wanted
> 0x008048c1, left 0x0080001b48c9
>
>
On Wed, Nov 11, 2015 at 10:19:28AM +0100, Francois Romieu wrote:
> Dave Jones <da...@codemonkey.org.uk> :
> > This happens during boot, (and then there's a flood of traces that happen
> > so fast
> > afterwards it completely overwhelms serial console; not sure if th
This happens during boot, (and then there's a flood of traces that happen so fast
afterwards it completely overwhelms the serial console; not sure if they're the
same/related or not).
==
BUG: KASAN: use-after-free in
On Thu, Nov 05, 2015 at 01:29:15PM -0500, David Miller wrote:
> From: Sergei Shtylyov
> Date: Thu, 5 Nov 2015 20:19:17 +0300
>
> >Hmm, I hadn't seen your announcement, else I would have refrained from
> >sending. Will look for it now...
>
> I
aggregate _RR/packets per second for many VMs on
the same system would be in order.
happy benchmarking,
rick jones
/run/netns. At least that is what an strace of that
command suggests.
rick jones
though I've installed the dbg package from "make deb-pkg" the
symbol resolution doesn't seem to be working.
happy benchmarking,
rick jones
On 08/31/2015 02:29 PM, David Ahern wrote:
On 8/31/15 1:48 PM, Rick Jones wrote:
My attempts to get a call-graph have been met with very limited success.
Even though I've installed the dbg package from "make deb-pkg" the
symbol resolution doesn't seem to be working.
Looks like D
is using virtio_net) Does the behaviour
change if vhost-net is loaded into the host and used by the VM?
rick jones
For completeness, it would also be good to compare the likes of netperf
TCP_RR between VxLAN and without.
I've got a machine with an onboard NIC that reproduces a hardware
hang every time I do an rsync to it.
[ 488.752630] e1000e :00:19.0 eth0: Detected Hardware Unit Hang:
TDH 27
TDT 34
next_to_use 34
next_to_clean 23
On 08/12/2015 04:46 PM, David Miller wrote:
From: r...@tardy.usa.hp.com (Rick Jones)
Date: Wed, 12 Aug 2015 10:23:14 -0700 (PDT)
From: Rick Jones rick.jon...@hp.com
A few things have changed since the previous version of the vxlan
documentation was written, so update it and correct some
On Wed, Jul 15, 2015 at 06:07:10PM -0400, Dave Jones wrote:
While experimenting with some dccp fuzzing, I hit this..
Oops: 0010 [#1] PREEMPT SMP DEBUG_PAGEALLOC
CPU: 3 PID: 19269 Comm: trinity-c22 Not tainted 4.2.0-rc2-think+ #2
task: 88006f3954c0 ti: 8802b89b task.ti
From: Rick Jones rick.jon...@hp.com
A few things have changed since the previous version of the vxlan
documentation was written, so update it and correct some grammar and
such while we are at it.
Signed-off-by: Rick Jones rick.jon...@hp.com
---
v2: Stephen Hemminger feedback to include dstport
From: Rick Jones rick.jon...@hp.com
A few things have changed since the previous version of the vxlan
documentation was written, so update it and correct some grammar and
such while we are at it.
Signed-off-by: Rick Jones rick.jon...@hp.com
diff --git a/Documentation/networking/vxlan.txt
b
On 08/11/2015 03:09 PM, Stephen Hemminger wrote:
On Tue, 11 Aug 2015 13:47:16 -0700 (PDT)
r...@tardy.usa.hp.com (Rick Jones) wrote:
+ # ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1
+
+This creates a new device named vxlan0. The device uses the
+multicast group 239.1.1.1 over
From: Rick Jones rick.jon...@hp.com
Add an explicit neighbour table overflow message (ratelimited) and
statistic to make diagnosing neighbour table overflows tractable in
the wild.
Diagnosing a neighbour table overflow can be quite difficult in the wild
because there is no explicit dmesg logged
be indicated as well. The
forced_gc_runs stat doesn't indicate success or failure of the garbage
collection, so in and of itself it doesn't mean we had a failure to add
an entry to the table.
Thoughts/comments?
happy benchmarking,
rick jones
On 08/03/2015 06:37 AM, Michael S. Tsirkin wrote:
Ideally this needs to also be tested on non-vxlan configs with gro in
host, to make sure this doesn't cause regressions.
Measured with the same instances on the same hardware and software,
taking a path through the stack (public rather than
From: Rick Jones rick.jon...@hp.com
Track success and failure of TCP PMTU probing.
Signed-off-by: Rick Jones rick.jon...@hp.com
---
Tested by loading-up into an OpenStack instance and kicking the MTU
out from under it in the corresponding router namespace.
diff --git a/include/uapi/linux
While experimenting with some dccp fuzzing, I hit this..
Oops: 0010 [#1] PREEMPT SMP DEBUG_PAGEALLOC
CPU: 3 PID: 19269 Comm: trinity-c22 Not tainted 4.2.0-rc2-think+ #2
task: 88006f3954c0 ti: 8802b89b task.ti: 8802b89b
RIP: 0010:[] [ (null)]
On Thu, Jul 09, 2015 at 01:42:29PM -0700, Tom Herbert wrote:
For general information about IPv6, see
https://en.wikipedia.org/wiki/IPv6.
- For Linux IPv6 development information, see
http://www.linux-ipv6.org.
- For specific information about IPv6 under Linux,
On 06/28/2015 10:20 AM, Ramu Ramamurthy wrote:
Rick, in your test, are you seeing gro becoming effective on the
vxlan interface with the 82599ES nic ? (ie, tcpdump on the vxlan
interface shows larger frames than the mtu of that interface, and
kernel trace shows vxlan_gro_receive() being hit)
servers are doing.
Slight drift - Linux is, for lack of a better expression, a complete
fruit stand. One customer might indeed be into oranges, but I've had
customers coming to me wanting to see shiny apples.
happy benchmarking,
rick jones
I went ahead and put the patched kernel on both systems. I was getting
mixed results - in one direction, results in the 8Gbit/s range, in the
other in the 7 Gbit/s. I noticed that interrupts were going to
different CPUs so I started playing with IRQ assignments, and bound all
interrupts of
fix (GRO disabled on VXLAN interface)
Verified no GRO is happening.
9084 Mbit/s tput
5.54% CPU utilization
This has been an area of interest so:
Tested-by: Rick Jones rick.jon...@hp.com
Some single-stream results between two otherwise identical systems with
82599ES NICs in them, one
explanation can be found in the beginning few
pages of the file.
rick jones
I taught Trinity about NETLINK_LISTEN_ALL_NSID and NETLINK_LIST_MEMBERSHIPS
yesterday, and this evening, this fell out..
general protection fault: [#1] PREEMPT SMP DEBUG_PAGEALLOC
CPU: 1 PID: 9130 Comm: kworker/1:1 Not tainted 4.1.0-gelk-debug+ #1
Workqueue: sock_diag_events
Just hit this weird problem where I can ssh into a machine once,
then after logging out, subsequent ssh connections hang.
The client side looks like..
13:39:06.307781 IP wopr.kernelslacker.org.43982 > gelk.kernelslacker.org.ssh:
Flags [S], seq 319726787, win 29200, options [mss 1460,sackOK,TS
On Thu, Jun 11, 2015 at 01:46:18PM -0400, Dave Jones wrote:
Just hit this weird problem where I can ssh into a machine once,
then after logging out, subsequent ssh connections hang.
The client side looks like..
derp, missed half the tcpdump capture on both sides, and now
I can't
On Thu, Jun 11, 2015 at 11:24:21AM -0700, Eric Dumazet wrote:
Just hit this weird problem where I can ssh into a machine once,
then after logging out, subsequent ssh connections hang.
Your tcpdumps look one way only.
ok hit it again, so let's try again...
client side:
that at 100 Gbit/s and
1500 byte MTU the 32 bit segment counter would wrap in something like
500 seconds and change?
rick jones
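The wrap estimate quoted above is easy to sanity-check with back-of-the-envelope arithmetic:

```python
# 32-bit segment counter, 1500-byte segments, 100 Gbit/s line rate:
# how long until the counter wraps?
SEGMENTS = 2 ** 32
BITS_PER_SEGMENT = 1500 * 8
LINK_BITS_PER_SEC = 100e9

wrap_seconds = SEGMENTS * BITS_PER_SEGMENT / LINK_BITS_PER_SEC
# roughly 515 seconds: "500 seconds and change", as stated
```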
On 05/20/2015 05:37 PM, Eric Dumazet wrote:
Anyway, if we can send tcp data at 100Gbits on one flow, I guess we are
doing a terrific job and do not need to tweak TCP stack anymore ;)
:)
rick
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address that problem?
rick
On 04/15/2015 11:08 AM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 10:55 -0700, Rick Jones wrote:
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address
On 04/15/2015 11:32 AM, Eric Dumazet wrote:
On Wed, 2015-04-15 at 11:19 -0700, Rick Jones wrote:
Well, I'm not sure that it is George and Jonathan themselves who don't
want to change a sysctl, but the customers who would have to tweak that
in their VMs?
Keep in mind some VM users install
consumption break-down has changed.
happy benchmarking,
rick jones
If others have seen this, or if it is simply to be expected (from new
features and the like), is it due to the TCP stack itself or other
changes in the kernel?
If so, is there any way to mitigate the effect of this via stack tuning
https://bugzilla.redhat.com/show_bug.cgi?id=431038 has some more info,
but the trace is below...
I'll get an rc3 kernel built and ask the user to retest, but in case this
isn't a known problem, I'm forwarding this here.
Dave
Feb 24 17:53:21 cirithungol kernel:
think
the sensitivity of both sets of information would be about the same?
Is the difference simply an artifact of history?
sincerely,
rick jones
Arnaldo Carvalho de Melo wrote:
Em Fri, Feb 01, 2008 at 05:42:23PM -0800, Rick Jones escreveu:
Hi -
I'm tweaking the netperf omni tests to be able to run over DCCP. I've run
across a not-unprecedented problem with getaddrinfo() not grokking either
SOCK_DCCP or IPPROTO_DCCP in the hints
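A sketch of the getaddrinfo() hints problem being described; SOCK_DCCP and IPPROTO_DCCP are spelled numerically here (assumed Linux ABI values) precisely because many libcs and language runtimes do not define them, and the host/port values are arbitrary:

```python
import socket

# Sketch of the getaddrinfo() hints problem described above. SOCK_DCCP
# and IPPROTO_DCCP are spelled numerically (assumed Linux ABI values)
# because many libcs and Python builds do not define them -- which is
# essentially the problem being reported: resolvers that special-case
# known socket types reject DCCP hints.
SOCK_DCCP = 6
IPPROTO_DCCP = 33

def resolve(host, port, socktype, proto):
    try:
        return socket.getaddrinfo(host, port, socket.AF_INET, socktype, proto)
    except socket.gaierror:
        return []   # e.g. EAI_SOCKTYPE when the hints aren't understood

tcp_results = resolve("127.0.0.1", 5001, socket.SOCK_STREAM, socket.IPPROTO_TCP)
dccp_results = resolve("127.0.0.1", 5001, SOCK_DCCP, IPPROTO_DCCP)  # often empty
```

The TCP lookup succeeds everywhere; whether the DCCP lookup returns results or EAI_SOCKTYPE depends on the libc, which is the portability headache for netperf.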
mentioned person shaping for their DSL line
happens to have enabled JumboFrames on their GbE network, will/should
the qdisc negate that? Or is the qdisc currently assuming that the
remote end of the DSL will have asked for a smaller MSS?
rick jones
the later listen() or connect() call...
happy benchmarking,
rick jones
-Aggregate-Performance
and use a combination of TCP_STREAM and TCP_MAERTS (STREAM backwards) tests.
happy benchmarking,
rick jones
anything but noop
or pfifo_fast and pfifo right now.
Does this also imply that JumboFrames interacts badly with these qdiscs?
Or IPoIB with its 65000ish byte MTU?
rick jones
$
(this was a WAN test :)
rick jones
one of these days I may tweak netperf further so that if the CPU utilization
method for either end doesn't require calibration, CPU utilization will
always be done on that end. People's thoughts on that tweak would be
most welcome...
to experiment with the value you
use with -b - the value necessary to get to saturation may not always be
the same - particularly as you switch from link to link and from LAN to
WAN and all those familiar bandwidthXdelay considerations.
happy benchmarking,
rick jones