Re: [PATCH v2 net-next] liquidio: improve UDP TX performance

2017-02-21 Thread Rick Jones
. As one approaches the wire limit for bitrate, the likes of a netperf service demand can be used to demonstrate the performance change - though there isn't an easy way to do that for parallel flows. happy benchmarking, rick jones

Re: [PATCH net-next] liquidio: improve UDP TX performance

2017-02-16 Thread Rick Jones
performance improved? happy benchmarking, rick jones

Re: [PATCH net-next] virtio: Fix affinity for >32 VCPUs

2017-02-03 Thread Rick Jones
sane defaults. For example, the issues we've seen with VMs sending traffic getting reordered when the driver took it upon itself to enable xps. rick jones

Re: [PATCH net-next] virtio: Fix affinity for >32 VCPUs

2017-02-03 Thread Rick Jones
On 02/03/2017 10:22 AM, Benjamin Serebrin wrote: Thanks, Michael, I'll put this text in the commit log: XPS settings aren't write-able from userspace, so the only way I know to fix XPS is in the driver. ?? root@np-cp1-c0-m1-mgmt:/home/stack# cat /sys/devices/pci:00/:00:02.0/:04:0
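
XPS transmit-queue CPU masks are in fact exposed read/write in sysfs under /sys/class/net/<dev>/queues/tx-<n>/xps_cpus, which appears to be what the "??" and the sysfs path above are getting at. A minimal C sketch of adjusting the mask from userspace; the interface name, queue number, and mask value are placeholders:

    #include <stdio.h>

    /* Write a hex CPU bitmask (e.g. "0f" for CPUs 0-3, "0" to clear the
     * mapping) to one tx queue's xps_cpus file. */
    int set_xps_cpus(const char *ifname, int txq, const char *cpumask_hex)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/class/net/%s/queues/tx-%d/xps_cpus", ifname, txq);
        f = fopen(path, "w");
        if (!f)
            return -1;
        if (fprintf(f, "%s\n", cpumask_hex) < 0) {
            fclose(f);
            return -1;
        }
        return fclose(f);   /* 0 on success */
    }

    /* e.g. set_xps_cpus("eth0", 0, "0") clears the XPS mapping for tx-0 */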

Re: [PATCH net-next] tcp: accept RST for rcv_nxt - 1 after receiving a FIN

2017-01-17 Thread Rick Jones
On 01/17/2017 11:13 AM, Eric Dumazet wrote: On Tue, Jan 17, 2017 at 11:04 AM, Rick Jones wrote: Drifting a bit, and it doesn't change the value of dealing with it, but out of curiosity, when you say mostly in CLOSE_WAIT, why aren't the server-side applications reacting to the read

Re: [PATCH net-next] tcp: accept RST for rcv_nxt - 1 after receiving a FIN

2017-01-17 Thread Rick Jones
AIT, why aren't the server-side applications reacting to the read return of zero triggered by the arrival of the FIN? happy benchmarking, rick jones

Re: [pull request][for-next] Mellanox mlx5 Reorganize core driver directory layout

2017-01-13 Thread Rick Jones
rrors. Straight-up defaults with netperf, or do you use specific -s/S or -m/M options? happy benchmarking, rick jones

Re: [PATCH net-next] udp: under rx pressure, try to condense skbs

2016-12-08 Thread Rick Jones
tionally, even under no stress at all, you really should complain then. Isn't that behaviour based (in part?) on the observation/belief that it is fewer cycles to copy the small packet into a small buffer than to send the larger buffer up the stack and have to allocate and map a replacement? rick jones

Re: [PATCH net-next 2/4] mlx4: xdp: Allow raising MTU up to one page minus eth and vlan hdrs

2016-12-02 Thread Rick Jones
- (2 * VLAN_HLEN) which this patch is doing. It will be useful in the next patch which allows XDP program to extend the packet by adding new header(s). Is mlx4 the only driver doing page-per-packet? rick jones

Re: Initial thoughts on TXDP

2016-12-01 Thread Rick Jones
On 12/01/2016 02:12 PM, Tom Herbert wrote: We have consider both request size and response side in RPC. Presumably, something like a memcache server is most serving data as opposed to reading it, we are looking to receiving much smaller packets than being sent. Requests are going to be quite smal

Re: Initial thoughts on TXDP

2016-12-01 Thread Rick Jones
On 12/01/2016 12:18 PM, Tom Herbert wrote: On Thu, Dec 1, 2016 at 11:48 AM, Rick Jones wrote: Just how much per-packet path-length are you thinking will go away under the likes of TXDP? It is admittedly "just" netperf but losing TSO/GSO does some non-trivial things to effectiv

Re: Initial thoughts on TXDP

2016-12-01 Thread Rick Jones
even if one does have the CPU cycles to burn so to speak, the effect on power consumption needs to be included in the calculus. happy benchmarking, rick jones

Re: Netperf UDP issue with connected sockets

2016-11-30 Thread Rick Jones
On 11/30/2016 02:43 AM, Jesper Dangaard Brouer wrote: Notice the "fib_lookup" cost is still present, even when I use option "-- -n -N" to create a connected socket. As Eric taught us, this is because we should use syscalls "send" or "write" on a connected socket. In theory, once the data socke
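
For reference, a minimal sketch (not netperf's actual code) of what the connected-socket advice amounts to: call connect() on the UDP socket once, then transmit with send()/write(), so the kernel can reuse the route cached on the socket instead of resolving it per datagram as sendto() with an explicit address does. The destination address and port below are placeholders:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        char payload[1472];          /* fills a 1500-byte MTU with IPv4 + UDP headers */
        struct sockaddr_in dst;
        int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(12345);
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

        /* Associate the destination with the socket once... */
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            perror("connect"); return 1;
        }

        memset(payload, 'x', sizeof(payload));
        /* ...then use send()/write() rather than sendto() with an address. */
        for (int i = 0; i < 10; i++)
            send(fd, payload, sizeof(payload), 0);

        close(fd);
        return 0;
    }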

Re: Netperf UDP issue with connected sockets

2016-11-28 Thread Rick Jones
On 11/28/2016 10:33 AM, Rick Jones wrote: On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote: time to try IP_MTU_DISCOVER ;) To Rick, maybe you can find a good solution or option with Eric's hint, to send appropriate sized UDP packets with Don't Fragment (DF). Jesper - Top of t

Re: Netperf UDP issue with connected sockets

2016-11-28 Thread Rick Jones
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote: time to try IP_MTU_DISCOVER ;) To Rick, maybe you can find a good solution or option with Eric's hint, to send appropriate sized UDP packets with Don't Fragment (DF). Jesper - Top of trunk has a change adding an omni, test-specific -f opt
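
A sketch of the IP_MTU_DISCOVER hint, assuming a Linux IPv4 UDP socket: setting the option to IP_PMTUDISC_DO makes outgoing datagrams carry DF, and a send larger than the path MTU fails with EMSGSIZE rather than being fragmented:

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Returns 0 on success, -1 on error (Linux-specific socket option). */
    static int set_df(int fd)
    {
        int val = IP_PMTUDISC_DO;   /* always set DF; never fragment locally */
        return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
    }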

Re: Netperf UDP issue with connected sockets

2016-11-17 Thread Rick Jones
On 11/17/2016 04:37 PM, Julian Anastasov wrote: On Thu, 17 Nov 2016, Rick Jones wrote: raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472 ... socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4 getsockopt(4, SOL_SOCKET

Re: Netperf UDP issue with connected sockets

2016-11-17 Thread Rick Jones
tf(where,\n\t\ttput_fmt_1_l"..., 1472, 0, {sa_family=AF_INET, sin_port=htons(58088), sin_addr=inet_addr("127.0.0.1")}, 16) = 1472 Of course, it will continue to send the same messages from the send_ring over and over instead of putting different data into the buffers each time, but if one has a sufficiently large -W option specified... happy benchmarking, rick jones

Re: Netperf UDP issue with connected sockets

2016-11-17 Thread Rick Jones
t creation wouldn't be too difficult, along with another command-line option to cause it to happen. Could we leave things as "make sure you don't need fragmentation when you use this" or would netperf have to start processing ICMP messages? happy benchmarking, rick jones

Re: Netperf UDP issue with connected sockets

2016-11-16 Thread Rick Jones
On 11/16/2016 02:40 PM, Jesper Dangaard Brouer wrote: On Wed, 16 Nov 2016 09:46:37 -0800 Rick Jones wrote: It is a wild guess, but does setting SO_DONTROUTE affect whether or not a connect() would have the desired effect? That is there to protect people from themselves (long story about

Re: Netperf UDP issue with connected sockets

2016-11-16 Thread Rick Jones
tperf users on Windows and there wasn't (at the time) support for git under Windows. But I am not against the idea in principle. happy benchmarking, rick jones PS - rick.jo...@hp.com no longer works. rick.jon...@hpe.com should be used instead.

Re: [patch] netlink.7: srcfix Change buffer size in example code about reading netlink message.

2016-11-14 Thread Rick Jones
ms with a large PAGE_SIZE? /* avoid msg truncation on > 4096 byte PAGE_SIZE platforms */ or something like that. rick jones
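
A sketch of the sizing being suggested, assuming one is adapting the manpage's example: allocate at least one page (and never less than 8192 bytes) for the netlink read, and use MSG_TRUNC so an undersized buffer is detected rather than silently truncated:

    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Receive one batch of netlink messages; returns the length and sets
     * *out to a malloc()ed buffer, or returns -1 on error/truncation. */
    ssize_t netlink_recv_page_sized(int nl_fd, char **out)
    {
        long page = sysconf(_SC_PAGESIZE);
        size_t buflen = (page > 8192) ? (size_t)page : 8192;
        char *buf = malloc(buflen);
        ssize_t len;

        if (!buf)
            return -1;
        /* With MSG_TRUNC a netlink socket reports the real message size
         * even when it did not fit, so truncation is visible. */
        len = recv(nl_fd, buf, buflen, MSG_TRUNC);
        if (len < 0 || (size_t)len > buflen) {
            free(buf);
            return -1;
        }
        *out = buf;
        return len;   /* parse with NLMSG_OK()/NLMSG_NEXT() as in the manpage */
    }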

Re: [PATCH RFC 0/2] ethtool: Add actual port speed reporting

2016-11-03 Thread Rick Jones
the can, while "back in the day" (when some of the first ethtool changes to report speeds other than the "normal" ones went in) the speed of a flexnic was fixed, today, it can actually operate in a range. From a minimum guarantee to an "if there is bandwidth available" cap. rick jones

Re: [bnx2] [Regression 4.8] Driver loading fails without firmware

2016-10-25 Thread Rick Jones
On 10/25/2016 08:31 AM, Paul Menzel wrote: To my knowledge, the firmware files haven’t changed since years [1]. Indeed - it looks like I read "bnx2" and thought "bnx2x" Must remember to hold-off on replying until after the morning orange juice is consumed :) rick

Re: [bnx2] [Regression 4.8] Driver loading fails without firmware

2016-10-25 Thread Rick Jones
version of the firmware. Usually, finding a package "out there" with the newer version of the firmware, and installing it onto the system is sufficient. happy benchmarking, rick jones

Re: Accelerated receive flow steering (aRFS) for UDP

2016-10-10 Thread Rick Jones
On 10/10/2016 09:08 AM, Rick Jones wrote: On 10/09/2016 03:33 PM, Eric Dumazet wrote: OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf bug, not a kernel one. I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing a connect() on the receiver

Re: Accelerated receive flow steering (aRFS) for UDP

2016-10-10 Thread Rick Jones
On 10/09/2016 03:33 PM, Eric Dumazet wrote: OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf bug, not a kernel one. I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing a connect() on the receiver side. I can confirm that the receive s
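
For context, a sketch of what a receive-side connect() amounts to for a UDP test: learn the peer from the first datagram, then connect() the receiving socket so the flow has a fully specified 4-tuple associated with it:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Receive the first datagram and bind the socket to its sender. */
    ssize_t recv_first_and_connect(int fd, char *buf, size_t len)
    {
        struct sockaddr_storage peer;
        socklen_t plen = sizeof(peer);
        ssize_t n = recvfrom(fd, buf, len, 0, (struct sockaddr *)&peer, &plen);
        if (n >= 0)
            connect(fd, (struct sockaddr *)&peer, plen);  /* error ignored in this sketch */
        return n;
    }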

Re: [PATCH v2 net-next 4/5] xps_flows: XPS for packets that don't have a socket

2016-09-29 Thread Rick Jones
currently selecting different TXQ. Just for completeness, in my testing, the VMs were single-vCPU. rick jones

Re: [PATCH RFC 0/4] xfs: Transmit flow steering

2016-09-28 Thread Rick Jones
nnectX-3 Pro, E5-2670v3 12421 12612; BE3, E5-2640 8178 8484; 82599, E5-2640 8499 8549; BCM57840, E5-2640 8544 8560; Skyhawk, E5-2640 8537 8701 happy benchmarking, Drew Balliet Jeurg Haefliger rick jones

Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Rick Jones
true long-term bw estimate variable? We could do that. We used to have variables (aka module params) while BBR was cooking in our kernels ;) Are there better than epsilon odds of someone perhaps wanting to poke those values as it gets exposure beyond Google? happy benchmarking, rick jones

Re: [PATCH next 3/3] ipvlan: Introduce l3s mode

2016-09-09 Thread Rick Jones
conn-tracking work. What is that first sentence trying to say? It appears to be incomplete, and is that supposed to be "L3-symmetric?" happy benchmarking, rick jones

Re: [PATCH RFC 11/11] net/mlx5e: XDP TX xmit more

2016-09-08 Thread Rick Jones
with one doorbell. With small packets and the "default" ring size for this NIC/driver combination, is the BQL large enough that the ring fills before one hits the BQL? rick jones

Re: [PATCH] softirq: let ksoftirqd do its job

2016-08-31 Thread Rick Jones
On 08/31/2016 04:11 PM, Eric Dumazet wrote: On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote: With regard to drops, are both of you sure you're using the same socket buffer sizes? Does it really matter ? At least at points in the past I have seen different drop counts at the SO_R

Re: [PATCH] softirq: let ksoftirqd do its job

2016-08-31 Thread Rick Jones
With regard to drops, are both of you sure you're using the same socket buffer sizes? In the meantime, is anything interesting happening with TCP_RR or TCP_STREAM? happy benchmarking, rick jones
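
One way to answer the socket-buffer question is to have each side report the receive buffer actually in effect, since the kernel adjusts whatever was requested. A small sketch:

    #include <stdio.h>
    #include <sys/socket.h>

    void report_rcvbuf(int fd)
    {
        int rcvbuf = 0;
        socklen_t len = sizeof(rcvbuf);
        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
            /* Linux reports double the setsockopt() value, to cover bookkeeping */
            printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
    }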

Re: [PATCH v2 net-next] documentation: Document issues with VMs and XPS and drivers enabling it on their own

2016-08-29 Thread Rick Jones
kinda feel the same way about this situation. I'm working on XFS (as the transmit analogue to RFS). We'll track flows enough so that we should know when it's safe to move them. Is the XFS you are working on going to subsume XPS or will the two continue to exist in parallel a la RPS and RFS? rick jones

[PATCH v2 net-next] documentation: Document issues with VMs and XPS and drivers enabling it on their own

2016-08-25 Thread Rick Jones
From: Rick Jones Since XPS was first introduced two things have happened. Some drivers have started enabling XPS on their own initiative, and it has been found that when a VM is sending data through a host interface with XPS enabled, that traffic can end-up seriously out of order. Signed-off

[PATCH net-next] documentation: Document issues with VMs and XPS and drivers enabling it on their own

2016-08-25 Thread Rick Jones
From: Rick Jones Since XPS was first introduced two things have happened. Some drivers have started enabling XPS on their own initiative, and it has been found that when a VM is sending data through a host interface with XPS enabled, that traffic can end-up seriously out of order. Signed-off

Re: [RFC PATCH] net: Require socket to allow XPS to set queue mapping

2016-08-25 Thread Rick Jones
On 08/25/2016 02:08 PM, Eric Dumazet wrote: When XPS was submitted, it was _not_ enabled by default and 'magic' Some NIC vendors decided it was a good thing, you should complain to them ;) I kindasorta am with the emails I've been sending to netdev :) And also hopefully precluding others goi

Re: [RFC PATCH] net: Require socket to allow XPS to set queue mapping

2016-08-25 Thread Rick Jones
steps to pin VMs can enable XPS in that case. It isn't clear that one should always pin VMs - for example if a (public) cloud needed to oversubscribe the cores. happy benchmarking, rick jones

Re: A second case of XPS considerably reducing single-stream performance

2016-08-25 Thread Rick Jones
On 08/25/2016 12:19 PM, Alexander Duyck wrote: The problem is that there is no socket associated with the guest from the host's perspective. This is resulting in the traffic bouncing between queues because there is no saved socket to lock the interface onto. I was looking into this recently as

Re: A second case of XPS considerably reducing single-stream performance

2016-08-24 Thread Rick Jones
when the NIC at the sending end is a BCM57840. It does not appear that the bnx2x driver in the 4.4 kernel is enabling XPS. So, it would seem that there are three cases of enabling XPS resulting in out-of-order traffic, two of which result in a non-trivial loss of performance. happy benc

Re: [PATCH net-next] net: minor optimization in qdisc_qstats_cpu_drop()

2016-08-24 Thread Rick Jones
On 08/24/2016 10:23 AM, Eric Dumazet wrote: From: Eric Dumazet per_cpu_inc() is faster (at least on x86) than per_cpu_ptr(xxx)++; Is it possible it is non-trivially slower on other architectures? rick jones Signed-off-by: Eric Dumazet --- include/net/sch_generic.h |2 +- 1 file

A second case of XPS considerably reducing single-stream performance

2016-08-24 Thread Rick Jones
8695 Average 4108 8940 8859 8885 8671 happy benchmarking, rick jones The sample counts below may not fully support the additional statistics but for the curious: raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r 6 waxon_performance.log

Re: [PATCH net 1/2] tg3: Fix for diasllow rx coalescing time to be 0

2016-08-03 Thread Rick Jones
trigger an interrupt. Presumably setting rx_max_coalesced_frames to 1 to disable interrupt coalescing. happy benchmarking, rick jones

Re: [iproute PATCH 0/2] Netns performance improvements

2016-07-08 Thread Rick Jones
resently? I believe Phil posted something several messages back in the thread. happy benchmarking, rick jones

Re: [iproute PATCH 0/2] Netns performance improvements

2016-07-07 Thread Rick Jones
On 07/07/2016 09:34 AM, Eric W. Biederman wrote: Rick Jones writes: 300 routers is far from the upper limit/goal. Back in HP Public Cloud, we were running as many as 700 routers per network node (*), and more than four network nodes. (back then it was just the one namespace per router and

Re: [iproute PATCH 0/2] Netns performance improvements

2016-07-07 Thread Rick Jones
espace per router and network). Mileage will of course vary based on the "oomph" of one's network node(s). happy benchmarking, rick jones * Didn't want to go much higher than that because each router had a port on a common linux bridge and getting to > 1024 would be an unpleasant day.

Re: strange Mac OSX RST behavior

2016-07-01 Thread Rick Jones
problematic since it takes up server resources for sockets sitting in TCP_CLOSE_WAIT. Isn't the server application expected to act on the read return of zero (which is supposed to be) triggered by the receipt of the FIN segment? rick jones We are also in the process of contacting Appl
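
A sketch of the expected server-side behaviour being asked about: when the peer's FIN arrives, a read/recv on the socket returns 0, and an application that closes the descriptor at that point does not leave sockets sitting in CLOSE_WAIT:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void serve_connection(int conn_fd)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);
            if (n > 0) {
                /* handle n bytes of request data, then keep reading */
                continue;
            }
            if (n == 0)
                break;               /* EOF: peer's FIN arrived */
            if (errno == EINTR)
                continue;            /* interrupted, retry */
            break;                   /* real error */
        }
        close(conn_fd);              /* leaves CLOSE_WAIT promptly */
    }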

Re: [PATCH v12 net-next 1/1] hv_sock: introduce Hyper-V Sockets

2016-06-28 Thread Rick Jones
onnection which has been reset? Is it limited to those errno values listed in the read() manpage, or does it end-up getting an errno value from those listed in the recv() manpage? Or, perhaps even one not (presently) listed in either? rick jones

Re: [PATCH net-next 0/8] tou: Transports over UDP - part I

2016-06-24 Thread Rick Jones
and so could indeed productively use TCP FastOpen. "Overall, very good success-rate" though tempered by "But... middleboxes were a big issue in some ISPs..." Though it doesn't get into how big (some connections, many, most, all?) and how many ISPs. rick jones Just an anecdote.
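
For reference, a client-side TCP Fast Open sketch: the request rides in the SYN via sendto() with MSG_FASTOPEN instead of a separate connect() plus send(). The host and port are placeholders, and it assumes a kernel with client-side TFO enabled (net.ipv4.tcp_fastopen bit 0):

    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000
    #endif

    int tfo_request(const char *req, size_t len)
    {
        struct sockaddr_in dst;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

        /* No prior connect(): the kernel sends the SYN (with a cached TFO
         * cookie carrying 'req' when available) and completes the handshake. */
        if (sendto(fd, req, len, MSG_FASTOPEN,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* caller reads the response and closes */
    }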

Re: [PATCH net-next 0/8] tou: Transports over UDP - part I

2016-06-24 Thread Rick Jones
On 06/24/2016 02:46 PM, Tom Herbert wrote: On Fri, Jun 24, 2016 at 2:36 PM, Rick Jones wrote: How would you define "severely?" Has it actually been more severe than for say ECN? Or it was for say SACK or PAWS? ECN is probably even a bigger disappointment in terms of seeing

Re: [PATCH net-next 0/8] tou: Transports over UDP - part I

2016-06-24 Thread Rick Jones
YN packets with data have together severely hindered what otherwise should have been straightforward and useful feature to deploy. How would you define "severely?" Has it actually been more severe than for say ECN? Or it was for say SACK or PAWS? rick jones

Re: [PATCH net-next 0/5] qed/qede: Tunnel hardware GRO support

2016-06-22 Thread Rick Jones
On 06/22/2016 04:10 PM, Rick Jones wrote: My systems are presently in the midst of an install but I should be able to demonstrate it in the morning (US Pacific time, modulo the shuttle service of a car repair place) The installs finished sooner than I thought. So, receiver: root@np-cp1

Re: [PATCH net-next 0/5] qed/qede: Tunnel hardware GRO support

2016-06-22 Thread Rick Jones
On 06/22/2016 03:56 PM, Alexander Duyck wrote: On Wed, Jun 22, 2016 at 3:47 PM, Eric Dumazet wrote: On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote: Had the bnx2x-driven NICs' firmware not had that rather unfortunate assumption about MSSes I probably would never have noticed. It

Re: [PATCH net-next 0/5] qed/qede: Tunnel hardware GRO support

2016-06-22 Thread Rick Jones
On 06/22/2016 03:47 PM, Eric Dumazet wrote: On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote: On 06/22/2016 11:22 AM, Yuval Mintz wrote: But seriously, this isn't really anything new but rather a step forward in the direction we've already taken - bnx2x/qede are already performin

Re: [PATCH net-next 0/5] qed/qede: Tunnel hardware GRO support

2016-06-22 Thread Rick Jones
I probably would never have noticed. happy benchmarking, rick jones

Re: [PATCH net-next 0/8] tou: Transports over UDP - part I

2016-06-16 Thread Rick Jones
as would a comparison of the service demands of the different single-stream results. CPU and NIC models would provide excellent context for the numbers. happy benchmarking, rick jones

Re: [PATCH] openvswitch: Add packet truncation support.

2016-06-09 Thread Rick Jones
On 06/08/2016 09:30 PM, pravin shelar wrote: On Wed, Jun 8, 2016 at 6:18 PM, William Tu wrote: +struct ovs_action_trunc { + uint32_t max_len; /* Max packet size in bytes. */ This could uint16_t. as it is related to packet len. Is there something limiting MTUs to 65535 bytes? rick

Re: [PATCH -next 2/2] virtio_net: Read the advised MTU

2016-06-02 Thread Rick Jones
On 06/02/2016 10:06 AM, Aaron Conole wrote: Rick Jones writes: One of the things I've been doing has been setting-up a cluster (OpenStack) with JumboFrames, and then setting MTUs on instance vNICs by hand to measure different MTU sizes. It would be a shame if such a thing were not possib

Re: [RFC] net: remove busylock

2016-05-19 Thread Rick Jones
aggregate small packet performance. happy benchmarking, rick jones

Re: [PATCH] tcp: ensure non-empty connection request queue

2016-05-04 Thread Rick Jones
On 05/04/2016 10:34 AM, Eric Dumazet wrote: On Wed, 2016-05-04 at 10:24 -0700, Rick Jones wrote: Dropping the connection attempt makes sense, but is entering/claiming synflood really indicated in the case of a zero-length accept queue? This is a one time message. This is how people can

Re: [PATCH] tcp: ensure non-empty connection request queue

2016-05-04 Thread Rick Jones
On 05/03/2016 05:25 PM, Eric Dumazet wrote: On Tue, 2016-05-03 at 23:54 +0200, Peter Wu wrote: When applications use listen() with a backlog of 0, the kernel would set the maximum connection request queue to zero. This causes false reports of SYN flooding (if tcp_syncookies is enabled) or packet
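
The corner case under discussion is listen() with a backlog of 0. A sketch of the usual way to sidestep it: pass a real backlog (the kernel clamps it to net.core.somaxconn anyway) rather than 0:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int make_listener(unsigned short port)
    {
        struct sockaddr_in addr;
        int one = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }

        /* listen(fd, 0) is what produced the false SYN-flood reports in this
         * thread; SOMAXCONN (or an application-chosen value) avoids the
         * zero-length accept queue entirely. */
        if (listen(fd, SOMAXCONN) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }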

Re: drop all fragments inside tx queue if one gets dropped

2016-04-20 Thread Rick Jones
to the driver, which then either queued them all, or none of them. I don't recall seeing similar poor behaviour in Linux; I would have assumed that the intra-stack flow-control "took care" of it. Perhaps there is something specific to wpan which precludes that? happy benchmarking, rick jones

Re: Poorer networking performance in later kernels?

2016-04-18 Thread Rick Jones
s in this setup the 3.4.2 and 4.4.0 kernels perform identically - just as you would expect. Running in a VM will likely change things massively and could I suppose mask other behaviour changes. happy benchmarking, rick jones raj@tardy:~$ cat signatures/toppost A: Because it fouls the order in which

Re: Poorer networking performance in later kernels?

2016-04-15 Thread Rick Jones
default request/response size of one byte) doesn't really care about stateless offloads or MTUs and could show how much difference there is in basic path length (or I suppose in interrupt coalescing behaviour if the NIC in question has a mildly dodgy heuristic for such things). happy benchmarking, rick jones

Re: [net PATCH 2/2] ipv4/GRO: Make GRO conform to RFC 6864

2016-04-02 Thread Rick Jones
races. Yes, our team (including Van Jacobson ;) ) would be sad to not have sequential IP ID (but then we don't have them for IPv6 ;) ) Your team would not be the only one sad to see that go away. rick jones Since the cost of generating them is pretty small (inet->inet_id counter)

Re: [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU

2016-03-28 Thread Rick Jones
On 03/28/2016 01:01 PM, Eric Dumazet wrote: Note : file structures got RCU freeing back in 2.6.14, and I do not think named users ever complained about added cost ;) Couldn't see the tree for the forest I guess :) rick

Re: [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU

2016-03-28 Thread Rick Jones
On 03/28/2016 11:55 AM, Eric Dumazet wrote: On Mon, 2016-03-28 at 11:44 -0700, Rick Jones wrote: On 03/28/2016 10:00 AM, Eric Dumazet wrote: If you mean that a busy DNS resolver spends _most_ of its time doing : fd = socket() bind(fd port=0) < send and receive one frame > close(fd)

Re: [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU

2016-03-28 Thread Rick Jones
On 03/28/2016 10:00 AM, Eric Dumazet wrote: On Mon, 2016-03-28 at 09:15 -0700, Rick Jones wrote: On 03/25/2016 03:29 PM, Eric Dumazet wrote: UDP sockets are not short lived in the high usage case, so the added cost of call_rcu() should not be a concern. Even a busy DNS resolver? If you

Re: [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU

2016-03-28 Thread Rick Jones
On 03/25/2016 03:29 PM, Eric Dumazet wrote: UDP sockets are not short lived in the high usage case, so the added cost of call_rcu() should not be a concern. Even a busy DNS resolver? rick jones

Re: [Codel] [RFCv2 0/3] mac80211: implement fq codel

2016-03-19 Thread Rick Jones
nsaction inflight at one time. And unless one uses the test-specific -e option to provide a very crude retransmission mechanism based on a socket read timeout, neither does UDP_RR recover from lost datagrams. happy benchmarking, rick jones http://www.netperf.org/

Re: [RFC v2 -next 0/2] virtio-net: Advised MTU feature

2016-03-15 Thread Rick Jones
may add more thorough error handling. How do you see this interacting with VMs getting MTU settings via DHCP? rick jones v2: * Whitespace and code style cleanups from Sergei Shtylyov and Paolo Abeni * Additional test before printing a warning Aaron Conole (2): virtio: Start feature MTU

Re: [PATCH v6 net-next 2/2] tcp: Add Redundant Data Bundling (RDB)

2016-03-14 Thread Rick Jones
should get some SNMP counters, so that we get an idea of how many times a loss could be repaired. And some idea of the duplication seen by receivers, assuming there isn't already a counter for such a thing in Linux. happy benchmarking, rick jones Ideally, if the path happens to be los

Re: [net-next PATCH 0/2] GENEVE/VXLAN: Enable outer Tx checksum by default

2016-02-23 Thread Rick Jones
tting a non-zero IP ID on fragments with DF set? rick jones We need to do increment IP identifier in UFO, but I only see one device (neterion) that advertises NETIF_F_UFO-- honestly, removing that feature might be another good simplification! Tom -- -Ed

Re: Variable download speed

2016-02-23 Thread Rick Jones
e one can try to craft things so there is no storage I/O of note, it would still be better to use a network-specific tool such as netperf or iperf. Minimize the number of variables. happy benchmarking, rick jones

Re: [PATCH][net-next] bridge: increase mtu to 9000

2016-02-22 Thread Rick Jones
/ #define BR_GROUPFWD_DEFAULT 0 /* Don't allow forwarding of control protocols like STP, MAC PAUSE and LACP */ If you are going to 9000. why not just go ahead and use the maximum size of an IP datagram? rick jones

Re: [PATCH net V1 1/6] net/mlx4_en: Count HW buffer overrun only once

2016-02-17 Thread Rick Jones
accounting to show wrong results. Fix that. Use it for rx_fifo_errors only. Fixes: c27a02cd94d6 ('mlx4_en: Add driver for Mellanox ConnectX 10GbE NIC') Signed-off-by: Amir Vadai Signed-off-by: Eugenia Emantayev Signed-off-by: Or Gerlitz Reviewed-By: Rick Jones rick

Re: [PATCH net 1/6] net/mlx4_en: Do not count dropped packets twice

2016-02-16 Thread Rick Jones
ors = 0; stats->rx_fifo_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw); happy benchmarking, rick jones

Re: Disabling XPS for 4.4.0-1+ixgbe+OpenStack VM over a VLAN means 65% increase in netperf TCP_STREAM

2016-02-08 Thread Rick Jones
sd 20.5931 stack@fcperf-cp1-comp0001-mgmt:~$ grep "1 1" xps_tcp_rr_off_* | awk '{t+=$6;r+=$9;s+=$10}END{print "throughput",t/NR,"recv sd",r/NR,"send sd",s/NR}' throughput 20883.6 recv sd 19.6255 send sd 20.0178 So that is 12% on TCP_RR throughput. Looks like XPS shouldn't be enabled by default for ixgbe. happy benchmarking, rick jones

Re: Disabling XPS for 4.4.0-1+ixgbe+OpenStack VM over a VLAN means 65% increase in netperf TCP_STREAM

2016-02-08 Thread Rick Jones
sd 0.6543 send sd 0.3606 stack@fcperf-cp1-comp0001-mgmt:~$ grep TCPOFO xps_off_* | awk '{sum += $NF}END{print "sum",sum/NR}' sum 173.9 happy benchmarking, rick jones raw results at ftp://ftp.netperf.org/xps_4.4.0-1_ixgbe.tgz

Re: Disabling XPS for 4.4.0-1+ixgbe+OpenStack VM over a VLAN means 65% increase in netperf TCP_STREAM

2016-02-04 Thread Rick Jones
On 02/04/2016 12:13 PM, Tom Herbert wrote: On Thu, Feb 4, 2016 at 11:57 AM, Rick Jones wrote: On 02/04/2016 11:38 AM, Tom Herbert wrote: XPS has OOO avoidance for TCP, that should not be a problem. What/how much should I read into: With XPS TCPOFOQueue: 78206 Without XPS TCPOFOQueue

Re: Disabling XPS for 4.4.0-1+ixgbe+OpenStack VM over a VLAN means 65% increase in netperf TCP_STREAM

2016-02-04 Thread Rick Jones
On 02/04/2016 11:38 AM, Tom Herbert wrote: On Thu, Feb 4, 2016 at 11:13 AM, Rick Jones wrote: The Intel folks suggested something about the process scheduler moving the sender around and ultimately causing some packet re-ordering. That could I suppose explain the TCP_STREAM difference, but

Disabling XPS for 4.4.0-1+ixgbe+OpenStack VM over a VLAN means 65% increase in netperf TCP_STREAM

2016-02-04 Thread Rick Jones
around and ultimately causing some packet re-ordering. That could I suppose explain the TCP_STREAM difference, but not the TCP_RR since that has just a single segment in flight at one time. I can try to get perf/whatnot installed on the systems - suggestions as to what metrics to look at are we

Re: [PATCH net-next v5 1/2] ethtool: add speed/duplex validation functions

2016-02-04 Thread Rick Jones
On 02/04/2016 04:47 AM, Michael S. Tsirkin wrote: On Wed, Feb 03, 2016 at 03:49:04PM -0800, Rick Jones wrote: And even for not-quite-virtual devices - such as a VC/FlexNIC in an HPE blade server there can be just about any speed set. I think we went down a path of patching some things to

Re: [PATCH net-next v5 1/2] ethtool: add speed/duplex validation functions

2016-02-03 Thread Rick Jones
On 02/03/2016 03:32 PM, Stephen Hemminger wrote: But why check for valid value at all. At some point in the future, there will be yet another speed adopted by some standard body and the switch statement would need another value. Why not accept any value? This is a virtual device. And even fo

Re: bonding (IEEE 802.3ad) not working with qemu/virtio

2016-02-01 Thread Rick Jones
through an interface is significantly greater than the reported link speed. I have to wonder how unique it is in that regard. Doesn't mean there can't be a default, but does suggest it should be rather high. rick jones

Re: [BUG] net: performance regression on ixgbe (Intel 82599EB 10-Gigabit NIC)

2015-12-10 Thread Rick Jones
since it wasn't the same per-core "horsepower" on either side and so why LRO on/off could have also affected the TCP_STREAM results. (When LRO was off it was off on both sides, and when on was on on both yes?) happy benchmarking, rick jones

Re: [BUG] net: performance regression on ixgbe (Intel 82599EB 10-Gigabit NIC)

2015-12-07 Thread Rick Jones
y doing is turning on LRO support via ethtool -k to see if that is the issue you are seeing. Hi Alex, enabling LRO resolved the problem. So you had the same NIC and CPUs and whatnot on both sides? rick jones

Re: [BUG] net: performance regression on ixgbe (Intel 82599EB 10-Gigabit NIC)

2015-12-04 Thread Rick Jones
socket was created. If you want to see what they became by the end of the test, you need to use the appropriate output selectors (or, IIRC invoking the tests as "omni" rather than tcp_stream/tcp_maerts will report the end values rather than the start ones.). happy benchmarking, ric

Re: ipsec impact on performance

2015-12-02 Thread Rick Jones
almost 80% on the netserver side. That is pure "effective" path-length increase. happy benchmarking, rick jones PS - the netperf commands were variations on this theme: ./netperf -P 0 -T 0 -H 10.12.49.1 -c -C -l 30 -i 30,3 -- -O throughput,local_cpu_util,local_sd,local_cpu

Re: ipsec impact on performance

2015-12-01 Thread Rick Jones
On 12/01/2015 10:45 AM, Sowmini Varadhan wrote: On (12/01/15 10:17), Rick Jones wrote: What do the perf profiles show? Presumably, loss of TSO/GSO means an increase in the per-packet costs, but if the ipsec path significantly increases the per-byte costs... For ESP-null, there's act

Re: ipsec impact on performance

2015-12-01 Thread Rick Jones
keeping the per-byte roughly the same. You could also compare the likes of a single-byte netperf TCP_RR test between ipsec enabled and not to get an idea of the basic path length differences without TSO/GSO/whatnot muddying the waters. happy benchmarking, rick jones

Re: [PATCH net-next 0/6] kcm: Kernel Connection Multiplexor (KCM)

2015-11-24 Thread Rick Jones
latency on the likes of netperf TCP_RR with JumboFrames than you would with the standard 1500 byte MTU. Something I saw on GbE links years back anyway. I chalked it up to getting better parallelism between the NIC and the host. Of course the service demands were lower with JumboFrames... rick

Re: [PATCH net-next RFC 2/2] vhost_net: basic polling support

2015-10-22 Thread Rick Jones
R and even aggregate _RR/packets per second for many VMs on the same system would be in order. happy benchmarking, rick jones

Re: list of all network namespaces

2015-09-16 Thread Rick Jones
etns . At least that is what an strace of that command suggests. rick jones

Re: vethpair creation performance, 3.14 versus 4.2.0

2015-08-31 Thread Rick Jones
On 08/31/2015 02:29 PM, David Ahern wrote: On 8/31/15 1:48 PM, Rick Jones wrote: My attempts to get a call-graph have been met with very limited success. Even though I've installed the dbg package from "make deb-pkg" the symbol resolution doesn't seem to be working. Lo

vethpair creation performance, 3.14 versus 4.2.0

2015-08-31 Thread Rick Jones
Even though I've installed the dbg package from "make deb-pkg" the symbol resolution doesn't seem to be working. happy benchmarking, rick jones

Re: Low throughput in VMs using VxLAN

2015-08-24 Thread Rick Jones
'm assuming the VM is using virtio_net) Does the behaviour change if vhost-net is loaded into the host and used by the VM? rick jones For completeness, it would also be good to compare the likes of netperf TCP_RR between VxLAN and without.

Re: [PATCH v2 net-next] documentation: bring vxlan documentation more up-to-date

2015-08-12 Thread Rick Jones
On 08/12/2015 04:46 PM, David Miller wrote: From: r...@tardy.usa.hp.com (Rick Jones) Date: Wed, 12 Aug 2015 10:23:14 -0700 (PDT) From: Rick Jones A few things have changed since the previous version of the vxlan documentation was written, so update it and correct some grammar and such while

[PATCH v2 net-next] documentation: bring vxlan documentation more up-to-date

2015-08-12 Thread Rick Jones
From: Rick Jones A few things have changed since the previous version of the vxlan documentation was written, so update it and correct some grammar and such while we are at it. Signed-off-by: Rick Jones --- v2: Stephen Hemminger feedback to include dstport 4789 in command line example

Re: [PATCH net-next] documentation: bring vxlan documentation more up-to-date

2015-08-11 Thread Rick Jones
On 08/11/2015 03:09 PM, Stephen Hemminger wrote: On Tue, 11 Aug 2015 13:47:16 -0700 (PDT) r...@tardy.usa.hp.com (Rick Jones) wrote: + # ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1 + +This creates a new device named vxlan0. The device uses the +multicast group 239.1.1.1 over
