> -Original Message-
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Thursday, 12 April, 2018 18:37
>
> On Thu, Apr 12, 2018 at 05:32:11PM +0200, Jan Scheurich wrote:
> > If the caller provides a non-NULL qfill pointer and the netdev
> implementation supports reading the rx queue fill
On Thu, Apr 12, 2018 at 10:50:55PM +0300, Liran Schour wrote:
> Ben Pfaff wrote on 06/04/2018 11:05:24 PM:
> > On Fri, Mar 30, 2018 at 05:15:57AM +0200, Liran Schour wrote:
> > I wanted to raise a question that I came across. Maybe the community
> > > already dealt with it.
> >
Ben Pfaff wrote on 06/04/2018 11:05:24 PM:
> On Fri, Mar 30, 2018 at 05:15:57AM +0200, Liran Schour wrote:
> > I wanted to raise a question that I came across. Maybe the community
> > already dealt with it.
> >
> > The ovn-northd translates the CMS's commands that reside in the
On Wed, Apr 11, 2018 at 11:29:28PM -0700, Han Zhou wrote:
> On Tue, Apr 10, 2018 at 6:21 PM, Han Zhou wrote:
> >
> >
> >
> > On Tue, Apr 10, 2018 at 5:04 PM, Ben Pfaff wrote:
> > >
> > > On Fri, Apr 06, 2018 at 02:40:21PM -0700, Han Zhou wrote:
> > > > On Fri,
On Thu, Apr 12, 2018 at 05:32:11PM +0200, Jan Scheurich wrote:
> If the caller provides a non-NULL qfill pointer and the netdev
> implementation supports reading the rx queue fill level, the rxq_recv()
> function returns the remaining number of packets in the rx queue after
> reception of the
> > I would not say this is expected behavior.
> >
> > It seems that you are executing on a somewhat slower system (tsc clock
> > seems to be 100/us = 0.1 GHz) and that, even with only 5
> lines logged before and after, the logging output is causing so much
> slowdown of the PMD that it
On 10.04.2018 21:12, Kevin Traynor wrote:
> DPDK mempools are freed when they are no longer needed.
> This can happen when a port is removed or a port's mtu
> is reconfigured so that a new mempool is used.
>
> It is possible that an mbuf is attempted to be returned
> to a freed mempool from NIC
The run-time performance of PMDs is often difficult to understand and
troubleshoot. The existing PMD statistics counters only provide a
coarse-grained average picture. At packet rates of several Mpps, sporadic
drops of packet bursts happen at sub-millisecond time scales and are
impossible to
If the caller provides a non-NULL qfill pointer and the netdev
implementation supports reading the rx queue fill level, the rxq_recv()
function returns the remaining number of packets in the rx queue after
reception of the packet burst to the caller. If the implementation does
not support this,
This patch enhances dpif-netdev-perf to detect iterations with
suspicious statistics according to the following criteria:
- iteration lasts longer than US_THR microseconds (default 250).
This can be used to capture events where a PMD is blocked or
interrupted for such a period of time that
This patch instruments the dpif-netdev datapath to record detailed
statistics of what is happening in every iteration of a PMD thread.
The collection of detailed statistics can be controlled by a new
Open_vSwitch configuration parameter "other_config:pmd-perf-metrics".
By default it is disabled.
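Enabling the parameter presumably follows the usual other_config pattern; a sketch (verify the exact syntax against the patch itself):

```shell
# Enable detailed PMD perf metrics collection (disabled by default).
ovs-vsctl set Open_vSwitch . other_config:pmd-perf-metrics=true

# Disable it again by removing the key.
ovs-vsctl remove Open_vSwitch . other_config pmd-perf-metrics
```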
On 11.04.2018 20:55, Kevin Traynor wrote:
> On 04/10/2018 11:12 AM, Stokes, Ian wrote:
> -Original Message-
> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> Sent: Monday, 29 January, 2018 09:35
> To: Jan Scheurich ; Venkatesan Pradeep
On 12.04.2018 17:18, Jan Scheurich wrote:
>>> The bond of openvswitch does not have good performance.
>>
>> Any examples?
>
> For example, balance-tcp bond mode for L34 load sharing still requires a
> recirculation after dp_hash.
dp_hash is a lightweight action now since we're using the RSS hash for it.
On Thu, Apr 12, 2018 at 10:18 PM, Jan Scheurich
wrote:
>> > The bond of openvswitch does not have good performance.
>>
>> Any examples?
>
> For example, balance-tcp bond mode for L34 load sharing still requires a
> recirculation after dp_hash.
Yes, we need more bond modes
> > The bond of openvswitch does not have good performance.
>
> Any examples?
For example, balance-tcp bond mode for L34 load sharing still requires a
recirculation after dp_hash.
I believe that it would definitely be interesting to compare bond performance
between DPDK bonding and OVS bonding with
Hi Tonghao,
Thanks for working on this. That was on my backlog to try out for a while.
One immediate feedback: This is a pure OVS user space patch. Please remove the
"net-next" tag from your patches in the next version. "net-next" is reserved
for OVS kernel module patches that are first
> Tuesday, April 10, 2018 10:58 PM, Stokes, Ian:
> > Subject: RE: [PATCH v8 2/6] dpif-netdev: retrieve flow directly from
> > the flow mark
> >
> > > Subject: [PATCH v8 2/6] dpif-netdev: retrieve flow directly from the
> > > flow mark
> > >
> > > From: Yuanhan Liu
> > >
> >
> From: Tonghao Zhang
>
> The bond of openvswitch does not have good performance.
Any examples?
> In some
> cases we would recommend that you use Linux bonds instead
> of Open vSwitch bonds. In the userspace datapath, we want to use
> bonding to improve bandwidth. DPDK has implemented it as a library.
You
> On 10/04/18 21:08, Stokes, Ian wrote:
> >> Currently, RX of jumbo packets fails for NICs not supporting scatter.
> >> Scatter is not strictly needed for jumbo support on RX. This change
> >> fixes the issue by only enabling scatter for NICs supporting it.
> >>
> >> Reported-by: Louis Peens
In real-world vSwitch deployments handling a few thousand flows,
the EMC is quickly saturated, so its optimal use is critical to
reach the highest packet forwarding speed of the vSwitch.
EMC lookup is initiated based on the hash value of the packet.
In case the packet does not already have a
On Wed, Apr 11, 2018 at 03:54:24PM -0500, Terry Wilson wrote:
> On Wed, Apr 11, 2018 at 12:52 PM, Flavio Leitner wrote:
> > On Wed, Apr 11, 2018 at 07:23:07PM +0200, Timothy Redaelli wrote:
> >> On Tue, 10 Apr 2018 15:20:54 -0700
> >> Ben Pfaff wrote:
> >>
> >> >
From: Tonghao Zhang
This patch allows users to set the dpdk-bond mode,
such as round_robin, active_backup, balance, and so on.
ovs-vsctl add-port br0 dpdk0 -- \
set Interface dpdk0 type=dpdk \
options:dpdk-devargs=:06:00.0,:06:00.1
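A hedged sketch of how a mode might be selected on top of the command above; the option key `dpdk-bond-mode` is hypothetical, chosen for illustration only, so check the patch for the real name before using it:

```shell
# "dpdk-bond-mode" is a hypothetical key for illustration; the patch
# defines the actual option name.
ovs-vsctl add-port br0 dpdk0 -- \
    set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=:06:00.0,:06:00.1 \
    options:dpdk-bond-mode=active_backup
```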
From: Tonghao Zhang
Extend the function so that, when looking up a dpdk netdev
by port id, if the port id is a slave port id, its master
device is returned.
The patch changes the function 'netdev_dpdk_lookup_by_port_id'.
Signed-off-by: Tonghao Zhang
From: Tonghao Zhang
This patch implements most of the dpdk-bond support.
vswitchd tries to parse the devargs as a dpdk-bond device.
On success, it creates a bond device and adds the slave ports
to it. The bond device id is then set to dev->port_id
as for a normal interface.
* check
From: Tonghao Zhang
The netdev_dpdk_bond struct will be a member of the netdev_dpdk struct,
and its init/uninit will be done in common_construct/destruct.
By default, the bond device uses active-backup mode.
Signed-off-by: Tonghao Zhang
---
From: Tonghao Zhang
The bond device in dpdk-17.11 does not support setting mtu,
but dpdk upstream supports it now. For more information, see:
http://dpdk.org/browse/dpdk/commit/?id=55b58a7374554cd1c86f4a13a0e2f54e9ba6fe4d
This patch allows creating bond devices which
From: Tonghao Zhang
If users set the interface options with multiple PCI addresses or device
names separated by ',', we try to parse them as dpdk-bond args.
For example, set an interface as:
ovs-vsctl add-port br0 dpdk0 -- \
set Interface dpdk0 type=dpdk \
From: Tonghao Zhang
The bond of openvswitch does not have good performance. In some
cases we would recommend that you use Linux bonds instead
of Open vSwitch bonds. In the userspace datapath, we want to use
bonding to improve bandwidth. DPDK has implemented it as a library.
These
Hi Marcelo,
Apologies. It wasn't clear that you had actually hands on experience of the
issue.
Regards,
Billy.
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of O Mahony, Billy
> Sent: Wednesday, April 11, 2018 10:32
Acked-by: Daniel Alvarez
Thanks Han! Everything LGTM and the tests pass okay against current master.
On Thu, Apr 5, 2018 at 2:51 AM, Han Zhou wrote:
> Address sets are automatically generated from corresponding port
> groups, and can be used directly in
Any feedback on this patch?
On 09/02/18 15:42, Eelco Chaudron wrote:
This patch will make sure VXLAN tunnels with and without the group-based
policy (GBP) option enabled cannot coexist on the same
destination UDP port.
In theory, VXLAN tunnels with and without GBP enabled can be
multiplexed on
Acked-by: Daniel Alvarez
Thanks Han! Everything LGTM and the tests pass okay against current master.
On Thu, Apr 5, 2018 at 2:51 AM, Han Zhou wrote:
> This patch enables using port group names in ACL match conditions.
> Users can create a port group in
On Wed, 11 Apr 2018 12:43:44 -0700
Guru Shetty wrote:
> On 11 April 2018 at 11:03, Timothy Redaelli
> wrote:
>
> > On Wed, 11 Apr 2018 10:05:53 -0700
> > Guru Shetty wrote:
> >
> > > On 22 December 2017 at 07:00, Timothy Redaelli
> > >
Hi Simon,
> -Original Message-
> From: Simon Horman [mailto:simon.hor...@netronome.com]
> Sent: Thursday, April 12, 2018 5:13 PM
> To: Chris Mi
> Cc: d...@openvswitch.org; Roi Dayan ; Paul Blakey
>
> Subject: Re: [ovs-dev 0/2]
On Tue, Apr 10, 2018 at 02:18:07PM +0900, Chris Mi wrote:
> This patchset adds the offloading support of multiple outputs.
>
> The first patch makes the action order consistent. In the previous
> implementation, the action order is lost when offloading. If there
> is only one output, there is on
On 4/12/2018 4:20 PM, Simon Horman wrote:
On 12 April 2018 at 09:29, Chris Mi wrote:
A reminder.
Thanks Chris, this is on my todo list.
Thanks for your help, Simon.
On 12 April 2018 at 09:29, Chris Mi wrote:
> A reminder.
>
Thanks Chris, this is on my todo list.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
On 11.04.2018 20:45, Jan Scheurich wrote:
> Hi Ilya,
>
> I would not say this is expected behavior.
>
> It seems that you are executing on a somewhat slower system (tsc clock seems
> to be 100/us = 0.1 GHz) and that, even with only 5 lines logged before and
> after, the logging output is
A reminder.
Thanks,
Chris
On 4/10/2018 1:18 PM, Chris Mi wrote:
This patchset adds the offloading support of multiple outputs.
The first patch makes the action order consistent. In the previous
implementation, the action order is lost when offloading. If there
is only one output, there is on
On 11/04/2018 14:53, Aaron Conole wrote:
Tiago Lam writes:
When explaining how to add vhost-user ports to a guest, using
libvirt, point to the qemu-system-x86_64 binary by default, instead of
using qemu-kvm. The latter has been made obsolete and dropped from a
number
On 11/04/2018 15:03, Stephen Finucane wrote:
On Wed, 2018-04-11 at 09:54 -0400, Aaron Conole wrote:
Tiago Lam writes:
When explaining how to add vhost-user ports to a guest, using
libvirt, the following piece of configuration is used:
On Tue, Apr 10, 2018 at 6:21 PM, Han Zhou wrote:
>
>
>
> On Tue, Apr 10, 2018 at 5:04 PM, Ben Pfaff wrote:
> >
> > On Fri, Apr 06, 2018 at 02:40:21PM -0700, Han Zhou wrote:
> > > On Fri, Apr 6, 2018 at 1:54 PM, Ben Pfaff wrote:
> > > >
> > > >
Hi Ben,
Thanks for reviewing it.
Will update the documentation and NEWS item soon.
Regards,
Nitin
-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Sunday, April 01, 2018 6:06 AM
To: Nitin Katiyar
Cc: d...@openvswitch.org
Subject: Re: [ovs-dev]