> > Hi,
> >
> > Here is joint work from Mellanox and Napatech to enable flow HW
> > offload via the DPDK generic flow interface (rte_flow).
>
> Hi folks, I feel Mellanox/Netronome have reached the point where HWOL can
> be introduced to OVS DPDK pending performance review.
Apologies, Mel
The run-time performance of PMDs is often difficult to understand and
troubleshoot. The existing PMD statistics counters only provide a
coarse-grained average picture. At packet rates of several Mpps, sporadic
drops of packet bursts happen at sub-millisecond time scales and are
impossible to capture
This patch enhances dpif-netdev-perf to detect iterations with
suspicious statistics according to the following criteria:
- iteration lasts longer than US_THR microseconds (default 250).
This can be used to capture events where a PMD is blocked or
interrupted for such a period of time that the
This patch instruments the dpif-netdev datapath to record detailed
statistics of what is happening in every iteration of a PMD thread.
The collection of detailed statistics can be controlled by a new
Open_vSwitch configuration parameter "other_config:pmd-perf-metrics".
By default it is disabled. T
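Assuming the parameter name quoted above, enabling the collection would look like this (a hedged sketch; it requires a running ovs-vswitchd):

```shell
# Enable detailed PMD performance metrics collection
# ("other_config:pmd-perf-metrics" as named in the commit message):
ovs-vsctl set Open_vSwitch . other_config:pmd-perf-metrics=true
```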
If the caller provides a non-NULL qfill pointer and the netdev
implementation supports reading the rx queue fill level, the rxq_recv()
function returns the remaining number of packets in the rx queue after
reception of the packet burst to the caller. If the implementation does
not support this, i
Currently, when fragmented packets are to be transmitted into a tunnel,
base_flow->nw_frag, which was initially non-zero at reception, is not
reset to zero when the base_flow and flow are rewritten
as part of the emulated tnl_push action in the ofproto-dpif-xlate
module.
Because of this when fragmente
Two mistakes here:
- Automatic assignment of Rx queues to PMD threads has always existed -
it was simply switched from round-robin allocation to
utilization-based allocation
- The above, along with the 'pmd-rxq-rebalance' command, was added in
OVS 2.9.0 - not OVS 2.8.0 - while the 'pmd-rxq-s
On 04/20/2018 10:24 AM, Stephen Finucane wrote:
> Two mistakes here:
>
> - Automatic assignment of Rx queues to PMD threads has always existed -
> it was simply switched from round-robin allocation to
> utilization-based allocation
> - The above, along with the 'pmd-rxq-rebalance' command, was
Acked-by: Billy O'Mahony
> -Original Message-
> From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
> Sent: Thursday, April 19, 2018 6:41 PM
> To: d...@openvswitch.org
> Cc: ktray...@redhat.com; Stokes, Ian ;
> i.maxim...@samsung.com; O Mahony, Billy ; Jan
> Scheurich
> Subject: [PAT
On 19/04/18 19:25, Kevin Traynor wrote:
On 04/19/2018 03:32 PM, Pablo Cascón wrote:
On 18/04/18 18:35, Kevin Traynor wrote:
On 04/18/2018 03:41 PM, Pablo Cascón wrote:
On 13/04/18 19:45, Kevin Traynor wrote:
On 04/13/2018 04:20 PM, Stokes, Ian wrote:
Currently to RX jumbo packets fails for N
DPDK mempools are freed when they are no longer needed.
This can happen when a port is removed or a port's mtu
is reconfigured so that a new mempool is used.
It is possible that an mbuf is attempted to be returned
to a freed mempool from NIC Tx queues and this can lead
to a segfault.
In order to
On 04/13/2018 06:25 PM, Kevin Traynor wrote:
> There is debug when a new mempool is created, but not
> when it is reused or freed. Add these as it is very
> difficult to debug mempool issues from logs without
> them.
>
Hi Ian,
I just sent backports for 2.6/2.7/2.8 branches for the 1/2 patch as i
On Thu, Apr 19, 2018 at 08:07:33PM -0700, Gregory Rose wrote:
> On 4/19/2018 4:18 PM, Flavio Leitner wrote:
> > On Tue, Apr 17, 2018 at 12:34:08PM -0700, Greg Rose wrote:
> > > On RHEL 7.x kernels we observe a panic induced by a paging error
> > > when the timer kicks off a job that subsequently ac
Hi Ben,
Thanks for your reply.
I'm using this repo because it supports OVN SFC; the official one doesn't,
does it?
As for the old build directory, I don't know it, since my installation is
an OPNFV deployment using Apex.
As you may know, Apex uses RDO TripleO OpenStack.
I'd appreciate it if you can guide me h
The new OVS-DPDK testsuite, which can be launched via `make check-dpdk`,
tests OVS using a DPDK datapath. The testsuite already contains these
initial tests:
1. EAL init
2. Add standard DPDK PHY port
3. Add vhost-user-client port
Signed-off-by: Marcin Rybka
---
Ver.5 updates:
- updated documentation a
Hello Team,
I am using the Core Network emulator to simulate wired networks, but I
don't want to use Linux bridges, so I decided to use OVS.
My questions are:
1. How is OVS currently being used? Is it used by a network simulator or
something else?
2. How can I use OVS switches with Core Netw
Hello Rakesh,
It is unlikely anyone on this list knows what a "Core Network emulator"
is. OVS is a production-grade virtual switch used in real environments.
mininet is a popular "simulator" that uses OVS, but you won't get answers
around that topic here. You should head to a mininet mailing
On 4/20/2018 5:39 AM, Eric Garver wrote:
On Thu, Apr 19, 2018 at 08:07:33PM -0700, Gregory Rose wrote:
Fantastic, I'll test this and whip up a patch.
Thanks!
- Greg
I'll be on the lookout for it. Thanks.
[..]
Eric,
with the above patch I'm getting this on a stock RHEL 7.4 kernel:
[ 599.6
Currently, RX of jumbo packets fails for NICs not supporting scatter.
Scatter is not strictly needed for jumbo RX support. This change fixes
the issue by only enabling scatter for NICs known to need it to
support jumbo RX. Add a quirk for "igb" while the PMD is fixed.
Reported-by: Louis Peens
Sign
On 4/20/2018 9:03 AM, Gregory Rose wrote:
On 4/20/2018 5:39 AM, Eric Garver wrote:
On Thu, Apr 19, 2018 at 08:07:33PM -0700, Gregory Rose wrote:
Fantastic, I'll test this and whip up a patch.
Thanks!
- Greg
I'll be on the lookout for it. Thanks.
[..]
Eric,
with the above patch I'm getting
Report gateway chassis in decreasing priority order when running the
ovn-nbctl show sub-command. Add a get_ordered_gw_chassis_prio_list
routine to sort gw chassis according to the configured priority.
Signed-off-by: Lorenzo Bianconi
---
ovn/utilities/ovn-nbctl.c | 64 +---
On RHEL 7.x kernels we observe a panic induced by a paging error
when the timer kicks off a job that subsequently accesses memory
that belonged to the openvswitch kernel module but was since
unloaded - thus the paging error.
The panic can be induced on any RHEL 7.x kernel with the following test:
On Fri, Apr 20, 2018 at 09:56:53AM -0700, Greg Rose wrote:
> On RHEL 7.x kernels we observe a panic induced by a paging error
> when the timer kicks off a job that subsequently accesses memory
> that belonged to the openvswitch kernel module but was since
> unloaded - thus the paging error.
>
> Th
The Linux 4.4.119 kernel (and perhaps others) from kernel.org
backports some dst_cache code that breaks the openvswitch kernel
due to a duplicated name "dst_cache_destroy". For most cases the
"USE_UPSTREAM_TUNNEL" covers this but in this case the dst_cache
feature needs to be separated out.
Add t
On 4/20/2018 11:10 AM, Flavio Leitner wrote:
On Fri, Apr 20, 2018 at 09:56:53AM -0700, Greg Rose wrote:
On RHEL 7.x kernels we observe a panic induced by a paging error
when the timer kicks off a job that subsequently accesses memory
that belonged to the openvswitch kernel module but was since
u
On 4/20/2018 11:15 AM, Gregory Rose wrote:
On 4/20/2018 11:10 AM, Flavio Leitner wrote:
On Fri, Apr 20, 2018 at 09:56:53AM -0700, Greg Rose wrote:
On RHEL 7.x kernels we observe a panic induced by a paging error
when the timer kicks off a job that subsequently accesses memory
that belonged to
eval does not handle the whitespace that was introduced in commit
79c7961b8b3c4b7ea0251dea2ffacfa84c84fecb for starting clustered OVN DBs.
Hence, we need to handle it explicitly.
e.g. /usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=192.168.220.101
--db-nb-create-insecure-remote=yes \
--
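The word-splitting behavior of eval can be demonstrated in isolation; this is a minimal sketch of the general problem, not the ovn-ctl code itself:

```shell
#!/bin/sh
# An option value containing a space: when expanded unquoted inside
# eval, it is split into two words; quoted, it stays one argument.
opt='--db-nb-addr=192.168.220.101 extra'

count_args() { echo $#; }

unquoted=$(eval count_args $opt)      # splits on whitespace: 2 words
quoted=$(eval count_args \"\$opt\")   # quoting preserves it: 1 word

echo "$unquoted $quoted"              # prints "2 1"
```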
In the case where "use_names" is set (e.g. in an interactive session) to
show the port and table names when ovs-ofctl is run with the snoop
command, ovs-ofctl would get stuck in an endless loop inside the
"table_iterator_next" function's while loop checking
"while (ti->send_xid != recv_xid)".
This would ha