Hi, Jan:
When I tested dp_hash with the new patch, vswitchd was killed by a
segmentation fault under some conditions:
1. Add a group with no buckets; then winner will be NULL.
2. Add buckets with weight 0; then winner will also be NULL.
I made a small modification to the patch; could you help check it?
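The two crashes above both come from dereferencing the selected "winner" bucket without a NULL check. A minimal Python model (illustrative only, not the OVS C code) of weighted bucket selection shows why the caller must handle the empty and all-zero-weight cases:

```python
import random

def pick_winner(buckets):
    """Pick a bucket with probability proportional to its weight.

    Returns None when there are no buckets or every weight is zero --
    exactly the two conditions reported to crash vswitchd when the
    result was used without a check.
    """
    total = sum(w for _, w in buckets)
    if not buckets or total == 0:
        return None
    r = random.uniform(0, total)
    acc = 0
    for bucket_id, weight in buckets:
        acc += weight
        if r <= acc:
            return bucket_id
    return buckets[-1][0]  # guard against float rounding

# The two failing cases from the report: the caller must handle None.
assert pick_winner([]) is None                      # no buckets
assert pick_winner([("b1", 0), ("b2", 0)]) is None  # all weights zero
assert pick_winner([("b1", 5)]) == "b1"
```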
On Tue, Apr 10, 2018 at 5:04 PM, Ben Pfaff wrote:
>
> On Fri, Apr 06, 2018 at 02:40:21PM -0700, Han Zhou wrote:
> > On Fri, Apr 6, 2018 at 1:54 PM, Ben Pfaff wrote:
> > >
> > > Thanks for working on making OVN faster and scale better.
> > >
> > > I see what you mean
Not yet; I will check with Justin when he gets back from PTO
On 4/10/18, 4:50 PM, "Ben Pfaff" wrote:
Did you find anyone to take a look?
On Fri, Mar 09, 2018 at 07:29:06PM +, Darrell Ball wrote:
> Windows folks have also been looking at this, as this file is
On Fri, Apr 06, 2018 at 02:40:21PM -0700, Han Zhou wrote:
> On Fri, Apr 6, 2018 at 1:54 PM, Ben Pfaff wrote:
> >
> > Thanks for working on making OVN faster and scale better.
> >
> > I see what you mean about how nb_cfg can be a scale problem. Really,
> > each hypervisor only cares
On 4/6/2018 7:35 AM, Gregory Rose wrote:
On 4/4/2018 10:23 AM, Ben Pfaff wrote:
On Thu, Mar 29, 2018 at 04:46:09PM -0700, Ashish Varma wrote:
Added test cases for encap, decap, replace, and forwarding of NSH
packets.
Also added a Python script, 'sendpkt.py', to send hex Ethernet frames.
Did you find anyone to take a look?
On Fri, Mar 09, 2018 at 07:29:06PM +, Darrell Ball wrote:
> Windows folks have also been looking at this, as this file is mostly a common
> port from bsd.
> I’ll check with Sai
>
> On 3/9/18, 11:22 AM, "ovs-dev-boun...@openvswitch.org on behalf of Ben
On Mon, Apr 09, 2018 at 12:00:23PM +0200, Lorenzo Bianconi wrote:
> Changes since v1:
> - squashed ACTION_OPCODE_ICMP4 and ACTION_OPCODE_ICMP6 in ACTION_OPCODE_ICMP
> - updated ovn-northd manpage
> - added a NEWS item that describes the new features
>
> Lorenzo Bianconi (2):
> OVN: add icmp6{}
On Thu, Apr 05, 2018 at 12:20:27PM +, Manohar Krishnappa Chidambaraswamy
wrote:
> Problem:
>
> In user-space tunneling implementation, tnl_arp_snoop() snoops only ARP
> *reply* packets to resolve tunnel nexthop IP addresses to MAC addresses.
> Normally the ARP requests are
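Since both ARP requests and replies carry a valid sender IP/MAC pair, the fix is to learn from either opcode. A sketch of that logic (hypothetical names, not the actual tnl_arp_snoop() code):

```python
ARP_OP_REQUEST, ARP_OP_REPLY = 1, 2

def tnl_neigh_snoop(arp_packet, neigh_cache):
    """Learn a next-hop IP -> MAC binding from any ARP packet.

    Snooping only *replies* (the original behavior) misses bindings
    that requests would have provided, since both opcodes carry the
    sender's IP and MAC.
    """
    if arp_packet["op"] not in (ARP_OP_REQUEST, ARP_OP_REPLY):
        return False
    neigh_cache[arp_packet["sender_ip"]] = arp_packet["sender_mac"]
    return True

cache = {}
# A request (not just a reply) now populates the neighbor cache.
tnl_neigh_snoop({"op": ARP_OP_REQUEST,
                 "sender_ip": "10.0.0.1",
                 "sender_mac": "aa:bb:cc:dd:ee:01"}, cache)
assert cache["10.0.0.1"] == "aa:bb:cc:dd:ee:01"
```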
On Mon, Apr 09, 2018 at 03:03:03PM -0500, twil...@redhat.com wrote:
> From: Terry Wilson
>
> This adds multi-column index support for the Python IDL that is
> similar to the feature in the C IDL.
>
> Signed-off-by: Terry Wilson
Thanks for working on
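The idea of a multi-column index can be modeled as a map keyed on a tuple of column values, kept sorted so that prefix and range lookups are cheap. A simplified sketch (illustrative class and method names, not the actual Python IDL API):

```python
import bisect

class MultiColumnIndex:
    """Keep rows sorted by a tuple of column values for fast lookups."""

    def __init__(self, columns):
        self.columns = columns
        self._keys = []   # sorted list of key tuples
        self._rows = {}   # key tuple -> row

    def add(self, row):
        key = tuple(row[c] for c in self.columns)
        bisect.insort(self._keys, key)
        self._rows[key] = row

    def lookup(self, *values):
        """Return rows whose leading index columns equal `values`."""
        lo = bisect.bisect_left(self._keys, values)
        out = []
        for key in self._keys[lo:]:
            if key[:len(values)] != values:
                break
            out.append(self._rows[key])
        return out

idx = MultiColumnIndex(["datapath", "name"])
idx.add({"datapath": "dp1", "name": "p1", "uuid": "a"})
idx.add({"datapath": "dp1", "name": "p2", "uuid": "b"})
idx.add({"datapath": "dp2", "name": "p1", "uuid": "c"})
# Prefix lookup on the first column, or exact lookup on both.
assert [r["uuid"] for r in idx.lookup("dp1")] == ["a", "b"]
assert idx.lookup("dp2", "p1")[0]["uuid"] == "c"
```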
On Mon, Apr 09, 2018 at 05:18:55PM +0200, Timothy Redaelli wrote:
> Currently the code relies on the standard 6-byte octets, but the
> documentation uses wrong 7-byte octets.
> This commit fixes the documentation to use the correct 6-byte octet
> syntax.
>
> Fixes: 5e7588186839
On Mon, Apr 09, 2018 at 12:07:20PM -0500, Mark Michelson wrote:
> Stopwatch was implemented using a Unix-only pipe structure. This commit
> changes to using a guarded list and latch in order to pass data between
> threads.
>
> Signed-off-by: Mark Michelson
Thanks, applied
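The portable pattern the commit describes, producer threads appending measurements to a shared list and signaling a consumer, can be sketched with Python's queue and event primitives standing in for OVS's guarded list and latch (a model, not the stopwatch code):

```python
import queue
import threading

samples = queue.Queue()          # stands in for the guarded list
data_ready = threading.Event()   # stands in for the latch

def record_sample(name, value):
    """Producer side: enqueue a measurement and signal the consumer."""
    samples.put((name, value))
    data_ready.set()

collected = []

def consumer():
    # Consumer side: block on the latch, then drain the guarded list.
    data_ready.wait()
    while not samples.empty():
        collected.append(samples.get())

t = threading.Thread(target=consumer)
t.start()
record_sample("poll_loop", 125)
t.join(timeout=5)
assert collected == [("poll_loop", 125)]
```

Unlike a Unix pipe, both primitives here are portable across platforms, which is the point of the change.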
> Currently, RX of jumbo packets fails for NICs that do not support scatter.
> Scatter is not strictly needed for jumbo support on RX. This change fixes
> the issue by only enabling scatter for NICs that support it.
>
> Reported-by: Louis Peens
> Signed-off-by: Pablo Cascón
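The fix's decision logic can be sketched as checking the device's advertised RX offload capabilities before requesting scatter (illustrative names and capability bit, not the DPDK API):

```python
RX_OFFLOAD_SCATTER = 0x1  # illustrative capability bit

def choose_rx_config(mtu, dev_rx_offload_capa):
    """Enable scatter only when the NIC actually supports it.

    Before the fix, a jumbo MTU unconditionally requested scatter,
    so port setup failed on NICs without scatter support even though
    scatter is not strictly needed for jumbo RX.
    """
    needs_jumbo = mtu > 1500
    supports_scatter = bool(dev_rx_offload_capa & RX_OFFLOAD_SCATTER)
    return {
        "jumbo_frame": needs_jumbo,
        "enable_scatter": needs_jumbo and supports_scatter,
    }

# NIC without scatter: jumbo RX is still configured, scatter is not.
cfg = choose_rx_config(mtu=9000, dev_rx_offload_capa=0x0)
assert cfg == {"jumbo_frame": True, "enable_scatter": False}
# NIC with scatter support: both are enabled.
cfg = choose_rx_config(mtu=9000, dev_rx_offload_capa=RX_OFFLOAD_SCATTER)
assert cfg["enable_scatter"] is True
```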
> Subject: [PATCH v8 2/6] dpif-netdev: retrieve flow directly from the flow
> mark
>
> From: Yuanhan Liu
>
> So that we could skip some very costly CPU operations, including but not
> limited to miniflow_extract, EMC lookup, dpcls lookup, etc. Thus,
> performance could be
> The basic yet major part of this patch is to translate the "match"
> to rte flow patterns. Then we create an rte flow with MARK + RSS
> actions. Afterwards, all packets matching the flow will have the mark id
> in the mbuf.
>
> The reason RSS is needed is, for most NICs, a MARK only action
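The fast path the series describes amounts to a direct map from the NIC-assigned mark id to the datapath flow, bypassing packet parsing and the classifier lookups. A simplified model (not the dpif-netdev code; names are illustrative):

```python
mark_to_flow = {}  # mark id assigned at offload time -> datapath flow

def install_offloaded_flow(mark_id, flow):
    """Record the mark -> flow association when the hw flow is installed."""
    mark_to_flow[mark_id] = flow

def slow_path_classify(packet):
    # Stand-in for miniflow_extract + EMC/dpcls lookup.
    return {"actions": "default"}

def lookup(packet):
    """Fast path: if the mbuf carries a known mark, skip parsing and
    classifier lookups entirely; otherwise fall back to the slow path."""
    mark = packet.get("mark")
    if mark is not None and mark in mark_to_flow:
        return mark_to_flow[mark], "mark"
    return slow_path_classify(packet), "dpcls"

install_offloaded_flow(7, {"actions": "output:2"})
flow, path = lookup({"mark": 7})
assert (flow["actions"], path) == ("output:2", "mark")
flow, path = lookup({})  # an unmarked packet takes the classifier path
assert path == "dpcls"
```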
> Currently, the major trigger for hw flow offload is at upcall handling,
> which is actually in the datapath. Moreover, hw offload installation
> and modification are not that lightweight. Meaning, if many flows are
> being added or modified frequently, it could stall the datapath,
>
It's currently IPv4-only, but it's a good idea to add IPv6 support. I'll
put that on my to-do list.
It's also a good idea to warn about the potential perils of this
feature. I'll do that too.
I'll try to get to this soon; neither feature should be much work.
On Tue, Apr 10, 2018 at 10:13:24AM
On 04/09/2018 03:36 PM, Kevin Traynor wrote:
> On 04/06/2018 04:51 PM, Ilya Maximets wrote:
DPDK mempools are freed when they are no longer needed.
This can happen when a port is removed or a port's mtu is reconfigured so
that a new mempool is used.
It is possible that an
There is a debug log when a new mempool is created, but not
when it is reused or freed. Add these, as it is very
difficult to debug mempool issues from the logs without
them.
Signed-off-by: Kevin Traynor
---
lib/netdev-dpdk.c | 2 ++
1 file changed, 2 insertions(+)
diff --git
DPDK mempools are freed when they are no longer needed.
This can happen when a port is removed or a port's mtu
is reconfigured so that a new mempool is used.
It is possible that an mbuf will be returned to a freed mempool
from the NIC Tx queues, and this can lead to a segfault.
In order to
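The safe pattern for this class of bug is to defer destroying the mempool until the last outstanding mbuf has been returned, which can be sketched as a toy reference-counted pool (a model of the idea, not the netdev-dpdk fix itself):

```python
class Mempool:
    """Toy mempool: destroyed only when the last outstanding mbuf returns."""

    def __init__(self, name):
        self.name = name
        self.outstanding = 0
        self.unwanted = False   # port no longer needs it (e.g. MTU change)
        self.destroyed = False

    def alloc_mbuf(self):
        assert not self.destroyed
        self.outstanding += 1
        return self

    def return_mbuf(self):
        # e.g. the NIC Tx queue finally releases the buffer.
        self.outstanding -= 1
        self._maybe_destroy()

    def release(self):
        """Owner drops its reference; memory survives while mbufs do."""
        self.unwanted = True
        self._maybe_destroy()

    def _maybe_destroy(self):
        if self.unwanted and self.outstanding == 0:
            self.destroyed = True

pool = Mempool("ovs_mp_9000")
mbuf_owner = pool.alloc_mbuf()   # e.g. sitting in a NIC Tx queue
pool.release()                   # MTU reconfigured; old pool unwanted
assert not pool.destroyed        # still alive: an mbuf is outstanding
mbuf_owner.return_mbuf()
assert pool.destroyed            # safe to free only now
```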
No objections. Sounds useful for testing, and for situations where
the agent-address selection has been made by a higher controller.
Just one question: it seems like it is only IPv4: can an IPv6
agent-address be configured this way too?
I'll admit to this triggering bad memories of a situation
Home Office Implementation
April 17 - Interactive Webinar
Introduction:
Companies have noticed that being flexible about work hours and workspaces
brings them multiple benefits: greater productivity from their employees,
savings on resources and facilities, a better environment
On Tue, 10 Apr 2018 09:49:36 -0400
Aaron Conole wrote:
> From: Alan Pevec
>
> Default ownership[1] for config files is failing on an empty system:
> Running scriptlet: openvswitch-2.9.0-3.fc28.x86_64
> warning: user openvswitch does not exist -
On 10/04/18 14:49, Aaron Conole wrote:
> +%pre
> +getent group openvswitch >/dev/null || groupadd -r openvswitch
> +getent passwd openvswitch >/dev/null || \
> +useradd -r -g openvswitch -d / -s /sbin/nologin \
> +-c "Open vSwitch Daemons" openvswitch
> +
> +%if %{with dpdk}
> +getent
On 10/04/18 14:49, Aaron Conole wrote:
> From: Alan Pevec
>
> Default ownership[1] for config files is failing on an empty system:
> Running scriptlet: openvswitch-2.9.0-3.fc28.x86_64
> warning: user openvswitch does not exist - using root
> warning: group openvswitch
From: Alan Pevec
Default ownership[1] for config files is failing on an empty system:
Running scriptlet: openvswitch-2.9.0-3.fc28.x86_64
warning: user openvswitch does not exist - using root
warning: group openvswitch does not exist - using root
...
Required user/group
On 10/04/18 11:36, Pablo Cascón wrote:
Currently, RX of jumbo packets fails for NICs that do not support scatter.
Scatter is not strictly needed for jumbo support on RX. This change
fixes the issue by only enabling scatter for NICs that support it.
Reported-by: Louis Peens
> -Original Message-
> From: Aaron Conole [mailto:acon...@redhat.com]
> Sent: Monday, April 9, 2018 4:32 PM
> To: Mooney, Sean K
> Cc: d...@openvswitch.org; Stokes, Ian ; Kevin
> Traynor ; Ilya Maximets
> >> -Original Message-
> >> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> >> Sent: Monday, 29 January, 2018 09:35
> >> To: Jan Scheurich ; Venkatesan Pradeep
> >> ; Stokes, Ian
> >> ;
Currently, RX of jumbo packets fails for NICs that do not support scatter.
Scatter is not strictly needed for jumbo support on RX. This change
fixes the issue by only enabling scatter for NICs that support it.
Reported-by: Louis Peens
Signed-off-by: Pablo Cascón
> Tuesday, March 27, 2018 10:55 AM, Shahaf Shuler:
>
> Hi,
>
> Any comments on this version?
I should have some time to look at this today. I would also echo the request
for anyone else in the community who is interested to review.
Ian
>
> >
> > Hi,
> >
> > Here is a joint work from Mellanox and