On 2019-07-01 1:52 PM, Ilya Maximets wrote:
> NEWS update was missed while updating docs for dynamic Flow API.
> Since this is a user visible change, it should be mentioned here.
>
> Fixes: d74ca2269e36 ("dpctl: Update docs about dump-flows and HW offloading.")
> Signed-off-by: Ilya Maximets
On Fri, Jun 28, 2019 at 2:18 PM Dumitru Ceara wrote:
> On Fri, Jun 14, 2019 at 2:38 PM wrote:
> >
> > From: Numan Siddique
> >
> > With the commit [1], the routing for the provider logical switches
> > connected to a router is centralized on the master gateway chassis
> > (if the option -
Hi, Eli.
Did you have a chance to test this?
Best regards, Ilya Maximets.
On 19.06.2019 12:16, Ilya Maximets wrote:
> 'mask' must be checked first before configuring key in flower.
>
> CC: Eli Britstein
> Fixes: 0b0a84783cd6 ("netdev-tc-offloads: Support match on priority tags")
>
Ack
On 7/1/2019 1:52 PM, Ilya Maximets wrote:
> NEWS update was missed while updating docs for dynamic Flow API.
> Since this is a user visible change, it should be mentioned here.
>
> Fixes: d74ca2269e36 ("dpctl: Update docs about dump-flows and HW offloading.")
> Signed-off-by: Ilya Maximets
>
On 30.06.2019 7:47, Eli Britstein wrote:
> This patch breaks ovs-dpctl dump-flows, when using TC offloads (kernel).
>
> I added a print in netdev_flow_dump_create, and flow_api is NULL when
> invoking ovs-dpctl dump-flows.
>
> I think new netdev objects are created for the ports (netdev_open),
On 7/1/2019 1:13 PM, Ilya Maximets wrote:
> On 30.06.2019 7:47, Eli Britstein wrote:
>> This patch breaks ovs-dpctl dump-flows, when using TC offloads (kernel).
>>
>> I added a print in netdev_flow_dump_create, and flow_api is NULL when
>> invoking ovs-dpctl dump-flows.
>>
>> I think new netdev
On 7/1/2019 1:46 PM, Ilya Maximets wrote:
> On 01.07.2019 13:24, Eli Britstein wrote:
>> On 7/1/2019 1:13 PM, Ilya Maximets wrote:
>>> On 30.06.2019 7:47, Eli Britstein wrote:
This patch breaks ovs-dpctl dump-flows, when using TC offloads (kernel).
I added a print in
Bleep bloop. Greetings Damjan Skvarc, I am a robot and I have tried out your
patch.
Thanks for your contribution.
I encountered some error that I wasn't expecting. See the details below.
git-am:
fatal: patch fragment without header at line 6: @@ -581,8 +582,9 @@
ovsdb_idl_destroy(struct
On Thu, Jun 27, 2019 at 1:13 PM Kevin Traynor wrote:
> Add documentation about vhost tx retries and external
> configuration that can help reduce/avoid them.
>
> Signed-off-by: Kevin Traynor
> Acked-by: Eelco Chaudron
> Acked-by: Flavio Leitner
> ---
>
While checking unit tests with the valgrind option (make check-valgrind) I
have noticed several memory leaks of the following format:
==20019== 13,883 (296 direct, 13,587 indirect) bytes in 1 blocks are
definitely lost in loss record 346 of 346
==20019==    at 0x4C2FB55: calloc (in
NEWS update was missed while updating docs for dynamic Flow API.
Since this is a user visible change, it should be mentioned here.
Fixes: d74ca2269e36 ("dpctl: Update docs about dump-flows and HW offloading.")
Signed-off-by: Ilya Maximets
---
NEWS | 2 ++
1 file changed, 2 insertions(+)
diff
On 01.07.2019 13:24, Eli Britstein wrote:
>
> On 7/1/2019 1:13 PM, Ilya Maximets wrote:
>> On 30.06.2019 7:47, Eli Britstein wrote:
>>> This patch breaks ovs-dpctl dump-flows, when using TC offloads (kernel).
>>>
>>> I added a print in netdev_flow_dump_create, and flow_api is NULL when
>>>
Damjan Skvarc writes:
> Hm, to tell the truth, I don't know how to react to this report.
> - I made a slight change on MY LOCAL OVS FORK
> - created a patch file (git format-patch -1 -s -n)
> - prepared a mail (according to the documentation)
> - and sent it to the dev list.
> probably a problem
The patch introduces experimental AF_XDP support for OVS netdev.
AF_XDP, the Address Family of the eXpress Data Path, is a new Linux socket
type built upon eBPF and XDP technology. It aims to have performance
comparable to DPDK while cooperating better with the existing kernel
networking stack.
The patch adds the basic spin lock functions:
ovs_spin_{lock, try_lock, unlock, init, destroy}.
OS X does not support pthread spin locks, so make it
Linux-only.
Signed-off-by: William Tu
---
include/openvswitch/thread.h | 22 ++
lib/ovs-thread.c | 31
On Thu, Jun 27, 2019 at 08:24:46PM +0300, Ilya Maximets wrote:
> On 26.06.2019 21:27, Ben Pfaff wrote:
> > On Tue, Jun 25, 2019 at 01:12:11PM +0300, Ilya Maximets wrote:
> >> 'netdev' datapath is implemented within ovs-vswitchd process and can
> >> not exist without it, so it should be gracefully
v4:
- 1/2 New patch: Move vhost tx retries doc to a separate section (David)
- 2/3
-- Changed tx_retries to be a custom stat for vhost (Ilya)
-- Added back in MIN() that was dropped in v2, as in retesting I
saw it is needed when the retry limit is reached to prevent
an accounting error
--
vhost tx retries may occur, and it can be a sign that
the guest is not optimally configured.
Add a custom stat so a user will know if vhost tx retries are
occurring and hence give a hint that guest config should be
examined.
Signed-off-by: Kevin Traynor
---
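Assuming the custom stat surfaces in the interface's statistics column under a key like tx_retries, a user could check for retries with something along these lines (interface name and exact key are assumptions of this sketch):

```shell
# Hypothetical invocation; "vhost-user0" and the stat key are assumptions.
ovs-vsctl get Interface vhost-user0 statistics:tx_retries
```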
On Mon, Jul 1, 2019 at 9:45 AM wrote:
>
> From: Numan Siddique
>
> With the commit [1], the routing for the provider logical switches
> connected to a router is centralized on the master gateway chassis
> (if the option reside-on-redirect-chassis is set). When the
> failover happens and a
On Mon, Jul 1, 2019 at 9:44 AM wrote:
>
> From: Numan Siddique
>
> The present code which sets the Port_Binding.nat_addresses
> can be simplified. This patch does this. This would help in
> upcoming commits to set the nat_addresses column with the
> mac and IPs of distributed logical router ports
The patch adds ip6gre support. Tunnel type 'ip6gre' with packet_type=legacy_l2
is a layer 2 GRE tunnel over IPv6, carrying inner Ethernet packets
encapsulated with a GRE header and an outer IPv6 header. Encapsulation of
layer 3 packets over IPv6 GRE (ip6gre) is not supported yet. I tested it by
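A tunnel of the kind described above could be configured roughly as follows (bridge name, port name, and remote address are assumptions of this sketch):

```shell
# Hypothetical setup; br0, gre6_0 and the remote address are assumptions.
ovs-vsctl add-port br0 gre6_0 -- set interface gre6_0 \
    type=ip6gre options:packet_type=legacy_l2 \
    options:remote_ip=2001:db8::2
```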
On Mon, Jul 01, 2019 at 12:24:38PM +0200, Damjan Skvarc wrote:
> While checking unit tests with the valgrind option (make check-valgrind) I
> have noticed several memory leaks of the following format:
Thanks. I applied this to master.
I noticed that the call to ovsdb_idl_db_clear() could be moved
On 7/1/2019 3:29 PM, William Tu wrote:
On Mon, Jul 1, 2019 at 3:10 PM Gregory Rose wrote:
On 7/1/2019 12:45 PM, William Tu wrote:
The patch adds ip6gre support. Tunnel type 'ip6gre' with packet_type=
legacy_l2 is a layer 2 GRE tunnel over IPv6, carrying inner ethernet packets
and encap with
Bleep bloop. Greetings Kevin Traynor, I am a robot and I have tried out your
patch.
Thanks for your contribution.
I encountered some error that I wasn't expecting. See the details below.
checkpatch:
WARNING: Line is 81 characters long (recommended limit is 79)
#94 FILE:
On Mon, Jul 1, 2019 at 3:10 PM Gregory Rose wrote:
>
>
>
> On 7/1/2019 12:45 PM, William Tu wrote:
> > The patch adds ip6gre support. Tunnel type 'ip6gre' with packet_type=
> > legacy_l2 is a layer 2 GRE tunnel over IPv6, carrying inner ethernet packets
> > and encap with GRE header with outer
Hi Damijan, I noticed that we have some inconsistent spelling of your
first name in the Git history. Specifically, I see both "Damjan" and
"Damijan" in different places in history. While it's not possible to
fix previous entries in the Git history, I want to make sure that we're
spelling your
On 7/1/2019 2:21 PM, Ilya Maximets wrote:
> Hi, Eli.
> Did you have a chance to test this?
Yes, sorry for the delay. It works fine (though I didn't test QinQ, only
native/single-tagged).
Reviewed-by: Eli Britstein
> Best regards, Ilya Maximets.
>
> On 19.06.2019 12:16, Ilya Maximets wrote:
On 7/1/2019 12:45 PM, William Tu wrote:
The patch adds ip6gre support. Tunnel type 'ip6gre' with packet_type=
legacy_l2 is a layer 2 GRE tunnel over IPv6, carrying inner ethernet packets
and encap with GRE header with outer IPv6 header. Encapsulation of layer 3
packet over IPv6 GRE, ip6gre,
vhost tx retries can provide some mitigation against
dropped packets due to a temporarily slow guest/limited queue
size for an interface, but on the other hand when a system
is fully loaded those extra cycles retrying could mean
packets are dropped elsewhere.
Up to now max vhost tx retries have
vhost tx retry is applicable to vhost-user and vhost-user-client,
but was in the section that compares them. Also, moved it further
down the doc, as it is preferable to have more fundamental info about
vhost nearer the top.
Fixes: 6d6513bfc657 ("doc: Add info on vhost tx retries.")
Reported-by: David Marchand
From: Numan Siddique
This patch handles sending GARPs for
- router port IPs of a distributed router port
- router port IPs of a router port which belongs to gateway router
(with the option redirect-chassis set in Logical_Router.options)
Signed-off-by: Numan Siddique
---
On Fri, Jun 28, 2019 at 1:55 PM Dumitru Ceara wrote:
> On Fri, Jun 14, 2019 at 2:37 PM wrote:
> >
> > From: Numan Siddique
> >
> > The present code which sets the Port_Binding.nat_addresses
> > can be simplified. This patch does this. This would help in
> > upcoming commits to set the
From: Numan Siddique
The v1 of the patch series had just one patch which handled sending
GARPs for the logical router ports with the option
reside-on-redirect-chassis set.
The v2+ has 3 patches in total.
Patch 1 is a simple refactor in ovn-northd code which sets the
Port_Binding.nat_addresses
From: Numan Siddique
If the ovn-controller main loop takes more than 5 seconds (if there are lots
of logical flows) before it calls poll_block(), it causes poll_block() to
wake up immediately, since the rconn module has to send an echo request.
With incremental processing, this is not an issue
From: Numan Siddique
The present code which sets the Port_Binding.nat_addresses
can be simplified. This patch does this. This would help in
upcoming commits to set the nat_addresses column with the
mac and IPs of distributed logical router ports and logical
router ports with
From: Numan Siddique
With the commit [1], the routing for the provider logical switches
connected to a router is centralized on the master gateway chassis
(if the option reside-on-redirect-chassis is set). When the
failover happens and a standby gateway chassis becomes master,
it should send