At 2023-10-19 19:46:48, "Simon Horman" wrote:
>On Tue, Oct 17, 2023 at 11:25:28AM +0800, wenx05124...@163.com wrote:
>> From: wenxu
>>
>> There is a large-scope ct_lock held during new conn setup. The
>> ct_lock should be held only for conns ma
From: wenxu
There is a large-scope ct_lock held during new conn setup. The
ct_lock should be held only for the conns map insert and the
expiry rculist insert.
Signed-off-by: wenxu
---
lib/conntrack.c | 32
1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/lib
From: wenxu
There is a large-scope ct_lock held during new conn setup. The
ct_lock should be held only for the conns map insert and the
expiry rculist insert.
Signed-off-by: wenxu
---
lib/conntrack.c | 33 -
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/lib
From: wenxu
There is a large-scope ct_lock held during new conn setup. The
ct_lock should be held only for the conns map insert and the
expiry rculist insert.
Signed-off-by: wenxu
---
lib/conntrack.c | 42 ++
1 file changed, 30 insertions(+), 12 deletions(-)
diff --git
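The locking change described in these postings can be sketched in a simplified, self-contained form. The names (ct_lock, conns, conn_insert) and data structures below are illustrative stand-ins, not the actual OVS symbols:

```c
#include <pthread.h>
#include <stdint.h>

/* Sketch: per-connection setup runs with no lock held; ct_lock is taken
 * only for the shared-table insert (in OVS, also the expiry rculist
 * insert).  All names here are hypothetical. */
static pthread_mutex_t ct_lock = PTHREAD_MUTEX_INITIALIZER;

#define MAX_CONNS 64
static uint32_t conns[MAX_CONNS];
static int n_conns;

static void
conn_insert(uint32_t conn_key)
{
    /* Expensive new-connection setup (allocation, NAT resolution, ...)
     * would happen here, outside the lock. */

    pthread_mutex_lock(&ct_lock);   /* narrow critical section */
    conns[n_conns++] = conn_key;    /* conns map insert */
    /* expiry rculist insert would also go here */
    pthread_mutex_unlock(&ct_lock);
}
```

The design point is that expensive per-connection setup no longer serializes behind the global lock; only the shared-structure inserts do.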
At 2022-07-04 16:43:20, "Paolo Valerio" wrote:
>Hello wenxu,
>
>thanks for having a look at it.
>
>wenxu writes:
>
>> Hi Paolo,
>>
>> There are two small questions.
>> First, the ct_lock lock/unlock a
->czl.zone_limit_seq == conn->zone_limit_seq) {
    atomic_count_dec(&zl->czl.count);
}
BR
wenxu
At 2022-07-02 02:14:12, "Paolo Valerio" wrote:
>From: Gaetan Rivet
>
>This patch aims to replace the expiration lists as, due to the way
>they are used,
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd lockup.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
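A minimal sketch of the "random offset" idea described above, using a helper with hypothetical names (this is not the actual patch code): the clash-resolution search starts at a randomized position inside the NAT range instead of always at its low end.

```c
#include <stdint.h>

/* Sketch: derive the first candidate port from a random value so that
 * heavily loaded SNAT ranges are probed from different starting points
 * rather than everyone scanning from range_min. */
static uint16_t
first_nat_port(uint16_t range_min, uint16_t range_max, uint32_t rnd)
{
    /* Number of ports in the inclusive range. */
    uint32_t span = (uint32_t) range_max - range_min + 1;
    return range_min + (uint16_t) (rnd % span);
}
```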
From: wenxu
Remove the IP iterations, and just pick the IP address
with a hash based on the src-ip/dst-ip/proto triple.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 86 +
1 file changed, 13
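The hash-based selection could look roughly like this; the hash constants and the function name are assumptions for illustration, not the patch's actual code:

```c
#include <stdint.h>

/* Sketch: pick the NAT address by hashing the src-ip/dst-ip/proto
 * triple into the configured address range, instead of iterating over
 * the range looking for the least-used address.  Assumes
 * max_ip >= min_ip and the range does not span the full 2^32 space. */
static uint32_t
select_nat_ip(uint32_t min_ip, uint32_t max_ip,
              uint32_t src_ip, uint32_t dst_ip, uint8_t proto)
{
    uint32_t span = max_ip - min_ip + 1;
    /* Simple multiplicative mixing; constants are arbitrary odd primes. */
    uint32_t hash = src_ip * 2654435761u ^ dst_ip * 2246822519u ^ proto;
    return min_ip + hash % span;
}
```

The same triple always maps to the same address, so related flows stay on one NAT IP without any per-address usage counters.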
From: wenxu
Remove the IP iterations, and just pick the IP address
with a hash based on the src-ip/dst-ip/proto triple.
Signed-off-by: wenxu
---
lib/conntrack.c | 80 -
1 file changed, 10 insertions(+), 70 deletions
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in soft lockup.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
On 2022/5/5 0:55, Paolo Valerio wrote:
> Hello wenxu,
>
> Overall, I'm ok with the change. I think we should consider the case of
> , e.g. ICMP (identifier), as in that scenario, the avoidance is solely
> based on the randomness of the originating ends. Probably we may want to
>
From: wenxu
A packet goes through the encap openflow rules (set_field tun_id/src/dst),
so the tunnel wc bits will be set. But they should be cleared if the
original packet is non-tunnel; it is not necessary for the datapath to
wildcard the tunnel info for the match (similar to the vlan logic).
Signed-off-by: wenxu
At 2022-04-15 17:15:06, "Eelco Chaudron" wrote:
>Hi Wenxu,
>
>First FYI you send your emails from wenx05124...@163.com but the from in the
>header has we...@chinatelecom.cn. Guess this is not a big problem, but for
>now, however, it’s causing the mes
Hi Eelco,
Sorry for the delayed reply; I was changing jobs. I will follow up on this patch with my new
email account. Thanks.
BR
wenxu
At 2022-04-01 22:35:03, "Eelco Chaudron" wrote:
>
>
>On 14 Dec 2021, at 4:59, we...@ucloud.cn wrote:
>
>> From: wenxu
>
Hi Ilya,
Any thoughts on this series?
BR
wenxu
From: we...@ucloud.cn
Date: 2022-02-09 14:39:41
To: i.maxim...@ovn.org,pvale...@redhat.com
Cc: d...@openvswitch.org
Subject: [PATCH v11 2/2] conntrack: prefer dst port range during unique tuple search
>From: wenxu
>
>This comm
From: wenxu
This commit splits the nested loop used to search the unique ports for
the reverse tuple.
It affects only the dnat action, giving more precedence to the dnat
range, similarly to the kernel dp, instead of searching through the
default ephemeral source range for each destination port
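The split search could be sketched as follows, with a hypothetical taken() predicate standing in for the real tuple-collision check (names and structure are illustrative, not the patch's code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Caller-supplied collision check: is this (dport, sport) pair in use? */
typedef bool (*taken_fn)(uint16_t dport, uint16_t sport);

static bool
resolve_reverse_tuple(uint16_t dmin, uint16_t dmax, uint16_t orig_sport,
                      uint16_t smin, uint16_t smax, taken_fn taken,
                      uint16_t *dport, uint16_t *sport)
{
    /* Pass 1: give the DNAT dst range precedence, keeping the original
     * sport, instead of nesting the two ranges from the start. */
    for (uint32_t d = dmin; d <= dmax; d++) {
        if (!taken((uint16_t) d, orig_sport)) {
            *dport = (uint16_t) d;
            *sport = orig_sport;
            return true;
        }
    }
    /* Pass 2: only if that fails, vary the sport for each dst port. */
    for (uint32_t d = dmin; d <= dmax; d++) {
        for (uint32_t s = smin; s <= smax; s++) {
            if (s != orig_sport && !taken((uint16_t) d, (uint16_t) s)) {
                *dport = (uint16_t) d;
                *sport = (uint16_t) s;
                return true;
            }
        }
    }
    return false;
}

/* Demo predicate for illustration: pretend every tuple with dport 80
 * is already taken. */
static bool
demo_taken(uint16_t dport, uint16_t sport)
{
    (void) sport;
    return dport == 80;
}
```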
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
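A sketch of the behavior described above. The exact boundaries used here follow the kernel's nf_nat convention and are an assumption about this patch, not a quote of its code: a well-known origin sport is only remapped within the well-known port space.

```c
#include <stdint.h>

/* Sketch (hypothetical names): pick the sport NAT search range based on
 * the original sport, mirroring the kernel datapath's tiers. */
static void
choose_sport_range(uint16_t orig_sport,
                   uint16_t *range_min, uint16_t *range_max)
{
    if (orig_sport < 512) {
        *range_min = 1;         /* low well-known ports stay low */
        *range_max = 511;
    } else if (orig_sport < 1024) {
        *range_min = 600;       /* upper well-known tier */
        *range_max = 1023;
    } else {
        *range_min = 1024;      /* ephemeral/registered space */
        *range_max = 65535;
    }
}
```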
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-tp [dp]
ovs-appctl dpctl/ct-set-default-tp [dp] policies
Signed-off-by: wenxu
ud.cn writes:
>>
>>> From: wenxu
>>>
>>> Currently, the default timeout policy for the netdev datapath is hard-coded.
>>> In some cases it needs to be shown or modified.
>>> Add command for get/set default timeout policy. Using like this:
>>>
>>&g
From: wenxu
This commit splits the nested loop used to search the unique ports for
the reverse tuple.
It affects only the dnat action, giving more precedence to the dnat
range, similarly to the kernel dp, instead of searching through the
default ephemeral source range for each destination port
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-tp [dp]
ovs-appctl dpctl/ct-set-default-tp [dp] policies
Signed-off-by: wenxu
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
This commit splits the nested loop used to search the unique ports for
the reverse tuple.
It affects only the dnat action, giving more precedence to the dnat
range, similarly to the kernel dp, instead of searching through the
default ephemeral source range for each destination port
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
From: Paolo Valerio
Date: 2022-01-12 18:19:25
To: we...@ucloud.cn,i.maxim...@ovn.org
Cc: d...@openvswitch.org
Subject: Re: [PATCH v8 3/3] conntrack: limit port clash resolution attempts
>Hello wenxu,
>
>I tested a bit more the patch, and it seems to effectively limit th
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-tp [dp]
ovs-appctl dpctl/ct-set-default-tp [dp] policies
Signed-off-by: wenxu
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
From: wenxu
Splits the nested loop used to search the unique ports for the
reverse tuple.
It affects only the dnat action, giving more precedence to the dnat
range, similarly to the kernel dp, instead of searching through the
default ephemeral source range for each destination port.
Signed-off
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-timeout-policy [dp]
ovs-appctl dpctl/ct-set-default-timeout-policy [dp
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: Paolo Valerio
Date: 2021-11-17 23:09:29
To: we...@ucloud.cn,i.maxim...@ovn.org
Cc: d...@openvswitch.org
Subject: Re: [PATCH v6 2/3] conntrack: split the dst and src port range iterations
>Hi wenxu,
>
>we...@ucloud.cn writes:
>
>> From: wenxu
>>
>>
Hi Paolo,
Any suggestions for this version? I ran all the test cases successfully.
But the robot build shows 1091: ofproto-dpif - controller action without
megaflows FAILED (ovs-macros.at:217)
Maybe there is some problem there? This patch is unrelated to that test case.
BR
wenxu
From: we
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-timeout-policy [dp]
ovs-appctl dpctl/ct-set-default-timeout-policy [dp
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-timeout-policy [dp]
ovs-appctl dpctl/ct-set-default-timeout-policy [dp
Hi Paolo,
Any suggestions for this series?
BR
wenxu
From: we...@ucloud.cn
Date: 2021-10-09 23:28:38
To: i.maxim...@ovn.org,pvale...@redhat.com
Cc: d...@openvswitch.org
Subject: [PATCH v6 1/3] conntrack: select correct sport range for well-known origin sport
>From: wenxu
>
Will do. Thanks, Paolo.
From: Paolo Valerio
Date: 2021-11-08 02:24:29
To: we...@ucloud.cn,i.maxim...@ovn.org,acon...@redhat.com
Cc: d...@openvswitch.org
Subject: Re: [PATCH] conntrack: support default timeout policy get/set cmd for netdev datapath
>Hi Wenxu,
>
>we...@ucloud.
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
Acked-by: Paolo Valerio
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib
From: wenxu
Split the two port range iterations instead of keeping them nested, so
that the dst port range (in the DNAT case) takes precedence over the src
manipulation during resolution.
Signed-off-by: wenxu
---
lib/conntrack.c | 65
ucloud.cn writes:
>
>> From: wenxu
>>
>> In case almost or all available ports are taken, clash resolution can
>> take a very long time, resulting in pmd hang in conntrack.
>>
>> This can happen when many to-be-natted hosts connect to the same
>> destination:port
From: wenxu
Currently, the default timeout policy for the netdev datapath is
hard-coded. In some cases it needs to be shown or modified.
Add command for get/set default timeout policy. Using like this:
ovs-appctl dpctl/ct-get-default-timeout-policy [dp]
ovs-appctl dpctl/ct-set-default-timeout-policy [dp
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index
From: wenxu
First one:
choose the originally selected sport as the current sport for each port-search
round with a new address; in most SNAT cases the sport does not need to be modified.
Second one:
The sport NAT range for a well-known origin sport should be limited to the
well-known ports.
Last one:
Add
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
It is better to choose the originally selected sport as the current sport
for each port-search round with a new address.
Signed-off-by: wenxu
---
lib/conntrack.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 33a1a92..76c466c
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index
From: wenxu
It is better to choose the originally selected sport as the current sport
for each port-search round with a new address.
Signed-off-by: wenxu
---
lib/conntrack.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 551c206..00906f8
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in pmd hang in conntrack.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
From: wenxu
Like the kernel datapath, the sport NAT range for a well-known origin
sport should be limited to the well-known ports.
Signed-off-by: wenxu
---
lib/conntrack.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index
From: wenxu
It is better to choose the originally selected sport as the current sport
for each port-search round with a new address.
Signed-off-by: wenxu
---
lib/conntrack.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 551c206..00906f8
From: Aaron Conole
Date: 2021-09-07 21:46:29
To: we...@ucloud.cn
Cc:
i.maxim...@ovn.org,dlu...@gmail.com,pvale...@redhat.com,d...@openvswitch.org
Subject: Re: [PATCH v2 1/2] conntrack: restore the origin port for each round with new address
>we...@ucloud.cn writes:
>
>>
>
>> From: wenxu
>>
>> It is better to choose the originally selected port as the current port
>> for each port-search round with a new address.
>>
>> Signed-off-by: wenxu
>> ---
>
>This should happen normally.
>It doesn't happen in the case of sou
}
>
>we...@ucloud.cn writes:
>
>> From: wenxu
>>
>> In case almost or all available ports are taken, clash resolution can
>> take a very long time, resulting in soft lockup.
>>
>> This can happen when many to-be-natted hosts connect to the same
>>
From: wenxu
It is better to choose the originally selected port as the current port
for each port-search round with a new address.
Signed-off-by: wenxu
---
lib/conntrack.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntrack.c
index 551c206
From: wenxu
In case almost or all available ports are taken, clash resolution can
take a very long time, resulting in soft lockup.
This can happen when many to-be-natted hosts connect to the same
destination:port (e.g. a proxy) and all connections pass the same SNAT.
Pick a random offset
Got it. Thanks.
From: Paolo Valerio
Date: 2021-08-31 22:25:10
To: we...@ucloud.cn,i.maxim...@ovn.org
Cc: d...@openvswitch.org,"dce...@redhat.com"
Subject: Re: [PATCH v2] conntrack: fix src port selection for DNAT case
>Hello,
>
>we...@ucloud.cn writes:
>
>>
From: wenxu
For the DNAT case, the src port should never be modified.
Fixes: 61e48c2d1db2 ("conntrack: Handle SNAT with all-zero IP address")
Signed-off-by: wenxu
---
lib/conntrack.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntra
From: wenxu
For the DNAT case, the sport should never be modified.
Fixes: 61e48c2d1db2 ("conntrack: Handle SNAT with all-zero IP address")
Signed-off-by: wenxu
---
lib/conntrack.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/lib/conntrack.c b/lib/conntra
Hi Ilya,
Any thoughts on this patch?
BR
wenxu
From: Aaron Conole
Date: 2021-08-11 21:25:57
To: we...@ucloud.cn
Cc:
b...@ovn.org,i.maxim...@ovn.org,msant...@redhat.com,gr...@u256.net,d...@openvswitch.org,Timothy
Redaelli
Subject: Re: [ovs-dev] [PATCH v2] conntrack: remove
From: wenxu
Only 'nat_action_info->nat_action' is used for packet forwarding.
Other items such as min/max_ip/port are used only when creating
new connections. No need to store the whole nat_action_info in conn.
Signed-off-by: wenxu
Acked-by: Gaetan Rivet
Acked-by: Michael Santana
---
20 PM wrote:
>>
>> From: wenxu
>>
>> Only the nat_action in the nat_action_info is used for conn
>> packet forwarding; other items such as min/max_ip/port are only
>> used when creating a new conn. So the whole nat_action_info
>> does not need to be stored in the conn.
From: wenxu
Only the nat_action in the nat_action_info is used for conn
packet forwarding; other items such as min/max_ip/port are only
used when creating a new conn. So the whole nat_action_info
does not need to be stored in the conn. This also avoids
unnecessary memory allocation.
Signed-off-by: wenxu
Hi Gaetan,
First, thanks for your patch. This is very useful for us. But there
are some questions that need to be checked.
>
> ovs_mutex_unlock(&ct->ct_lock);
>@@ -1034,7 +1057,6 @@ conn_not_found(struct conntrack *ct, struct dp_packet
>*pkt,
>const struct nat_action_info_t
From: wenxu
The ct_lock of conntrack is a global lock protecting conntrack
table inserts. A mutex adds latency under lock contention and gives
poor CPS performance when creating new connections.
In our benchmark of four pmd threads with simple conntrack
actions, the CPS is 300k
From: wenxu
Add a spinlock for platforms without pthread_spinlock. It uses a
mutex and always busy-loops on trylock to acquire the lock.
Signed-off-by: wenxu
---
include/openvswitch/thread.h | 4 ++--
lib/ovs-thread.c | 25 +
2 files changed, 27 insertions(+), 2
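The described fallback can be sketched with POSIX primitives; the names below are illustrative, not the OVS ones:

```c
#include <pthread.h>

/* Sketch: on platforms without pthread_spinlock_t, emulate a spinlock
 * with a mutex plus a busy trylock loop. */
struct emu_spin {
    pthread_mutex_t mutex;
};

static void
emu_spin_init(struct emu_spin *s)
{
    pthread_mutex_init(&s->mutex, NULL);
}

static void
emu_spin_lock(struct emu_spin *s)
{
    /* Busy-wait: retry the non-blocking acquire until it succeeds. */
    while (pthread_mutex_trylock(&s->mutex)) {
        continue;
    }
}

static void
emu_spin_unlock(struct emu_spin *s)
{
    pthread_mutex_unlock(&s->mutex);
}
```

Spinning avoids the sleep/wake latency of a contended mutex, at the cost of burning CPU while waiting, which matches busy-polling pmd threads.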
From: wenxu
A case: client A 10.0.0.2 is SNATed to 1.1.1.2 with the following flows.
rule1: ovs-ofctl add-flow manbr "table=0,ct_state=-trk,ip,in_port=dpdk2,
actions=ct(table=1, nat)"
rule2: ovs-ofctl add-flow manbr
"table=0,table=1,ct_state=+trk+new,ip,in_port=dpdk2, actions=ct(c
From: Aaron Conole
Date: 2021-07-09 04:05:27
To: we...@ucloud.cn
Cc: b...@ovn.org,dlu...@gmail.com,i.maxim...@ovn.org,d...@openvswitch.org
Subject: Re: [ovs-dev] [PATCH] conntrack: replace conntrack lock from mutex to spinlock
>we...@ucloud.cn writes:
>
>>
.@ovn.org,d...@openvswitch.org
Subject: Re: [ovs-dev] [PATCH v2] ipf: fix only nat the first fragment in the reass process
>we...@ucloud.cn writes:
>
>> From: wenxu
>>
>> The ipf code collects the original fragment packets and reassembles a new
>> pkt to do the conntrack logic. A
Subject: Re: [ovs-dev] [PATCH v2] ipf: fix only nat the first fragment in the reass process
>we...@ucloud.cn writes:
>
>> From: wenxu
>>
>> The ipf code collects the original fragment packets and reassembles a new
>> pkt to do the conntrack logic. After finishing the conntrack work
>>
From: wenxu
The ct_lock of conntrack is a global lock protecting conntrack
table inserts. A mutex adds latency under lock contention and gives
poor CPS performance when creating new connections.
In our benchmark of four pmd threads with simple conntrack
actions, the CPS is 300k
Hi Ilya,
How about this patch? Without it, fragmented packets in NAT conntrack will
not work, because only the first fragment gets its address NATed.
BR
wenxu
From: we...@ucloud.cn
Date: 2021-06-18 14:45:50
To: i.maxim...@ovn.org
Cc: d...@openvswitch.org
Subject: [ovs-dev] [PATCH v2] ipf: fix only
From: wenxu
When the conntrack entry is not found, CT will check whether the pkt has
been NATed, recover the original tuple, and search for the conntrack
entry from the original tuple.
If there is nat_action_info in the rule, the pkt may have been NATed,
so it should recover the original tuple to find the conntrack entry.
Signed-off
From: wenxu
The ipf code collects the original fragment packets and reassembles a new
pkt to do the conntrack logic. After finishing the conntrack work, it
copies the ct meta info to each original packet and modifies the
l4 header in the first fragment. It should modify the ip src/
dst info for all the fragments.
Signed
ave not figured
> out what to do about it yet.
>
> On 2021/03/13 00:06, Marcelo Leitner wrote:
>> Hi there,
>>
>> On Wed, Mar 10, 2021 at 12:06:52PM +0100, Ilya Maximets wrote:
>>> Hi, Louis. Thanks for your report!
>>>
>>> Marcelo, Paul, could yo
From: wenxu
TC flower doesn't support some ct state flags such as
INVALID/SNAT/DNAT/REPLY. So it is better to reject this rule.
Signed-off-by: wenxu
---
lib/netdev-offload-tc.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lib/netdev-offload-tc.c b/lib/netdev
From: wenxu
TC flower doesn't support the INVALID/SNAT/DNAT/REPLY ct state flag.
So it is better to reject this rule.
Signed-off-by: wenxu
---
lib/netdev-offload-tc.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/lib/netdev-offload-tc.c b/lib/netdev-offload-tc.c
index 586d99d
From: Ilya Maximets
Date: 2021-02-02 23:59:44
To: wenxu ,Ilya Maximets
Cc: Paul Blakey ,d...@openvswitch.org,Oz Shlomo
,Marcelo Leitner
Subject: Re: [ovs-dev] [PATCH 0/2] Add offload support for ct_state rpl and inv flags
>On 2/2/21 4:52 PM, wenxu wrote:
>>
>>
Hi,
just ignore my patch. The kernel can now support matching the invalid
ct_state in tc flower.
BR
wenxu
From: Ilya Maximets
Date: 2021-02-02 23:33:41
To: Paul Blakey ,d...@openvswitch.org
Cc: Oz Shlomo ,i.maxim...@ovn.org,Marcelo Leitner
,wenxu
Subject: Re: [ovs-dev] [PATCH 0/2] Add offload
From: wenxu
TC flower doesn't support the INVALID ct state flag. So it is better
to reject such a rule rather than just ignore this flag.
Signed-off-by: wenxu
---
lib/netdev-offload-tc.c | 4
1 file changed, 4 insertions(+)
diff --git a/lib/netdev-offload-tc.c b/lib/netdev-offload-tc.c
index
From: wenxu
In tc flower, pedit of the ipv4/6 header and tcp/udp header should
always mask the ip_proto. So the mask should be set before the
prio selection, which makes the right prio get selected on kernels
that do not support multi_mask_per_prio.
For the case of a rule with action dec_ttl, the flower now
be reproduced with your test, but with a vxlan device and a multiqueue
hardware underlay device.
BR
wenxu
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
Hi Simon,
How about this patch?
BR
wenxu
From: Simon Horman
Date: 2020-11-26 18:44:25
To: we...@ucloud.cn
Cc: d...@openvswitch.org
Subject: Re: [PATCH] lib/tc: fix parse act pedit for tos rewrite
>On Tue, Nov 24, 2020 at 11:01:09AM +0800, we...@ucloud.cn wrote:
>> Fr
From: wenxu
Check overlap between the current pedit key, which is always 4 bytes
(range [off, off + 3]), and a map entry in flower_pedit_map,
sf = ROUND_DOWN(mf, 4) (range [sf|mf, (mf + sz - 1)|ef]).
So for the tos rewrite, off + 3 (= 3) is greater than mf,
and it should be checked against ef (= 4), not mf
From: wenxu
Add an offload-delay option to delay offloading of datapath flows.
Sometimes there is no need to offload short-lived connection flows, which
overload the HW with flow add/del operations. It is better to offload
persistent connections.
Enable it as follows:
ovs-vsctl set Open_Vswitch . other
From: wenxu
A packet with the first frag executes the act_ct action.
The packet will be stolen by defrag, so the stats counter
for "gact action goto chain" will always be 0. Openvswitch
updates each action in order, so the flower stats will finally
always be zero. The rule will be deleted after max
From: wenxu
When vswitchd is restarted with flow-restore-wait, the vswitch doesn't
connect to the controller until flow-restore-wait has finished.
Because when bridge_configure_remotes() calls bridge_get_controllers(),
it first checks if flow-restore-wait has been set, and if so,
it ignores any
From: wenxu
The tc modify flow put always deletes the original flow first and
then adds the new flow. If the modify flow put operation failed,
the flow put operation will change from modify to create if it succeeded
in deleting the original flow in tc (which will always fail with
ENOENT, the flow