Thanks Joe.
BR,
Wang Zhike
-Original Message-
From: Joe Stringer [mailto:j...@ovn.org]
Sent: Saturday, July 22, 2017 2:29 AM
To: 王志克
Cc: ovs dev; Ben Pfaff
Subject: Re: [ovs-dev] [PATCH] pkt reassemble: fix kernel panic for ovs
reassemble
On 6 July 2017 at 13:57, Ben Pfaff <
-Original Message-
From: Greg Rose [mailto:gvrose8...@gmail.com]
Sent: Thursday, June 29, 2017 4:29 AM
To: 王志克
Cc: d...@openvswitch.org; Joe Stringer
Subject: Re: Re: [ovs-dev] Re: Re: [PATCH] pkt reassemble: fix kernel panic for
ovs reassemble
On 06/26/2017 05:51 PM, 王志克 wrote:
> Hi Greg,
>
>
Hi Greg,
Any progress?
Thanks.
Br,
Wang Zhike
-Original Message-
From: Greg Rose [mailto:gvrose8...@gmail.com]
Sent: Friday, June 30, 2017 1:23 AM
To: 王志克
Cc: d...@openvswitch.org; Joe Stringer
Subject: Re: Re: [ovs-dev] Re: Re: [PATCH] pkt reassemble: fix kernel panic for
ovs
Hi All,
I try to build an rpm for ovs+dpdk, but met the below compile issue. Does someone
know how to fix it? I guess it is related to LDFLAGS='-Wl,-z,relro
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld', but I have no idea how to fix it.
If I follow below guide (non-rpm), everything is OK.
... yes ". Note that previously I did
not install glibc-static; a standalone ./configure can still succeed, though it
reports "static ... no".
So the question:
Why can rpmbuild not correctly detect the -static flag for gcc?
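One workaround worth trying (an assumption on my side, not verified against this particular spec file) is to build with RPM's hardened-build macro disabled, since it is the redhat-hardened-ld spec file that injects the LDFLAGS above:

```shell
# Hypothetical workaround: disable the hardened-build macro so that
# redhat-hardened-ld is not injected into LDFLAGS during rpmbuild.
rpmbuild --define '_hardened_build 0' -bb rhel/openvswitch-fedora.spec
```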
Br,
Wang Zhike
-Original Message-
From: 王志克
Sent: June 22, 2017 9:31
OVS and the kernel stack would add frag_queue entries to the same netns_frags list.
As a result, ovs and the kernel may access a frag_queue without the correct
lock. Also, the struct ipq may be different on kernels older than 4.3,
which leads to invalid pointer access.
The fix creates a dedicated netns_frags for ovs.
--
From: Darrell Ball [mailto:db...@vmware.com]
Sent: June 21, 2017 0:14
To: 王志克; ovs-dev@openvswitch.org; disc...@openvswitch.org
Subject: Re: [ovs-discuss] [ovs-dev] rpmbuild failure for ovs_dpdk
Correction: ovs-disc...@openvswitch.org
On 6/20/17, 9:01 AM, "ovs-discuss-boun...@openvswitch.org o
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: June 22, 2017 22:08
To: 王志克; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: Re: Re: [ovs-discuss] rpmbuild failure for ovs_dpdk
This may not be the best list for help here.
Maybe a CentOS mailing list, possibly, or one
Hi Joe,
Please check the attachment. Thanks.
Br,
Wang Zhike
-Original Message-
From: Joe Stringer [mailto:j...@ovn.org]
Sent: June 23, 2017 8:20
To: 王志克
Cc: d...@openvswitch.org
Subject: Re: [ovs-dev] [PATCH] pkt reassemble: fix kernel panic for ovs reassemble
On 21 June 2017 at 18:54, 王志克 <wan
-
From: Joe Stringer [mailto:j...@ovn.org]
Sent: June 24, 2017 5:15
To: 王志克
Cc: d...@openvswitch.org
Subject: Re: Re: [ovs-dev] [PATCH] pkt reassemble: fix kernel panic for ovs
reassemble
Hi Wang Zhike,
I'd like it if others like Greg could take a look as well, since this code is
delicate. The more
Sent: June 27, 2017 6:26
To: 王志克
Cc: d...@openvswitch.org; Joe Stringer
Subject: Re: [ovs-dev] Re: Re: [PATCH] pkt reassemble: fix kernel panic for ovs
reassemble
On 06/26/2017 04:56 AM, 王志克 wrote:
> Hi Joe,
>
> I will try to check how to send the patch. Maybe tomorrow since I am quite
Hi Joe, Greg,
I tried to create a pull request, please check whether it works. Thanks.
https://github.com/openvswitch/ovs/pull/187
Br,
Wang Zhike
-Original Message-
From: Joe Stringer [mailto:j...@ovn.org]
Sent: Saturday, June 24, 2017 5:15 AM
To: 王志克
Cc: d...@openvswitch.org
Subject
Hi All,
Reading the release note of DPDK section for OVS2.6, I note below:
* Basic connection tracking for the userspace datapath (no ALG,
fragmentation or NAT support yet)
I am wondering, for the missing parts (no ALG, fragmentation, NAT), can I have
the release plan for such a
reassembly function to make reassembled packets go through conntrack.
The above cases really happen in current product deployment, and we want to keep them
working when migrating to an OVS+DPDK solution.
Br,
Wang Zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: May 27, 2017 2:45
To: 王志克; Ben
, output packet with size > out_port_mtu
Br,
Wang zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: May 26, 2017 9:45
To: Ben Pfaff; 王志克; Darrell Ball
Cc: ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] Query for missing function
On 5/25/17, 2:04 PM, "ovs-
t practice.
Just my personal thought.
Br,
Wangzhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: June 1, 2017 10:16
To: 王志克; Ben Pfaff; Darrell Ball
Cc: ovs-dev@openvswitch.org
Subject: Re: Re: Re: Re: [ovs-dev] Query for missing function
On 5/31/17, 6:07 PM, "王志克" <
: June 1, 2017 11:42
To: 王志克; Ben Pfaff; Darrell Ball
Cc: ovs-dev@openvswitch.org
Subject: Re: Re: Re: Re: Re: [ovs-dev] Query for missing function
On 5/31/17, 8:06 PM, "王志克" <wangzh...@jd.com> wrote:
Hi Darrell,
In my opinion, it may be also hard for user to decide "conf
Hi All,
Previously I used kernel ovs, and a docker veth-pair port could be added to the ovs
bridge directly. In this case, docker traffic from the kernel goes directly to the ovs
kernel module.
Now I want to use ovs+dpdk to speed up the forwarding performance, but I am
wondering how docker traffic would go to
Hi All,
I want to submit one feature, which should be disabled by default. I plan to
define a compile-time macro, but I do not know how.
Can someone guide me on how to add such a macro in OVS? Thanks.
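A minimal sketch of one common approach (the macro name here is hypothetical, not an established OVS convention):

```shell
# Hypothetical sketch: enable an off-by-default feature by passing a
# compile-time macro at configure time.
./configure CFLAGS="-g -O2 -DOVS_MY_FEATURE"

# In the C sources, the feature code is then guarded with:
#   #ifdef OVS_MY_FEATURE
#   ...feature implementation...
#   #endif
```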
Br,
Wang Zhike
Thanks Billy.
I will tune it during my test while trying to read the related code to
understand the logic.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Tuesday, September 19, 2017 9:07 PM
To: 王志克; ovs-dev@openvswitch.org; ovs-disc
Hi All,
I am using OVS_DPDK, and the target CPU is running at 100%. However, I notice the
CPU frequency is NOT exceeding the max value, so the performance may not reach
the best value. I have only 2 PMDs for now; each is a hyper-thread core in one
physical core.
I am not sure of the reason, and do
1.3 1.3 559 540
Case6 1.28 1.28 568 551
Br,
Wang Zhike
-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrel
Hi,
Please see below log:
It seems there is no way to set pmd-rxq-affinity for multiple queues to 2 PMDs. I think
it is a bug.
[root@A01-R08-I24-169 wangzhike]# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk0 queue-id: 3 7
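For reference, the configuration being attempted can be sketched as follows (core IDs and queue counts are examples only):

```shell
# Example: pin four rx queues of dpdk0 across two PMD cores (4 and 6).
# queue:core pairs are comma-separated in other_config:pmd-rxq-affinity.
ovs-vsctl set Interface dpdk0 \
    other_config:pmd-rxq-affinity="0:4,1:6,2:4,3:6"

# Inspect the resulting queue-to-PMD distribution:
ovs-appctl dpif-netdev/pmd-rxq-show
```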
...@vmware.com]
Sent: Wednesday, September 06, 2017 10:47 AM
To: 王志克; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port
This same-NUMA-node limitation was already removed, although the same NUMA node is
preferred for performance
l Ball [mailto:db...@vmware.com]
Sent: Wednesday, September 06, 2017 1:39 PM
To: 王志克; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port
You could use pmd-rxq-affinity for the queues you want serviced locally and
let the oth
Hi All,
I read the below doc about PMD assignment for physical ports. I think the limitation
“on the same NUMA node” may not be efficient.
http://docs.openvswitch.org/en/latest/intro/install/dpdk/
DPDK Physical Port Rx
Hi Kevin,
Consider the scenario:
One host with 1 physical NIC, and the NIC is located on NUMA socket0. There are
lots of VMs on this host.
I can see several methods to improve the performance:
1) Try to make sure the VM memory used for networking is always located on socket0.
E.g., if the VM uses 4G
ony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 6:35 PM
To: 王志克; Darrell Ball; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port
Hi Wang,
If you create several PMDs o
Hi Billy,
See my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 7:26 PM
To: 王志克; Darrell Ball; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev] OVS
Thanks Darrell.
It indeed works.
Br,
Wang Zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: Friday, September 08, 2017 12:34 AM
To: 王志克; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] ovs+dpdk: no way to set pmd-rxq-affinity for multiple
queues to 2 pmd
Here
Hi All,
I read below doc, and have one question:
http://docs.openvswitch.org/en/latest/intro/install/dpdk/
dpdk-socket-mem
Comma separated list of memory to pre-allocate from hugepages on specific
sockets.
Question:
OVS+DPDK lets the user specify the needed memory using dpdk-socket-mem.
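For context, the setting under discussion is used like this (the sizes are examples only):

```shell
# Example: pre-allocate 1024 MB of hugepage memory on socket 0 and none
# on socket 1 (comma-separated per-socket list, in MB).
ovs-vsctl set Open_vSwitch . \
    other_config:dpdk-socket-mem="1024,0"
```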
...@intel.com]
Sent: Friday, September 08, 2017 11:18 PM
To: 王志克; ovs-dev@openvswitch.org; Jan Scheurich; Darrell Ball;
ovs-disc...@openvswitch.org; Kevin Traynor
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port
Hi Wang,
https://mail.openvswitch.org/pipermail/ovs-dev/2017
Hi Jan,
Do you have some test data about the cross-NUMA impact?
Thanks.
Br,
Wang Zhike
-Original Message-
From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
Sent: Wednesday, September 06, 2017 9:33 PM
To: O Mahony, Billy; 王志克; Darrell Ball; ovs-disc...@openvswitch.org;
ovs-dev
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 10:49 PM
To: Kevin Traynor; Jan Scheurich; 王志克; Darrell Ball;
ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question
Hi Billy,
Please see my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 9:01 PM
To: 王志克; Darrell Ball; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev
Hi,
I created a pull request regarding the vhost user port status.
The problem is that the port may be updated while vhost_reconfigured is
false. Then vhost_reconfigured is updated.
As a result, the vhost user status is kept as LINK-DOWN. Note the traffic is OK
in this case. Only the
Thanks Darrell.
I just send it out via git send-email.
Br,
Wang Zhike (lawrence)
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: Wednesday, August 23, 2017 8:48 AM
To: 王志克; d...@openvswitch.org
Subject: Re: [ovs-dev] vhost user port is displayed as LINK DOWN after
Hi Lance,
Your patch works. Thanks.
BR,
Wang Zhike
-Original Message-
From: Lance Richardson [mailto:lrich...@redhat.com]
Sent: Thursday, August 24, 2017 8:10 PM
To: 王志克
Cc: ovs-dev@openvswitch.org; ovs-disc...@openvswitch.org
Subject: Re: [ovs-discuss] OVS+DPDK QoS rate limit issue
tch, the status is 0 as expected.
Br,
Wang Zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: Friday, August 25, 2017 9:35 AM
To: 王志克; d...@openvswitch.org
Subject: Re: [ovs-dev] [PATCH] Fix: vhost user port status
Hi Lawrence
I am not very particular ab
ng Zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: Friday, August 25, 2017 2:04 PM
To: 王志克; d...@openvswitch.org
Subject: Re: [ovs-dev] [PATCH] netdev-dpdk: vhost get stats fix
I am wondering if we should split the
+stats->tx_errors = 0;
out from this p
Hi All,
I want to set QoS following the guide from the below link, “egress traffic shaping”,
but do not know how for tunnel mode.
http://docs.openvswitch.org/en/latest/faq/qos/
My scenario:
I have several VM ports and several VxLAN ports in br0, and there is one
separate eth0 port (not in br0), which is
Hi,
The topo can be same as below example.
http://docs.openvswitch.org/en/latest/howto/tunneling/?highlight=tunnel
I just wonder, in such a configuration, how the egress shaping can be configured
for different VMs. Or does the QoS not work for the tunnel case?
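For a netdev-dpdk port, per-port egress shaping can be sketched as below (the port name and rates are examples; whether this covers tunneled traffic is exactly the open question here):

```shell
# Example: attach an egress-policer QoS (DPDK datapath) to a vhost port.
# cir = committed information rate (bytes/s), cbs = committed burst size.
ovs-vsctl set port vhost-user0 qos=@qos0 -- \
    --id=@qos0 create qos type=egress-policer \
    other-config:cir=46000000 other-config:cbs=2048
```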
Appreciate help.
BR,
Wang Zhike
From: 王志
Hi,
I met one issue with qemu2.8.1.1+ovs2.7.0+dpdk16.11.0. The issue is that one
Windows 2008 VM can NOT send/receive packets anymore. I even cannot
re-initialize the virtio adapter in the VM (no response). I can NOT reproduce it.
>From OVS+DPDK stats, there is no packet from the vhost-user-client
can be done at end host
stack).
I am open to the decision. So if you think your patch is more suitable, I can
be the co-author.
Br,
Wang Zhike
-Original Message-
From: Darrell Ball [mailto:db...@vmware.com]
Sent: Thursday, December 07, 2017 10:14 AM
To: 王志克; d...@openvswitch.org
n
So now I believe it is hard to fix the issues inside the modules, and I would
like to present the identified issues, and want to hear your proposal or fix.
Br,
Zhike Wang
-Original Message-
From: Yuanhan Liu [mailto:y...@fridaylinux.org]
Sent: Thursday, January 18, 2018 10:04 PM
To: 王志克
C
Hi,
I also found that once there are lots of flows, the memory (RSS) usage of the
OVS process becomes quite high, 2~3GB. Even when the flows disappear later,
the memory is still kept.
I am not sure how many people have noticed this, but if OVS indeed has such a defect,
I guess this should be critical
[mailto:u9012...@gmail.com]
Sent: Saturday, February 03, 2018 1:46 AM
To: Ben Pfaff
Cc: 王志克; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev][PATCH] memory: kill ovs-vswitchd under super
Hi Zhike,
On Fri, Feb 2, 2018 at 7:48 AM, Ben Pfaff <b...@ovn.org> wrote:
> On Fri, Feb 02, 2018 at 12:37:58PM
Hi,
I am testing the below scenario, and I think there is some issue with the TCP conntrack
sequence number check.
Scenario:
VM1 -> Host1 -> Host2 -> VM2
There is an SCP file copy between VM1 and VM2, and we have conntrack configured. During
the scp, I restart the openvswitch service (process stop and
Hi,
I have question about RX merge feature.
Below it mentions that setting mrg_rxbuf=off can improve performance. So, question 1:
How much would the throughput be affected?
*
Rx Mergeable
Pfaff
Cc: 王志克; Gregory Rose; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: Re: [ovs-discuss] crash when restart openvswitch with huge vxlan
traffic running
> Greg, this is a kernel issue. If you have the time, will you take a
> look at it sometime?
>
Hi all,
I worked on
Hi,
I would like to describe our scenario for using 'no-tcp-seq-chk'.
We at JDCloud designed conntrack hardware offloading as below:
1. SW maintains the conntrack state and timers. So packets that would impact the
conntrack state/timers should be sent to the CPU, like TCP FIN/RST. In this way, we
can clean
uelin" wrote:
>
>
>On 3/18/20 4:31 AM, 王志克 wrote:
>> Involving the openvswitch group since this fix is highly coupled with OVS.
>> Comments welcome.
>> At 2020-03-12 17:57:19, "Zhike Wang" wrote:
>>> The vhost_user_read_cb() and rte_vhost_driver_unre
Involving the openvswitch group since this fix is highly coupled with OVS.
Comments welcome.
At 2020-03-12 17:57:19, "Zhike Wang" wrote:
>The vhost_user_read_cb() and rte_vhost_driver_unregister()
>can be called at the same time by 2 threads, and may lead to deadlock.
>E.g., thread1 calls
Hi Hepeng,
Can you please explain the sequence of how this inconsistency could happen?
Why do you believe the current actions in the existing netdev_flow are old?
Thanks.
Br,
wangzhike
Date: Friday, September 23, 2022 at 8:59 PM
To: 王志克
Cc: "ovs-dev@openvswitch.org" , "d...@openvswitch.org"
Subject: [External Mail] Re: [External] Re: [ovs-dev,ovs-dev,v2,4/4] dpif-netdev: fix
inconsistent processing between ukey and megaflow
From: Darrell Ball [mailto:dlu...@gmail.com]
Sent: Saturday, November 09, 2019 8:12 AM
To: Zhike Wang
Cc: ovs dev; 王志克
Subject: Re: [ovs-dev] [PATCH] conntrack: Fix tcp payload length in case
multi-segments.
Thanks for the patch
Would you mind describing the use case that this patch is aiming
We would like to introduce a HW offloading solution for the scenario where one
packet goes through the DPDK OVS pipeline multiple times with a recirculation action.
We call it merged-single-table HW offloading.
The standard use case is to support conntrack with HW offloading. For example, the
packet matches flow