Re: [ovs-dev] dest mac in fast datapath does not act as expected

2022-08-18 Thread ychen
thanks! Upgrading to OVS version 2.12.1 fixed my problem. At 2022-08-17 18:49:46, "Ilya Maximets" wrote: >On 8/17/22 11:32, ychen wrote: >> hi, >>when we send 2 packets with different dest mac in 10s(fast datapath flow aging time), with

[ovs-dev] dest mac in fast datapath does not act as expected

2022-08-17 Thread ychen
hi, when we send 2 packets with different dest MACs within 10s (the fast datapath flow aging time) and with the same userspace flow action, the second packet acts incorrectly. 1. problem phenomenon: userspace flow:

Re: [ovs-dev] meter stats cleared when modify meter bands

2021-07-28 Thread ychen
. At 2021-07-28 13:46:21, "Tonghao Zhang" wrote: >On Wed, Jul 28, 2021 at 10:57 AM ychen wrote: >> >> Hi, all: >> I have a question why meter stats need cleared when just modify meter >> bands? >> when call function hand

[ovs-dev] meter stats cleared when modify meter bands

2021-07-27 Thread ychen
Hi, all: I have a question: why do meter stats need to be cleared when we just modify meter bands? When handle_modify_meter() is called, it finally calls dpif_netdev_meter_set(); in this function a new dp_meter is allocated and attached, hence the stats are cleared. if we just

[ovs-dev] dp_hash algorithm works incorrectly when tcp retransmit

2021-02-03 Thread ychen
We met a problem where the same TCP session selects different OVS group buckets during TCP retransmission, and we can easily reproduce this phenomenon. After some code research, we found that on TCP retransmit the kernel may call sk_rethink_txhash(), and this function changes skb->hash,

[ovs-dev] same tcp session selects different ovs group bucket when tcp retransmit

2021-01-23 Thread ychen
Hi, all: recently we met a problem: when using an OVS group with selection method dp_hash, the same TCP session selects different group buckets when a TCP packet is retransmitted. If we configure different SNAT gateways in the group buckets, the TCP session is reset after a packet retransmission. we

[ovs-dev] question about userspace flow stats with meter

2020-07-27 Thread ychen
hi, I want to know how datapath stats are mapped to userspace flow stats? are there any documents? example: table=0,in_port=1, meter=11,goto_table:2 table=2,in_port=1,output:2 meter: rate=1Mbps when I send packets at 2Mbps from port1, and in total 1 packets are transmitted first I

Re: [ovs-dev] same tcp session encapsulated with different udp src port in kernel mode if packet has done ip_forward

2019-11-08 Thread ychen
pcall->mru; + op->dop.u.execute.skb_hash = upcall->skb_hash; } } -- 2.1.4 At 2019-11-06 12:04:57, "Tonghao Zhang" wrote: >On Mon, Nov 4, 2019 at 7:44 PM ychen wrote: >> >> >> >> we can easily reproduce this phenomenon

Re: [ovs-dev] same tcp session encapsulated with different udp src port in kernel mode if packet has done ip_forward

2019-11-04 Thread ychen
we can easily reproduce this phenomenon by sending a TCP socket stream from an OVS internal port. At 2019-10-30 19:49:16, "ychen" wrote: Hi, when we use docker to establish a TCP session, we found that the packet which must do an upcall to userspace has a different encapsulated

[ovs-dev] same tcp session encapsulated with different udp src port in kernel mode if packet has done ip_forward

2019-10-30 Thread ychen
Hi, when we use docker to establish a TCP session, we found that a packet which must do an upcall to userspace is encapsulated with a different UDP source port than a packet that only needs datapath flow forwarding. After some code research and kprobe debugging, we found the following: 1.

Re: [ovs-dev] [PATCH] dpif-netdev: Do not mix recirculation depth into RSS hash itself.

2019-10-30 Thread ychen
Thanks! I have verified it in our testing environment, and it really worked! At 2019-10-24 20:32:11, "Ilya Maximets" wrote: >Mixing of RSS hash with recirculation depth is useful for flow lookup >because same packet after recirculation should match with different >datapath rule.

[ovs-dev] group dp_hash method works incorrectly when using snat

2019-09-29 Thread ychen
Hi, We found that when the same TCP session uses SNAT with a dp_hash group as the output action, the SYN packet and the other packets behave differently: the SYN packet goes out through one group bucket, and the other packets go out through another. Here are the ovs flows:

[ovs-dev] Meter measures incorrectly when using multi-pmd in ovs 2.10

2019-09-29 Thread ychen
Hi, I met a problem when sending packets using netperf in multi-thread mode. The reproducing conditions are: 1. ovs 2.10 in dpdk usermode with 2 pmd 2. Set meter with rate=100,000pps, burst=20,000packet 3. when using single-thread mode for netperf, the meter behaves correctly,

Re: [ovs-dev] vswitchd crashed when revalidate flows in ovs 2.8.2

2019-08-28 Thread ychen
0x0, __next = 0x0 } }, __size = {0x2, 0x0 , 0x2, 0x0 }, __align = 0x2 }, where = 0x7f835b0e5520 } At 2019-08-26 19:51:20, "ychen" wrote: Hi, has any one see the following backtrace? Using host libthread_db library "/lib/x86_64-linux-gnu/libthr

[ovs-dev] vswitchd crashed when revalidate flows in ovs 2.8.2

2019-08-26 Thread ychen
Hi, has anyone seen the following backtrace? Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfi'. Program terminated with signal SIGABRT, Aborted. #0 __GI_raise

[ovs-dev] datapath flow will match packet's ttl when we use dec_ttl in action

2019-05-29 Thread ychen
hi, when I send IP packets whose IP TTL is random in the range 1-255, with all other IP header fields unchanged, 255 datapath flows are generated, each with a different ttl value. of course, I use the dec_ttl action; here is the code: case OFPACT_DEC_TTL:

[ovs-dev] why is the behavior for weight=0 in the group's dp_hash method different from the default selection method?

2019-05-29 Thread ychen
hi, I noticed that we can set a bucket's weight to 0 when adding/modifying a group. 1. when we use the default select method, and all the buckets with weight larger than 0 become dead, we can still pick the bucket whose weight is 0. here is the code:

[ovs-dev] cannot do ECMP with ovs group when sending packets out from userspace vxlan port

2018-08-14 Thread ychen
1. environment Bridge br-int fail_mode: secure Port br-int Interface br-int type: internal Port "vf-10.180.0.95" Interface "vf-10.180.0.95" type: vxlan options: {csum="true", df_default="false",

Re: [ovs-dev] [PATCH v2 2/3] ofproto-dpif: Improve dp_hash selection method for select groups

2018-04-17 Thread ychen
Hi, Jan: I think the following code should also be modified + for (int hash = 0; hash < n_hash; hash++) { + double max_val = 0.0; + struct webster *winner; +for (i = 0; i < n_buckets; i++) { +if (webster[i].value > max_val) { ===> if bucket->weight=0,

Re: [ovs-dev] [PATCH 2/3] ofproto-dpif: Improve dp_hash selection method for select groups

2018-04-10 Thread ychen
Hi, Jan: When I tested dp_hash with the new patch, vswitchd was killed by a segmentation fault under some conditions: 1. add a group with no buckets, then winner will be NULL 2. add buckets with weight 0, then winner will also be NULL I made a small modification to the patch; could you help check

Re: [ovs-dev] can not update userspace vxlan tunnel neigh mac when peer VTEP mac changed

2018-03-27 Thread ychen
we update tunnel neigh cache when we receive data packet from remote VTEP? since we can fetch tun_src and outer mac sa from the data packet. At 2018-03-28 04:41:12, "Jan Scheurich" <jan.scheur...@ericsson.com> wrote: >Hi Ychen, > >Funny! Again we are alre

[ovs-dev] can not update userspace vxlan tunnel neigh mac when peer VTEP mac changed

2018-03-27 Thread ychen
Hi, I found that sometimes userspace vxlan does not work happily. 1. first data packet loss: when the tunnel neigh cache is empty, the first data packet triggers sending an ARP packet to the peer VTEP, and the data packet is dropped; the tunnel neigh cache adds this entry when it receives

[ovs-dev] traffic is not well distributed when using dp_hash for ovs group

2018-03-20 Thread ychen
hi, I tested dp_hash for ovs group, and found that traffic is not well distributed; some buckets cannot even be selected. In my testing environment, I have 11 buckets: group_id=131841,type=select,selection_method=dp_hash,

[ovs-dev] are there any performance considerations for the max emc cache and megaflow cache numbers?

2018-01-05 Thread ychen
Hi: in the ovs code, MAX_FLOWS = 65536 // for megaflow #define EM_FLOW_HASH_SHIFT 13 #define EM_FLOW_HASH_ENTRIES (1u << EM_FLOW_HASH_SHIFT) // for emc cache so why choose 65536 and 8192? is there any performance consideration? can I just enlarge these numbers to make packets only look up the emc cache

[ovs-dev] which fields should be masked or unmasked while using megaflow match?

2017-12-27 Thread ychen
Hi, is there any policy about which fields should be wildcarded when using megaflow match? exp 1: table=0, priority=0,actions=NORMAL then the datapath flow is like this: recirc_id(0),in_port(3),eth(src=b6:49:dd:5d:3a:a6,dst=2e:b5:7b:d6:52:c2),eth_type(0x0806), packets:0, bytes:0, used:never,

Re: [ovs-dev] is there any document about how to build debian package with dpdk?

2017-09-21 Thread ychen
I have read this document, but following this guide I cannot build the openvswitch-switch-dpdk package. I want to build the package with our own libdpdk; are there any guides? At 2017-09-21 16:25:58, "Bodireddy, Bhanuprakash" wrote: >>we modified

[ovs-dev] is there any document about how to build debian package with dpdk?

2017-09-21 Thread ychen
we modified a little code in dpdk, so we must rebuild the ovs debian package with dpdk ourselves. so is there any guide about how to build the openvswitch-dpdk package? ___ dev mailing list d...@openvswitch.org

Re: [ovs-dev] does ovs bfd support flow based tunnel?

2017-09-18 Thread ychen
to monitor tunnel endpoints on OVS bridges. https://github.com/openvswitch/ovs/blob/master/ovn/controller/bfd.c On Tue, Sep 12, 2017 at 9:19 PM, ychen <ychen103...@163.com> wrote: can I enable bfd on flow based tunnel? does it work?

[ovs-dev] why is the max action length 32K in the kernel?

2017-09-12 Thread ychen
in the function nla_alloc_flow_actions() there is a check: if the action length is greater than MAX_ACTIONS_BUFSIZE (32k), the kernel datapath flow will not be installed and packets will be dropped. but in the function xlate_actions() there is this clause: if (nl_attr_oversized(ctx.odp_actions->size)) {

[ovs-dev] does ovs bfd support flow based tunnel?

2017-09-12 Thread ychen
can I enable bfd on a flow-based tunnel? does it work?

Re: [ovs-dev] ifup locked when start ovs in debian9 with systemd

2017-06-20 Thread ychen
https://github.com/openvswitch/ovs/commit/15af3d44c65eb3cd724378ce1b30c51aa87f4f69 On 19 June 2017 at 07:17, ychen <ychen103...@163.com> wrote: 1. phenomenon ifup: waiting for lock on /run/network/ifstate.br-int 2. configurations /etc/network/interfaces allow-ovs br-int iface br-int

[ovs-dev] ifup locked when start ovs in debian9 with systemd

2017-06-19 Thread ychen
1. phenomenon ifup: waiting for lock on /run/network/ifstate.br-int 2. configurations /etc/network/interfaces allow-ovs br-int iface br-int inet manual ovs_type OVSBridge ovs_ports tap111 allow-br-int tap111 iface ngwintp inet manual ovs_bridge br-int ovs_type OVSIntPort 3.

Re: [ovs-dev] [BUG] ovs-ofctl version 2.5.0 will crash with OFPFMFC_BAD_COMMAND

2017-05-15 Thread ychen
I can reproduce this problem with the script provided by vguntaka on both ovs version 2.5 and ovs version 2.6. 1. Add bridge: ovs-vsctl add-br br0 2. Add vm port: ovs-vsctl add-port br0 tap0 -- set interface tap0 type=internal ip netns add ns0 ip link set dev tap0 netns ns0