Hi, is there any policy about which fields should be wildcarded when using a
megaflow match?
Example 1:
table=0, priority=0, actions=NORMAL
The resulting datapath flow looks like this:
recirc_id(0),in_port(3),eth(src=b6:49:dd:5d:3a:a6,dst=2e:b5:7b:d6:52:c2),eth_type(0x0806),
packets:0, bytes:0, used:never, ac
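Part of the answer for this particular example: the NORMAL action performs MAC
learning, which reads both MAC addresses, so translation exact-matches them in
the megaflow. A sketch of the relevant un-wildcarding, assuming it still looks
like the code in xlate_normal() in ofproto/ofproto-dpif-xlate.c:

    /* MAC learning depends on both addresses, so they cannot stay
     * wildcarded in the resulting megaflow: */
    memset(&wc->masks.dl_src, 0xff, sizeof wc->masks.dl_src);
    memset(&wc->masks.dl_dst, 0xff, sizeof wc->masks.dl_dst);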
Hi:
in the OVS code,
MAX_FLOWS = 65536 // for megaflow
#define EM_FLOW_HASH_SHIFT 13
#define EM_FLOW_HASH_ENTRIES (1u << EM_FLOW_HASH_SHIFT) // for the EMC
So why were 65536 and 8192 chosen? Is there a performance consideration? Can I
just enlarge these numbers so that packets only ever hit the EMC?
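For context, here is how that constant is used; EM_FLOW_HASH_MASK is real, but
emc_slot() below is only an illustrative sketch of the indexing in
lib/dpif-netdev.c, not the verbatim code:

    #define EM_FLOW_HASH_SHIFT 13
    #define EM_FLOW_HASH_ENTRIES (1u << EM_FLOW_HASH_SHIFT)  /* 8192 */
    #define EM_FLOW_HASH_MASK (EM_FLOW_HASH_ENTRIES - 1)

    /* The EMC slot is the packet's RSS hash masked down, so the table
     * size fixes both the per-PMD memory footprint and the collision
     * rate: */
    static inline uint32_t
    emc_slot(uint32_t packet_hash)
    {
        return packet_hash & EM_FLOW_HASH_MASK;
    }

Enlarging EM_FLOW_HASH_SHIFT buys fewer collisions at the cost of per-PMD
memory and cache locality; a miss still falls back to the megaflow classifier,
so a larger EMC cannot make it the only lookup.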
I can reproduce this problem with the script provided by vguntaka on both OVS
version 2.5 and OVS version 2.6.
1. Add bridge
ovs-vsctl add-br br0
2. Add vm port
ovs-vsctl add-port br0 tap0 -- set interface tap0 type=internal
ip netns add ns0
ip link set dev tap0 netns ns0
Can I enable BFD on a flow-based tunnel? Does it work?
In the function nla_alloc_flow_actions() there is a check: if the action length
is greater than MAX_ACTIONS_BUFSIZE (32k), the kernel datapath flow will not be
installed, and packets will be dropped.
But in the function xlate_actions() there is this clause:
if (nl_attr_oversized(ctx.odp_actions->size)) {
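A sketch of why the two checks can disagree; the userspace predicate below is
my recollection of lib/netlink.c, and the kernel constant is the one quoted
above:

    /* Userspace only rejects action lists that cannot fit a Netlink
     * attribute's 16-bit length field, roughly 64 KB: */
    bool
    nl_attr_oversized(size_t payload_size)
    {
        return payload_size > UINT16_MAX - NLA_HDRLEN;
    }

    /* The kernel datapath is stricter: */
    #define MAX_ACTIONS_BUFSIZE (32 * 1024)

So an action list between 32 KB and roughly 64 KB passes translation in
xlate_actions() but is rejected by nla_alloc_flow_actions() at flow install
time, and packets are dropped.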
[tun]nel endpoints on OVS bridges.
https://github.com/openvswitch/ovs/blob/master/ovn/controller/bfd.c
On Tue, Sep 12, 2017 at 9:19 PM, ychen wrote:
> Can I enable BFD on a flow-based tunnel? Does it work?
We modified a little code for DPDK, so we must rebuild the OVS Debian package
with DPDK ourselves.
So is there any guide on how to build the openvswitch-dpdk package?
I have read this document, but following this guide I cannot build the package
for openvswitch-switch-dpdk.
I want to build the package with our own libdpdk; is there any guide for that?
At 2017-09-21 16:25:58, "Bodireddy, Bhanuprakash" wrote:
>>we modified little code for dpdk, so we must rebuild
hi,
I tested dp_hash for an OVS group and found that dp_hash is not well
distributed; some buckets are never selected at all.
In my testing environment, I have 11 buckets:
group_id=131841,type=select,selection_method=dp_hash,
bucket=bucket_id:51162,weight:100,actions=ct(commit,table=70,zone=2,e
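A rough sketch of the mapping as I understand it (the variable names below are
illustrative, not the actual ones in ofproto-dpif.c):

    /* dp_hash groups spread buckets over a power-of-two hash space,
     * e.g. n_hash = 16 slots for 11 buckets; the packet's dp_hash picks
     * a slot, and slots are apportioned to buckets by weight: */
    uint32_t slot = dp_hash & (n_hash - 1);
    bucket = slot_to_bucket[slot];
    /* 16 slots cannot be divided evenly among 11 equal weights, so some
     * buckets own two slots and are hit twice as often as others; a
     * bucket apportioned zero slots is never selected at all. */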
Hi,
I found that userspace VXLAN sometimes does not work well.
1. first data packet loss
When the tunnel neighbor cache is empty, the first data packet triggers
sending an ARP packet to the peer VTEP, and that data packet is dropped;
the tunnel neighbor cache only adds this entry when it receives the A
Could we update the tunnel neighbor cache when we receive a data
packet from the remote VTEP, since we can fetch tun_src and the outer source
MAC from the data packet?
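A hypothetical sketch of that learning step; tnl_neigh_set() and its signature
here are my assumption about lib/tnl-neigh-cache.c, not the actual API:

    /* On receiving a tunneled data packet, learn the remote VTEP's
     * IP-to-MAC mapping instead of waiting for an ARP exchange
     * (hypothetical helper and fields): */
    if (flow->tunnel.ip_src) {
        tnl_neigh_set(br_name, flow->tunnel.ip_src,   /* remote VTEP IP */
                      outer_eth_src);                 /* its outer MAC  */
    }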
At 2018-03-28 04:41:12, "Jan Scheurich" wrote:
>Hi Ychen,
>
>Funny! Again we are already working on a solution for problem 1.
>
Hi, Jan:
When I tested dp_hash with the new patch, vswitchd was killed by a segmentation
fault under some conditions:
1. add a group with no buckets; then winner will be NULL
2. add buckets with weight 0; then winner will also be NULL
I made a small modification to the patch; could you help check w
Hi, Jan:
I think the following code should also be modified:
+    for (int hash = 0; hash < n_hash; hash++) {
+        double max_val = 0.0;
+        struct webster *winner;
+        for (i = 0; i < n_buckets; i++) {
+            if (webster[i].value > max_val) {   ===> if bucket->weight=0,
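A minimal sketch of one way to make that loop safe, using the variables from
the patch; this is only my suggestion, not the final upstream fix:

    struct webster *winner = NULL;
    for (i = 0; i < n_buckets; i++) {
        /* Track the best candidate explicitly; if every weight is 0 (or
         * there are no buckets at all), winner simply stays NULL: */
        if (!winner || webster[i].value > winner->value) {
            winner = &webster[i];
        }
    }
    if (!winner) {
        return false;   /* nothing usable; avoid dereferencing NULL */
    }

Whether a weight-0 bucket should ever be allowed to win is a separate policy
question; the point here is only that winner must not be used uninitialized.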
1. environment
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "vf-10.180.0.95"
            Interface "vf-10.180.0.95"
                type: vxlan
                options: {csum="true", df_default="false", in_key=f
Hi, all:
I have a question: why do meter stats need to be cleared when just modifying
the meter bands?
When handle_modify_meter() is called, it finally calls
dpif_netdev_meter_set(); in this function a new dp_meter is allocated and
attached, hence the stats are cleared.
if we just upd
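A hypothetical sketch of the alternative being suggested; the field names are
my assumption about struct dp_meter, not the real layout:

    /* When only the bands change, carry the counters over to the newly
     * allocated meter instead of starting from zero (hypothetical): */
    if (old_meter) {
        new_meter->packet_count = old_meter->packet_count;
        new_meter->byte_count = old_meter->byte_count;
    }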
We can easily reproduce this phenomenon by sending a TCP socket stream from an
OVS internal port.
At 2019-10-30 19:49:16, "ychen" wrote:
Hi,
when we use docker to establish a TCP session, we found that a packet which
must do an upcall to userspace has a different encapsulated
... struct upcall *upcalls, ...
         op->dop.u.execute.needs_help = (upcall->xout.slow & SLOW_ACTION)
             != 0;
         op->dop.u.execute.probe = false;
         op->dop.u.execute.mtu = upcall->mru;
+        op->dop.u.execute.skb_hash = upcall->skb_hash;
     }
 }
--
2.1.4
hi,
I noticed that we can set a bucket's weight to 0 when adding/modifying a group.
1. when we use the default select method, and all the buckets with weight
larger than 0 become dead,
we can still pick the bucket whose weight is 0. Here is the code:
pick_default_select_group()->group_best
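For reference, a sketch of the scoring that produces this, paraphrased from
memory of group_best_live_bucket() in ofproto/ofproto-dpif-xlate.c (the '>='
detail is my reading, so treat it as an assumption):

    /* Each live bucket is scored by hash * weight. A weight-0 bucket
     * always scores 0, but '>=' against an initial best_score of 0
     * still lets it win when no live bucket scores higher: */
    uint32_t score = (hash_int(i, basis) & 0xffff) * bucket->weight;
    if (score >= best_score) {
        best_bucket = bucket;
        best_score = score;
    }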
hi,
when I send IP packets whose IP-header TTL is random in the range 1-255, with
all other IP header fields left unchanged,
255 datapath flows are generated, each with a different TTL value.
Of course, I use the dec_ttl action; here is the code:
case OFPACT_DEC_TTL:
wc->m
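The truncated line above is presumably the un-wildcarding; a sketch of the
idea, assuming the OFPACT_DEC_TTL handling in ofproto-dpif-xlate.c still looks
like this:

    case OFPACT_DEC_TTL:
        /* The action's result depends on the TTL value, so the megaflow
         * must match TTL exactly; every distinct incoming TTL therefore
         * generates its own datapath flow: */
        wc->masks.nw_ttl = 0xff;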
Hi, all:
Recently we met a problem: when using an OVS group with selection method
dp_hash, the same TCP session selects a different group bucket when a TCP
packet is retransmitted.
If we configure different SNAT gateways in the group buckets, that makes the
TCP session reset after a packet retransmission.
we
We met a problem where the same TCP session selects a different OVS group
bucket during TCP retransmission, and we can easily reproduce this phenomenon.
After some code research, we found that when TCP retransmits, the kernel may
call sk_rethink_txhash(), and this function changes skb->hash, henc
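A sketch of the kernel side, paraphrased from memory of include/net/sock.h
(exact bodies vary by kernel version):

    /* On retransmit-related events the socket's tx hash is regenerated,
     * so skb->hash (and therefore OVS's dp_hash) changes mid-session: */
    static inline void sk_rethink_txhash(struct sock *sk)
    {
        if (sk->sk_txhash)
            sk_set_txhash(sk);   /* sk->sk_txhash = net_tx_rndhash(); */
    }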
hi, I want to know how datapath stats are mapped to userspace flow stats. Are
there any documents?
example:
table=0,in_port=1, meter=11,goto_table:2
table=2,in_port=1,output:2
meter: rate=1Mbps
when I send packets at 2Mbps from port1, and totally 1 packets are
transmitted,
first I expe
1. phenomenon
ifup: waiting for lock on /run/network/ifstate.br-int
2. configurations
/etc/network/interfaces:
    allow-ovs br-int
    iface br-int inet manual
        ovs_type OVSBridge
        ovs_ports tap111
    allow-br-int tap111
    iface ngwintp inet manual
        ovs_bridge br-int
        ovs_type OVSIntPort
3. st
ovs/commit/15af3d44c65eb3cd724378ce1b30c51aa87f4f69
On 19 June 2017 at 07:17, ychen wrote:
1. phenomenon
ifup: waiting for lock on /run/network/ifstate.br-int
2. configurations
/etc/network/interfaces
allow-ovs br-int
iface br-int inet manual
ovs_type OVSBridge
ovs_ports tap111
Hi,
has anyone seen the following backtrace?
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `ovs-vswitchd unix:/var/run/openvswitch/db.sock
-vconsole:emer -vsyslog:err -vfi'.
Program terminated with signal SIGABRT, Aborted.
#0 __GI_raise
[gdb print of a mutex structure, partially garbled in the archive:
__size = {0x2, 0x0 ..., 0x2, 0x0 ...},
__align = 0x2,
where = 0x7f835b0e5520]
At 2019-08-26 19:51:20, "ychen" wrote:
Hi,
has anyone seen the following backtrace?
Using host libthread_db library "/lib/x86_64-linux-gnu/libthr
Hi,
I met a problem when sending packets using netperf in multi-threaded mode.
The reproducing conditions are like this:
1. OVS 2.10 in DPDK usermode with 2 PMDs
2. set a meter with rate=100,000 pps, burst=20,000 packets
3. when using single-threaded mode for netperf, the meter behaves correctly, and
Hi,
We found that when the same TCP session uses SNAT with a dp_hash group as the
output action,
the SYN packet and the other packets behave differently: the SYN packet outputs
to one group bucket, and the other packets output to another group bucket.
Here are the OVS flows:
table=0,in_port=DO
Thanks!
I have verified this in our testing environment, and it really worked!
At 2019-10-24 20:32:11, "Ilya Maximets" wrote:
>Mixing of RSS hash with recirculation depth is useful for flow lookup
>because same packet after recirculation should match with different
>datapath rule. Settin
Hi,
when we use docker to establish a TCP session, we found that a packet which
must do an upcall to userspace has a different encapsulated UDP source port
from a packet that only needs datapath flow forwarding.
After some code research and kprobe debugging, we found the following:
1. us
At 2021-07-28 13:46:21, "Tonghao Zhang" wrote:
>On Wed, Jul 28, 2021 at 10:57 AM ychen wrote:
>>
>> Hi, all:
>> I have a question why meter stats need cleared when just modify meter
>> bands?
>> when call function hand
hi,
when we send 2 packets with different destination MACs within 10s (the fast
datapath flow aging time), with the same userspace flow action, the second
packet acts incorrectly.
1. problem phenomenon:
userspace flow:
in_port=1,table=0,cookie=0x123,priority=500,tun_id=0x3562,actions=set_fi
Thanks!
Upgrading to OVS version 2.12.1 fixed my problem.
At 2022-08-17 18:49:46, "Ilya Maximets" wrote:
>On 8/17/22 11:32, ychen wrote:
>> hi,
>>when we send 2 packets with different dest mac in 10s(fast datapath flow
>> aging time), with the