Hi,
I would like to run some experiments with the Open vSwitch kernel module.
I followed the instructions in INSTALL.Linux to compile and start the
switch with two ports, and added one flow entry that forwards from one
port to the other.
When I use ovs-ofctl dump-flows I can see the flow entry that I added,
but when I use ovs-dpctl dump-flows it gives no output (please see the
second section in the attached file).
I am wondering whether this is normal or whether it could be some strange
behaviour caused by the way I compiled the source code.
By the way, is there a test tool that would let me manage the Open
vSwitch kernel module directly, without running the database server and
the ovs-vswitchd daemon?
(I just want to test the forwarding performance.)
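From what I can tell, ovs-dpctl itself can create and manage a kernel datapath without ovsdb-server or ovs-vswitchd running. A sketch of what I mean (command availability varies by OVS version; everything here needs root and the openvswitch kernel module loaded, and the datapath name dp0 is just an example):

```shell
# Sketch only: create a datapath and attach the two NICs directly,
# bypassing ovsdb-server / ovs-vswitchd (requires root and the
# openvswitch kernel module).
ovs-dpctl add-dp dp0
ovs-dpctl add-if dp0 eth4
ovs-dpctl add-if dp0 eth5
ovs-dpctl show dp0

# Without ovs-vswitchd there is nothing to install flows on packet
# misses; depending on the OVS version, ovs-dpctl may offer an
# add-flow command that takes datapath flow syntax directly
# (see ovs-dpctl --help / the man page).
ovs-dpctl dump-flows dp0
```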
In addition, I noticed that there are 255 tables on my system, with only
the first table (0: classifier) in use. Is there some way to remove the
tables that I don't use?
I also noticed that the Open vSwitch kernel module uses a different flow
table structure, based on a kind of array (flex_array). The old OpenFlow
reference kernel module implementation uses a combination of two lookup
tables: one hash and one linear.
Does anyone know whether there is any performance comparison between
these types of lookup structures?
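To illustrate what I mean by the difference, here is a toy micro-benchmark comparing a hash lookup against a worst-case linear scan (plain Python, nothing OVS-specific; the flow keys and table size are made up for illustration):

```python
# Toy comparison of hash-based lookup (dict) vs. linear scan (list),
# as a rough analogue of the two lookup tables in the old OpenFlow
# reference kernel module. Not representative of kernel performance.
import timeit

N = 10_000
keys = [f"flow-{i}" for i in range(N)]
hash_table = {k: i for i, k in enumerate(keys)}
linear_table = list(hash_table.items())

def hash_lookup(k):
    # O(1) expected: one hash computation and bucket probe.
    return hash_table[k]

def linear_lookup(k):
    # O(N) worst case: scan every entry until the key matches.
    for key, val in linear_table:
        if key == k:
            return val
    return None

target = keys[-1]  # last entry: worst case for the linear scan
t_hash = timeit.timeit(lambda: hash_lookup(target), number=1000)
t_linear = timeit.timeit(lambda: linear_lookup(target), number=1000)
print(f"hash: {t_hash:.6f}s  linear: {t_linear:.6f}s")
```

Of course the kernel structures also differ in wildcard handling and cache behaviour, which a toy like this cannot capture.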
Best regards,
Voravit Tanyingyong
host1:/home/voravit/openvswitch# ovs-vsctl show
d3cf0951-6a39-4ff9-bc2b-69de053bdd78
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "eth4"
Interface "eth4"
Port "eth5"
Interface "eth5"
host1:/home/voravit/openvswitch# ovs-dpctl show
system@br0:
lookups: hit:0 missed:0 lost:0
flows: 0
port 0: br0 (internal)
port 1: eth4
port 2: eth5
host1:/home/voravit/openvswitch# ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000001b215d01d0
n_tables:255, n_buffers:256
features: capabilities:0x87, actions:0xfff
1(eth4): addr:00:1b:21:5d:01:d0
config: PORT_DOWN
state: LINK_DOWN
current: 10GB-FD FIBER AUTO_NEG
advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
supported: 1GB-FD 10GB-FD FIBER AUTO_NEG
2(eth5): addr:00:1b:21:5d:01:d1
config: PORT_DOWN
state: LINK_DOWN
current: 10GB-FD FIBER AUTO_NEG
advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
supported: 1GB-FD 10GB-FD FIBER AUTO_NEG
LOCAL(br0): addr:00:1b:21:5d:01:d0
config: PORT_DOWN
state: LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0
--------------------------------------------------------------------------
host1:/home/voravit/openvswitch# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=861.021s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
host1:/home/voravit/openvswitch# ovs-ofctl add-flow br0 "in_port=1 idle_timeout=0 actions=output:2"
host1:/home/voravit/openvswitch# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=3.181s, table=0, n_packets=0, n_bytes=0, in_port=1 actions=output:2
cookie=0x0, duration=874.102s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
host1:/home/voravit/openvswitch# ovs-dpctl dump-flows br0
host1:/home/voravit/openvswitch#
--------------------------------------------------------------------------
host1:/home/voravit/openvswitch# ovs-ofctl dump-tables br0
OFPST_TABLE reply (xid=0x1): 255 tables
0: classifier: wild=0x3fffff, max=1000000, active=2
lookup=0, matched=0
1: table1 : wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
2: table2 : wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
3: table3 : wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
4: table4 : wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
:
:
253: table253: wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
254: table254: wild=0x3fffff, max=1000000, active=0
lookup=0, matched=0
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss