> -----Original Message-----
> From: haris tanvir [mailto:[email protected]]
> Sent: Tuesday, December 1, 2015 1:18 PM
> To: Traynor, Kevin
> Subject: RE: [ovs-discuss] OVS-dpdk Packet Error for rate beyond 1MPPS
>
> One more observation:
> I removed this flow
>
> in_port=1,priority=0,actions=FLOOD
>
> and inserted this one
>
> in_port=1,priority=0,actions=output:2
>
> and my performance went from 1 million PPS to 4 million PPS.
>
> When I insert the old flow back, I get 1 million PPS again. Since both
> flows are basically doing the same thing (I only have 2 ports, so FLOOD
> is the same as output to port 2), why am I getting different performance
> with these two flows?
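For reference, the flow swap described above amounts to the following
ovs-ofctl commands (a minimal sketch, using the bridge name br0 that
appears later in this thread):

    # FLOOD variant: replicates each packet to every flood-enabled port
    # except the ingress port, which on this bridge also includes LOCAL
    ovs-ofctl del-flows br0
    ovs-ofctl add-flow br0 "in_port=1,priority=0,actions=FLOOD"

    # output variant: sends each packet to exactly one port
    ovs-ofctl del-flows br0
    ovs-ofctl add-flow br0 "in_port=1,priority=0,actions=output:2"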
Re-adding list. You are also sending to the LOCAL port when you are using
FLOOD, and there is a cost associated with that.

> P.S: I observe no errors when I delete all the flows. As soon as I
> insert a flow, I see packet errors at higher packet rates.
>
> > From: [email protected]
> > To: [email protected]; [email protected]
> > Subject: RE: [ovs-discuss] OVS-dpdk Packet Error for rate beyond 1MPPS
> > Date: Tue, 1 Dec 2015 10:52:33 +0000
> >
> > > -----Original Message-----
> > > From: discuss [mailto:[email protected]] On Behalf Of haris tanvir
> > > Sent: Sunday, November 29, 2015 1:00 PM
> > > To: [email protected]
> > > Subject: [ovs-discuss] OVS-dpdk Packet Error for rate beyond 1MPPS
> > >
> > > Hi,
> > > I have configured OVS with DPDK on a host. I have created a
> > > netdev-type bridge and assigned one physical port and one vhost-user
> > > port to it. Then I created a VM using libvirt and assigned the
> > > vhost-user port to the VM. I am running the DPDK L2 forwarding
> > > application inside the VM (using the DPDK poll mode driver).
> > >
> > > I have another machine with pktgen-dpdk running on the host.
> > >
> > > When I send traffic from pktgen to the VM at a rate of 1 million
> > > packets per second (64-byte packets), L2 forwarding in the VM works
> > > fine with a zero drop rate. However, when I go beyond 1 million
> > > packets per second from pktgen, I see packet errors on the OVS
> > > physical port, and as a result any additional packets above 1 Mpps do
> > > not reach the VM. Please let me know why I am facing this issue.
> > >
> > > OVS OUTPUT:
> > > ovs-ofctl dump-ports br0
> > > OFPST_PORT reply (xid=0x2): 3 ports
> > >   port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
> > >           tx pkts=292126777, bytes=17527676820, drop=0, errs=0, coll=0
> > >   port  1: rx pkts=4917972248, bytes=333253670655, drop=0,
> > >           errs=7153546496, frame=0, over=0, crc=0
> > >           tx pkts=4917673320, bytes=333233296044, drop=0, errs=0, coll=0
> > >   port  2: rx pkts=4917673320, bytes=?, drop=?, errs=?, frame=?,
> > >           over=?, crc=?
> > >           tx pkts=4917967282, bytes=?, drop=287, errs=?, coll=?
> > >
> > > LIBVIRT TAG to create VM:
> > >     <interface type='vhostuser'>
> > >       <mac address='52:54:00:3b:83:1a'/>
> > >       <source type='unix'
> > >               path='/usr/local/var/run/openvswitch/dpdkvhost0'
> > >               mode='client'/>
> > >       <model type='virtio'/>
> > >       <driver>
> > >         <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
> > >               ufo='off'/>
> > >         <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
> > >       </driver>
> > >       <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
> > >                function='0x0'/>
> > >     </interface>
> > >
> > > OVS performance parameters:
> > > coremask = 0x100 (2 sibling cores) for the OVS on host
> >
> > Which coremask? The -c command line arg or pmd-cpu-mask?
> >
> > Have a look with top and check that the forwarding in the guest and the
> > pmd are not on the same core. Also, have a look at PMD affinitization
> > in the performance tuning section:
> > https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md#performance-tuning
> >
> > > Hugepage=1024Kb for OVS in host
> > > coremask = 0x3 (1 logical core) for DPDK L2 Fwd in guest
> > >
> > > Regards
> > > Haris Tanvir

_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
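For anyone trying to reproduce the topology described in this thread, a
minimal sketch of the bridge setup (assuming an OVS build with DPDK
support; the port name dpdkvhost0 matches the vhost-user socket path in
the libvirt XML above, while dpdk0 is an assumed name following the
dpdkN convention in INSTALL.DPDK.md):

    # userspace (netdev) datapath bridge
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

    # physical DPDK port (OpenFlow port 1)
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk

    # vhost-user port for the VM (OpenFlow port 2); OVS creates the
    # socket at /usr/local/var/run/openvswitch/dpdkvhost0 for QEMU
    ovs-vsctl add-port br0 dpdkvhost0 -- \
        set Interface dpdkvhost0 type=dpdkvhostuser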

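On the pmd-cpu-mask question above: a sketch of how the PMD threads can
be pinned and inspected (the 0x100 mask is the value quoted in the
thread; pmd-cpu-mask is documented in the performance-tuning section
linked above, and pmd-stats-show is available in OVS builds of roughly
this vintage and later):

    # pin OVS PMD threads to the cores in mask 0x100 (i.e. core 8)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100

    # show per-PMD-thread statistics, including which core each runs on
    ovs-appctl dpif-netdev/pmd-stats-show

Then cross-check with top (press "1" for the per-CPU view) that the
guest's l2fwd vCPU and the host PMD thread are not landing on the same
physical core, as suggested in the thread.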