Hi, Ben,

Thanks, I followed the command in
http://openvswitch.org/pipermail/discuss/2013-November/011994.html, and it
works for me.
However, I found one problem. My machine has 8 cores; ovs-vswitchd works
fine if I set n-handler-threads=10, but it exits if I increase the number
to 20. Is there an upper limit on the number of threads, proportional to
the number of cores?
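
For reference, the setting from that archived post can be applied like this
(a sketch; 10 is just the value from my test above, and the option is the
one documented in ovs-vswitchd.conf.db(5)):

```shell
# Ask ovs-vswitchd to start 10 upcall handler threads.
ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=10

# Read the value back to confirm it was stored in the database.
ovs-vsctl get Open_vSwitch . other_config:n-handler-threads
```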

Thanks,
Xuemei

On Sat, Jul 25, 2015 at 8:16 PM, Ben Pfaff <[email protected]> wrote:

> It's documented, see ovs-vswitchd.conf.db(5).
>
> On Sat, Jul 25, 2015 at 06:58:09PM -0700, Xuemei Liu wrote:
> > This is the thread information for ovs-vswitchd. I tested with different
> > traffic rates; the number of threads for ovs-vswitchd is already 11,
> > including 5 handler threads (these should be the threads handling
> > userspace upcalls). How can I increase the number of handler threads?
> > [image: Inline image 1]
> >
> > Thanks,
> > Xuemei
> >
> >
> > On Sat, Jul 25, 2015 at 6:15 PM, Xuemei Liu <[email protected]>
> wrote:
> >
> > > Hi, all,
> > >
> > > Following Ben's advice, I can forward all packets from kernel space to
> > > userspace. However, the performance of handling packets in userspace is
> > > not good enough. I read the code and found that threads are used to
> > > handle the userspace upcalls (in ofproto/ofproto-dpif-upcall.c, function
> > > udpif_set_threads()). According to the code, the number of threads
> > > should be adjustable. What I traced in the code is shown below (in
> > > bridge.c), but I don't know where to set the number of threads. Could
> > > you give me a hint if possible?
> > > [image: Inline image 1]
> > >
> > > Thanks,
> > > Xuemei
> > >
> > > On Fri, Jul 24, 2015 at 1:24 PM, Ben Pfaff <[email protected]> wrote:
> > >
> > >> It doesn't sound like you have a bottleneck, if CPU is less than 10%,
> > >> so I don't understand the question.
> > >>
> > >> On Fri, Jul 24, 2015 at 09:42:05AM -0700, Xuemei Liu wrote:
> > >> > Hi, Ben,
> > >> >
> > >> > I have one follow-up question. I set up a topology with 12 hosts
> > >> > and 12 switches in Mininet. The hosts (acting as the packet
> > >> > generators for the network) should send around 30 pps so that the
> > >> > switches do not drop packets. However, I find that the CPU/MEM
> > >> > usage of ovs-vswitchd is not high (both < 10%). In your experience,
> > >> > where do you think the bottleneck is? How can I improve the packet
> > >> > processing speed in userspace in Mininet?
> > >> >
> > >> > Thanks,
> > >> > Xuemei
> > >> >
> > >> > On Fri, Jul 24, 2015 at 9:30 AM, Xuemei Liu <[email protected]>
> > >> wrote:
> > >> >
> > >> > > Thanks very much Ben.
> > >> > >
> > >> > > On Fri, Jul 24, 2015 at 9:25 AM, Ben Pfaff <[email protected]>
> wrote:
> > >> > >
> > >> > >> Any SLOW_* constant indicates why a packet can't be processed in
> > >> > >> the fast path (e.g. in the kernel).  Such packets always have to
> > >> > >> be handled in userspace.  Thus, tagging all flow translations
> > >> > >> with any SLOW_* constant causes them all to be sent to userspace.
> > >> > >>
> > >> > >> On Thu, Jul 23, 2015 at 05:07:58PM -0700, Xuemei Liu wrote:
> > >> > >> > Hi, Ben,
> > >> > >> >
> > >> > >> > After I decreased the sending rate at h1, I find that s1
> > >> > >> > userspace/h2 can receive all packets. What does SLOW_ACTION
> > >> > >> > mean? Why does this make the packets get forwarded from kernel
> > >> > >> > space to userspace?
> > >> > >> >
> > >> > >> > Thanks,
> > >> > >> > Xuemei
> > >> > >> >
> > >> > >> > On Thu, Jul 23, 2015 at 4:46 PM, Ben Pfaff <[email protected]>
> wrote:
> > >> > >> >
> > >> > >> > > Well, yes, there are performance problems, as I predicted.
> > >> > >> > >
> > >> > >> > > On Thu, Jul 23, 2015 at 04:42:17PM -0700, Xuemei Liu wrote:
> > >> > >> > > > Let me try to describe it. Suppose the topology is
> > >> > >> > > > h1-s1-h2, where h1 and h2 are hosts, and s1 is the OVS
> > >> > >> > > > switch. I add a policy in s1 to forward packets with the
> > >> > >> > > > dstip of h2 to h2. Then I test two scenarios.
> > >> > >> > > > 1. h1 sends 9 packets (3 packets for each of 3 different
> > >> > >> > > > flows) to h2. With your method, userspace in s1 accepts
> > >> > >> > > > all the packets, and h2 receives all 9 packets. Works
> > >> > >> > > > perfectly.
> > >> > >> > > > 2. h1 sends 10000 packets to h2. In this case, h2 receives
> > >> > >> > > > the same packets as userspace in s1 does. However, the
> > >> > >> > > > number of packets received is much less than 10000, which
> > >> > >> > > > means many packets are lost in s1.
> > >> > >> > > > Is this clear now?
> > >> > >> > > >
> > >> > >> > > > Thanks,
> > >> > >> > > > Xuemei
> > >> > >> > > >
> > >> > >> > > > On Thu, Jul 23, 2015 at 4:18 PM, Ben Pfaff <[email protected]
> >
> > >> wrote:
> > >> > >> > > >
> > >> > >> > > > > That's too vague for me to guess.  What packets are
> > >> > >> > > > > getting lost?
> > >> > >> > > > >
> > >> > >> > > > > On Thu, Jul 23, 2015 at 04:10:14PM -0700, Xuemei Liu
> wrote:
> > >> > >> > > > > > Hi, Ben,
> > >> > >> > > > > >
> > >> > >> > > > > > I tried your method, but the switch seems to drop some
> > >> > >> > > > > > packets, as it does not output all the packets that I
> > >> > >> > > > > > forwarded to it. Any advice?
> > >> > >> > > > > >
> > >> > >> > > > > > Thanks,
> > >> > >> > > > > > Xuemei
> > >> > >> > > > > >
> > >> > >> > > > > > On Thu, Jul 23, 2015 at 10:19 AM, Xuemei Liu <[email protected]> wrote:
> > >> > >> > > > > >
> > >> > >> > > > > > > Hi, Ben,
> > >> > >> > > > > > >
> > >> > >> > > > > > > Thanks for your response.
> > >> > >> > > > > > > "You realize that this will be terrible for
> > >> > >> > > > > > > performance, right?"
> > >> > >> > > > > > > In fact, I have not gotten all packets sent to
> > >> > >> > > > > > > userspace. That is the problem I am facing now. I
> > >> > >> > > > > > > think performance might be another problem to solve
> > >> > >> > > > > > > after I can receive all packets in userspace.
> > >> > >> > > > > > >
> > >> > >> > > > > > > Thanks,
> > >> > >> > > > > > > Xuemei
> > >> > >> > > > > > >
> > >> > >> > > > > > > On Thu, Jul 23, 2015 at 10:13 AM, Ben Pfaff <[email protected]> wrote:
> > >> > >> > > > > > >
> > >> > >> > > > > > >> On Thu, Jul 23, 2015 at 10:04:23AM -0700, Xuemei Liu wrote:
> > >> > >> > > > > > >> > I am new to OVS, and I am trying to send all
> > >> > >> > > > > > >> > packets from kernel space to userspace. I
> > >> > >> > > > > > >> > commented out the "unlikely(!flow)" check in
> > >> > >> > > > > > >> > datapath/datapath.c. However, it seems I only
> > >> > >> > > > > > >> > receive the first packet of each flow (the first
> > >> > >> > > > > > >> > packet that matches a forwarding rule in the
> > >> > >> > > > > > >> > bridge) in userspace. Could anyone tell me why?
> > >> > >> > > > > > >> > And is there another way to achieve my goal?
> > >> > >> > > > > > >>
> > >> > >> > > > > > >> You realize that this will be terrible for
> > >> > >> > > > > > >> performance, right?
> > >> > >> > > > > > >>
> > >> > >> > > > > > >> It's kind of a waste to modify the kernel module for
> > >> > >> > > > > > >> this.  I'd just modify userspace to send all packets
> > >> > >> > > > > > >> to userspace, something like this:
> > >> > >> > > > > > >>
> > >> > >> > > > > > >> diff --git a/ofproto/ofproto-dpif-xlate.c b/ofproto/ofproto-dpif-xlate.c
> > >> > >> > > > > > >> index 52395a7..a98406a 100644
> > >> > >> > > > > > >> --- a/ofproto/ofproto-dpif-xlate.c
> > >> > >> > > > > > >> +++ b/ofproto/ofproto-dpif-xlate.c
> > >> > >> > > > > > >> @@ -4777,7 +4777,7 @@ xlate_actions(struct xlate_in *xin, struct xlate_out *xout)
> > >> > >> > > > > > >>
> > >> > >> > > > > > >>      ctx.xin = xin;
> > >> > >> > > > > > >>      ctx.xout = xout;
> > >> > >> > > > > > >> -    ctx.xout->slow = 0;
> > >> > >> > > > > > >> +    ctx.xout->slow = SLOW_ACTION;
> > >> > >> > > > > > >>      ctx.xout->has_learn = false;
> > >> > >> > > > > > >>      ctx.xout->has_normal = false;
> > >> > >> > > > > > >>      ctx.xout->has_fin_timeout = false;
> > >> > >> > > > > > >
> > >> > >> > > > > > >
> > >> > >> > > > >
> > >> > >> > >
> > >> > >>
> > >> > >
> > >> > >
> > >>
> > >
> > >
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
