On Fri, Sep 23, 2011 at 03:24:47PM -0700, Jesse Gross wrote:
> On Fri, Sep 23, 2011 at 2:53 PM, Ben Pfaff <[email protected]> wrote:
> > On Fri, Sep 23, 2011 at 02:20:17PM -0700, Jesse Gross wrote:
> >> Currently it is possible for a client on a single port to generate
> >> a huge number of packets that miss in the kernel flow table and
> >> monopolize the userspace/kernel communication path.  This
> >> effectively DoS's the machine because no new flow setups can take
> >> place.  This adds some additional fairness by separating each upcall
> >> type for each object in the datapath onto a separate socket, each
> >> with its own queue.  Userspace then reads round-robin from each
> >> socket so other flow setups can still succeed.
> >>
> >> Since the number of objects can potentially be large, we don't always
> >> have a unique socket for each.  Instead, we create 16 sockets and
> >> spread the load around them in a round-robin fashion.  It's theoretically
> >> possible to do better than this with some kind of active load balancing
> >> scheme but this seems like a good place to start.
> >>
> >> Feature #6485
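
For context, the round-robin spreading described in the commit message
above amounts to handing each datapath port one socket out of a small
fixed pool, and having userspace service that pool round-robin so one
busy port cannot starve the others.  A rough sketch, with hypothetical
names rather than the actual datapath code:

    #define N_UPCALL_SOCKETS 16            /* fixed pool size from the patch */

    struct upcall_pool {
        int socks[N_UPCALL_SOCKETS];       /* e.g. Netlink socket fds */
        unsigned int next;                 /* round-robin assignment cursor */
    };

    /* Hand out the next socket in the pool to a newly added port. */
    static int
    upcall_pool_assign(struct upcall_pool *pool)
    {
        int sock = pool->socks[pool->next];
        pool->next = (pool->next + 1) % N_UPCALL_SOCKETS;
        return sock;
    }

    /* Userspace side: take at most one batch from each socket per pass so
     * a flood of misses on one port cannot block flow setups on others. */
    static void
    service_upcalls(const struct upcall_pool *pool)
    {
        for (int i = 0; i < N_UPCALL_SOCKETS; i++) {
            int sock = pool->socks[i];
            /* read_upcall_batch(sock); -- placeholder for the real receive */
            (void) sock;
        }
    }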
> >
> > Seems reasonable.
> >
> > Again I'd rate-limit the new warnings in set_upcall_pids().
> 
> These warnings are actually the same ones as patch 2, just moved.
> There are some new debug messages but they only trigger when a port is
> added or the listen mask changes, so it should be pretty low volume.
> 
> > The call to dpif_flow_dump_next() should supply NULL for the actions
> > and n_actions parameters since it doesn't need the actions.  This is a
> > significant optimization because of how dpif_linux_flow_dump_next() is
> > implemented.  (I think I should have mentioned the same thing in an
> > earlier patch but I didn't notice until now.)
> 
> That's a good point, I forgot that it could be NULL.  I also killed
> off stats and folded this into patch 2.
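
The dump-side change boils down to passing NULL for the out-parameters
the iteration doesn't need, so dpif_linux_flow_dump_next() can skip
fetching them.  Very roughly (the call below is approximated from this
discussion, not copied from dpif.h, and may not match the tree exactly):

    /* Key-only flow dump: actions and stats are not requested. */
    const struct nlattr *key;
    size_t key_len;
    struct dpif_flow_dump dump;

    dpif_flow_dump_start(&dump, dpif);
    while (dpif_flow_dump_next(&dump, &key, &key_len,
                               NULL, NULL,   /* actions, n_actions: unused */
                               NULL)) {      /* stats: dropped by this series */
        /* ...consume 'key'/'key_len'... */
    }
    dpif_flow_dump_done(&dump);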

That all makes sense.  Thank you.