On Mon, Sep 19, 2011 at 3:00 PM, Jesse Gross <je...@nicira.com> wrote:
> Currently it is possible for a client on a single port to generate
> a huge number of packets that miss in the kernel flow table and
> monopolize the userspace/kernel communication path.  This
> effectively DoS's the machine because no new flow setups can take
> place.  This adds some additional fairness by separating each upcall
> type for each object in the datapath onto a separate socket, each
> with its own queue.  Userspace then reads round-robin from each
> socket so other flow setups can still succeed.
>
> Since the number of objects can potentially be large, we don't always
> have a unique socket for each.  Instead, we create 16 sockets and
> spread the load around them in a round robin fashion.  It's theoretically
> possible to do better than this with some kind of active load balancing
> scheme but this seems like a good place to start.

I am not sure why you are using different ports for flow-related upcalls.
Because of the round-robin assignment of upcall sockets to vports, it ends
up looking more like a random socket assignment.

The DoS happens because of missed packets, and OVS has no control over
those miss upcalls, so it makes sense to have a separate queue for each
vport. But flow-related upcalls can be (somewhat) managed by OVS: for
sFlow upcalls it is the sampling rate, and for userspace upcalls it is the
controller's actions themselves.

So we could have one socket per DP for flow-related upcalls.
If you think controller actions can generate a similar DoS, then maybe we
can use the same vport upcall socket to send flow-related upcalls from a
given vport. I think that would behave better under a DoS using
flow-related upcalls, since traffic from one vport (VM) across all its
flows goes to one socket instead of spreading across most of them and
again monopolizing the upcall path.
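
To make that concrete, here is a rough sketch of the selection I have in
mind (hypothetical names and structures, not the actual datapath code):
miss upcalls keep their per-vport socket, while flow-related upcalls either
share one per-DP socket or, if we want to contain a flow-upcall DoS, reuse
the originating vport's socket.

#include <stdbool.h>

/* Hypothetical structures, just for illustration. */
struct vport {
    int upcall_sock;        /* Socket assigned to this vport for miss upcalls. */
};

struct datapath {
    int flow_upcall_sock;   /* One shared socket per DP for flow-related upcalls. */
};

enum upcall_type {
    UPCALL_MISS,            /* Packet missed the kernel flow table. */
    UPCALL_SAMPLE,          /* sFlow sample. */
    UPCALL_ACTION,          /* Userspace action installed by the controller. */
};

/* Choose the socket an upcall is queued to.  Miss upcalls stay on the
 * per-vport socket.  Flow-related upcalls either share one per-DP socket,
 * or reuse the originating vport's socket so a flood from one VM stays on
 * one queue. */
static int
choose_upcall_sock(const struct datapath *dp, const struct vport *vport,
                   enum upcall_type type, bool reuse_vport_sock)
{
    if (type == UPCALL_MISS || reuse_vport_sock) {
        return vport->upcall_sock;
    }
    return dp->flow_upcall_sock;
}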

With this approach we would have much better control over how the
(limited) upcall sockets are assigned to vport upcall traffic, which could
help with load balancing in the future. It would also simplify the upcall
code and the flow parameters.