On Fri, Jan 10, 2014 at 12:53 PM, Jesse Gross <je...@nicira.com> wrote:

> On Thu, Jan 9, 2014 at 5:19 PM, Andy Zhou <az...@nicira.com> wrote:
> > diff --git a/datapath/flow_table.c b/datapath/flow_table.c
> > index 4232b82..4170a96 100644
> > --- a/datapath/flow_table.c
> > +++ b/datapath/flow_table.c
> >  void ovs_flow_free(struct sw_flow *flow, bool deferred)
> >  {
> >         if (!flow)
> >                 return;
> >
> > -       flow_mask_del_ref(flow->mask, deferred);
> > +       if (flow->mask) {
> > +               struct sw_flow_mask *mask = flow->mask;
> > +
> > +               BUG_ON(!atomic_read(&mask->ref_count));
> > +
> > +               if (atomic_dec_and_test(&mask->ref_count)) {
> > +                       list_del_rcu(&mask->list);
>
> I think this can potentially leave the linked list still unprotected
> since it is shared across masks.
>
You are right. The mask list should always be protected against this race
condition.
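
To make that concrete, something along these lines would keep all
manipulation of the shared tbl->mask_list under ovs_mutex and only defer
the actual free via RCU.  This is only a rough sketch, not a proposed
patch: flow_mask_put() is a hypothetical helper, and ASSERT_OVSL() plus
the rcu field in struct sw_flow_mask are assumed from the existing
datapath code.

/* Rough sketch only: unlink the mask while ovs_mutex is held and defer
 * just the kfree via RCU.  Assumes ASSERT_OVSL() and a struct rcu_head
 * rcu member in struct sw_flow_mask, as in the current datapath. */
static void flow_mask_put(struct sw_flow_mask *mask, bool deferred)
{
        ASSERT_OVSL();          /* mask_list writers must hold ovs_mutex */

        BUG_ON(!atomic_read(&mask->ref_count));

        if (atomic_dec_and_test(&mask->ref_count)) {
                list_del_rcu(&mask->list);   /* serialized with insert */
                if (deferred)
                        kfree_rcu(mask, rcu);
                else
                        kfree(mask);
        }
}

That way list_del_rcu() always runs under the same lock that
flow_mask_insert() holds when it walks and modifies tbl->mask_list.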

>
> > @@ -563,9 +553,9 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
> >                 mask->key = new->key;
> >                 mask->range = new->range;
> >                 list_add_rcu(&mask->list, &tbl->mask_list);
> > -       }
> > +       } else if (atomic_inc_not_zero(&mask->ref_count) == 0)
> > +               return -EAGAIN;
>
> It seems like it might be nicer to handle this directly in the kernel
> rather than dumping the problem on userspace.
>
I agree, it would be nicer. I will send out a v2 that addresses both
points.
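
For reference, one way the kernel could absorb the race instead of
returning -EAGAIN would look roughly like the following.  Again only a
sketch under assumptions: flow_mask_find() and mask_alloc() are taken
from the existing flow_table.c, mask_alloc() is assumed to start
ref_count at 1, and both insert and the final put are assumed to be
serialized by ovs_mutex.

/* Sketch of handling the race in the kernel: with insert and the final
 * put both serialized by ovs_mutex, the ref count can never be observed
 * at zero here, so no -EAGAIN has to be pushed to userspace. */
static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
                            struct sw_flow_mask *new)
{
        struct sw_flow_mask *mask;

        ASSERT_OVSL();

        mask = flow_mask_find(tbl, new);
        if (!mask) {
                /* New mask: mask_alloc() is assumed to set ref_count to 1. */
                mask = mask_alloc();
                if (!mask)
                        return -ENOMEM;
                mask->key = new->key;
                mask->range = new->range;
                list_add_rcu(&mask->list, &tbl->mask_list);
        } else {
                /* Existing mask: just take another reference. */
                atomic_inc(&mask->ref_count);
        }

        flow->mask = mask;
        return 0;
}

With that, userspace never needs to retry the flow setup.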