Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR

2017-08-30 Thread Simon Horman
On Tue, Aug 29, 2017 at 03:25:35AM +, Chris Mi wrote:
> 
> 
> > -Original Message-
> > From: Simon Horman [mailto:simon.hor...@netronome.com]
> > Sent: Monday, August 28, 2017 7:37 PM
> > To: Chris Mi <chr...@mellanox.com>
> > Cc: netdev@vger.kernel.org; j...@mojatatu.com;
> > xiyou.wangc...@gmail.com; j...@resnulli.us; da...@davemloft.net;
> > mawil...@microsoft.com
> > Subject: Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR
> > 
> > On Mon, Aug 28, 2017 at 02:41:16AM -0400, Chris Mi wrote:
> > > Currently, all filters with the same priority are linked in a doubly
> > > linked list. Every filter should have a unique handle. To make the
> > > handle unique, we need to iterate the list every time to see if the
> > > handle exists or not when inserting a new filter. It is time-consuming.
> > > For example, it takes about 5m3.169s to insert 64K rules.
> > >
> > > This patch changes cls_flower to use IDR. With this patch, it takes
> > > about 0m1.127s to insert 64K rules. The improvement is huge.
> > 
> > Very nice :)
> > 
> > > But please note that in this testing, all filters share the same action.
> > > If every filter has a unique action, that is another bottleneck.
> > > Follow-up patch in this patchset addresses that.
> > >
> > > Signed-off-by: Chris Mi <chr...@mellanox.com>
> > > Signed-off-by: Jiri Pirko <j...@mellanox.com>
> > > ---
> > >  net/sched/cls_flower.c | 55 +++++++++++++++++++++++--------------------------------
> > >  1 file changed, 23 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index
> > > bd9dab4..3d041d2 100644
> > > --- a/net/sched/cls_flower.c
> > > +++ b/net/sched/cls_flower.c
> > 
> > ...
> > 
> > > @@ -890,6 +870,7 @@ static int fl_change(struct net *net, struct sk_buff
> > *in_skb,
> > >   struct cls_fl_filter *fnew;
> > >   struct nlattr **tb;
> > >   struct fl_flow_mask mask = {};
> > > + unsigned long idr_index;
> > >   int err;
> > >
> > >   if (!tca[TCA_OPTIONS])
> > > @@ -920,13 +901,21 @@ static int fl_change(struct net *net, struct sk_buff
> > *in_skb,
> > >   goto errout;
> > >
> > >   if (!handle) {
> > > - handle = fl_grab_new_handle(tp, head);
> > > - if (!handle) {
> > > - err = -EINVAL;
> > > + err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> > > + 1, 0x80000000, GFP_KERNEL);
> > > + if (err)
> > >   goto errout;
> > > - }
> > > + fnew->handle = idr_index;
> > > + }
> > > +
> > > + /* user specifies a handle and it doesn't exist */
> > > + if (handle && !fold) {
> > > + err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> > > + handle, handle + 1, GFP_KERNEL);
> > > + if (err)
> > > + goto errout;
> > > + fnew->handle = idr_index;
> > >   }
> > > - fnew->handle = handle;
> > >
> > >   if (tb[TCA_FLOWER_FLAGS]) {
> > >   fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
> > > @@ -980,6 +969,8 @@ static int fl_change(struct net *net, struct sk_buff
> > *in_skb,
> > >   *arg = fnew;
> > >
> > >   if (fold) {
> > > + fnew->handle = handle;
> > 
> > Can it be the case that fold is non-NULL and handle is zero?
> > The handling of that case seems to have changed in this patch.
> I don't think that could happen.  In function tc_ctl_tfilter(),
> fl_get() will be called.  If handle is zero, fl_get() will return NULL.
> That means fold is NULL.

Thanks for the explanation, I see that now.

> > > + idr_replace_ext(&head->handle_idr, fnew, fnew->handle);
> > >   list_replace_rcu(&fold->list, &fnew->list);
> > >   tcf_unbind_filter(tp, &fold->res);
> > >   call_rcu(&fold->rcu, fl_destroy_filter);


RE: [patch net-next 2/3] net/sched: Change cls_flower to use IDR

2017-08-28 Thread Chris Mi


> -Original Message-
> From: Simon Horman [mailto:simon.hor...@netronome.com]
> Sent: Monday, August 28, 2017 7:37 PM
> To: Chris Mi <chr...@mellanox.com>
> Cc: netdev@vger.kernel.org; j...@mojatatu.com;
> xiyou.wangc...@gmail.com; j...@resnulli.us; da...@davemloft.net;
> mawil...@microsoft.com
> Subject: Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR
> 
> On Mon, Aug 28, 2017 at 02:41:16AM -0400, Chris Mi wrote:
> > Currently, all filters with the same priority are linked in a doubly
> > linked list. Every filter should have a unique handle. To make the
> > handle unique, we need to iterate the list every time to see if the
> > handle exists or not when inserting a new filter. It is time-consuming.
> > For example, it takes about 5m3.169s to insert 64K rules.
> >
> > This patch changes cls_flower to use IDR. With this patch, it takes
> > about 0m1.127s to insert 64K rules. The improvement is huge.
> 
> Very nice :)
> 
> > But please note that in this testing, all filters share the same action.
> > If every filter has a unique action, that is another bottleneck.
> > Follow-up patch in this patchset addresses that.
> >
> > Signed-off-by: Chris Mi <chr...@mellanox.com>
> > Signed-off-by: Jiri Pirko <j...@mellanox.com>
> > ---
> >  net/sched/cls_flower.c | 55 +++++++++++++++++++++++--------------------------------
> >  1 file changed, 23 insertions(+), 32 deletions(-)
> >
> > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index
> > bd9dab4..3d041d2 100644
> > --- a/net/sched/cls_flower.c
> > +++ b/net/sched/cls_flower.c
> 
> ...
> 
> > @@ -890,6 +870,7 @@ static int fl_change(struct net *net, struct sk_buff
> *in_skb,
> > struct cls_fl_filter *fnew;
> > struct nlattr **tb;
> > struct fl_flow_mask mask = {};
> > +   unsigned long idr_index;
> > int err;
> >
> > if (!tca[TCA_OPTIONS])
> > @@ -920,13 +901,21 @@ static int fl_change(struct net *net, struct sk_buff
> *in_skb,
> > goto errout;
> >
> > if (!handle) {
> > -   handle = fl_grab_new_handle(tp, head);
> > -   if (!handle) {
> > -   err = -EINVAL;
> > +   err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> > +   1, 0x80000000, GFP_KERNEL);
> > +   if (err)
> > goto errout;
> > -   }
> > +   fnew->handle = idr_index;
> > +   }
> > +
> > +   /* user specifies a handle and it doesn't exist */
> > +   if (handle && !fold) {
> > +   err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> > +   handle, handle + 1, GFP_KERNEL);
> > +   if (err)
> > +   goto errout;
> > +   fnew->handle = idr_index;
> > }
> > -   fnew->handle = handle;
> >
> > if (tb[TCA_FLOWER_FLAGS]) {
> > fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
> > @@ -980,6 +969,8 @@ static int fl_change(struct net *net, struct sk_buff
> *in_skb,
> > *arg = fnew;
> >
> > if (fold) {
> > +   fnew->handle = handle;
> 
> Can it be the case that fold is non-NULL and handle is zero?
> The handling of that case seems to have changed in this patch.
I don't think that could happen.  In function tc_ctl_tfilter(),
fl_get() will be called.  If handle is zero, fl_get() will return NULL.
That means fold is NULL.

> 
> > +   idr_replace_ext(&head->handle_idr, fnew, fnew->handle);
> > list_replace_rcu(&fold->list, &fnew->list);
> > tcf_unbind_filter(tp, &fold->res);
> > call_rcu(&fold->rcu, fl_destroy_filter);
> > --
> > 1.8.3.1
> >


RE: [patch net-next 2/3] net/sched: Change cls_flower to use IDR

2017-08-28 Thread Chris Mi
> -Original Message-
> From: Jamal Hadi Salim [mailto:j...@mojatatu.com]
> Sent: Tuesday, August 29, 2017 5:56 AM
> To: Chris Mi <chr...@mellanox.com>; netdev@vger.kernel.org
> Cc: xiyou.wangc...@gmail.com; j...@resnulli.us; da...@davemloft.net;
> mawil...@microsoft.com
> Subject: Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR
> 
> On 17-08-28 02:41 AM, Chris Mi wrote:
> > Currently, all filters with the same priority are linked in a doubly
> > linked list. Every filter should have a unique handle. To make the
> > handle unique, we need to iterate the list every time to see if the
> > handle exists or not when inserting a new filter. It is time-consuming.
> > For example, it takes about 5m3.169s to insert 64K rules.
> >
> > This patch changes cls_flower to use IDR. With this patch, it takes
> > about 0m1.127s to insert 64K rules. The improvement is huge.
> >
> > But please note that in this testing, all filters share the same action.
> > If every filter has a unique action, that is another bottleneck.
> > Follow-up patch in this patchset addresses that.
> >
> > Signed-off-by: Chris Mi <chr...@mellanox.com>
> > Signed-off-by: Jiri Pirko <j...@mellanox.com>
> 
> Acked-by: Jamal Hadi Salim <j...@mojatatu.com>
> 
> As Cong asked last time - any plans to add to other classifiers?
I think that if other classifiers don't need to handle so many filters, a list
is enough for them. If we changed all of them, we would need to spend a lot of
time testing them to make sure there are no regressions, but the benefit is
not very big. If a certain classifier needs to change in the future, flower is
an example for reference.

-Chris
> 
> cheers,
> jamal


Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR

2017-08-28 Thread Jamal Hadi Salim

On 17-08-28 02:41 AM, Chris Mi wrote:

Currently, all filters with the same priority are linked in a doubly
linked list. Every filter should have a unique handle. To make the
handle unique, we need to iterate the list every time to see if the
handle exists or not when inserting a new filter. It is time-consuming.
For example, it takes about 5m3.169s to insert 64K rules.

This patch changes cls_flower to use IDR. With this patch, it
takes about 0m1.127s to insert 64K rules. The improvement is huge.

But please note that in this testing, all filters share the same action.
If every filter has a unique action, that is another bottleneck.
Follow-up patch in this patchset addresses that.

Signed-off-by: Chris Mi 
Signed-off-by: Jiri Pirko 


Acked-by: Jamal Hadi Salim 

As Cong asked last time - any plans to add to other classifiers?

cheers,
jamal


Re: [patch net-next 2/3] net/sched: Change cls_flower to use IDR

2017-08-28 Thread Simon Horman
On Mon, Aug 28, 2017 at 02:41:16AM -0400, Chris Mi wrote:
> Currently, all filters with the same priority are linked in a doubly
> linked list. Every filter should have a unique handle. To make the
> handle unique, we need to iterate the list every time to see if the
> handle exists or not when inserting a new filter. It is time-consuming.
> For example, it takes about 5m3.169s to insert 64K rules.
> 
> This patch changes cls_flower to use IDR. With this patch, it
> takes about 0m1.127s to insert 64K rules. The improvement is huge.

Very nice :)

> But please note that in this testing, all filters share the same action.
> If every filter has a unique action, that is another bottleneck.
> Follow-up patch in this patchset addresses that.
> 
> Signed-off-by: Chris Mi 
> Signed-off-by: Jiri Pirko 
> ---
>  net/sched/cls_flower.c | 55 +++++++++++++++++++++++--------------------------------
>  1 file changed, 23 insertions(+), 32 deletions(-)
> 
> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> index bd9dab4..3d041d2 100644
> --- a/net/sched/cls_flower.c
> +++ b/net/sched/cls_flower.c

...

> @@ -890,6 +870,7 @@ static int fl_change(struct net *net, struct sk_buff 
> *in_skb,
>   struct cls_fl_filter *fnew;
>   struct nlattr **tb;
>   struct fl_flow_mask mask = {};
> + unsigned long idr_index;
>   int err;
>  
>   if (!tca[TCA_OPTIONS])
> @@ -920,13 +901,21 @@ static int fl_change(struct net *net, struct sk_buff 
> *in_skb,
>   goto errout;
>  
>   if (!handle) {
> - handle = fl_grab_new_handle(tp, head);
> - if (!handle) {
> - err = -EINVAL;
> + err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> + 1, 0x80000000, GFP_KERNEL);
> + if (err)
>   goto errout;
> - }
> + fnew->handle = idr_index;
> + }
> +
> + /* user specifies a handle and it doesn't exist */
> + if (handle && !fold) {
> + err = idr_alloc_ext(&head->handle_idr, fnew, &idr_index,
> + handle, handle + 1, GFP_KERNEL);
> + if (err)
> + goto errout;
> + fnew->handle = idr_index;
>   }
> - fnew->handle = handle;
>  
>   if (tb[TCA_FLOWER_FLAGS]) {
>   fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
> @@ -980,6 +969,8 @@ static int fl_change(struct net *net, struct sk_buff 
> *in_skb,
>   *arg = fnew;
>  
>   if (fold) {
> + fnew->handle = handle;

Can it be the case that fold is non-NULL and handle is zero?
The handling of that case seems to have changed in this patch.

> + idr_replace_ext(&head->handle_idr, fnew, fnew->handle);
>   list_replace_rcu(&fold->list, &fnew->list);
>   tcf_unbind_filter(tp, &fold->res);
>   call_rcu(&fold->rcu, fl_destroy_filter);
> -- 
> 1.8.3.1
>