Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread John Hurley
On Wed, Apr 25, 2018 at 10:13 AM, Or Gerlitz  wrote:
> On Wed, Apr 25, 2018 at 12:02 PM, John Hurley  
> wrote:
>> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz  wrote:
>>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley  
>>> wrote:
 On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz  wrote:
> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>  wrote:
>> From: John Hurley 
>>
>> When multiple netdevs are attached to a tc offload block and register for
>> callbacks, a rule added to the block will be propagated to all netdevs.
>> Previously these were detected as duplicates (based on cookie) and
>> rejected. Modify the nfp rule lookup function to optionally include an
>> ingress netdev and a host context along with the cookie value when
>> searching for a rule. When a new rule is passed to the driver, the netdev
>> the rule is to be attached to is considered when searching for duplicates.
>
> so if the same rule (cookie) is provided to the driver through multiple ingress
> devices you will not reject it -- what is the use case for that, is it
> block sharing?

 Hi Or,
 Yes, block sharing is the current use-case.
 Simple example for clarity:
 Here we want to offload the filter to both ingress devs nfp_p0 and nfp_p1:

 tc qdisc add dev nfp_p0 ingress_block 22 ingress
 tc qdisc add dev nfp_p1 ingress_block 22 ingress
 tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>>>
>>> cool!
>>>
>>> Just out of curiosity, do you actually share this HW rule or do you duplicate it?
>>
>> It's duplicated. At the HW level the ingress port is part of the match, so technically it's a different rule.
>
> I see, we also have a match on the ingress port as part of the HW API, which
> means we will have to apply a similar practice if we want to support
> block sharing quickly.
>
> Just to make sure, under tc block sharing the tc stack calls for hw offloading
> of the same rule (same cookie) multiple times, each with a different ingress
> device, right?
>
>
> Or.

So in the example above, when each qdisc add is called, a callback is
registered with the block.
For each callback, the dev in use is passed as priv data (presumably you do
something similar).
When the filter is added, the block code triggers all registered callbacks
with the same rule data [1].
We differentiate the callbacks by their priv data (the ingress dev).

[1] https://elixir.bootlin.com/linux/v4.17-rc2/source/net/sched/cls_api.c#L741
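
A minimal sketch of this registration pattern (not the actual nfp code;
my_drv_block_cb(), my_drv_setup_tc_block() and my_drv_add_flower() are
illustrative names, and it assumes the 4.17-era
tcf_block_cb_register(block, cb, cb_ident, cb_priv) API):

#include <linux/netdevice.h>
#include <net/pkt_cls.h>

/* driver-specific add helper, illustrative only */
int my_drv_add_flower(struct net_device *dev, struct tc_cls_flower_offload *f);

static int my_drv_block_cb(enum tc_setup_type type, void *type_data,
			   void *cb_priv)
{
	/* cb_priv is the netdev handed over at bind time below */
	struct net_device *ingress_dev = cb_priv;
	struct tc_cls_flower_offload *flower = type_data;

	if (type != TC_SETUP_CLSFLOWER)
		return -EOPNOTSUPP;

	/* On a shared block every registered callback sees the same
	 * flower->cookie; ingress_dev is what tells the otherwise
	 * identical requests apart.
	 */
	switch (flower->command) {
	case TC_CLSFLOWER_REPLACE:
		return my_drv_add_flower(ingress_dev, flower);
	default:
		return -EOPNOTSUPP;
	}
}

static int my_drv_setup_tc_block(struct net_device *netdev,
				 struct tc_block_offload *f)
{
	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
		return -EOPNOTSUPP;

	switch (f->command) {
	case TC_BLOCK_BIND:
		/* one registration per netdev attached to the block;
		 * netdev doubles as cb_ident and cb_priv
		 */
		return tcf_block_cb_register(f->block, my_drv_block_cb,
					     netdev, netdev);
	case TC_BLOCK_UNBIND:
		tcf_block_cb_unregister(f->block, my_drv_block_cb, netdev);
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}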


Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread Or Gerlitz
On Wed, Apr 25, 2018 at 12:02 PM, John Hurley  wrote:
> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz  wrote:
>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley  
>> wrote:
>>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz  wrote:
 On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
  wrote:
> From: John Hurley 
>
> When multiple netdevs are attached to a tc offload block and register for
> callbacks, a rule added to the block will be propagated to all netdevs.
> Previously these were detected as duplicates (based on cookie) and
> rejected. Modify the nfp rule lookup function to optionally include an
> ingress netdev and a host context along with the cookie value when
> searching for a rule. When a new rule is passed to the driver, the netdev
> the rule is to be attached to is considered when searching for duplicates.

 so if the same rule (cookie) is provided to the driver through multiple ingress
 devices you will not reject it -- what is the use case for that, is it
 block sharing?
>>>
>>> Hi Or,
>>> Yes, block sharing is the current use-case.
>>> Simple example for clarity:
>>> Here we want to offload the filter to both ingress devs nfp_p0 and nfp_p1:
>>>
>>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>>> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>>
>> cool!
>>
>> Just out of curiosity, do you actually share this HW rule or do you duplicate it?
>
> It's duplicated. At the HW level the ingress port is part of the match, so technically it's a different rule.

I see, we also have a match on the ingress port as part of the HW API, which
means we will have to apply a similar practice if we want to support
block sharing quickly.

Just to make sure, under tc block sharing the tc stack calls for hw offloading
of the same rule (same cookie) multiple times, each with a different ingress
device, right?


Or.


Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread John Hurley
On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz  wrote:
> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley  
> wrote:
>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz  wrote:
>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>>  wrote:
 From: John Hurley 

 When multiple netdevs are attached to a tc offload block and register for
 callbacks, a rule added to the block will be propagated to all netdevs.
 Previously these were detected as duplicates (based on cookie) and
 rejected. Modify the nfp rule lookup function to optionally include an
 ingress netdev and a host context along with the cookie value when
 searching for a rule. When a new rule is passed to the driver, the netdev
 the rule is to be attached to is considered when searching for duplicates.
>>>
>>> so if the same rule (cookie) is provided to the driver through multiple ingress
>>> devices you will not reject it -- what is the use case for that, is it
>>> block sharing?
>>
>> Hi Or,
>> Yes, block sharing is the current use-case.
>> Simple example for clarity:
>> Here we want to offload the filter to both ingress devs nfp_p0 and nfp_p1:
>>
>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>
> cool!
>
> Just out of curiosity, do you actually share this HW rule or do you duplicate it?

It's duplicated.
At the HW level the ingress port is part of the match, so technically it's a different rule.
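
A minimal sketch of how the add path can then use the extended lookup from
the patch in this thread (nfp_flower_check_duplicate() is an illustrative
helper name, and the snippet assumes the declarations from the driver's
flower main.h shown further down):

static int
nfp_flower_check_duplicate(struct nfp_app *app,
			   struct tc_cls_flower_offload *flow,
			   struct net_device *netdev)
{
	/* A NULL netdev would mean "any ingress device"; passing the actual
	 * ingress netdev restricts the match, and NFP_FL_STATS_CTX_DONT_CARE
	 * ignores the stats context for this lookup.  The same cookie coming
	 * in via another ingress device is therefore not a duplicate.
	 */
	if (nfp_flower_search_fl_table(app, flow->cookie, netdev,
				       NFP_FL_STATS_CTX_DONT_CARE))
		return -EEXIST;

	return 0;
}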


Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread Or Gerlitz
On Wed, Apr 25, 2018 at 11:51 AM, John Hurley  wrote:
> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz  wrote:
>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>  wrote:
>>> From: John Hurley 
>>>
>>> When multiple netdevs are attached to a tc offload block and register for
>>> callbacks, a rule added to the block will be propagated to all netdevs.
>>> Previously these were detected as duplicates (based on cookie) and
>>> rejected. Modify the nfp rule lookup function to optionally include an
>>> ingress netdev and a host context along with the cookie value when
>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>> the rule is to be attached to is considered when searching for duplicates.
>>
>> so if the same rule (cookie) is provided to the driver through multiple ingress
>> devices you will not reject it -- what is the use case for that, is it
>> block sharing?
>
> Hi Or,
> Yes, block sharing is the current use-case.
> Simple example for clarity:
> Here we want to offload the filter to both ingress devs nfp_p0 and nfp_p1:
>
> tc qdisc add dev nfp_p0 ingress_block 22 ingress
> tc qdisc add dev nfp_p1 ingress_block 22 ingress
> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop

cool!

Just out of curiosity, do you actually share this HW rule or do you duplicate it?


Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread John Hurley
On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz  wrote:
> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>  wrote:
>> From: John Hurley 
>>
>> When multiple netdevs are attached to a tc offload block and register for
>> callbacks, a rule added to the block will be propagated to all netdevs.
>> Previously these were detected as duplicates (based on cookie) and
>> rejected. Modify the nfp rule lookup function to optionally include an
>> ingress netdev and a host context along with the cookie value when
>> searching for a rule. When a new rule is passed to the driver, the netdev
>> the rule is to be attached to is considered when searching for duplicates.
>
> so if the same rule (cookie) is provided to the driver through multiple ingress
> devices you will not reject it -- what is the use case for that, is it
> block sharing?

Hi Or,
Yes, block sharing is the current use-case.
Simple example for clarity:
Here we want to offload the filter to both ingress devs nfp_p0 and nfp_p1:

tc qdisc add dev nfp_p0 ingress_block 22 ingress
tc qdisc add dev nfp_p1 ingress_block 22 ingress
tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop


Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-25 Thread Or Gerlitz
On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
 wrote:
> From: John Hurley 
>
> When multiple netdevs are attached to a tc offload block and register for
> callbacks, a rule added to the block will be propagated to all netdevs.
> Previously these were detected as duplicates (based on cookie) and
> rejected. Modify the nfp rule lookup function to optionally include an
> ingress netdev and a host context along with the cookie value when
> searching for a rule. When a new rule is passed to the driver, the netdev
> the rule is to be attached to is considered when searching for duplicates.

so if the same rule (cookie) is provided to the driver through multiple ingress
devices you will not reject it -- what is the use case for that, is it
block sharing?


[PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie

2018-04-24 Thread Jakub Kicinski
From: John Hurley 

When multiple netdevs are attached to a tc offload block and register for
callbacks, a rule added to the block will be propagated to all netdevs.
Previously these were detected as duplicates (based on cookie) and
rejected. Modify the nfp rule lookup function to optionally include an
ingress netdev and a host context along with the cookie value when
searching for a rule. When a new rule is passed to the driver, the netdev
the rule is to be attached to is considered when searching for duplicates.
When a stats update is received from HW, the host context is used
alongside the cookie to map to the correct host rule.

Signed-off-by: John Hurley 
Reviewed-by: Jakub Kicinski 
---
 drivers/net/ethernet/netronome/nfp/flower/main.h   |  8 +--
 .../net/ethernet/netronome/nfp/flower/metadata.c   | 20 +---
 .../net/ethernet/netronome/nfp/flower/offload.c| 27 --
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index c67e1b54c614..9e6804bc9b40 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -47,6 +47,7 @@
 struct net_device;
 struct nfp_app;
 
+#define NFP_FL_STATS_CTX_DONT_CARE cpu_to_be32(0xffffffff)
 #define NFP_FL_STATS_ENTRY_RS  BIT(20)
 #define NFP_FL_STATS_ELEM_RS   4
 #define NFP_FL_REPEATED_HASH_MAX   BIT(17)
@@ -189,6 +190,7 @@ struct nfp_fl_payload {
spinlock_t lock; /* lock stats */
struct nfp_fl_stats stats;
__be32 nfp_tun_ipv4_addr;
+   struct net_device *ingress_dev;
char *unmasked_data;
char *mask_data;
char *action_data;
@@ -216,12 +218,14 @@ int nfp_flower_compile_action(struct tc_cls_flower_offload *flow,
  struct nfp_fl_payload *nfp_flow);
 int nfp_compile_flow_metadata(struct nfp_app *app,
  struct tc_cls_flower_offload *flow,
- struct nfp_fl_payload *nfp_flow);
+ struct nfp_fl_payload *nfp_flow,
+ struct net_device *netdev);
 int nfp_modify_flow_metadata(struct nfp_app *app,
 struct nfp_fl_payload *nfp_flow);
 
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+  struct net_device *netdev, __be32 host_ctx);
 struct nfp_fl_payload *
 nfp_flower_remove_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
 
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index db977cf8e933..21668aa435e8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -99,14 +99,18 @@ static int nfp_get_stats_entry(struct nfp_app *app, u32 *stats_context_id)
 
 /* Must be called with either RTNL or rcu_read_lock */
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie)
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+  struct net_device *netdev, __be32 host_ctx)
 {
struct nfp_flower_priv *priv = app->priv;
struct nfp_fl_payload *flower_entry;
 
 	hash_for_each_possible_rcu(priv->flow_table, flower_entry, link,
 				   tc_flower_cookie)
-		if (flower_entry->tc_flower_cookie == tc_flower_cookie)
+		if (flower_entry->tc_flower_cookie == tc_flower_cookie &&
+		    (!netdev || flower_entry->ingress_dev == netdev) &&
+		    (host_ctx == NFP_FL_STATS_CTX_DONT_CARE ||
+		     flower_entry->meta.host_ctx_id == host_ctx))
 			return flower_entry;
 
return NULL;
@@ -121,13 +125,11 @@ nfp_flower_update_stats(struct nfp_app *app, struct nfp_fl_stats_frame *stats)
flower_cookie = be64_to_cpu(stats->stats_cookie);
 
rcu_read_lock();
-	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie);
+	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie, NULL,
+					      stats->stats_con_id);
if (!nfp_flow)
goto exit_rcu_unlock;
 
-   if (nfp_flow->meta.host_ctx_id != stats->stats_con_id)
-   goto exit_rcu_unlock;
-
 	spin_lock(&nfp_flow->lock);
nfp_flow->stats.pkts += be32_to_cpu(stats->pkt_count);
nfp_flow->stats.bytes += be64_to_cpu(stats->byte_count);
@@ -317,7 +319,8 @@ nfp_check_mask_remove(struct nfp_app *app, char *mask_data, u32 mask_len,
 
 int nfp_compile_flow_metadata(struct nfp_app