Regards
_Sugesh
> -----Original Message-----
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-boun...@openvswitch.org]
> On Behalf Of Yuanhan Liu
> Sent: Tuesday, September 5, 2017 10:23 AM
> To: d...@openvswitch.org
> Subject: [ovs-dev] [PATCH v2 5/8] dpif-netdev: record rx queue id for the upcall
>
> From: Shachar Beiser <shacha...@mellanox.com>
>
> For the DPDK flow offload, which basically just binds a MARK action to a
> flow, the MARK is required to be used together with a QUEUE action for
> most NICs I'm aware of. The QUEUE action then needs a queue index, which
> is not given in the flow content.

[Sugesh] This looks to me like yet another hardware-specific requirement.
It could affect RSS hash distribution and queue redistribution across PMDs
at runtime.

> Here we record the rx queue while receiving the pkts to solve the above
> issue.
>
> Co-authored-by: Yuanhan Liu <y...@fridaylinux.org>
> Signed-off-by: Shachar Beiser <shacha...@mellanox.com>
> Signed-off-by: Yuanhan Liu <y...@fridaylinux.org>
> ---
>  lib/dp-packet.h   |  1 +
>  lib/dpif-netdev.c | 14 +++++++++-----
>  lib/netdev.c      |  1 +
>  lib/netdev.h      |  1 +
>  4 files changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/lib/dp-packet.h b/lib/dp-packet.h
> index a7a062f..479a734 100644
> --- a/lib/dp-packet.h
> +++ b/lib/dp-packet.h
> @@ -709,6 +709,7 @@ enum { NETDEV_MAX_BURST = 32 }; /* Maximum number packets in a batch. */
>  struct dp_packet_batch {
>      size_t count;
>      bool trunc; /* true if the batch needs truncate. */
> +    int rxq;
>      struct dp_packet *packets[NETDEV_MAX_BURST];
>  };
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index a95b8d4..3099c73 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -2474,7 +2474,8 @@ out:
>  static struct dp_netdev_flow *
>  dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
>                     struct match *match, const ovs_u128 *ufid,
> -                   const struct nlattr *actions, size_t actions_len)
> +                   const struct nlattr *actions, size_t actions_len,
> +                   int rxq)
>      OVS_REQUIRES(pmd->flow_mutex)
>  {
>      struct dp_netdev_flow *flow;
> @@ -2527,6 +2528,7 @@ dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
>          dp_netdev_alloc_flow_mark(pmd, &info.flow_mark)) {
>          struct dp_netdev_port *port;
>          port = dp_netdev_lookup_port(pmd->dp, in_port);
> +        info.rxq = rxq;
>          if (netdev_flow_put(port->netdev, match,
>                              CONST_CAST(struct nlattr *, actions),
>                              actions_len, ufid, &info, NULL) == 0) {
> @@ -2607,7 +2609,7 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
>      if (put->flags & DPIF_FP_CREATE) {
>          if (cmap_count(&pmd->flow_table) < MAX_FLOWS) {
>              dp_netdev_flow_add(pmd, match, ufid, put->actions,
> -                               put->actions_len);
> +                               put->actions_len, -1);
>              error = 0;
>          } else {
>              error = EFBIG;
> @@ -2635,6 +2637,7 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
>          in_port = netdev_flow->flow.in_port.odp_port;
>          port = dp_netdev_lookup_port(pmd->dp, in_port);
>          info.flow_mark = netdev_flow->mark;
> +        info.rxq = -1;
>          ret = netdev_flow_put(port->netdev, match,
>                                CONST_CAST(struct nlattr *,
>                                           put->actions),
> @@ -5026,7 +5029,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
>                       struct dp_packet *packet,
>                       const struct netdev_flow_key *key,
>                       struct ofpbuf *actions, struct ofpbuf *put_actions,
> -                     int *lost_cnt, long long now)
> +                     int *lost_cnt, long long now, int rxq)

[Sugesh] IMHO it's not good practice to change the default packet
processing path for one specific hardware offload; rxq has no meaning for
packet handling in the normal path. Why can't we install the flow on all
the configured queues for a given in_port? Flow handling is per port, not
per queue, and that would ensure packets carry the mark even after RSS
hash distribution. This ties back to my earlier suggestion to keep the
flows per port rather than per PMD; it would also cover scenarios such as
queue rescheduling across PMDs or changing the number of queues on a port.
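A rough sketch of that per-port alternative in DPDK rte_flow terms: instead of pinning the MARK to the one rx queue that happened to trigger the upcall, pair it with an RSS action listing every configured queue of the port, so matched packets keep their normal RSS spreading and carry the mark whichever queue they land on. This is only an illustration of the idea, not the actual OVS offload code: it assumes the newer rte_flow_action_rss layout (older DPDK releases declare that struct differently), and the port id, mark value, queue list, and match pattern are taken as given.

#include <rte_flow.h>

/* Illustrative sketch: one rule per port, MARK paired with RSS over all
 * configured rx queues rather than with a single QUEUE action. */
static struct rte_flow *
mark_flow_per_port(uint16_t port_id, uint32_t mark,
                   const uint16_t *rxqs, uint32_t n_rxq,
                   const struct rte_flow_item *pattern)
{
    const struct rte_flow_attr attr = { .ingress = 1 };
    const struct rte_flow_action_mark mark_conf = { .id = mark };
    const struct rte_flow_action_rss rss_conf = {
        .types = 0,          /* 0 = let the PMD pick its default hash types. */
        .queue_num = n_rxq,  /* Spread over every rx queue of the port, */
        .queue = rxqs,       /* not just the queue seen at upcall time. */
    };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_conf },
        { .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = &rss_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

A rule of this shape would still have to be reinstalled when the port's queue set changes, but that is a per-port event rather than something tied to an individual PMD or rx queue.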
>  {
>      struct ofpbuf *add_actions;
>      struct dp_packet_batch b;
> @@ -5082,7 +5085,8 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
>      if (OVS_LIKELY(!netdev_flow)) {
>          netdev_flow = dp_netdev_flow_add(pmd, &match, &ufid,
>                                           add_actions->data,
> -                                         add_actions->size);
> +                                         add_actions->size,
> +                                         rxq);
>      }
>      ovs_mutex_unlock(&pmd->flow_mutex);
>      emc_probabilistic_insert(pmd, key, netdev_flow);
> @@ -5152,7 +5156,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
>
>          miss_cnt++;
>          handle_packet_upcall(pmd, packets[i], &keys[i], &actions,
> -                             &put_actions, &lost_cnt, now);
> +                             &put_actions, &lost_cnt, now, packets_->rxq);
>      }
>
>      ofpbuf_uninit(&actions);
> diff --git a/lib/netdev.c b/lib/netdev.c
> index b4e570b..c9b7019 100644
> --- a/lib/netdev.c
> +++ b/lib/netdev.c
> @@ -700,6 +700,7 @@ netdev_rxq_recv(struct netdev_rxq *rx, struct dp_packet_batch *batch)
>
>      retval = rx->netdev->netdev_class->rxq_recv(rx, batch);
>      if (!retval) {
> +        batch->rxq = rx->queue_id;
>          COVERAGE_INC(netdev_received);
>      } else {
>          batch->count = 0;
> diff --git a/lib/netdev.h b/lib/netdev.h
> index 2003165..28ad39d 100644
> --- a/lib/netdev.h
> +++ b/lib/netdev.h
> @@ -194,6 +194,7 @@ struct offload_info {
>       * it will be in the pkt meta data.
>       */
>      uint32_t flow_mark;
> +    int rxq;
>  };
>  struct dpif_class;
>  struct netdev_flow_dump;
> --
> 2.7.4
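For reference, the MARK/QUEUE coupling the commit message describes comes down to an rte_flow rule of roughly the following shape. This is a minimal sketch under assumptions, not the Mellanox or OVS implementation: the port id, mark value, queue index (the rxq this patch plumbs through), and match pattern are all assumed to be supplied by the offload layer.

#include <rte_flow.h>

/* Sketch: on NICs that only accept MARK together with QUEUE, the rule
 * needs a concrete rx queue index, which is exactly the value the patch
 * records in netdev_rxq_recv() and threads through to netdev_flow_put(). */
static struct rte_flow *
mark_flow_per_queue(uint16_t port_id, uint32_t mark, uint16_t rxq,
                    const struct rte_flow_item *pattern)
{
    const struct rte_flow_attr attr = { .ingress = 1 };
    const struct rte_flow_action_mark mark_conf = { .id = mark };
    const struct rte_flow_action_queue queue_conf = { .index = rxq };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark_conf },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

The trade-off raised above follows directly from this shape: QUEUE steers every matched packet to that one queue, so the rule bypasses RSS and goes stale whenever the flow's traffic should have moved to a different queue or PMD.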