Regards
_Sugesh
> -----Original Message-----
> From: Yuanhan Liu [mailto:y...@fridaylinux.org]
> Sent: Monday, September 11, 2017 9:33 AM
> To: Chandran, Sugesh <sugesh.chand...@intel.com>
> Cc: d...@openvswitch.org
> Subject: Re: [ovs-dev] [PATCH v2 5/8] dpif-netdev: record rx queue id for the
> upcall
>
> On Sun, Sep 10, 2017 at 04:40:10PM +0000, Chandran, Sugesh wrote:
> >
> > Regards
> > _Sugesh
> >
> > > -----Original Message-----
> > > From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> > > boun...@openvswitch.org] On Behalf Of Yuanhan Liu
> > > Sent: Tuesday, September 5, 2017 10:23 AM
> > > To: d...@openvswitch.org
> > > Subject: [ovs-dev] [PATCH v2 5/8] dpif-netdev: record rx queue id
> > > for the upcall
> > >
> > > From: Shachar Beiser <shacha...@mellanox.com>
> > >
> > > For the DPDK flow offload, which basically just binds a MARK action
> > > to a flow, the MARK is required to be used together with a QUEUE
> > > action for most NICs I'm aware of. The QUEUE action then needs a
> > > queue index, which is not given in the flow content.
> > [Sugesh] This looks to me like yet another hardware-specific
> > requirement. It could impact RSS hash distribution and queue
> > redistribution across PMDs at runtime.
>
> If you have read my earlier version, you should have seen similar
> concerns from me.

[Sugesh] I feel this has to be addressed properly to make this feature
work in all cases.

> [...]
> > >  handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
> > >                       struct dp_packet *packet,
> > >                       const struct netdev_flow_key *key,
> > >                       struct ofpbuf *actions, struct ofpbuf *put_actions,
> > > -                     int *lost_cnt, long long now)
> > > +                     int *lost_cnt, long long now, int rxq)
> > [Sugesh] IMHO it's not really good practice to change the default
> > packet processing path for a specific hardware offload. Rxq doesn't
> > have any meaning for handling packets in the normal path.
>
> The same: some concerns I have already expressed before. Unfortunately,
> we didn't come up with something better.
>
> > Why can't we install the flow on all the configured queues for a
> > specific inport? Flow handling is per port, not per queue. This will
> > ensure the packets carry the mark even after the RSS hash
> > distribution.
>
> Like how? The QUEUE action only accepts one queue index. Setting it
> multiple times will only let the last one take effect. The other
> possibility I'm aware of is RTE_FLOW_ACTION_TYPE_RSS, which,
> unfortunately, is only supported by Mellanox in DPDK. Yet again, I was
> told it's not functioning well.

[Sugesh] Hmm, I see what you meant. Flow director has an option called
passthrough, which allows the RSS hash to be applied to the packet after
the filter. If I remember correctly, this has been supported on the XL710
all along, and it allows programming the flow without any queue index. If
there is a functionality issue with configuring the MARK action properly,
it should be fixed in DPDK rather than worked around in OVS.
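For context, a minimal rte_flow sketch of the MARK + QUEUE binding under
discussion. It is illustrative only: the port id, mark id, queue index, and
the catch-all IPv4 pattern are assumptions, not code from the patch.

#include <rte_flow.h>

static struct rte_flow *
offload_flow_with_mark(uint16_t port_id, uint16_t rx_queue, uint32_t mark_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_error error;

    /* Match all IPv4 traffic; a real offload would encode the megaflow
     * match fields here. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* MARK tags matched packets so software can skip the flow lookup.
     * Most NICs only accept MARK alongside a fate action such as QUEUE,
     * which is why a concrete queue index has to come from somewhere. */
    struct rte_flow_action_mark mark = { .id = mark_id };
    struct rte_flow_action_queue queue = { .index = rx_queue };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error)) {
        return NULL;    /* The NIC cannot offload this rule. */
    }
    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}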
> Also, even if it could work, I think it still would be problematic. I'm
> thinking of what might happen in the following case.
>
> Assume there is a 5-tuple flow. According to the initial RSS setting by
> OVS, all pkts matching that flow would end up being received from one
> queue only. If we re-do the RSS settings on it and the RSS settings are
> the same, the behaviour might be the same. If not, those pkts which are
> supposed to be

[Sugesh] When a user changes the number of queues, the RSS setting might
change. Also, in the current design, when a queue gets pinned to a
different PMD at run time, the mark details may be lost, as they live on
the PMD struct.

> distributed to one queue only might be distributed to many queues.
>
> Is it a valid concern?

[Sugesh] I feel there will be performance and scalability issues if we
have to program a flow for a queue ID. Hence I prefer flow programming
without queue-specific information.

> 	--yliu
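To illustrate the queue-agnostic direction argued for above, here is a
sketch of MARK combined with RTE_FLOW_ACTION_TYPE_RSS, which lets the NIC
tag packets while still hashing them across all configured queues, so no
single queue index is baked into the rule. Caveats: the rss action's struct
layout changed across DPDK releases (this follows the post-18.05
definition, not the one current at the time of this thread), NIC support
was limited to Mellanox back then, and the hash types and queue list are
illustrative assumptions.

#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
offload_flow_mark_rss(uint16_t port_id, uint32_t mark_id,
                      const uint16_t *queues, uint32_t n_queues)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_error error;

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* RSS as the fate action: matched packets keep the mark but are still
     * spread across every rx queue of the port, so the rule survives queue
     * reconfiguration without embedding one queue index. */
    struct rte_flow_action_mark mark = { .id = mark_id };
    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,  /* keep the NIC default hash */
        .types = ETH_RSS_IP,                    /* hash on IP fields */
        .queue_num = n_queues,
        .queue = queues,                        /* all configured rxqs */
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}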