Hi Ori,
On 9/28/19 6:19 PM, Ori Kam wrote:
Hi Andrew.
PSB
-----Original Message-----
From: Andrew Rybchenko <arybche...@solarflare.com>
Sent: Thursday, September 26, 2019 8:24 PM
To: Ori Kam <or...@mellanox.com>; Thomas Monjalon
<tho...@monjalon.net>; Ferruh Yigit <ferruh.yi...@intel.com>
Cc: dev@dpdk.org; jingjing...@intel.com; step...@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH 01/13] ethdev: support setup function for
hairpin queue
On 9/26/19 6:58 PM, Ori Kam wrote:
Hi Andrew,
Thanks for your comments PSB.
-----Original Message-----
From: Andrew Rybchenko <arybche...@solarflare.com>
On 9/26/19 9:28 AM, Ori Kam wrote:
This commit introduces the RX/TX hairpin setup function.
RX/TX should be Rx/Tx here and everywhere below.
Hairpin is an RX/TX queue that is used by the NIC in order to offload
wire-to-wire traffic.
Each hairpin queue is bound to one or more queues of the other type.
For example, a TX hairpin queue should be bound to at least one RX hairpin
queue and vice versa.
How should application find out that hairpin queues are supported?
It should be stated in the DPDK release notes when a vendor adds
support for this.
In addition, if the application tries to set up a hairpin queue and it fails,
it can mean, depending on the error, that hairpin is not supported.
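For example, a rough sketch of such a probe using the setup function from
this patch (the exact error code and the surrounding variables are
assumptions):

	/* Probe hairpin support by attempting a hairpin queue setup;
	 * -ENOTSUP as "not supported" is an assumption here. */
	int ret = rte_eth_rx_hairpin_queue_setup(port_id, queue_id,
						 nb_rx_desc, SOCKET_ID_ANY,
						 NULL, &hairpin_conf);
	if (ret == -ENOTSUP)
		printf("port %u: hairpin queues not supported\n", port_id);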
I'm talking about dev_info-like information. Documentation is nice, but
it is not very useful for implementing an application which works with
NICs from different vendors.
What if we add a get-hairpin-capabilities function?
We could have the max number of queues, whether the NIC supports 1:n
connections, which offloads are supported, and so on. So basically create a
new set of capabilities for hairpin; I think this will also remove the other
concerns that you have.
What do you think?
Yes, I think an API to report capabilities would be useful.
It should also be used in the setup functions in order to check, at the
generic level, that the setup request is OK vs. the caps.
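For example, a minimal sketch of what such a capability API could look like
(all names below are assumptions, not part of this patch):

	/* Hypothetical hairpin capabilities reported by the PMD. */
	struct rte_eth_hairpin_cap {
		uint16_t max_nb_queues;	/* max number of hairpin queues */
		uint16_t max_rx_2_tx;	/* max Tx peers of one Rx queue (1:n) */
		uint16_t max_tx_2_rx;	/* max Rx peers of one Tx queue (n:1) */
		uint16_t max_nb_desc;	/* max descriptors per hairpin queue */
	};

	int rte_eth_dev_hairpin_capability_get(uint16_t port_id,
					       struct rte_eth_hairpin_cap *cap);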
How many?
There is no limit to the number of hairpin queues; from the application's
point of view, all queues can be hairpin queues.
I'm pretty sure that it could be vendor specific.
Please see my answer above.
How should the application find out which ports/queues could be used for
hair-pinning?
All ports and queues can be supported; if the application requests an invalid
combination (for example, in the current Mellanox implementation, binding
between two ports), then the setup function will fail.
If you would like, I can add a capability for this, but there are too many
options. For example, the number of queues, binding limitations: all of those
will be very hard to declare.
Is a hair-pinning domain at the device level sufficient to expose the limitations?
I'm sorry but I don’t understand your question.
I was just trying to imagine how we could say that we can hairpin
one port's Rx queues to another port's Tx queues.
Like I suggested above, if I add a capability function we could have
a field that says port_binding supported, or something else along this line.
Not sure that I understand, but I'll take a look when submitted.
Signed-off-by: Ori Kam <or...@mellanox.com>
---
lib/librte_ethdev/rte_ethdev.c           | 213 +++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h           | 145 +++++++++++++++++++++
lib/librte_ethdev/rte_ethdev_core.h      |  18 +++
lib/librte_ethdev/rte_ethdev_version.map |   4 +
4 files changed, 380 insertions(+)
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 30b0c78..4021f38 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1701,6 +1701,115 @@ struct rte_eth_dev *
}
int
+rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_hairpin_conf *hairpin_conf)
The code below duplicates rte_eth_rx_queue_setup() a lot, and that is very bad
from a maintenance point of view. Similar problem with the Tx hairpin queue
setup.
I'm aware of that. The reasons I chose this are (same goes for Tx):
1. Using the same-function approach, meaning the current setup function,
   has the following issues:
   * API break.
   * It will have extra parameters; for example, the mempool will not be
     used for hairpin, and the hairpin configuration will not be used for a
     normal queue. It is possible to use a struct, but again that is an API
     break and some fields are not used.
   * We are just starting with hairpin; most likely there will be
     modifications, so it is better to have a different function.
   * From the application side it is understood that this is a different
     kind of queue, which shouldn't be used by the application.
It does not excuse duplicating so much code below. If we have separate
dev_info-like limitations for hairpin, it would make sense, but I hope that
it would still be possible to avoid code duplication.
We can start with the most basic implementation, which will mean that the
function will be almost empty; when other vendors or Mellanox require some
additional test or code, they will be able to decide whether to add new code
to the function, or extract the shared code from the standard function into a
dedicated function and use that function in both setup functions.
What do you think?
Let's try and take a look at the code.
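For instance, a rough sketch of how the common checks could be factored into
a shared helper used by both setup functions (the helper name and its exact
scope are assumptions, not part of this patch):

	/* Hypothetical helper with the checks common to regular and
	 * hairpin Rx queue setup, living in rte_ethdev.c. */
	static int
	rte_eth_rx_queue_common_check(struct rte_eth_dev *dev,
				      uint16_t rx_queue_id)
	{
		if (rx_queue_id >= dev->data->nb_rx_queues) {
			RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n",
				       rx_queue_id);
			return -EINVAL;
		}
		if (dev->data->dev_started) {
			RTE_ETHDEV_LOG(ERR,
				"Port %u must be stopped to configure queues\n",
				dev->data->port_id);
			return -EBUSY;
		}
		/* nb_rx_desc validation vs. dev_info (or hairpin caps)
		 * would also go here. */
		return 0;
	}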
[snip]
@@ -1769,6 +1793,60 @@ int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		struct rte_mempool *mb_pool);
/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Allocate and set up a hairpin receive queue for an Ethernet device.
+ *
+ * The function sets up the selected queue to be used in hairpin.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param rx_queue_id
+ *   The index of the receive queue to set up.
+ *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
May any Rx queue be set up as a hairpin queue?
Can it still be used for regular traffic?
No, if a queue is used as hairpin it can't be used for normal traffic.
This is also why I like the idea of two different functions: in order to
create this distinction.
If so, do we need at least debug-level checks in Tx/Rx burst functions?
Is it required to patch the rte_flow RSS action to ensure that Rx queues of
only one kind are specified?
What about attempts to add Rx/Tx callbacks for hairpin queues?
I think the checks should be done at the PMD level, since from a high level
they are the same.
Sorry, I don't understand why. If something can be checked at the generic
level, it should be done there to avoid duplication in all the drivers.
Callbacks for Rx/Tx don't make sense, since the idea is to bypass the CPU.
If so, I think rte_eth_add_tx_callback() should be patched to return an error
if the specified queue is a hairpin queue. Same for Rx.
Any other cases?
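Something like this inside rte_eth_add_tx_callback(), assuming a per-queue
hairpin state flag (the state value is an assumption, not in this patch):

	/* Sketch: reject callback registration on hairpin queues;
	 * RTE_ETH_QUEUE_STATE_HAIRPIN is hypothetical here. */
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	if (dev->data->tx_queue_state[queue_id] ==
	    RTE_ETH_QUEUE_STATE_HAIRPIN) {
		rte_errno = EINVAL;
		return NULL;
	}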
+ * @param nb_rx_desc
+ * The number of receive descriptors to allocate for the receive ring.
Does it still make sense for a hairpin queue?
Yes, since it can affect the memory size used by the device and can affect
performance.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of NUMA.
+ *   The value can be *SOCKET_ID_ANY* if there is no NUMA constraint for
+ *   the DMA memory allocated for the receive descriptors of the ring.
Is it still required to be provided for a hairpin Rx queue?
Yes, for internal PMD structures to be allocated, but we can remove it if
pressed.
+ * @param rx_conf
+ *   The pointer to the configuration data to be used for the receive queue.
+ *   NULL value is allowed, in which case default RX configuration
+ *   will be used.
+ *   The *rx_conf* structure contains an *rx_thresh* structure with the values
+ *   of the Prefetch, Host, and Write-Back threshold registers of the receive
+ *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
+ *   If an offloading set in rx_conf->offloads
+ *   hasn't been set in the input argument eth_conf->rxmode.offloads
+ *   to rte_eth_dev_configure(), it is a new added offloading, it must be
+ *   per-queue type and it is enabled for the queue.
+ *   No need to repeat any bit in rx_conf->offloads which has already been
+ *   enabled in rte_eth_dev_configure() at port level. An offloading enabled
+ *   at port level can't be disabled at queue level.
Which offloads still make sense in the case of a hairpin Rx queue?
What about thresholds and drop enable?
Drop and thresholds make sense; for example, the application can state that,
in case of back pressure, packets should start being dropped in order not to
affect the entire NIC.
Regarding offloads, mainly VLAN strip or VLAN insert, but those can also be
done via rte_flow.
But future offloads like QoS or others may be shared.
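For example, a rough sketch of VLAN strip done through rte_flow instead of a
queue offload, steering into a hairpin queue (hairpin_queue_id is assumed to
be a queue set up earlier with the new hairpin setup function):

	/* Match VLAN-tagged ingress traffic, pop the tag, and steer it
	 * to the hairpin Rx queue. */
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = hairpin_queue_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_POP_VLAN },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &error);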
I'm not a fan of dead parameters which are added just to use the same
structure. It raises too many questions on maintenance.
Also, I don't like the idea of sharing hairpin and regular offloads.
Maybe it is OK to share the namespace (still unsure), but the capabilities
are definitely different and some regular offloads are simply not applicable
to the hairpin case.
I agree with you; I think that my suggestion above (new caps for hairpin)
solves this issue. Do you agree?
I will remove the rte_eth_txconf and only have the hairpin_conf with some new
fields, same for the Rx. Is that O.K.?
I think it would be better to keep only the used parameters.
Anyway, it is an experimental API and we can add missing parameters
when required.
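A minimal sketch of a dedicated hairpin configuration, assuming a
peer-binding model (all names below are illustrative, not a proposal for the
final layout):

	/* Hypothetical: each hairpin queue lists the peer queues it is
	 * bound to, which also expresses the 1:n case. */
	struct rte_eth_hairpin_peer {
		uint16_t port;	/* peer port id */
		uint16_t queue;	/* peer queue index */
	};

	struct rte_eth_hairpin_conf {
		uint16_t peer_count;
		struct rte_eth_hairpin_peer peers[2]; /* size illustrative */
	};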
[snip]
Thanks,
Andrew.