-----Original Message-----
> Date: Fri, 7 Jul 2017 20:33:19 +0530
> From: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> To: "Rao, Nikhil" <nikhil....@intel.com>
> CC: gage.e...@intel.com, dev@dpdk.org, tho...@monjalon.net,
>  bruce.richard...@intel.com, harry.van.haa...@intel.com,
>  hemant.agra...@nxp.com, nipun.gu...@nxp.com, narender.vang...@intel.com,
>  Abhinandan Gujjar <abhinandan.guj...@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 1/2] eventdev: add event adapter for
>  ethernet Rx queues
> User-Agent: Mutt/1.8.3 (2017-05-23)
> 
> -----Original Message-----
> > Date: Fri, 7 Jul 2017 11:51:01 +0530
> > From: "Rao, Nikhil" <nikhil....@intel.com>
> > To: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> > CC: gage.e...@intel.com, dev@dpdk.org, tho...@monjalon.net,
> >  bruce.richard...@intel.com, harry.van.haa...@intel.com,
> >  hemant.agra...@nxp.com, nipun.gu...@nxp.com, narender.vang...@intel.com,
> >  Abhinandan Gujjar <abhinandan.guj...@intel.com>
> > Subject: Re: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
> > User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
> >  Thunderbird/52.2.1
> > 
> > On 7/6/2017 7:48 PM, Jerin Jacob wrote:
> > > -----Original Message-----
> > > > Date: Fri, 7 Jul 2017 03:22:31 +0530
> > > > From: Nikhil Rao <nikhil....@intel.com>
> > > > To: jerin.ja...@caviumnetworks.com
> > > > CC: gage.e...@intel.com, dev@dpdk.org, tho...@monjalon.net,
> > > >   bruce.richard...@intel.com, harry.van.haa...@intel.com,
> > > >   hemant.agra...@nxp.com, nipun.gu...@nxp.com, 
> > > > narender.vang...@intel.com,
> > > >   Nikhil Rao <nikhil....@intel.com>, Abhinandan Gujjar
> > > >   <abhinandan.guj...@intel.com>
> > > > Subject: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
> > > > X-Mailer: git-send-email 2.7.4
> > > > 
> > > > Signed-off-by: Abhinandan Gujjar <abhinandan.guj...@intel.com>
> > > > ---
> > <snipped>
> > > 
> > > Thanks Nikhil for the patch.
> > > 
> > > Looks like you missed this email?
> > > http://dpdk.org/ml/archives/dev/2017-June/068142.html
> > > 
> > > Can we make it a function-pointer based approach (keeping the function
> > > prototypes required for the SW PMD for all adapter functions), so that
> > > each PMD can register the ops for adapter_create etc. and the same API
> > > can be used for software and hardware PMDs (a scheme like the rte_flow()
> > > or rte_tm() approach)?
> > > 
> > > Can discuss more on that to finalize the approach?
> > > > Yes, I did miss that email :-( (I am including the relevant extract 
> > > > below)
> 
> OK.
> 
> > 
> > >
> > > /* adapter has inbuilt port, no need to create producer port */
> > > #define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT  (1ULL << 0)
> > > /* adapter does not need service function */
> > > #define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)
> 
> I just provided a name to share the view. You can choose better name.
> 
> > >
> > > struct rte_event_eth_rx_adap_info {
> > >         char name[32];
> > >         uint32_t adapter_cap;
> > >         /**< Ethdev RX adapter capabilities (RTE_EVENT_ETHDEV_CAP_*) */
> > > };
> > >
> > >
> > > struct rte_event_eth_rx_adap_cfg {
> > >         uint8_t rx_event_port_id;
> > >         /**< Event port identifier; the adapter enqueues mbuf events to
> > >          * this port. Ignored when RTE_EVENT_ETHDEV_CAP_INBUILT_PORT is
> > >          * set.
> > >          */
> > > };
> > >
> > > struct rte_eth_rx_event_adapter_queue_config {
> > >         uint32_t rx_queue_flags;
> > >         /**< Flags for handling received packets */
> > >         uint16_t servicing_weight;
> > >         /**< Relative polling frequency of the ethernet receive queue.
> > >          * If this is set to zero, the Rx queue is interrupt driven.
> > >          * Ignored if RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC is set.
> > >          */
> > >         struct rte_event ev;
> > >         /**<
> > >          * The values from the following event fields will be used when
> > >          * enqueuing mbuf events:
> > >          *  - event_queue_id: Targeted event queue ID for received packets.
> > >          *  - event_priority: Event priority of packets from this Rx queue
> > >          *                    in the event queue relative to other events.
> > >          *  - sched_type: Scheduling type for packets from this Rx queue.
> > >          *  - flow_id: If the RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID
> > >          *             bit is set in rx_queue_flags, this flow_id is used
> > >          *             for all packets received from this queue. Otherwise
> > >          *             the flow ID is set to the RSS hash.
> > >          */
> > > };
> > >
> > > int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
> > >                                     uint8_t eth_port_id);
> > > int rte_event_eth_rx_adapter_get_info(uint8_t id,
> > >                         struct rte_event_eth_rx_adap_info *info);
> > > int rte_event_eth_rx_adapter_configure(uint8_t id,
> > >                         struct rte_event_eth_rx_adap_config *cfg);
> > > int rte_event_eth_rx_adapter_queue_add(uint8_t id, int32_t rx_queue_id,
> > >         const struct rte_eth_rx_event_adapter_queue_config *config);
> > > int rte_event_eth_rx_adapter_queue_del(uint8_t id, int32_t rx_queue_id);
> > > int rte_event_eth_rx_adapter_run(void);
> > > int rte_event_eth_rx_adapter_free(uint8_t id);
> > >
> > 
> > If I understood your idea correctly, the function-pointer struct would
> > look something like this:
> > 
> > struct rte_eventdev_rx_adapter_ops {
> >     rx_adapter_create_t     create;
> >     rx_adapter_get_info_t   info;
> >     rx_adapter_configure_t  configure;
> >     rx_adapter_queue_add_t  queue_add;
> >     rx_adapter_queue_del_t  queue_del;
> >     rx_adapter_queue_free_t queue_free;
> >     rx_adapter_free_t       free;
> > };
> 
> Yes. But I think adapter create and free go in struct rte_eventdev_ops.
> See below.
> 
> 
> > 
> > struct rte_eventdev {
> >        ..
> >        const struct rte_eventdev_rx_adapter_ops *rx_adapter_ops;
> >        ..
> > };
> 
> An eventdev instance can have N adapters, not just one. So this will be a
> pointer to pointer, indexed through adapter_id.
> 
> In the SW PMD case, the eventdev may have only one adapter.
> In HW PMD cases, there will be more than one. For example:
> - octeontx eventdev dev_id + any external PCI n/w card like i40e or nicvf:
>   an adapter with similar adapter ops as the SW PMD
> - octeontx eventdev dev_id + octeontx ethdev dev_id: another adapter that
>   does not need service cores to inject events into the octeontx eventdev
> 
> - We will make your generic Rx adapter common code, so that both the SW PMD
>   and the octeontx PMD in service-core mode can use the functions and
>   register the ops.
> 
> 
> I was thinking like this:
> 
> int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
>                                     uint8_t eth_port_id)
> {
>       eventdev_ops = ... /* access through dev_id */
> 
>       /* common code to get ops memory from the adapter id */
>       struct rte_eventdev_rx_adapter_ops *ops =
>               rte_event_pmd_adapter_allocate(id);
> 
>       /* fill in the adapter ops from the driver */
>       eventdev_ops->create_adapter(ops, dev_id, eth_port_id);
> }
> 
> int rte_event_eth_rx_adapter_get_info(uint8_t id,
>                       struct rte_event_eth_rx_adap_info *info)
> {
>       eventdev_ops = ... /* access through dev_id */
> 
>       struct rte_eventdev_rx_adapter_ops *ops =
>               eventdev_ops->rx_adapter_ops[id];
> 
>       /* adapter-specific info ops */
>       ops->info(ops, ...);
> }

IMO, the mapping with your current proposal may go like this:

- rte_event_eth_rx_adapter_create() will just fill in the adapter ops from
  the driver; probably your existing rte_event_eth_rx_adapter_init() can go
  here.
- The existing rte_event_eth_rx_adapter_create() becomes more of the
  proposed rte_event_eth_rx_adapter_configure() on the given adapter.
- The existing rte_event_eth_rx_adapter_queue_add() maps 1:1, but you can
  remove eth_dev_id, as it is provided to the proposed
  rte_event_eth_rx_adapter_create().
- The same applies to rte_event_eth_rx_adapter_queue_del().
- The existing rte_event_eth_rx_adapter_stats_get() can be changed to
  "xstats" (more like the eventdev xstats scheme), so that the application
  can get stats based on the adapter capability.
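The registration-and-dispatch model behind this mapping can be shown as a
minimal, self-contained sketch. All names here (rx_adapter_ops, the stub
driver, adapter_create, adapter_queue_add, MAX_ADAPTERS) are hypothetical
stand-ins for illustration, not actual DPDK definitions:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-adapter ops table, modeled on the struct sketched above */
struct rx_adapter_ops {
	int (*configure)(uint8_t id);
	int (*queue_add)(uint8_t id, int32_t rx_queue_id);
};

/* Stub "driver" callbacks standing in for a SW or HW PMD implementation */
static int stub_configure(uint8_t id)
{
	printf("configure adapter %u\n", id);
	return 0;
}

static int stub_queue_add(uint8_t id, int32_t rx_queue_id)
{
	printf("adapter %u: add Rx queue %d\n", id, rx_queue_id);
	return 0;
}

static const struct rx_adapter_ops stub_ops = {
	.configure = stub_configure,
	.queue_add = stub_queue_add,
};

/* Per-eventdev table of per-adapter ops, indexed by adapter id
 * (the "pointer to pointer, indexed through adapter_id" idea) */
#define MAX_ADAPTERS 8
static const struct rx_adapter_ops *adapter_ops[MAX_ADAPTERS];

/* create: the driver registers its ops for this adapter id */
int adapter_create(uint8_t id)
{
	if (id >= MAX_ADAPTERS)
		return -1;
	adapter_ops[id] = &stub_ops;
	return 0;
}

/* The generic API then just dispatches through the registered ops */
int adapter_queue_add(uint8_t id, int32_t rx_queue_id)
{
	const struct rx_adapter_ops *ops = adapter_ops[id];
	return ops->queue_add(id, rx_queue_id);
}
```

With this layout, a SW PMD and an octeontx-style HW PMD can register
different ops for different adapter ids on the same eventdev, while the
application-facing API stays identical.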


The flow shall be:
1) The application calls rte_event_eth_rx_adapter_create(id, eventdev_id,
   ethdev_id) to create the adapter. (The driver fills in the adapter ops
   based on the eventdev_id + ethdev_id capabilities.)
2) The application calls rte_event_eth_rx_adapter_info_get() to get the
   capabilities and other information of the adapter.
3) The application calls rte_event_eth_rx_adapter_configure() to configure
   the adapter based on the application's needs and the adapter capability.
4) The application calls rte_event_eth_rx_adapter_queue_add() to link ethdev
   queues to eventdev queues. On this call, the driver creates the service
   functions or programs the HW registers, based on the adapter capability,
   to take the events from the ethdev and inject them into the eventdev.
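The four-step application flow above can be sketched as a standalone
sequence. The rte_* functions below use the proposed (not yet existing)
names from this thread, with simplified, hypothetical signatures and stub
bodies so the ordering can be shown on its own:

```c
#include <stdint.h>

static int state; /* tracks how far the flow has progressed */

/* 1) create: the driver would fill in the adapter ops here */
int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t eventdev_id,
				    uint8_t ethdev_id)
{
	(void)id; (void)eventdev_id; (void)ethdev_id;
	state = 1;
	return 0;
}

/* 2) query the adapter's capabilities */
int rte_event_eth_rx_adapter_info_get(uint8_t id, uint32_t *caps)
{
	(void)id;
	*caps = 0; /* e.g. no CAP bits set -> service-function adapter */
	state = 2;
	return 0;
}

/* 3) configure based on the application's needs and the capabilities */
int rte_event_eth_rx_adapter_configure(uint8_t id, uint8_t rx_event_port_id)
{
	(void)id; (void)rx_event_port_id;
	state = 3;
	return 0;
}

/* 4) link an ethdev Rx queue; the driver would start a service
 * function or program HW registers here */
int rte_event_eth_rx_adapter_queue_add(uint8_t id, int32_t rx_queue_id)
{
	(void)id; (void)rx_queue_id;
	state = 4;
	return 0;
}

/* The flow from the text, executed in order */
int run_flow(void)
{
	uint32_t caps;

	if (rte_event_eth_rx_adapter_create(0, 0, 0) != 0)
		return -1;
	if (rte_event_eth_rx_adapter_info_get(0, &caps) != 0)
		return -1;
	if (rte_event_eth_rx_adapter_configure(0, 0) != 0)
		return -1;
	if (rte_event_eth_rx_adapter_queue_add(0, -1) != 0)
		return -1;
	return state == 4 ? 0 : -1;
}
```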

> 
> 
> > 
> > But from previous emails (see struct rte_event_dev_producer_conf below),
> > it appears that the Rx queue id and event information would be needed to
> > create the adapter that enqueues from the ethdev queue id to the event
> > PMD; however, that information is available only at queue-add time -
> > thoughts?
> 
> Just eventdev_id and ethdev_id are enough to create the adapter.
> Feel free to change the name of the API or flag names etc.
> Let me know if I have missed something?
