[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-10-09 Thread Jerin Jacob
On Fri, Oct 07, 2016 at 10:40:03AM +, Hemant Agrawal wrote:
> Hi Jerin/Narender,

Hi Hemant,

Thanks for the review.

> 
>   Thanks for the proposal and discussions. 

> 
>   I agree with many of the comments made by Narender.  Here are some 
> additional comments.
> 
> 1. rte_event_schedule - should support an option for bulk dequeue. The bulk 
> size should be a property of the device, i.e. how much depth it can support.

OK. Will fix it in v2.

> 
> 2. The event schedule should also support an option to specify the amount of 
> time it can wait. The implementation may only support a global 
> setting (dequeue_wait_ns) for the wait time; such an implementation can treat 
> any non-zero wait value as a request to wait.  

OK. Will fix it in v2.
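For points 1 and 2 together, the v2 direction could look roughly like the
sketch below; the name, types and exact signature are assumptions on my side,
not the final API: a burst schedule/dequeue bounded by a device-reported
maximum depth, with a per-call wait value.

/* Hypothetical v2 sketch: return up to nb_events scheduled events, waiting up
 * to wait_ns nanoseconds (0 = return immediately). The maximum supported
 * burst depth would be advertised as a device property in the info query. */
uint16_t
rte_event_schedule_burst(uint8_t dev_id, struct rte_event ev[],
                         uint16_t nb_events, uint64_t wait_ns);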

> 
> 3. rte_event_schedule_from_group - there should be one model.  Both Push and 
> Pull may not work well together. At least a simultaneous mixed config will 
> not work on the NXP hardware scheduler. 

OK. Will remove Cavium specific "rte_event_schedule_from_group" API in v2.

> 
> 4. Priority of queues within the scheduling group?  - Please keep in mind 
> that some hardware supports intra-scheduler priority and some only support 
> intra flow_queue priority within a scheduler instance. The events of the same 
> flow id should have the same priority.

Will try to address this with a capability-based solution.
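One possible capability-based direction (the flag names below are assumptions,
not part of the RFC): let the device advertise which priority granularity it
supports, so schedulers with group-level priority and schedulers with only
flow_queue-level priority can both be described.

/* Hypothetical capability flags reported by the device. */
#define RTE_EVENT_DEV_CAP_SCHED_GROUP_PRIORITY (1ULL << 0) /* intra-scheduler */
#define RTE_EVENT_DEV_CAP_FLOW_QUEUE_PRIORITY  (1ULL << 1) /* intra flow_queue */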

> 
> 5. w.r.t. flow_queue numbers in log2, I would prefer an absolute number. 
> Not all systems may have a large number of queues, so the design should take 
> into account systems with fewer queues.

OK. Will fix it in v2.
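As a sketch of what this change could look like in the configuration structure
(based on the struct in the RFC header below; the replacement field name is an
assumption):

struct rte_eventdev_config {
        uint32_t sched_wait_ns;
        uint32_t nb_flow_queues;  /* absolute count, replacing nb_flow_queues_log2 */
        uint8_t  nb_sched_groups;
};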

> 
> Regards,
> Hemant
> 
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, October 05, 2016 12:55 PM
> > On Tue, Oct 04, 2016 at 09:49:52PM +, Vangati, Narender wrote:
> > > Hi Jerin,
> > 
> > Hi Narender,
> > 
> > Thanks for the comments. I agree with the proposed changes; I will address these
> > comments in v2.
> > 
> > /Jerin
> > 
> > 
> > >
> > >
> > >
> > > Here are some comments on the libeventdev RFC.
> > >
> > > These are collated thoughts after discussions with you & others to 
> > > understand
> > the concepts and rationale for the current proposal.
> > >
> > >
> > >
> > > 1. Concept of flow queues. This is better abstracted as flow ids and not 
> > > as flow
> > queues which implies there is a queueing structure per flow. A s/w
> > implementation can do atomic load balancing on multiple flow ids more
> > efficiently than maintaining each event in a specific flow queue.
> > >
> > >
> > >
> > > 2. Scheduling group. A scheduling group is more a stream of events, so an 
> > > event
> > queue might be a better abstraction.
> > >
> > >
> > >
> > > 3. An event queue should support the concept of max active atomic flows
> > (maximum number of active flows this queue can track at any given time) and
> > max active ordered sequences (maximum number of outstanding events waiting
> > to be egress reordered by this queue). This allows a scheduler 
> > implementation to
> > dimension/partition its resources among event queues.
> > >
> > >
> > >
> > > 4. An event queue should support the concept of a single consumer. In an
> > application, a stream of events may need to be brought together to a single
> > core for some stages of processing, e.g. for TX at the end of the pipeline 
> > to
> > avoid NIC reordering of the packets. Having a 'single consumer' event queue 
> > for
> > that stage allows the intensive scheduling logic to be short circuited and 
> > can
> > improve throughput for s/w implementations.
> > >
> > >
> > >
> > > 5. Instead of tying eventdev access to an lcore, a higher level of 
> > > abstraction
> > called event port is needed which is the application i/f to the eventdev. 
> > Event
> > ports are connected to event queues and are the objects the application uses 
> > to
> > dequeue and enqueue events. There can be more than one event port per lcore
> > allowing multiple lightweight threads to have their own i/f into eventdev, 
> > if the
> > implementation supports it. An event port abstraction also encapsulates
> > dequeue depth and enqueue depth for scheduler implementations which can
> > schedule multiple events at a time and output events that can be buffered.
> > >
> > >
> > >
> > > 6. An event should support priority. Per event priority is useful for 
> > > segregating
> > high priority (control messages) traffic from low priority within the same 
> > flow.
> > This needs to be part of the event definition for implementations which 
> > support
> > it.
> > >
> > >
> > >
> > > 7. Event port to event queue servicing priority. This allows two event 
> > > ports to
> > connect to the same event queue with different priorities. For 
> > implementations
> > which support it, this allows a worker core to participate in two different
> > workflows with different priorities (workflow 1 needing 3.5 cores, workflow 
> > 2
> > needing 2.5 cores, and so on).
> > >
> > >
> > >
> 

[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-10-07 Thread Hemant Agrawal
Hi Jerin/Narender,

Thanks for the proposal and discussions. 

I agree with many of the comments made by Narender.  Here are some 
additional comments.

1. rte_event_schedule - should support an option for bulk dequeue. The bulk 
size should be a property of the device, i.e. how much depth it can support.

2. The event schedule should also support an option to specify the amount of 
time it can wait. The implementation may only support a global 
setting (dequeue_wait_ns) for the wait time; such an implementation can treat 
any non-zero wait value as a request to wait.  

3. rte_event_schedule_from_group - there should be one model.  Both Push and 
Pull may not work well together. At least a simultaneous mixed config will 
not work on the NXP hardware scheduler. 

4. Priority of queues within the scheduling group?  - Please keep in mind that 
some hardware supports intra-scheduler priority and some only support intra 
flow_queue priority within a scheduler instance. The events of the same flow id 
should have the same priority.

5. w.r.t. flow_queue numbers in log2, I would prefer an absolute number. Not 
all systems may have a large number of queues, so the design should take into 
account systems with fewer queues.

Regards,
Hemant

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, October 05, 2016 12:55 PM
> On Tue, Oct 04, 2016 at 09:49:52PM +, Vangati, Narender wrote:
> > Hi Jerin,
> 
> Hi Narender,
> 
> Thanks for the comments. I agree with the proposed changes; I will address these
> comments in v2.
> 
> /Jerin
> 
> 
> >
> >
> >
> > Here are some comments on the libeventdev RFC.
> >
> > These are collated thoughts after discussions with you & others to 
> > understand
> the concepts and rationale for the current proposal.
> >
> >
> >
> > 1. Concept of flow queues. This is better abstracted as flow ids and not as 
> > flow
> queues which implies there is a queueing structure per flow. A s/w
> implementation can do atomic load balancing on multiple flow ids more
> efficiently than maintaining each event in a specific flow queue.
> >
> >
> >
> > 2. Scheduling group. A scheduling group is more a stream of events, so an 
> > event
> queue might be a better abstraction.
> >
> >
> >
> > 3. An event queue should support the concept of max active atomic flows
> (maximum number of active flows this queue can track at any given time) and
> max active ordered sequences (maximum number of outstanding events waiting
> to be egress reordered by this queue). This allows a scheduler implementation 
> to
> dimension/partition its resources among event queues.
> >
> >
> >
> > 4. An event queue should support the concept of a single consumer. In an
> application, a stream of events may need to be brought together to a single
> core for some stages of processing, e.g. for TX at the end of the pipeline to
> avoid NIC reordering of the packets. Having a 'single consumer' event queue 
> for
> that stage allows the intensive scheduling logic to be short circuited and can
> improve throughput for s/w implementations.
> >
> >
> >
> > 5. Instead of tying eventdev access to an lcore, a higher level of 
> > abstraction
> called event port is needed which is the application i/f to the eventdev. 
> Event
> ports are connected to event queues and are the objects the application uses to
> dequeue and enqueue events. There can be more than one event port per lcore
> allowing multiple lightweight threads to have their own i/f into eventdev, if 
> the
> implementation supports it. An event port abstraction also encapsulates
> dequeue depth and enqueue depth for scheduler implementations which can
> schedule multiple events at a time and output events that can be buffered.
> >
> >
> >
> > 6. An event should support priority. Per event priority is useful for 
> > segregating
> high priority (control messages) traffic from low priority within the same 
> flow.
> This needs to be part of the event definition for implementations which 
> support
> it.
> >
> >
> >
> > 7. Event port to event queue servicing priority. This allows two event 
> > ports to
> connect to the same event queue with different priorities. For implementations
> which support it, this allows a worker core to participate in two different
> workflows with different priorities (workflow 1 needing 3.5 cores, workflow 2
> needing 2.5 cores, and so on).
> >
> >
> >
> > 8. Define the workflow as schedule/dequeue/enqueue. An implementation is
> free to define schedule as NOOP. A distributed s/w scheduler can use this to
> schedule events; also a centralized s/w scheduler can make this a NOOP on non-
> scheduler cores.
> >
> >
> >
> > 9. The schedule_from_group API does not fit the workflow.
> >
> >
> >
> > 10. The ctxt_update/ctxt_wait breaks the normal workflow. If the normal
> workflow is a dequeue -> do work based on event type -> enqueue,  a pin_event
> argument to enqueue (where the pinned event is returned through the normal
> 

[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-10-05 Thread Jerin Jacob
On Tue, Oct 04, 2016 at 09:49:52PM +, Vangati, Narender wrote:
> Hi Jerin,

Hi Narender,

Thanks for the comments. I agree with the proposed changes; I will address these 
comments in v2.

/Jerin


> 
> 
> 
> Here are some comments on the libeventdev RFC.
> 
> These are collated thoughts after discussions with you & others to understand 
> the concepts and rationale for the current proposal.
> 
> 
> 
> 1. Concept of flow queues. This is better abstracted as flow ids and not as 
> flow queues which implies there is a queueing structure per flow. A s/w 
> implementation can do atomic load balancing on multiple flow ids more 
> efficiently than maintaining each event in a specific flow queue.
> 
> 
> 
> 2. Scheduling group. A scheduling group is more a stream of events, so an 
> event queue might be a better abstraction.
> 
> 
> 
> 3. An event queue should support the concept of max active atomic flows 
> (maximum number of active flows this queue can track at any given time) and 
> max active ordered sequences (maximum number of outstanding events waiting to 
> be egress reordered by this queue). This allows a scheduler implementation to 
> dimension/partition its resources among event queues.
> 
> 
> 
> 4. An event queue should support the concept of a single consumer. In an 
> application, a stream of events may need to be brought together to a single 
> core for some stages of processing, e.g. for TX at the end of the pipeline to 
> avoid NIC reordering of the packets. Having a 'single consumer' event queue 
> for that stage allows the intensive scheduling logic to be short circuited 
> and can improve throughput for s/w implementations.
> 
> 
> 
> 5. Instead of tying eventdev access to an lcore, a higher level of 
> abstraction called event port is needed which is the application i/f to the 
> eventdev. Event ports are connected to event queues and are the objects the 
> application uses to dequeue and enqueue events. There can be more than one 
> event port per lcore allowing multiple lightweight threads to have their own 
> i/f into eventdev, if the implementation supports it. An event port 
> abstraction also encapsulates dequeue depth and enqueue depth for scheduler 
> implementations which can schedule multiple events at a time and output 
> events that can be buffered.
> 
> 
> 
> 6. An event should support priority. Per event priority is useful for 
> segregating high priority (control messages) traffic from low priority within 
> the same flow. This needs to be part of the event definition for 
> implementations which support it.
> 
> 
> 
> 7. Event port to event queue servicing priority. This allows two event ports 
> to connect to the same event queue with different priorities. For 
> implementations which support it, this allows a worker core to participate in 
> two different workflows with different priorities (workflow 1 needing 3.5 
> cores, workflow 2 needing 2.5 cores, and so on).
> 
> 
> 
> 8. Define the workflow as schedule/dequeue/enqueue. An implementation is free 
> to define schedule as NOOP. A distributed s/w scheduler can use this to 
> schedule events; also a centralized s/w scheduler can make this a NOOP on 
> non-scheduler cores.
> 
> 
> 
> 9. The schedule_from_group API does not fit the workflow.
> 
> 
> 
> 10. The ctxt_update/ctxt_wait breaks the normal workflow. If the normal 
> workflow is a dequeue -> do work based on event type -> enqueue,  a pin_event 
> argument to enqueue (where the pinned event is returned through the normal 
> dequeue) allows application workflow to remain the same whether or not an 
> implementation supports it.
> 
> 
> 
> 11. Burst dequeue/enqueue needed.
> 
> 
> 
> 12. Definition of a closed/open system - where open system is memory backed 
> and closed system eventdev has limited capacity. In such systems, it is also 
> useful to denote per event port how many packets can be active in the system. 
> This can serve as a threshold for ethdev like devices so they don't overwhelm 
> core to core events.
> 
> 
> 
> 13. There should be some sort of device capabilities definition to address 
> different implementations.
> 
> 
> 
> 
> vnr
> ---
> 


[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-10-04 Thread Vangati, Narender
Hi Jerin,



Here are some comments on the libeventdev RFC.

These are collated thoughts after discussions with you & others to understand 
the concepts and rationale for the current proposal.



1. Concept of flow queues. This is better abstracted as flow ids and not as 
flow queues which implies there is a queueing structure per flow. A s/w 
implementation can do atomic load balancing on multiple flow ids more 
efficiently than maintaining each event in a specific flow queue.
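For illustration only: if flows are identified by an id carried in the event
itself rather than by a per-flow queue object, the event could look roughly as
below; the field names and widths are my assumptions, not the RFC definition.

struct rte_event {
        uint32_t flow_id;    /* flow this event belongs to */
        uint8_t  queue_id;   /* event queue the event is enqueued to */
        uint8_t  sched_type; /* atomic / ordered / parallel */
        uint8_t  priority;   /* per-event priority, see point 6 below */
        uint64_t u64;        /* event payload, e.g. an mbuf pointer */
};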



2. Scheduling group. A scheduling group is more a stream of events, so an event 
queue might be a better abstraction.



3. An event queue should support the concept of max active atomic flows 
(maximum number of active flows this queue can track at any given time) and max 
active ordered sequences (maximum number of outstanding events waiting to be 
egress reordered by this queue). This allows a scheduler implementation to 
dimension/partition its resources among event queues.
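A sketch of per-queue dimensioning knobs for this point (the structure and
field names are assumptions):

struct rte_event_queue_conf {
        uint32_t nb_atomic_flows;           /* max active atomic flows tracked */
        uint32_t nb_atomic_order_sequences; /* max events awaiting egress reorder */
};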



4. An event queue should support the concept of a single consumer. In an 
application, a stream of events may need to be brought together to a single 
core for some stages of processing, e.g. for TX at the end of the pipeline to 
avoid NIC reordering of the packets. Having a 'single consumer' event queue for 
that stage allows the intensive scheduling logic to be short circuited and can 
improve throughput for s/w implementations.



5. Instead of tying eventdev access to an lcore, a higher level of abstraction 
called an event port is needed, which is the application i/f to the eventdev. 
Event ports are connected to event queues and are the objects the application 
uses to dequeue and enqueue events. There can be more than one event port per 
lcore, allowing multiple lightweight threads to have their own i/f into 
eventdev, if the implementation supports it. An event port abstraction also 
encapsulates dequeue depth and enqueue depth for scheduler implementations 
which can schedule multiple events at a time and output events that can be 
buffered.
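A minimal sketch, with hypothetical names, of an event-port based application
interface: the port, not the lcore, is what the application enqueues to and
dequeues from, and it carries the dequeue/enqueue depths.

struct rte_event_port_conf {
        uint16_t dequeue_depth; /* max events returned per dequeue */
        uint16_t enqueue_depth; /* max events buffered per enqueue */
};

int rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
                         const struct rte_event_port_conf *conf);

uint16_t rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
                                 const struct rte_event ev[], uint16_t nb);

uint16_t rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
                                 struct rte_event ev[], uint16_t nb,
                                 uint64_t wait_ns);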



6. An event should support priority. Per event priority is useful for 
segregating high priority (control messages) traffic from low priority within 
the same flow. This needs to be part of the event definition for 
implementations which support it.



7. Event port to event queue servicing priority. This allows two event ports to 
connect to the same event queue with different priorities. For implementations 
which support it, this allows a worker core to participate in two different 
workflows with different priorities (workflow 1 needing 3.5 cores, workflow 2 
needing 2.5 cores, and so on).
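A sketch, again with hypothetical names, of linking an event port to an event
queue with a servicing priority, so two ports can consume the same queue at
different priorities:

int rte_event_port_link(uint8_t dev_id, uint8_t port_id,
                        uint8_t queue_id, uint8_t priority);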



8. Define the workflow as schedule/dequeue/enqueue. An implementation is free 
to define schedule as NOOP. A distributed s/w scheduler can use this to 
schedule events; also a centralized s/w scheduler can make this a NOOP on 
non-scheduler cores.
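A worker loop sketch for this workflow (all names are assumptions, building on
the port-based calls sketched under point 5):

static void
worker(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event ev[16];
        uint16_t i, n;

        for (;;) {
                rte_event_schedule(dev_id); /* may be a NOOP, see above */
                n = rte_event_dequeue_burst(dev_id, port_id, ev, 16, 0);
                for (i = 0; i < n; i++)
                        ; /* do work based on the type of ev[i] */
                if (n)
                        rte_event_enqueue_burst(dev_id, port_id, ev, n);
        }
}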



9. The schedule_from_group API does not fit the workflow.



10. The ctxt_update/ctxt_wait breaks the normal workflow. If the normal 
workflow is a dequeue -> do work based on event type -> enqueue,  a pin_event 
argument to enqueue (where the pinned event is returned through the normal 
dequeue) allows application workflow to remain the same whether or not an 
implementation supports it.
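One possible shape for this (the argument name is an assumption): a per-event
pin hint on enqueue, with the pinned event coming back through the normal
dequeue on the same port.

uint16_t rte_event_enqueue(uint8_t dev_id, uint8_t port_id,
                           const struct rte_event *ev,
                           int pin_event /* non-zero = pin to this port */);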



11. Burst dequeue/enqueue needed.



12. Definition of a closed/open system - where an open system is memory backed 
and a closed-system eventdev has limited capacity. In such systems, it is also 
useful to denote per event port how many packets can be active in the system. 
This can serve as a threshold for ethdev-like devices so they don't overwhelm 
core-to-core events.
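Extending the port configuration sketched under point 5, a closed system could
expose such a per-port limit (the field name is an assumption):

struct rte_event_port_conf {
        uint16_t dequeue_depth;
        uint16_t enqueue_depth;
        uint32_t new_event_threshold; /* max new events this port may inject
                                       * into the system at any time */
};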



13. There should be some sort of device capabilities definition to address different 
implementations.
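For example, a capability bitmask exposed through the device info query (all
names below are assumptions) would let applications adapt to what a given
implementation supports:

#define RTE_EVENT_DEV_CAP_BURST_MODE            (1ULL << 0)
#define RTE_EVENT_DEV_CAP_EVENT_PRIORITY        (1ULL << 1)
#define RTE_EVENT_DEV_CAP_ORDERED_SCHED         (1ULL << 2)
#define RTE_EVENT_DEV_CAP_SINGLE_CONSUMER_QUEUE (1ULL << 3)
/* ...plus a field such as uint64_t event_dev_cap in the device info struct. */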




vnr
---



[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-08-10 Thread Jerin Jacob
On Tue, Aug 09, 2016 at 09:48:46AM +0100, Bruce Richardson wrote:
> On Tue, Aug 09, 2016 at 06:31:41AM +0530, Jerin Jacob wrote:
> > Find below the URL for the complete API specification.
> > 
> > https://rawgit.com/jerinjacobk/libeventdev/master/rte_eventdev.h
> > 
> > I have created a supportive document to share the concepts of
> > event driven programming model and proposed APIs details to get
> > better reach for the specification.
> > This presentation will cover an introduction to event driven programming model 
> > concepts,
> > characteristics of hardware-based event manager devices,
> > RFC API proposal, example use case, and benefits of using the event driven 
> > programming model.
> > 
> > Find below the URL for the supportive document.
> > 
> > https://rawgit.com/jerinjacobk/libeventdev/master/DPDK-event_driven_programming_framework.pdf
> > 
> > git repo for the above documents:
> > 
> > https://github.com/jerinjacobk/libeventdev/
> > 
> > Looking forward to getting comments from both application and driver
> > implementation perspective.
> > 
> 
> Hi Jerin,
> 

Hi Bruce,

> thanks for the RFC. Packet distribution and scheduling is something we've been
> thinking about here too. This RFC gives us plenty of new ideas to take on 
> board. :-)

Thanks

> While you refer to HW implementations on SoCs, have you given any thought to
> how a pure-software implementation of an event API might work? I know that

Yes. I have removed almost all hardware-specific details from the API
specification. The APIs are mostly driven by the use cases.

I had the impression that a software-based scheme would use the
lib_rte_distributor or lib_rte_reorder libraries to get load balancing
and reordering features. However, if we are looking for a converged
solution without impacting the HW models, then I think this is a good step
forward.

IMO, implementing the ORDERED schedule sync method in a performance-effective
way in SW may be tricky. Maybe we can introduce some capability-based
schemes so that the HW and SW solutions can co-exist.
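As a sketch of such a scheme (the capability field and flag are assumptions,
not part of the current RFC header), an application could probe for ORDERED
support before relying on it:

static int
supports_ordered(uint8_t dev_id)
{
        struct rte_eventdev_info info;

        rte_eventdev_info_get(dev_id, &info);
        /* event_dev_cap and the flag below are hypothetical additions */
        return (info.event_dev_cap & RTE_EVENT_DEV_CAP_ORDERED_SCHED) != 0;
}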

> while a software implementation can obviously be done for just about any API,
> I'd be concerned that the API not get in the way of a very highly
> tuned implementation.
> 
> We'll look at it in some detail and get back to you with our feedback, as soon
> as we can, to start getting the discussion going.

OK

> 
> Regards,
> /Bruce
> 


[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-08-09 Thread Bruce Richardson
On Tue, Aug 09, 2016 at 06:31:41AM +0530, Jerin Jacob wrote:
> Hi All,
> 
> Find below an RFC API specification which attempts to
> define the standard application programming interface
> for event driven programming in DPDK and to abstract HW based event devices.
> 
> These devices can support event scheduling and flow ordering
> in HW and are typically found in NW SoCs as an integrated device or
> as a PCI EP device.
> 
> The RFC APIs are inspired by the existing ethernet and crypto devices.
> Following are the requirements considered to define the RFC API.
> 
> 1) APIs similar to existing Ethernet and crypto API framework for
>    - Device creation, device identification and device configuration
> 2) Enumerate libeventdev resources as numbers(0..N) to
>    - Avoid ABI issues with handles
>    - An event device may have a million flow queues, so it's not practical
>      to have handles for each flow queue and its associated name-based
>      lookup in the multiprocess case
> 3) Avoid struct mbuf changes
> 4) APIs to
>    - Enumerate eventdev driver capabilities and resources
>    - Enqueue events from an lcore
>    - Schedule events
>    - Synchronize events
>    - Maintain ingress order of the events
>    - Run-to-completion support
> 
> Find below the URL for the complete API specification.
> 
> https://rawgit.com/jerinjacobk/libeventdev/master/rte_eventdev.h
> 
> I have created a supportive document to share the concepts of
> event driven programming model and proposed APIs details to get
> better reach for the specification.
> > This presentation will cover an introduction to event driven programming model 
> concepts,
> characteristics of hardware-based event manager devices,
> RFC API proposal, example use case, and benefits of using the event driven 
> programming model.
> 
> Find below the URL for the supportive document.
> 
> https://rawgit.com/jerinjacobk/libeventdev/master/DPDK-event_driven_programming_framework.pdf
> 
> git repo for the above documents:
> 
> https://github.com/jerinjacobk/libeventdev/
> 
> Looking forward to getting comments from both application and driver
> implementation perspective.
> 

Hi Jerin,

thanks for the RFC. Packet distribution and scheduling is something we've been
thinking about here too. This RFC gives us plenty of new ideas to take on 
board. :-)
While you refer to HW implementations on SoCs, have you given any thought to
how a pure-software implementation of an event API might work? I know that
while a software implementation can obviously be done for just about any API,
I'd be concerned that the API not get in the way of a very highly
tuned implementation.

We'll look at it in some detail and get back to you with our feedback, as soon
as we can, to start getting the discussion going.

Regards,
/Bruce



[dpdk-dev] [RFC] libeventdev: event driven programming model framework for DPDK

2016-08-09 Thread Jerin Jacob
Hi All,

Find below an RFC API specification which attempts to
define the standard application programming interface
for event driven programming in DPDK and to abstract HW based event devices.

These devices can support event scheduling and flow ordering
in HW and are typically found in NW SoCs as an integrated device or
as a PCI EP device.

The RFC APIs are inspired by the existing ethernet and crypto devices.
Following are the requirements considered to define the RFC API.

1) APIs similar to existing Ethernet and crypto API framework for
   - Device creation, device identification and device configuration
2) Enumerate libeventdev resources as numbers(0..N) to
   - Avoid ABI issues with handles
   - An event device may have a million flow queues, so it's not practical
     to have handles for each flow queue and its associated name-based
     lookup in the multiprocess case
3) Avoid struct mbuf changes
4) APIs to
   - Enumerate eventdev driver capabilities and resources
   - Enqueue events from an lcore
   - Schedule events
   - Synchronize events
   - Maintain ingress order of the events
   - Run-to-completion support

Find below the URL for the complete API specification.

https://rawgit.com/jerinjacobk/libeventdev/master/rte_eventdev.h

I have created a supportive document to share the concepts of the
event driven programming model and the proposed API details, to give
the specification better reach.
This presentation covers an introduction to event driven programming model
concepts, characteristics of hardware-based event manager devices,
the RFC API proposal, an example use case, and the benefits of using the
event driven programming model.

Find below the URL for the supportive document.

https://rawgit.com/jerinjacobk/libeventdev/master/DPDK-event_driven_programming_framework.pdf

git repo for the above documents:

https://github.com/jerinjacobk/libeventdev/

Looking forward to getting comments from both application and driver
implementation perspective.

What follows is the text version of the above documents, for inline comments 
and discussion.
I intend to update that specification accordingly.

/**
 * Get the total number of event devices that have been successfully
 * initialised.
 *
 * @return
 *   The total number of usable event devices.
 */
extern uint8_t
rte_eventdev_count(void);

/**
 * Get the device identifier for the named event device.
 *
 * @param name
 *   Event device name to select the event device identifier.
 *
 * @return
 *   Returns event device identifier on success.
 *   - <0: Failure to find named event device.
 */
extern int
rte_eventdev_get_dev_id(const char *name);

/**
 * Return the NUMA socket to which a device is connected.
 *
 * @param dev_id
 *   The identifier of the device.
 * @return
 *   The NUMA socket id to which the device is connected or
 *   a default of zero if the socket could not be determined.
 *   - -1: dev_id value is out of range.
 */
extern int
rte_eventdev_socket_id(uint8_t dev_id);

/**  Event device information */
struct rte_eventdev_info {
const char *driver_name;/**< Event driver name */
struct rte_pci_device *pci_dev; /**< PCI information */
uint32_t min_sched_wait_ns;
/**< Minimum supported scheduler wait delay in ns by this device */
uint32_t max_sched_wait_ns;
/**< Maximum supported scheduler wait delay in ns by this device */
uint32_t sched_wait_ns;
/**< Configured scheduler wait delay in ns of this device */
uint32_t max_flow_queues_log2;
/**< LOG2 of maximum flow queues supported by this device */
uint8_t  max_sched_groups;
/**< Maximum schedule groups supported by this device */
uint8_t  max_sched_group_priority_levels;
/**< Maximum schedule group priority levels supported by this device */
};

/**
 * Retrieve the contextual information of an event device.
 *
 * @param dev_id
 *   The identifier of the device.
 * @param[out] dev_info
 *   A pointer to a structure of type *rte_eventdev_info* to be filled with the
 *   contextual information of the device.
 */
extern void
rte_eventdev_info_get(uint8_t dev_id, struct rte_eventdev_info *dev_info);
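For illustration, a small usage sketch built only on the declarations above
(assuming <stdio.h> for printf):

static void
dump_eventdevs(void)
{
        uint8_t i, count = rte_eventdev_count();
        struct rte_eventdev_info info;

        for (i = 0; i < count; i++) {
                rte_eventdev_info_get(i, &info);
                printf("eventdev %u: driver %s, socket %d, "
                       "max sched groups %u\n",
                       i, info.driver_name, rte_eventdev_socket_id(i),
                       info.max_sched_groups);
        }
}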

/** Event device configuration structure */
struct rte_eventdev_config {
uint32_t sched_wait_ns;
/**< rte_event_schedule() wait for *sched_wait_ns* ns on this device */
uint32_t nb_flow_queues_log2;
/**< LOG2 of the number of flow queues to configure on this device */
uint8_t  nb_sched_groups;
/**< The number of schedule groups to configure on this device */
};

/**
 * Configure an event device.
 *
 * This function must be invoked first before any other function in the
 * API. This function can also be re-invoked when a device is in the
 * stopped state.
 *
 * The caller may use rte_eventdev_info_get() to get the capability of each
 * resources available in this event device.
 *
 * @param dev_id
 *   The identifier of the device to configure.
 *