On Fri, Oct 07, 2016 at 10:40:03AM +0000, Hemant Agrawal wrote:
> Hi Jerin/Narender,

Hi Hemant,

Thanks for the review.

> 
>       Thanks for the proposal and discussions. 

> 
>       I agree with many of the comments made by Narender. Here are some
> additional comments.
> 
> 1. rte_event_schedule - should support an option for bulk dequeue. The size
> of the bulk should be a property of the device, i.e. how much depth it can
> support.

OK. Will fix it in v2.
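
Something like the sketch below, where the supported burst depth is advertised
by the device instead of being fixed by the API (all names and fields here are
illustrative only, not final):

  #include <stdint.h>

  struct rte_event; /* event definition as per the RFC */

  /* The device advertises how many events one dequeue call may return. */
  struct rte_event_dev_info {
          uint16_t max_dequeue_burst; /* bulk/burst depth supported by the device */
  };

  /* Burst variant: returns the number of events actually dequeued. */
  uint16_t rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
                                   struct rte_event *ev, uint16_t nb_events);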

> 
> 2. The event schedule should also support an option to specify the amount of
> time it can wait. An implementation may only support a global setting
> (dequeue_wait_ns) for the wait time; it can treat any non-zero wait value as
> a request to wait.

OK. Will fix it in v2.
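
For instance, the wait could be taken per call, and a device that only has a
global dequeue_wait_ns would treat any non-zero value as "wait with the
configured timeout". A sketch, with illustrative names:

  #include <stdint.h>

  struct rte_event; /* event definition as per the RFC */

  /*
   * Per-call wait in nanoseconds. An implementation that only supports the
   * global dequeue_wait_ns setting can treat any non-zero wait_ns simply as
   * "wait using the configured global timeout".
   */
  uint16_t rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
                                   struct rte_event *ev, uint16_t nb_events,
                                   uint64_t wait_ns);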

> 
> 3. rte_event_schedule_from_group - there should be one model. Push and pull
> may not work well together; at least a simultaneous mixed configuration will
> not work on the NXP hardware scheduler.

OK. Will remove the Cavium-specific "rte_event_schedule_from_group" API in v2.

> 
> 4. Priority of queues within the scheduling group? - Please keep in mind
> that some hardware supports intra-scheduler priority and some only supports
> intra-flow_queue priority within a scheduler instance. Events of the same
> flow id should have the same priority.

Will try to address this with a capability-based solution.
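
The rough idea, as a sketch with made-up capability flag names, would be to
let the application query what the device supports before relying on a given
priority scheme:

  #include <stdint.h>

  /*
   * Hypothetical capability bits. Events of the same flow id keep the same
   * priority in either scheme.
   */
  #define RTE_EVENT_DEV_CAP_QUEUE_QOS (1ULL << 0) /* intra-scheduler (queue) priority */
  #define RTE_EVENT_DEV_CAP_FLOW_QOS  (1ULL << 1) /* intra flow_queue priority */

  struct rte_event_dev_info {
          uint64_t event_dev_cap; /* bitmask of RTE_EVENT_DEV_CAP_* flags */
  };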

> 
> 5. w.r.t. flow_queue numbers in log2, I would prefer an absolute number. Not
> all systems have a large number of queues, so the design should take into
> account systems with fewer queues.

OK. Will fix it in v2.
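
i.e. the queue configuration would carry a plain count rather than a log2
value; roughly (field name illustrative):

  #include <stdint.h>

  /*
   * Absolute number of flow queues, not a log2 exponent, so a system with
   * only a handful of queues is not forced into power-of-two sizing.
   */
  struct rte_event_queue_conf {
          uint32_t nb_flow_queues; /* absolute count, e.g. 6 */
  };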

> 
> Regards,
> Hemant
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, October 05, 2016 12:55 PM
> > On Tue, Oct 04, 2016 at 09:49:52PM +0000, Vangati, Narender wrote:
> > > Hi Jerin,
> > 
> > Hi Narender,
> > 
> > Thanks for the comments. I agree with the proposed changes; I will address
> > these comments in v2.
> > 
> > /Jerin
> > 
> > 
> > >
> > >
> > >
> > > Here are some comments on the libeventdev RFC.
> > >
> > > These are collated thoughts after discussions with you & others to 
> > > understand
> > the concepts and rationale for the current proposal.
> > >
> > >
> > >
> > > 1. Concept of flow queues. This is better abstracted as flow ids and not 
> > > as flow
> > queues, which implies there is a queueing structure per flow. A s/w
> > implementation can do atomic load balancing on multiple flow ids more
> > efficiently than maintaining each event in a specific flow queue.
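
Just to make the distinction concrete, the flow would then be identified by a
plain id carried in the event, with no per-flow queue implied (the layout
below is only a sketch):

  #include <stdint.h>

  /*
   * The flow is a plain identifier inside the event, so an implementation
   * can do atomic load balancing across flow ids without maintaining a
   * queueing structure per flow.
   */
  struct rte_event {
          uint32_t flow_id; /* flow identifier, not an index of a per-flow queue */
          /* ... event type, priority, payload reference, etc. ... */
  };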
> > >
> > >
> > >
> > > 2. Scheduling group. A scheduling group is more of a stream of events, so an 
> > > event
> > queue might be a better abstraction.
> > >
> > >
> > >
> > > 3. An event queue should support the concept of max active atomic flows
> > (maximum number of active flows this queue can track at any given time) and
> > max active ordered sequences (maximum number of outstanding events waiting
> > to be egress reordered by this queue). This allows a scheduler 
> > implementation to
> > dimension/partition its resources among event queues.
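
As an illustration, the per-queue configuration could carry both limits so an
implementation can dimension its resources (field names are a sketch only):

  #include <stdint.h>

  /* Per-queue limits used by the scheduler for resource dimensioning. */
  struct rte_event_queue_conf {
          uint32_t nb_atomic_flows;           /* max active atomic flows tracked */
          uint32_t nb_atomic_order_sequences; /* max events awaiting egress reordering */
  };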
> > >
> > >
> > >
> > > 4. An event queue should support the concept of a single consumer. In an
> > application, a stream of events may need to be brought together to a single
> > core for some stages of processing, e.g. for TX at the end of the pipeline 
> > to
> > avoid NIC reordering of the packets. Having a 'single consumer' event queue 
> > for
> > that stage allows the intensive scheduling logic to be short circuited and 
> > can
> > improve throughput for s/w implementations.
> > >
> > >
> > >
> > > 5. Instead of tying eventdev access to an lcore, a higher level of 
> > > abstraction
> > called an event port is needed, which is the application i/f to the eventdev. 
> > Event
> > ports are connected to event queues and are the objects the application uses 
> > to
> > dequeue and enqueue events. There can be more than one event port per lcore
> > allowing multiple lightweight threads to have their own i/f into eventdev, 
> > if the
> > implementation supports it. An event port abstraction also encapsulates
> > dequeue depth and enqueue depth for scheduler implementations which can
> > schedule multiple events at a time and output events that can be buffered.
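
A sketch of that abstraction, with illustrative names only:

  #include <stdint.h>

  struct rte_event; /* event definition as per the RFC */

  /*
   * The port, not the lcore, is the application's interface into the
   * eventdev, so one lcore may own several ports if the implementation
   * supports it.
   */
  struct rte_event_port_conf {
          uint16_t dequeue_depth; /* max events returned by one dequeue */
          uint16_t enqueue_depth; /* max events buffered on one enqueue */
  };

  int rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
                           const struct rte_event_port_conf *conf);
  int rte_event_port_link(uint8_t dev_id, uint8_t port_id,
                          const uint8_t *queues, uint16_t nb_queues);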
> > >
> > >
> > >
> > > 6. An event should support priority. Per event priority is useful for 
> > > segregating
> > high priority (control messages) traffic from low priority within the same 
> > flow.
> > This needs to be part of the event definition for implementations which 
> > support
> > it.
> > >
> > >
> > >
> > > 7. Event port to event queue servicing priority. This allows two event 
> > > ports to
> > connect to the same event queue with different priorities. For 
> > implementations
> > which support it, this allows a worker core to participate in two different
> > workflows with different priorities (workflow 1 needing 3.5 cores, workflow 
> > 2
> > needing 2.5 cores, and so on).
> > >
> > >
> > >
> > > 8. Define the workflow as schedule/dequeue/enqueue. An implementation is
> > free to define schedule as NOOP. A distributed s/w scheduler can use this to
> > schedule events; also a centralized s/w scheduler can make this a NOOP on 
> > non-
> > scheduler cores.
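
A worker loop under that model might look like the sketch below, where dev_id,
port_id, done and process() are application-defined and rte_event_schedule()
is the hook an implementation may turn into a NOOP:

  /*
   * schedule -> dequeue -> process -> enqueue. A distributed s/w scheduler
   * does real scheduling work in rte_event_schedule(); h/w schedulers, or a
   * centralized s/w scheduler on its own core, can make it a NOOP here.
   */
  while (!done) {
          struct rte_event ev;

          rte_event_schedule(dev_id);
          if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
                  continue;
          process(&ev); /* stage work based on the event type */
          rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
  }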
> > >
> > >
> > >
> > > 9. The schedule_from_group API does not fit the workflow.
> > >
> > >
> > >
> > > 10. The ctxt_update/ctxt_wait breaks the normal workflow. If the normal
> > workflow is a dequeue -> do work based on event type -> enqueue,  a 
> > pin_event
> > argument to enqueue (where the pinned event is returned through the normal
> > dequeue) allows the application workflow to remain the same whether or not an
> > implementation supports it.
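
For example, a pin_event argument on the enqueue keeps the dequeue -> work ->
enqueue shape intact; the prototype below is only a sketch:

  #include <stdbool.h>
  #include <stdint.h>

  struct rte_event; /* event definition as per the RFC */

  /*
   * The application asks at enqueue time for the event to stay pinned; the
   * pinned event comes back through the normal dequeue, so the workflow is
   * identical whether or not the device supports pinning.
   */
  uint16_t rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
                                   struct rte_event *ev, uint16_t nb_events,
                                   bool pin_event);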
> > >
> > >
> > >
> > > 11. Burst dequeue/enqueue needed.
> > >
> > >
> > >
> > > 12. Definition of a closed/open system - where an open system is memory
> > backed and a closed-system eventdev has limited capacity. In such systems it
> > is also useful to denote, per event port, how many packets can be active in
> > the system. This can serve as a threshold for ethdev-like devices so they
> > don't overwhelm core-to-core events.
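
One way to express that limit, sketched on top of the illustrative port
configuration above, is a per-port threshold on new events in flight:

  #include <stdint.h>

  /*
   * In a closed (finite capacity) system, a producer port, e.g. one
   * injecting packets from an ethdev, stops adding new events once this
   * many are in flight, so core-to-core events are not overwhelmed.
   */
  struct rte_event_port_conf {
          uint16_t dequeue_depth;
          uint16_t enqueue_depth;
          int32_t  new_event_threshold; /* max new events in flight via this port */
  };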
> > >
> > >
> > >
> > > 13. There should be some sort of device capabilities definition to
> > address different implementations.
> > >
> > >
> > >
> > >
> > > vnr
> > > ---
> > >
