On Fri, Feb 02, 2024 at 11:33:19AM +0000, Bruce Richardson wrote:
> On Fri, Feb 02, 2024 at 10:38:10AM +0100, Mattias Rönnblom wrote:
> > On 2024-02-01 17:59, Bruce Richardson wrote:
> > > On Wed, Jan 24, 2024 at 12:34:50PM +0100, Mattias Rönnblom wrote:
> > > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > > > > For the fields in the "rte_event" struct, enhance the comments on
> > > > > each to clarify the field's use, whether it is preserved between
> > > > > enqueue and dequeue, and its role, if any, in scheduling.
> > > > >
> > > > > Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>
> > > > > ---
> > > > >
> > > <snip>
> > > > Is it the normalized or the unnormalized value that is preserved?
> > > >
> > > Very good point. It's the normalized and then denormalized version
> > > that is guaranteed to be preserved, I suspect. SW eventdevs probably
> > > preserve the value as-is, but HW eventdevs may lose precision. Rather
> > > than making this "implementation defined" or "not preserved", which
> > > would be annoying for apps, I think, I'm going to document this as
> > > "preserved, but with possible loss of precision".
> > >
> > This makes me again think it may be worth noting that the eventdev API
> > -> PMD priority normalization is (event.priority * PMD_LEVELS) /
> > EVENTDEV_LEVELS (rounded down) - assuming that's how it's supposed to
> > be done - or something to that effect.
> >
> Following my comment on the thread on the other patch about looking at
> the number of bits of priority being valid, I did a quick check of the
> evdev PMDs by grepping for "max_event_priority_levels" in each driver.
> According to that (and resolving some #defines), I see:
>
> 0 - dpaa, dpaa2
> 1 - cnxk, dsw, octeontx, opdl
> 4 - sw
> 8 - dlb2, skeleton
>
> So it looks like switching to a bit-scheme is workable, where we measure
> supported event priority levels in powers-of-two only.
> [And we can cut down the priority fields if we like].
>
And just for reference, the advertised values for
max_event_queue_priority_levels are:
1 - dsw, opdl
8 - cnxk, dlb2, dpaa, dpaa2, octeontx, skeleton
255 - sw [though this should really be 256; it's an off-by-one error due
      to the range of the uint8_t type. The SW evdev just sorts queues by
      priority using the whole priority value specified.]

So I think we can treat queue priority similarly to event priority -
giving the number of bits which are valid. Also, if we decide to cut the
event priority level range to e.g. 0-15, I think we can do the same for
the queue priority levels, so that the ranges are similar, and then we
can adjust the min-max definitions to match.

/Bruce