On 6/11/2025 6:29 AM, Maciej Fijalkowski wrote:
> 2k per device or per PF? If per device, then what if you have a 4-port
> device on a system with 384 cores and you load XDP progs on each PF,
> and then try your feature which needs HW Tx queues?
> 

Yes, the 2k Tx queues are per device and split equally among the PFs, so
a 4-port device gets 512 Tx queues per PF. ETF is enabled only on the
queue/classid the user specifies via the ETF Qdisc, so the user decides
which Tx queues have ETF enabled.
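
On the driver side this arrives through the standard TC_SETUP_QDISC_ETF
offload hook, which carries the queue index derived from the classid. A
minimal sketch of the handler shape (the ice function and helper names
below are hypothetical; struct tc_etf_qopt_offload is the existing
kernel offload interface, as used by igb):

static int ice_offload_txtime(struct ice_vsi *vsi,
			      struct tc_etf_qopt_offload *qopt)
{
	/* qopt->queue is the Tx queue the user selected via the
	 * classid; qopt->enable toggles launch time on that queue.
	 */
	if (qopt->queue < 0 || qopt->queue >= vsi->num_txq)
		return -EINVAL;

	if (qopt->enable)
		return ice_vsi_cfg_txtime(vsi, qopt->queue);	/* hypothetical */

	return ice_vsi_clear_txtime(vsi, qopt->queue);		/* hypothetical */
}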

> What I was trying to say is that you don't ever call __ice_vsi_get_qs();
> this has an impact on XDP or any other feature that needs HW Tx queues.
> 

As you mentioned, I don't think it makes sense to enable the ETF Qdisc
on an XDP queue. Would you suggest that the driver not support the
combination at all, i.e. block the user's request and make XDP and ETF
mutually exclusive?
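
If mutual exclusivity is the way to go, I'm picturing a guard at ETF
setup time along these lines (a sketch; the error code and message are
placeholders, ice_is_xdp_ena_vsi() is the existing helper):

	/* Reject ETF offload while an XDP program is attached so the
	 * two features never contend for the same HW Tx queues.
	 */
	if (ice_is_xdp_ena_vsi(vsi)) {
		dev_err(ice_pf_to_dev(vsi->back),
			"ETF offload and XDP are mutually exclusive\n");
		return -EOPNOTSUPP;
	}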

>>>
>>
>> Hi Maciej,
>>
>> Thanks for the feedback. The reason for using a separate array for
>> tstamp rings is a hardware limitation: the tstamp ring must always have
>> more descriptors than the corresponding Tx ring, so there isn’t a strict
>> 1:1 mapping. This is due to the hardware’s fetch profile and MDD
>> prevention requirements (mentioned in the commit message).
>>
>> Because of this, it’s not possible to simply add a `tstamp` pointer to
>> each `ice_tx_ring`: the relationship isn’t direct, and the tstamp ring
>> may be shared or sized differently.
> 
> ok - up to you generally.
> 

I'll do some prototyping and testing based on your initial suggestion.
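
To make the sizing constraint concrete, here is roughly the shape I
described above (names and fields are approximate, trimmed to what
matters):

/* tstamp rings live in their own array because the hardware requires a
 * tstamp ring to carry more descriptors than the Tx ring(s) it serves
 * (fetch profile / MDD prevention), so there is no strict 1:1 pairing
 * to hang a pointer off each ice_tx_ring.
 */
struct ice_tstamp_ring {
	u16 count;	/* exceeds the count of the Tx ring(s) served */
	/* descriptors, DMA mappings, ... */
};

struct ice_vsi {
	/* ... */
	struct ice_tx_ring **tx_rings;		/* existing Tx ring array */
	struct ice_tstamp_ring **tstamp_rings;	/* separate, sized independently */
	/* ... */
};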

>>
>> Regarding the interface, I agree that passing an additional array can be
>> confusing. If you have a suggestion for a cleaner way to handle this
>> (e.g., a new structure or abstraction), I’m open to it.
> 
> Keep the existing interfaces that work on the entire array and have a
> separate call for the tstamp ring array, as mentioned in my initial
> comment.
> 
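
For reference, here's how I read that suggestion (the tstamp helper name
is hypothetical): the existing interfaces keep operating on
vsi->tx_rings unchanged, and the tstamp array gets its own call:

	/* Existing path stays as-is, operating on the whole Tx ring array. */
	err = ice_vsi_alloc_rings(vsi);
	if (err)
		return err;

	/* Separate call just for the tstamp ring array (hypothetical). */
	err = ice_vsi_alloc_tstamp_rings(vsi);
	if (err)
		goto err_free_rings;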

Thanks,
Paul

