> -----Original Message-----
> From: Olech, Milena <[email protected]>
> Sent: Thursday, April 10, 2025 7:12 AM
> To: Keller, Jacob E <[email protected]>; [email protected]
> Cc: [email protected]; Nguyen, Anthony L <[email protected]>;
> Kitszel, Przemyslaw <[email protected]>; Lobakin, Aleksander
> <[email protected]>; Tantilov, Emil S <[email protected]>;
> Linga, Pavan Kumar <[email protected]>; Salin, Samuel
> <[email protected]>
> Subject: RE: [Intel-wired-lan] [PATCH v10 iwl-next 09/11] idpf: add Tx
> timestamp capabilities negotiation
>
> On 4/9/2025 8:09 PM, Jacob Keller wrote:
>
> >On 4/9/2025 7:04 AM, Olech, Milena wrote:
> >> On 4/8/2025 11:23 PM, Jacob Keller wrote:
> >>
> >>> On 4/8/2025 3:31 AM, Milena Olech wrote:
> >>>> +static void idpf_ptp_release_vport_tstamp(struct idpf_vport *vport)
> >>>> +{
> >>>> +        struct idpf_ptp_tx_tstamp *ptp_tx_tstamp, *tmp;
> >>>> +        struct list_head *head;
> >>>> +
> >>>> +        /* Remove list with free latches */
> >>>> +        spin_lock(&vport->tx_tstamp_caps->lock_free);
> >>>> +
> >>>> +        head = &vport->tx_tstamp_caps->latches_free;
> >>>> +        list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
> >>>> +                list_del(&ptp_tx_tstamp->list_member);
> >>>> +                kfree(ptp_tx_tstamp);
> >>>> +        }
> >>>> +
> >>>> +        spin_unlock(&vport->tx_tstamp_caps->lock_free);
> >>>> +
> >>>> +        /* Remove list with latches in use */
> >>>> +        spin_lock(&vport->tx_tstamp_caps->lock_in_use);
> >>>> +
> >>>> +        head = &vport->tx_tstamp_caps->latches_in_use;
> >>>> +        list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
> >>>> +                list_del(&ptp_tx_tstamp->list_member);
> >>>> +                kfree(ptp_tx_tstamp);
> >>>> +        }
> >>>> +
> >>>> +        spin_unlock(&vport->tx_tstamp_caps->lock_in_use);
> >>>> +
> >>>> +        kfree(vport->tx_tstamp_caps);
> >>>> +        vport->tx_tstamp_caps = NULL;
> >>>> +}
> >>> Could you provide a summary and overview of the locking scheme used
> >>> here?
> >>> I see you have multiple spin locks for both the free bits and the
> >>> in-use bits, and it's a bit hard to grasp the reasoning behind this. We
> >>> had a lot of issues getting locking for Tx timestamps correct in ice,
> >>> though most of that had to do with quirks in the hardware.
> >>>
> >>
> >> Of course :) The main idea is to have a list of free latches (indexes)
> >> and a list of latches that are in use - by "in use" I mean that the
> >> timestamp for that index has been requested and is being processed.
> >>
> >> At the beginning, the driver negotiates the list of latches with the CP
> >> and adds them to the free list. When a timestamp is requested, the driver
> >> takes the first item from the free list and moves it to the 'in-use'
> >> list. Similarly, when the timestamp is read, the driver moves the index
> >> from 'in use' back to 'free'.
> >>
> >
> >Ok. Is there a reason these need separate locks instead of just sharing
> >the same lock?
> >
>
> That's a very good question. In fact, in most places I need to move an item
> from the first list to the second, so I could use the same spinlock for
> both.
>
> The only place where a single lock is taken on its own is when sending the
> virtchnl message to get the Tx timestamp value, where I search for items
> on the 'in use' list.
>
> But that does not mean we cannot share a lock, because while 'in use'
> is being processed, it is not possible to request a new index (since we
> need the lock to move an entry from 'free' to 'in use').
>
> So to summarize, at the end of the day I don't see any specific reason
> for having two.
>
> Let me know your thoughts, but I guess it is safe to remove one lock.
>
> Milena
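[Editor's note: the single-lock variant of the free/in-use latch scheme described above can be sketched as a small userspace analogue. All names here are hypothetical, not the driver's: a pthread mutex stands in for the kernel spinlock, and a hand-rolled singly-linked list for the kernel's `list_head`. Every operation either moves an entry between the two lists or scans one of them, so one lock suffices.]

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical userspace analogue of the Tx timestamp latch lists.
 * One mutex protects both the free and the in-use list. */
struct tstamp_latch {
        int idx;                      /* latch index negotiated with the CP */
        struct tstamp_latch *next;
};

struct tstamp_caps {
        pthread_mutex_t lock;         /* single lock covering both lists */
        struct tstamp_latch *free_list;
        struct tstamp_latch *in_use;
};

/* Populate the free list with n latches (no error handling in this sketch). */
static void caps_init(struct tstamp_caps *caps, int n)
{
        pthread_mutex_init(&caps->lock, NULL);
        caps->free_list = NULL;
        caps->in_use = NULL;
        for (int i = n - 1; i >= 0; i--) {
                struct tstamp_latch *l = malloc(sizeof(*l));

                l->idx = i;
                l->next = caps->free_list;
                caps->free_list = l;
        }
}

/* Move the first free latch to the in-use list; return its index, or -1
 * if every latch is already tracking a pending timestamp. */
static int latch_request(struct tstamp_caps *caps)
{
        struct tstamp_latch *l;
        int idx = -1;

        pthread_mutex_lock(&caps->lock);
        l = caps->free_list;
        if (l) {
                caps->free_list = l->next;
                l->next = caps->in_use;
                caps->in_use = l;
                idx = l->idx;
        }
        pthread_mutex_unlock(&caps->lock);
        return idx;
}

/* Once the timestamp for @idx has been read, return the latch to the
 * free list. Returns 0 on success, -1 if @idx was not in use. */
static int latch_complete(struct tstamp_caps *caps, int idx)
{
        struct tstamp_latch **pp, *l;
        int ret = -1;

        pthread_mutex_lock(&caps->lock);
        for (pp = &caps->in_use; (l = *pp); pp = &l->next) {
                if (l->idx == idx) {
                        *pp = l->next;
                        l->next = caps->free_list;
                        caps->free_list = l;
                        ret = 0;
                        break;
                }
        }
        pthread_mutex_unlock(&caps->lock);
        return ret;
}
```

Because both `latch_request()` and `latch_complete()` take the same lock, a lookup on the in-use list can never race with a free-to-in-use move, which is the property Milena relies on above.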
I would feel better about only one lock, as it reduces the number of locks we
need to think about and removes the risk of circular locking issues.

Thanks,
Jake

> >> Regards,
> >> Milena
> >>
> >>> Thanks,
> >>> Jake
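[Editor's note: with a single lock, the release path quoted at the top of the thread would also collapse into one critical section instead of two. A hedged userspace sketch, again with hypothetical names and a pthread mutex in place of the spinlock:]

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical sketch of draining both latch lists under one lock,
 * mirroring the shape of idpf_ptp_release_vport_tstamp() but with a
 * single locked section. */
struct latch {
        int idx;
        struct latch *next;
};

struct latch_caps {
        pthread_mutex_t lock;        /* single lock for both lists */
        struct latch *free_list;
        struct latch *in_use;
};

/* Free every entry of one list; returns how many were freed. */
static int free_chain(struct latch **head)
{
        int n = 0;

        while (*head) {
                struct latch *l = *head;

                *head = l->next;
                free(l);
                n++;
        }
        return n;
}

/* Drain and free both lists in one critical section, then tear down
 * the lock. Returns the number of latches released (for the demo). */
static int caps_release(struct latch_caps *caps)
{
        int n;

        pthread_mutex_lock(&caps->lock);
        n = free_chain(&caps->free_list);
        n += free_chain(&caps->in_use);
        pthread_mutex_unlock(&caps->lock);

        pthread_mutex_destroy(&caps->lock);
        return n;
}
```

One critical section here also removes any question of lock ordering between the two lists during teardown, which is the circular-locking concern Jacob raises.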
