On Sun 2021-02-14 00:06:11, Yiwei Zhang wrote:
> The existing kthread_mod_delayed_work api will queue a new work if
> failing to cancel the current work due to no longer being pending.
> However, there's a case that the same work can be enqueued from both
> an async request and a delayed work, and a racing could happen if the
> async request comes right after the timeout delayed work gets scheduled,
> because the clean up work may not be safe to run twice.
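
For illustration, the race described above might look like this in a
hypothetical caller. The fence_ctx structure and the fence_* functions are
invented for this sketch and are not part of the patch:

	#include <linux/err.h>
	#include <linux/jiffies.h>
	#include <linux/kthread.h>

	struct fence_ctx {
		struct kthread_worker *worker;
		struct kthread_delayed_work cleanup_dwork;
	};

	/* Releases resources; not safe to run twice for the same context. */
	static void fence_cleanup_fn(struct kthread_work *work)
	{
	}

	static int fence_ctx_init(struct fence_ctx *ctx)
	{
		ctx->worker = kthread_create_worker(0, "fence-cleanup");
		if (IS_ERR(ctx->worker))
			return PTR_ERR(ctx->worker);

		kthread_init_delayed_work(&ctx->cleanup_dwork, fence_cleanup_fn);
		return 0;
	}

	/* Timeout path: schedule the cleanup after a grace period. */
	static void fence_timeout(struct fence_ctx *ctx)
	{
		kthread_queue_delayed_work(ctx->worker, &ctx->cleanup_dwork,
					   msecs_to_jiffies(100));
	}

	/* Async path: the request completed, run the cleanup right away. */
	static void fence_signaled(struct fence_ctx *ctx)
	{
		/*
		 * If the timer has already fired and fence_cleanup_fn() is
		 * no longer pending (e.g. already running), the cancel inside
		 * kthread_mod_delayed_work() fails and the work is queued
		 * again, so the cleanup can end up running twice.
		 */
		kthread_mod_delayed_work(ctx->worker, &ctx->cleanup_dwork, 0);
	}
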
Please provide more details about the use case. Why is the work originally
scheduled with a delay? And why can/should it suddenly be processed
immediately?

> 
> Signed-off-by: Yiwei Zhang <[email protected]>
> ---
>  include/linux/kthread.h |  3 +++
>  kernel/kthread.c        | 48 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 51 insertions(+)
> 
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -1142,6 +1142,54 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
>  }
>  EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
>  
> +/**
> + * kthread_mod_pending_delayed_work - modify delay of a pending delayed work
> + * @worker: kthread worker to use
> + * @dwork: kthread delayed work to queue
> + * @delay: number of jiffies to wait before queuing
> + *
> + * If @dwork is still pending modify @dwork's timer so that it expires after
> + * @delay. If @dwork is still pending and @delay is zero, @work is guaranteed to
> + * be queued immediately.
> + *
> + * Return: %true if @dwork was pending and its timer was modified,
> + * %false otherwise.
> + *
> + * A special case is when the work is being canceled in parallel.
> + * It might be caused either by the real kthread_cancel_delayed_work_sync()
> + * or yet another kthread_mod_delayed_work() call. We let the other command
> + * win and return %false here. The caller is supposed to synchronize these
> + * operations a reasonable way.
> + *
> + * This function is safe to call from any context including IRQ handler.
> + * See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
> + * for details.
> + */
> +bool kthread_mod_pending_delayed_work(struct kthread_worker *worker,
> +				      struct kthread_delayed_work *dwork,
> +				      unsigned long delay)
> +{

The kthread_worker API tries to follow the workqueue API. It makes it easy
to use them and to switch between them.

The workqueue API does not provide this possibility. Instead, it has
flush_delayed_work(). It queues the work if it was pending and waits until
the work is processed.

So, we might do:

bool kthread_flush_delayed_work(struct kthread_delayed_work *dwork)

> +	struct kthread_work *work = &dwork->work;
> +	unsigned long flags;
> +	int ret = true;
> +
> +	raw_spin_lock_irqsave(&worker->lock, flags);
> +	if (!work->worker || work->canceling ||
> +	    !__kthread_cancel_work(work, true, &flags)) {
> +		ret = false;
> +		goto out;
> +	}

Please use separate checks with comments, as it is done, for example,
in kthread_mod_delayed_work():

	struct kthread_work *work = &dwork->work;
	struct kthread_worker *worker = work->worker;
	unsigned long flags;
	int ret;

	/* Do not bother with canceling when never queued. */
	if (!worker)
		return false;

	raw_spin_lock_irqsave(&worker->lock, flags);

	/* Do not fight with another command that is canceling this work. */
	if (work->canceling)
		goto nope;

	/* Nope when the work was not pending. */
	ret = __kthread_cancel_work(work, true, &flags);
	if (!ret)
		goto nope;

	/* Queue the work immediately. */
	kthread_insert_work(worker, work, &worker->work_list);
	raw_spin_unlock_irqrestore(&worker->lock, flags);

	kthread_flush_work(work);
	return true;

nope:
	raw_spin_unlock_irqrestore(&worker->lock, flags);
	return false;

Will this work for you?

Best Regards,
Petr
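
For illustration, a caller of the kthread_flush_delayed_work() sketched above
could replace kthread_mod_delayed_work() in the async path roughly like this.
The helper exists only as the proposal in this mail, and fence_ctx is the same
invented example as above:

	/* Async path: run the pending cleanup immediately, or do nothing. */
	static void fence_signaled(struct fence_ctx *ctx)
	{
		if (!kthread_flush_delayed_work(&ctx->cleanup_dwork)) {
			/*
			 * Nothing was pending: the cleanup was never queued,
			 * has already started running (or finished), or is
			 * being canceled by another context. Nothing gets
			 * queued a second time.
			 */
		}
	}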

