On Tue, Apr 16, 2024 at 04:08:30PM +0200, Benjamin Tissoires wrote:
> again, copy/paste from bpf_timer_start().
> 
> Signed-off-by: Benjamin Tissoires <bent...@kernel.org>
> ---
>  kernel/bpf/helpers.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index e5c8adc44619..ed5309a37eda 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2728,6 +2728,29 @@ __bpf_kfunc int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags)
>       return __bpf_async_init(async, map, flags, BPF_ASYNC_TYPE_WQ);
>  }
>  
> +__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
> +{
> +     struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
> +     struct bpf_work *w;
> +     int ret = 0;
> +
> +     if (in_nmi())
> +             return -EOPNOTSUPP;
> +     if (flags)
> +             return -EINVAL;
> +     __bpf_spin_lock_irqsave(&async->lock);
> +     w = async->work;
> +     if (!w || !w->cb.prog) {
> +             ret = -EINVAL;
> +             goto out;
> +     }
> +
> +     schedule_work(&w->work);
> +out:
> +     __bpf_spin_unlock_irqrestore(&async->lock);
> +     return ret;
> +}

Looks like you're not adding a wq_cancel kfunc in this patch set, and
that's probably a good thing: exposing async cancel to bpf users is a
foot gun.
Even when we eventually add a wq_cancel_sync kfunc, it will not be
removing the callback.
So we can drop the spinlock here; READ_ONCE of w and cb would be
enough, since they cannot go back to NULL once initialized and the cb
is set.
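
To illustrate the direction, here is a minimal, untested sketch of the
lockless variant being suggested, assuming the bpf_async_kern/bpf_work
layout from the patch above:

__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
{
	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
	struct bpf_work *w;

	if (in_nmi())
		return -EOPNOTSUPP;
	if (flags)
		return -EINVAL;
	/* No lock needed: async->work and w->cb.prog each transition
	 * from NULL to non-NULL exactly once and never back, so a
	 * racing reader sees either "not ready" (-EINVAL) or a fully
	 * initialized work item.
	 */
	w = READ_ONCE(async->work);
	if (!w || !READ_ONCE(w->cb.prog))
		return -EINVAL;

	schedule_work(&w->work);
	return 0;
}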
