On 2018-04-11 16:42:21 [+0200], To Tejun Heo wrote:
> > > So is this perhaps related to the cpu hotplug that [1] mentions? e.g. is
> > > the cpu being hotplugged cpu 1, the worker started too early before
> > > stuff can be scheduled on the CPU, so it has to run on different than
> > > designated CPU?
> > > 
> > > [1] https://marc.info/?l=linux-mm&m=152088260625433&w=2
> > 
> > The report says that it happens when hotplug is attempted.  Per-cpu
> > doesn't pin the cpu alive, so if the cpu goes down while a work item
> > is in flight or a work item is queued while a cpu is offline it'll end
> > up executing on some other cpu.  So, if a piece of code doesn't want
> > that happening, it gotta interlock itself - ie. start queueing when
> > the cpu comes online and flush and prevent further queueing when its
> > cpu goes down.
> 
> I missed that cpu-hotplug part while reading it. So in that case, let me
> add a CPU-hotplug notifier which cancels that work. After all it is not
> needed once the CPU is gone.

This already happens:
- vmstat_shepherd() does get_online_cpus() and within this block it does
  queue_delayed_work_on(). So it has to wait until the CPU hotplug
  operation completes before it can queue anything, and it never queues
  work on an offline CPU.

- The work item itself (vmstat_update()) schedules itself
  (conditionally) again.

- vmstat_cpu_down_prep() is the CPU-down callback and does
  cancel_delayed_work_sync(). So it waits for a running work item to
  complete and cancels any still-pending one.

This looks all good to me.

> > Thanks.

Sebastian
