Hi folks,

I'm trying to fix this bug:
https://bugs.freedesktop.org/show_bug.cgi?id=88401

The initial problem (as reported) is this: performing a reload (possibly
an implicit one) re-starts alsa-restore.service if it is enabled.

With a bit of debugging I seem to have found the root cause; an
explanation follows.

Suppose we have CUPS installed and org.cups.cupsd.{path,service} are
started.

We enter manager_reload(): units are serialized, reset, re-read,
deserialized and then cold-plugged. (Note that e.g. unit_notify() has
special "protection" to avoid spawning jobs while a reload is in
progress.)

So, if org.cups.cupsd.path is started, *it is among the first units to
be cold-plugged*. The call chain is:

unit_coldplug()
path_coldplug()
path_enter_waiting() // recheck == true
path_check_good() // returns true
path_enter_running()
manager_add_job() // at this point the damage is done

So, a job is enqueued for org.cups.cupsd.service. This is already wrong,
because a reload should never enqueue any jobs (IIUC). Worse, remember
that almost all units are still inactive at this point, so we end up
re-starting half of the system (the whole of basic.target) as
dependencies of that job.
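
To make this concrete, here is a toy, self-contained model of that
chain. It is not the actual systemd source: the types and names are
simplified stand-ins, and only the control flow mirrors what I
described above.

#include <stdbool.h>
#include <stdio.h>

typedef struct Manager {
        int n_reloading;                /* > 0 while a daemon reload is in progress */
} Manager;

typedef struct Path {
        Manager *manager;
        const char *triggered_unit;     /* e.g. org.cups.cupsd.service */
        bool condition_holds;           /* stand-in for path_check_good() */
} Path;

static void manager_add_job(Manager *m, const char *unit) {
        /* In the real manager this queues a start job plus all of its
         * dependencies, which is exactly what re-starts half of the
         * system during a reload. */
        printf("start job enqueued for %s (n_reloading=%i)\n", unit, m->n_reloading);
}

static void path_enter_running(Path *p) {
        manager_add_job(p->manager, p->triggered_unit);
}

static void path_enter_waiting(Path *p, bool recheck) {
        if (recheck && p->condition_holds) {
                path_enter_running(p);
                return;
        }
        /* ...otherwise keep watching and waiting... */
}

static void path_coldplug(Path *p) {
        /* The deserialized state says the unit was started, so we
         * re-enter the waiting state with recheck == true. */
        path_enter_waiting(p, true);
}

int main(void) {
        Manager m = { .n_reloading = 1 };       /* a reload is in progress */
        Path cups = {
                .manager = &m,
                .triggered_unit = "org.cups.cupsd.service",
                .condition_holds = true,        /* the watched path already exists */
        };

        /* Cold-plugging enqueues a job even though we are reloading. */
        path_coldplug(&cups);
        return 0;
}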

Questions:
- Is my analysis correct?
- If yes, how should this be fixed? Perhaps by adding a similar
  "if (UNIT(p)->manager->n_reloading <= 0)" check to
  path_enter_running(), so that manager_add_job() is not called while a
  reload is in progress? (See the sketch after this list.)
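
For illustration, reusing the toy types from the sketch above (again,
just the idea, not a real patch), the check would sit roughly here:

static void path_enter_running(Path *p) {
        /* Proposed guard: while a reload is in progress, only restore
         * state; never enqueue a start job for the triggered unit. */
        if (p->manager->n_reloading > 0)
                return;

        manager_add_job(p->manager, p->triggered_unit);
}

In the real code the condition would presumably be spelled the other way
around, as quoted above, i.e. only call manager_add_job() when
UNIT(p)->manager->n_reloading <= 0.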

Thanks,
-- 
Ivan Shapovalov / intelfx /
