On Thu, Oct 23, 2014 at 5:15 AM, Lennart Poettering <lenn...@poettering.net> wrote:
> With your patch you generate a system-wide cache for that, but when do
> you flush it precisely? What's the logic there?
It updates on daemon-reload or daemon-reexec, consistent with how we load
modified unit files. "systemctl enable/disable <unit>" already performs a
daemon-reload after manipulating symlinks, so the impact on users should be
minimal. Even admins who manipulate the symlinks directly will usually get
the intended effect, because they are choosing which services start by
default on the next boot, and we freshly populate the cache on systemd
startup. They would see stale data in "systemctl status", though.

> Do we really need to keep the cache around? I mean, I see that it
> can help if people invoke multiple bus calls one after the other, but
> then again, we actually allow passing an array in anyway, so if people
> want to take benefit of the optimization they can just issue one call
> with multiple args instead of many calls with one each?

Aside from the small amount of additional memory required (which we can
easily reduce to be proportional to the number of enabled units, if we're
not doing that already), I don't see much downside to keeping the cache
around. It's used every time someone runs "systemctl status <unit>", even
when the calls are not one after the other, and running plain "systemctl"
hits the cache once for every unit. It's also an optimization for the
critical path of getting PID 1 back to its event loop, which is key to
minimizing socket activation latency.

That said, if it's a blocker, I can work with Ken to make the cache more
ephemeral.

_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel