Hi Arve,

On 2022/08/30 12:27, Arve Barsnes wrote:

> On Tue, 30 Aug 2022 at 11:52, Jaco Kroon <j...@uls.co.za> wrote:
>> I note that the default provider for the virtual is systemd-utils[udev],
>> followed by sys-fs/udev, sys-fs/eudev and finally sys-apps/systemd.
>> This contradicts the wiki page which states that sys-fs/udev is the
>> default (https://wiki.gentoo.org/wiki/Udev).
>> Is there more comprehensive documentation around about this, and which
>> should be preferred when?
> sys-fs/udev is also just a virtual now, which pulls in
> systemd-utils[udev], so using either works exactly the same.

Thanks, I missed that.  That does help to clear things up.

This leaves three implementations: systemd-utils[udev], eudev and systemd.

We don't use systemd, so that eliminates that option.  Can I safely
assume that systemd-utils[udev] is "extracted" from systemd and really
the same thing?  Ie, it's the udevd without the associated systemd
environment?  So really only two implementations.

eudev then is the wild horse in that it was forked back in the days when
sys-fs/udev got incorporated into systemd, and has been following a
parallel but somewhat independent path?  It doesn't contain many of the
newer systemd-udev "features" and "lags behind"?

Assuming then that my understanding is now improved (which I believe it
is), the selection should be made as follows:

1.  If you're using systemd, use the embedded udevd.
2.  If you're using openrc, you should prefer sys-fs/udev aka
systemd-utils[udev], unless you have a specific reason to use eudev instead?
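(As an aside: if one wants to force a specific provider rather than rely
on the virtual's default ordering, my understanding is that building
systemd-utils without its udev component should make virtual/udev fall
through to eudev.  A sketch only; the file name is arbitrary and I have
not verified that nothing else on the system requires systemd-utils[udev]:)

```shell
# /etc/portage/package.use/udev -- hypothetical: build systemd-utils
# without the udev component, so that virtual/udev is satisfied by
# sys-fs/eudev instead of systemd-utils[udev]
sys-apps/systemd-utils -udev
```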

eudev has served me very well for a very long time, and avoided a fair
number of LVM-related deadlock issues we experienced with sys-fs/udev at
the time.  We've been moving back, and I'm not convinced those have been
eliminated, but I could not yet prove that any of the recent system
deadlocks we've seen relate to systemd-udev.

(The one deadlock we did manage to trap was without a doubt something in
the linux kernel IO scheduler, possibly related to raid6; however, lvm
is also involved in that path, which does involve udev.  Also, the
system that most frequently runs into problems is the only one with
software raid6, but it also by far makes the most aggressive use of lvm
snapshots.  Thus no definitive patterns.)

Being a creature of habit, and based on experience, I am sceptical.  I am
contemplating throwing that one host back to eudev and seeing if that
"solves" the problem ... but how long is a piece of string.
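If I do go that route, the switch itself should be simple enough
(a sketch, assuming nothing else on that box hard-requires
systemd-utils[udev]; --ask throughout so nothing happens blindly):

```shell
# Install the alternative provider; virtual/udev should then be
# satisfied by eudev rather than the systemd-derived udev.
emerge --ask --oneshot sys-fs/eudev

# Let portage drop whatever is no longer needed afterwards.
emerge --ask --depclean

# A reboot is probably the safest way to make sure the running
# udevd is actually eudev's.
```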

Thanks for the information, I'll let the above simmer a good long while.

Kind Regards,
