On Tue, Apr 07, 2015 at 05:22:55PM -0400, Jude Nelson wrote:
> > > "report every kind of device, since it listens to the kernel's
> > > driver core (i.e. libudev learns about network interfaces, buses,
> > > power supplies, etc.--stuff for which there are no device files"
> >
> > Currently, it doesn't *report* devices; that takes something longer
> > term, like inotify, polling a netlink socket, or listening to a daemon.
> >
> > It also has no clue about events or hardware that could not have a
> > corresponding device, since it uses block/char and major:minor to find
> > the hardware.
> >
> > I have a general idea of how to get information like this, by recursing
> > through /sys or /dev, and I know of some code I could use as a starting
> > point, but I don't know what the ideal format is.
> > If someone points me at a program they'd like to use without libudev
> > (preferably C with minimal dependencies) that doesn't cover a lot of
> > ground (i.e. it's clear what functionality udev provides, and I
> > wouldn't need to duplicate much of libudev to get it working), that
> > would be a good starting point for expanding libsysdev.
>
> You might find something useful in the vdev_linux_sysfs_register_devices()
> and vdev_linux_sysfs_find_devices() functions in vdevd/os/linux.c. They're
> both involved in generating the initial coldplug device listing. They only
> need libc to work, and libvdev/sglib.h for basic data structures.
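As a rough sketch of the kind of /sys walk being discussed — finding the
hardware a /dev-only scan misses — something along these lines works:
entries under /sys/class that carry a "dev" attribute have a MAJOR:MINOR
pair (so a device node can exist), while entries without one (network
interfaces, buses, power supplies) do not. The scan_class_entries name and
the sysroot parameter are mine, purely so the sketch can be exercised
against a fake tree; on a real system sysroot would be /sys.

```shell
#!/bin/sh
# Sketch only: list sysfs class entries, distinguishing those that can
# have a /dev node (a "dev" file holding MAJOR:MINOR) from those that
# cannot.  <sysroot> is a stand-in for /sys, parameterized here just so
# the sketch is testable against a fabricated tree.
scan_class_entries() {
    sysroot=$1
    for entry in "$sysroot"/class/*/*; do
        [ -d "$entry" ] || continue
        if [ -f "$entry/dev" ]; then
            # has major:minor, so a device node can exist
            echo "node $(basename "$entry") $(cat "$entry/dev")"
        else
            # no major:minor -- invisible to a /dev-only scan
            echo "nonode $(basename "$entry")"
        fi
    done
}
```

The same walk is roughly what a coldplug pass has to do anyway; the only
difference is whether the "nonode" entries are reported or skipped.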
I know how to get the devices that show up in /dev; I'm not sure about
getting the sysfs entries that *don't* show up there. I'm also not sure how
anything beyond this is used.

> > > To avoid the troublesome corner case where a libudev client crashes
> > > and potentially leaves behind a directory in /dev/uevents/, I would
> > > recommend mounting runfs [1] on /dev/uevents. Runfs is a special FUSE
> > > filesystem I wrote a while back that ensures that the files created
> > > in it by a particular process get automatically unlinked when that
> > > process dies (it was originally meant for holding PID files).
> >
> > Hmm...
> > Do we need to have a subdirectory of the mountpoint?
> > Could you just use ACLs if you need to make a limited subset available?
> > I get the impression that we can do this for mdev via a script along
> > these lines:
> >
> > FILENAME=`env | sha512sum | cut -d' ' -f1`
> > for f in /dev/uevents/*
> > do env >"$f"/$FILENAME
> > done
> >
> > but it would be *nicer* if we only needed to write one file.
>
> I agree that one file per event is ideal (or even a circular logfile of
> events, if we could guarantee only one writer). However, I'm not sure yet
> what a fine-grained ACL for device events would look like. My motivation
> for per-client directories is that unprivileged clients can be made to
> see only their own events and no one else's by default (i.e. by chmod'ing
> each directory to 0700), and that they make it easy to reason about
> sending post-processed events only to the clients you want--just change
> the list of directories to iterate over in that for-loop :)

Which is not trivial in shell, unless you have a special command to do the
work of figuring out which directories get what.

...which seems to make doing this in shell pointless, since the
corresponding C is nearly as trivial.
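For concreteness, the per-client layout Jude describes can be sketched as a
small refinement of the mdev script above: each client gets its own 0700
directory, and the helper writes one file per event into every registered
directory. The function names and the EVROOT variable are hypothetical
(EVROOT stands in for /dev/uevents, parameterized only so the sketch can be
run in a sandbox).

```shell
#!/bin/sh
# Sketch of the per-client event fan-out discussed above.
# EVROOT is a stand-in for /dev/uevents.
EVROOT=${EVROOT:-/dev/uevents}

# A client registers by creating its own directory; chmod 0700 means it
# sees only its own events by default.
register_client() {
    mkdir -p "$EVROOT/$1" && chmod 0700 "$EVROOT/$1"
}

# The helper writes one file per event into every registered client's
# directory.  The filename is a digest of the event environment, as in
# the mdev script above, so identical deliveries do not collide.
deliver_event() {
    FILENAME=$(env | sha512sum | cut -d' ' -f1)
    for d in "$EVROOT"/*/; do
        [ -d "$d" ] || continue
        env > "$d$FILENAME"
    done
}
```

Restricting which clients receive a post-processed event then amounts to
iterating over a subset of the directories — trivial in C, and, as noted,
awkward to express in shell without a helper command.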
> > Also, wouldn't mounting that with runfs result in records of uevents
> > getting erased if they're written by a helper rather than a daemon?
>
> Yes; good catch. There are a couple of straightforward ways to address
> this: (1) have a separate, unprivileged device-event-log daemon curate
> /dev/uevents/ and have the helper scripts forward device events to it,
> or (2) fork and/or patch runfs to allow files to persist if they're
> generated by a certain whitelist of programs (i.e. all programs in a
> particular set of directories, like /lib/vdev/), but disappear otherwise
> once the creating process dies.

What about (3) having an option for runfs that lets it erase directories
(with their subentries) on process termination, but lets regular files
persist until then?

> Thanks for your feedback!
> -Jude

You're welcome.

Thanks,
Isaac
_______________________________________________
Dng mailing list
[email protected]
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng
