On Tue, Jul 28, 2015 at 12:07 AM, Ido Perlmuter <[email protected]> wrote:
> Hello there.
>
> Currently, I've implemented the "fg" command in a pretty dumb way: I read
> the service's rc.log file for perp or log/run file for s6, and take the
> last argument to tinylog or s6-log to figure out where log files are
> stored. Of course, this is a very bad way to do this, since I'm assuming
> the services are using tinylog/s6-log, and I'm using a regular expression
> that could easily miss.

You're not going to be able to get a separate view of stdout and stderr, since you've redirected both to the same place. As long as that isn't an issue, it should be pretty easy to solve this the right way instead of using strace, funky interstitial logging scripts, parsing the log script, etc.

Assuming you're on Linux and the output file descriptor is stable, "the right way" is to use procfs to read directly against the file descriptor the logger is writing to. I don't know which file descriptors tinylog uses, but s6-log uses fd 4 for its file.
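The procfs trick can be demonstrated without any supervision suite installed at all. A minimal sketch, assuming Linux with procfs mounted at /proc (the `fd_target` helper name is mine, not anything standard):

```shell
#!/bin/sh
# fd_target PID FD -- print the file (or pipe/socket/tty) that a given
# file descriptor of PID points at, by resolving the procfs symlink.
# Assumes Linux with procfs mounted at /proc.
fd_target() {
    readlink "/proc/$1/fd/$2"
}

# Demo against this very shell: fd 2 is our stderr.
fd_target $$ 2
```

The same one-liner works for any process whose /proc/$pid/fd you're allowed to read, which is the permissions caveat discussed below.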
The below is for s6-log; something similar is doable with perp:

Use s6-svstat to find the logger, parse its output for the process id, then readlink /proc/$loggerpid/fd/4 to get the logger's output location (or, if you're feeling lazy, just tail the fd directly).

Caveats: you'll need account access to read $svcdir/$svc/log/supervise as well as /proc/$pid/fd, and the current fd 4 will stop being useful when a file rotation happens. The first permission issue can be solved by doctoring the permissions on the supervise directory beforehand (711 is safe, and supervise/status is mode 444 already); the second permission issue cannot be solved that way, since procfs resists chmod attempts. With correct sudoers access to a wrapper program, much of the privileged access can be done safely by non-privileged folks, though the standard "limited" sudo caveats apply (don't allow untrusted people to update the script in source control, etc.).

Either way, this method is cleaner than parsing the log/run script and is less prone to misinformation (if the run script changes to point elsewhere but s6-log hasn't restarted, parsing the script will give you the wrong answer).

> I'm looking for some advice how reading the service's stdout/stderr streams
> could be done in a more fool proof, general way. The only way I know to tap
> into a process' output streams is via strace, but that means the user will
> have to install it, and run it as root, so that's not good.

strace can be run as non-root; strace's limitations are the same as any other program's: you need access to the calling user's account. It's not quite as limited as you're thinking, but it's still a bad solution, since any time you're calling ptrace on a program you're adjusting its internal state a little bit, and some badly written programs don't respond well to that. Having strace installed everywhere isn't a bad idea for other reasons, but using it for log reading is.
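The s6-log flow above might be scripted roughly like this. It's a sketch under a few assumptions: an s6 install, s6-svstat output shaped like `up (pid 1234) 56 seconds`, and the `$svcdir`/`$svc` variables from this thread; only the pid-parsing helper is exercised standalone here, the rest is commented out since it needs a live s6 host.

```shell
#!/bin/sh
# Sketch: find where an s6 service's logger is writing, via s6-svstat
# and procfs. Helper and variable names are illustrative, not standard.

# Pull the pid out of s6-svstat output shaped like "up (pid 1234) 56 seconds".
# Prints nothing if the service is down (no "(pid N)" in the output).
parse_svstat_pid() {
    printf '%s\n' "$1" | sed -n 's/.*(pid \([0-9][0-9]*\)).*/\1/p'
}

# On a real s6 host (needs read access to $svcdir/$svc/log/supervise):
#   loggerpid=$(parse_svstat_pid "$(s6-svstat "$svcdir/$svc/log")")
#   readlink "/proc/$loggerpid/fd/4"   # where the current log file lives
#   tail -f "/proc/$loggerpid/fd/4"    # or just tail the fd directly

parse_svstat_pid "up (pid 1234) 56 seconds"
```

Remember the rotation caveat: once s6-log rotates, the fd-4 symlink points at the new current file, so a long-running tail should be restarted (or re-resolved) after rotation.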
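If you don't know up front which fd the logger uses (s6-log uses fd 4, svlogd uses fd 6, and tinylog is anyone's guess), one hedged workaround is to scan the process's whole fd table and keep only the entries that resolve to regular files. A Linux-only sketch; `find_log_fds` is a made-up name for this example:

```shell
#!/bin/sh
# Sketch: scan a process's fd table in procfs and print the fds that
# point at regular files -- likely candidates for a logger's output.
# Linux-only; requires read access to /proc/$pid/fd.
find_log_fds() {
    for fd in /proc/"$1"/fd/*; do
        target=$(readlink "$fd") || continue
        case $target in
            # Keep absolute paths that are regular files; this skips
            # "pipe:[...]", "socket:[...]", ttys, and deleted files.
            /*) [ -f "$target" ] && printf '%s -> %s\n' "${fd##*/}" "$target" ;;
        esac
    done
}

# Demo: open a file on fd 9 to stand in for a logger's output file,
# then scan our own fd table.
exec 9>/tmp/fake-logger-output.log
find_log_fds $$
```

This is a heuristic, not a guarantee: a logger holding several files open (or none, as with a network-only logger) defeats it, which is exactly the fragility argument below.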
More generally speaking, unless your goal is to write a full middleware translator between supervisorctl and process supervisors, some portions (like the logging part) are going to be really fragile. s6-log uses fd 4 for current and svlogd uses fd 6, but there's nothing stopping a developer from using logger(1) (or something more esoteric [0]) in their log/run script, at which point all that "find the true destination of the log stream" machinery is totally moot.

Also, fully expect people to do stupid things: I have both runit and s6 active on my workstation, and until recently had a few services supervised under s6 but logging via either svlogd (runit) or multilog (daemontools). Not that any of those are bad, but you can't rely on people to use the "right" logger with a supervision system unless you go the route of supervisord and don't offer that choice in the first place.

I'd say write the management compatibility layer (which it sounds like you already have), and then spend the energy you'd otherwise spend fighting unix pipes on teaching people how to make the most out of perp.

Cheers!
-Colin

[0] Most of the services at work don't log anything locally and instead send data down a zmq socket to N log brokers that forward along to their subscribers. It's pretty heavy-weight, really slick, and mostly functional.

--
"If the doors of perception were cleansed every thing would appear to man as
it is, infinite. For man has closed himself up, till he sees all things thru'
narrow chinks of his cavern." -- William Blake
