Hey Folks,

I've been playing around with SMF recently in my spare time. In
particular, writing a simple service that will take ZFS snapshots
according to a schedule, and do it in a (hopefully) cleanly architected
way[1].

There are some blog posts about what I've been doing at:

http://blogs.sun.com/roller/page/timf?entry=zfs_automatic_snapshots_prototype_1
http://blogs.sun.com/roller/page/timf?entry=zfs_automatic_snapshots_smf_service

and some thoughts on zfs-discuss:

http://www.opensolaris.org/jive/thread.jspa?threadID=8643&tstart=60#37156


To summarise, I'm using a transient service that builds a cron job to
take automatic snapshots of a given ZFS filesystem (and optionally, its
children), storing the various options for each instance in the SMF
repository. I figured that this allows admins to easily manage their
snapshot schedules, without having to mess about with cron jobs (yuck).
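To give a feel for the shape of it, the per-instance options end up as
plain SMF properties (NB: the FMRI and property names below are made up
for illustration here, not necessarily what the prototype actually uses):

```shell
# Hypothetical example: store the snapshot options for one instance
# in the SMF repository, one instance per schedule.
svccfg -s svc:/system/filesystem/zfs-auto-snapshot:frequent \
    setprop zfs/fs-name = astring: "tank/home"
svccfg -s svc:/system/filesystem/zfs-auto-snapshot:frequent \
    setprop zfs/period = astring: "minutes"
svccfg -s svc:/system/filesystem/zfs-auto-snapshot:frequent \
    setprop zfs/interval = count: 15

# On 'svcadm enable', the transient start method reads those
# properties back (via svcprop) and installs a matching cron job.
```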


Here's my question: given that the cron job itself is responsible for
carrying out the tasks that the service offers, it's disconnected from
the usual service logging that the smf-method enjoys. Is there any way
to have the cron job log its output to the service instance log file?
(apart from the obvious re-direction of stdout/stderr)
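By "the obvious re-direction" I mean baking something like this into
every crontab entry the service generates (method and log paths invented
for illustration) — which works, but hard-codes the log path into each
entry:

```shell
0,15,30,45 * * * * /lib/svc/method/zfs-auto-snapshot tank/home >> /var/svc/log/system-filesystem-zfs-auto-snapshot:frequent.log 2>&1
```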


I'd love it if there was some sort of helper-command (or smf_include.sh
function) that I could use in a shell script, along the lines of:

svclog <FMRI> start
.
.
.
svclog <FMRI> stop

which would allow me to save all stdout and stderr produced between
those commands into the correct SMF log for that instance, without
having to do all that annoying shell-redirection stuff ...
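Until such a helper exists, a poor-man's version as shell functions is
roughly what I have in mind — just a sketch, and the default log path is
my guess at the naming convention (a real helper would derive it from
the FMRI via svcprop):

```shell
#!/bin/sh
# Hypothetical svclog-style helper: everything written to stdout/stderr
# between svclog_start and svclog_stop lands in the instance log.
# The default path below is an assumption, not a documented interface.
LOGFILE=${LOGFILE:-/var/svc/log/system-filesystem-zfs-auto-snapshot:frequent.log}

svclog_start() {
    # save the original stdout/stderr on fds 3 and 4,
    # then send both streams to the instance log
    exec 3>&1 4>&2
    exec >>"$LOGFILE" 2>&1
}

svclog_stop() {
    # restore the saved descriptors and close the spares
    exec 1>&3 2>&4
    exec 3>&- 4>&-
}
```

The cron-driven script could then bracket its work with
`svclog_start` / `svclog_stop` instead of redirecting every command
individually.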


Would this be a valid RFE, or am I missing something obvious that's
already provided? I'd welcome any help (or education via baseball
bats :-) that you guys could offer!

        cheers,
                        tim


[1] though that's open to debate!
-- 
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations            http://blogs.sun.com/timf

