> Darren J Moffat wrote:
> > Dan Mick wrote:
> >> It seems to me that the problem is there are two states and three cases:
> >>
> >> 1) administrative disablement because of administrator preference
> >> 2) "maintenance" because of a correctable error (s/w or h/w config,
> >>    dependency problems)
> >> 3) "can't run" because a required piece of h/w or some other condition
> >>    is not present (not the right platform, absence of a device, absence
> >>    of a particular bus technology, etc.)
> >>
> >> 1 maps to 'disabled', 2 maps to 'maintenance', but 3 really is a
> >> different case. There's no purely administrative action that can
> >> resolve it, as there is for most 'maintenance' states, but it's not an
> >> arbitrarily rescindable policy decision either.
> >
> > Agreed. There is a distinction, which doesn't appear to be clearly
> > expressed, between "the human admin disabled this" and "the system chose
> > to disable this". Both result in the service not being run, but they are
> > different.
> >
> > What's more, from a security audit trail point of view they are very
> > different, and it would be nice if we could express that difference.
>
> I'll disagree. If the state is "can't run" because of a missing dependency,
> then it is just like any other service which has a dependency. Why is
> hardware different? More to the point, hardware doesn't have code, so you
> are really dependent on the software associated with the hardware, no?
>  -- richard
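[For context, cases (1) and (2) already surface as distinct, admin-visible SMF states. A minimal sketch of the admin-side view, using a hypothetical service FMRI (svc:/site/example:default):

    # Case 1: explicit administrative disablement; the instance moves to
    # the 'disabled' state and stays there until re-enabled.
    svcadm disable svc:/site/example:default
    svcadm enable svc:/site/example:default

    # Case 2: the restarter places a failing instance in 'maintenance';
    # the admin diagnoses the problem, then clears the state so the
    # restarter will try again.
    svcs -x svc:/site/example:default
    svcadm clear svc:/site/example:default
]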
I agree with Dan's list, and I agree with Richard about (3) above: the proper implementation of (3) is offline because of a missing dependency, whether that dependency is h/w or s/w. And of course, since our dependencies are expressed as FMRIs, they can be any kind of thing.

I'd prefer to see this solved with device dependencies, but ipmievd will presumably need to putback sooner than those will exist, so the only existing mechanism that makes sense is the temporary self-disable (disable -t), which other services are also doing. This isn't ideal, but it will do until we have richer dependencies available.

-Mike

--
Mike Shapiro, Solaris Kernel Development. blogs.sun.com/mws/
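[For readers unfamiliar with the idiom Mike mentions, a temporary self-disable from an SMF start method looks roughly like the following. This is a minimal sketch, not ipmievd's actual method: the device path, daemon path, and hardware check are hypothetical, while SMF_FMRI, SMF_EXIT_OK, and svcadm disable -t are the standard pieces.

    #!/sbin/sh
    # Hypothetical start method illustrating the temporary self-disable
    # idiom: if required hardware is absent, the service disables itself
    # for this boot only (-t) instead of failing into 'maintenance'.

    # Pulls in SMF_EXIT_OK and friends; SMF_FMRI is placed in the method's
    # environment by svc.startd.
    . /lib/svc/share/smf_include.sh

    # Hypothetical hardware probe; a real service would check for whatever
    # device it actually requires (e.g. a BMC node for ipmievd).
    if [ ! -c /dev/bmc ]; then
            echo "Required hardware not present; disabling temporarily."
            /usr/sbin/svcadm disable -t "$SMF_FMRI"
            exit $SMF_EXIT_OK
    fi

    # Hardware is present: start the (hypothetical) daemon normally.
    /usr/lib/example-daemon &
    exit $SMF_EXIT_OK

On the next boot the temporary disable is gone and the method re-probes, which is roughly the behavior Dan's case (3) wants, just without a first-class way to express it.]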