David Bustos wrote:
>Quoth Darren.Reed at sun.com on Thu, Oct 26, 2006 at 07:54:16PM -0700:
>
>>David Bustos wrote:
>>
>>>What would you think about something like this:
>>>
>>>  $ svcadm enable nfs/server
>>>  Error: svc:/network/nfs/server:default is locked.
>>>  Use share(1M) to administer this service.
>>>  $ echo $?
>>>  1
>>>  $
>>>
>> ...
>
>>My gut instinct on seeing "default is locked" is to search for a
>>"svcadm unlock" (or "share unlock" or similar) command...
>>the message, and the requirement to use "share(1M)" given that
>>message, don't seem intuitive.
>>
>
>Sure, "locked" might not be the most user-friendly terminology. I think
>the most accurate thing to say would be that the general/enabled
>property is private / not public, but that seems worse. In any case,
>I'd like to debate the terminology after the core concept.
>
>>Oh and the message as given could also be taken to mean that
>>you're not meant to use svcadm, at all, with nfs/server.
>>
>
>That's the point!
>
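To put that gut instinct in transcript form (the error message is the
proposed behaviour, mocked up here; the second command is the one an
admin will reflexively reach for, and it doesn't exist):

  $ svcadm enable nfs/server
  Error: svc:/network/nfs/server:default is locked.
  Use share(1M) to administer this service.
  $ svcadm unlock nfs/server    # no such subcommand; svcadm prints usage and fails
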
I'm not sure I buy that. After getting used to using svcadm to manage
everything (there is a nice amount of symmetry there), it doesn't seem
at all right to have some services (or eventually all of them?) de
facto managed by other tools. Regardless of whether or not there are
shares defined in the /etc/dfs/dfstab file, "svcadm disable nfs/server"
should still turn off NFS file serving - preferably without altering
the contents of that file. Being required to use unshare is not at all
admin friendly.

>>>I believe this is also a better way to solve the problem that
>>>
>>>  6245225 RFE: SMF_EXIT_DISABLE and SMF_EXIT_DISABLE_TEMPORARILY
>>>
>>>was filed for. That is, if the developer knows that a service shouldn't
>>>be enabled, then rather than having the enable operation succeed and the
>>>service magically disable itself, the user should be prevented from
>>>enabling it in the first place and redirected to the appropriate
>>>interface.
>>>
>>Why not just have another state for the service such as pending?
>>
>...
>
>I'm pretty sure altering the state machine is out of the question here.
>

Is it fixed for good, or could it be expanded in the future? Or, more
to the point, why not alter the state machine to better fit the model
for services? It feels like the design was modelled around services
being either "black" (disabled) or "white" (enabled), ignoring the
"grey" areas in between, so we're stuck trying to make everything
"black" or "white" rather than finding more colour and depth by
investigating the "grey" areas.

>>>Or does it only make sense to lock
>>>an entire service?
>>>
>>I think the problem here is not the value of the property, per se,
>>but rather the state of the service itself.
>>
>>Using the above, if I was to enable nfs/server, use share(1M) to
>>share a filesystem, edit /etc/dfs/dfstab to remove the line added
>>by share(1M) and then reboot, what state does nfs/server
>>return to? Or if I edit that file thus and restart the service?
>>
>>online, offline, maintenance mode or locked?
>>
>
>Well I don't recall all of the details of the NFS services, but for the
>purposes of this argument, today I believe nfs/server would disable
>itself after you enabled it. share(1M) would then enable it along with
>its companion services. After reboot, I believe the NFS services would
>be started (because share(1M) enabled them), but they would discover
>that there are no shares (because you modified dfstab), and would
>disable themselves.
>

Right - and this leads to some amount of confusion when you scramble
about trying to figure out why "enable" seemingly did nothing.

>With the locking feature, if you tried to enable nfs/server, svcadm
>would fail and say you should use share(1M). I think ideally the same
>would happen when you tried to edit dfstab, but I'm sure putting this
>feature into the filesystem is rife with landmines. Anyway, after
>reboot I would expect the NFS services to start but fall into
>maintenance, since them being enabled but no lines being in dfstab
>should be considered an inconsistent configuration, since share(1M)
>would never have done that.
>

If it gets into the maintenance state via that path, then it should
also go there when I do "svcadm enable nfs/server" without any active
lines prepared by share(1M); I don't see the two situations as being
any different. If I have any argument with respect to nfs/server, it's
that the NFS group are not heading in the right direction here.
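To make the "enable seemingly did nothing" confusion concrete, this is
roughly what an admin sees today (a sketch; the timestamp and exact
svcs output are illustrative):

  $ grep -v '^#' /etc/dfs/dfstab    # no active share lines
  $ svcadm enable nfs/server
  $ svcs nfs/server
  STATE          STIME    FMRI
  disabled       19:54:16 svc:/network/nfs/server:default

Nothing in that output points you at dfstab or share(1M).
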
The workflow I expect to have when enabling NFS shares is:

  - enable the service
  - verify the service is enabled/running
  - share some filesystems
  - verify the filesystems are exported correctly

With SMF it has become:

  - share some filesystems
  - verify the service is enabled/running
  - verify the filesystems are exported correctly

The subtle difference here is that the NFS server is not a single
daemon, but a combination of both exporting the filesystem and
performing the I/O operations. Maybe I'm just old-fashioned in wanting
to confirm that the daemons are up, running and configured properly
before starting to share filesystems.

As an example of what I did before SMF, to verify that it was ok for me
to share an NFS filesystem (and that everything was running), I'd run
"showmount -e" just to see it say "no filesystems exported", confirming
that rpc.mountd was running and talking with rpcbind (see the P.S.
below). Now I can't do that until after running share - I just have to
trust that everything will be ok... it feels strange to an old timer
like me :)

Darren
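P.S. For the record, the old verification dance went something like
this (a sketch from memory; the path is an example and the exact
showmount output varies):

  $ showmount -e localhost
  no filesystems exported            # mountd is up and answering
  $ share -F nfs /export/home
  $ showmount -e localhost
  export list for localhost:
  /export/home (everyone)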