On 11/17/06, Andy Rumer <[EMAIL PROTECTED]> wrote:
We have developers who need access to application log and configuration data, where the 
application is running in a zone.  The actual request is usually for a "Read-Only 
Unix Account".

We had, until recently, been able to push them off due to SOX compliance issues.  Then one
day we had a "brilliant" idea.  We could loopback mount the application 
filesystem to the global zone, then nfs share that filesystem to a centralized box where 
we could mount up the data ro,noexec...  And everything was great until someone decided 
the originating zone needed to be rebooted, not realizing the loopback mount and nfs 
share existed.  Then we were in a bit of a pickle.

Details:
  0> Solaris 10 03/05
  1> Mounted a ufs filesystem directly into a non-global zone, where the 
application runs and writes rather large log files
  2> loopback mount that filesystem back to the global zone read-only, noexec
  3> share the loopback mount via nfs
  4> mount the filesystem in a client zone on a different host where the
developers have access
  5> reboot the application zone without unmounting the shared loopback mount
  6> zone fails to boot because fsck returns error 32 for the filesystem
  7> df -k in the global zone gives an I/O error on the loopback mount
  8> manually fsck the filesystem from the global zone -- no problems found
  9> zone fails to boot because fsck returns error 39 for the filesystem
  10> all attempts to unmount the "stale" loopback mount fail
  11> due to time constraints the physical system is rebooted, causing an outage 
(performance degradation, really, because all the apps are clustered/load balanced) 
to the other 8 production zones on the box
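
Roughly, with paths, zone and host names made up for illustration, steps 2> 
through 4> amount to:

  # global zone: loopback-mount the zone's data read-only, noexec
  mount -F lofs -o ro,noexec /zones/appzone/root/app /export/applogs

  # global zone: share the loopback mount over NFS
  share -F nfs -o ro /export/applogs

  # client zone on the centralized box: mount it for the developers
  mount -F nfs -o ro,noexec prodhost:/export/applogs /devlogs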

Questions:
  1>  Is this a bug, just a really bad idea, or both?

It's a bad idea primarily because you created a dependency of the global
zone on the non-global zone.  I would imagine that you need a separate
service that lofs-mounts the filesystem in question and then shares it out.
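
A minimal sketch of what such a service's start method might run, reusing
the made-up paths from the example above (the SMF or rc wiring is omitted):

  #!/bin/sh
  # start method: re-establish the read-only loopback mount and the NFS
  # share once the application zone and its filesystem are up
  mount -F lofs -o ro,noexec /zones/appzone/root/app /export/applogs && \
      share -F nfs -o ro /export/applogs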

  2>  Any better resolutions than a reboot of the global?

I believe that in step 10> you had already unshared the filesystem and
tried 'umount -f' to no effect.  Perhaps we should file an RFE so that
'umount -f' can unmount a filesystem whose backing store is no longer
there.  I vaguely recall having some problems in that respect, but I do
not have any machines available to test with right now.
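
For reference, a minimal sketch of that sequence in the global zone, using
the same made-up mount point as above:

  # stop exporting the stale loopback mount
  unshare /export/applogs

  # then try to force the unmount; this is the step that reportedly
  # fails once the underlying filesystem went away with the zone reboot
  umount -f /export/applogs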

  3>  Is there a better way to provide read-only access to the 
log/configuration files (The dev team can have basically no write access 
whatsoever to the prod system)?

If you mount the filesystem in question in the global zone and lofs-mount
it into the production zone, you can share it out without creating the
dependency on the non-global zone.
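
A rough sketch of that layout, with made-up device, path and zone names;
the zonecfg change takes effect the next time the zone boots:

  # global zone: mount the UFS filesystem here, not inside the zone
  mount -F ufs /dev/dsk/c0t0d0s5 /export/appdata

  # loan it to the production zone as a lofs filesystem
  zonecfg -z appzone
    add fs
      set dir=/app/data
      set special=/export/appdata
      set type=lofs
    end
  commit
  exit

  # share the global-zone mount read-only to the developers' host
  share -F nfs -o ro=devhost /export/appdata

Since the UFS filesystem now belongs to the global zone, rebooting the
application zone only tears down the lofs view inside the zone; the NFS
share is unaffected.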


--
Just me,
Wire ...
_______________________________________________
zones-discuss mailing list
zones-discuss@opensolaris.org
