We have developers who need access to application log and configuration data
for an application that runs in a non-global zone. The actual request is
usually for a "Read-Only Unix Account".
Until recently we had been able to push them off, citing SOX compliance issues.
Then one day we had a "brilliant" idea: we could loopback mount the
application filesystem into the global zone, then NFS-share that filesystem to
a centralized box where we could mount the data ro,noexec... And everything
was great until someone decided the originating zone needed to be rebooted,
not realizing the loopback mount and NFS share existed. Then we were in a bit
of a bind.
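For reference, the setup looked roughly like this (zone names, devices, and
paths below are made up and the exact flags are from memory, so treat it as a
sketch rather than a transcript). The app slice is handed to the zone as a
ufs fs resource:

    apphost# zonecfg -z appzone
    zonecfg:appzone> add fs
    zonecfg:appzone:fs> set dir=/app
    zonecfg:appzone:fs> set special=/dev/dsk/c1t1d0s6
    zonecfg:appzone:fs> set raw=/dev/rdsk/c1t1d0s6
    zonecfg:appzone:fs> set type=ufs
    zonecfg:appzone:fs> end
    zonecfg:appzone> commit

From the global zone we then loopback mounted that filesystem read-only,
shared it, and mounted it on the centralized dev box:

    apphost# mount -F lofs -o ro,noexec /zones/appzone/root/app /export/applogs
    apphost# share -F nfs -o ro /export/applogs
    devbox# mount -F nfs -o ro apphost:/export/applogs /mnt/applogs

The full sequence of events: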
0> Solaris 10 03/05
1> Mounted a ufs filesystem directly into a non-global zone, where the
application runs and writes rather large log files
2> loopback mount that filesystem back to the global zone read-only, noexec
3> share the loopback mount via nfs
4> mount the filesystem in a client zone on a different host where the
developers have access
5> reboot the application zone without unmounting the shared loopback mount
6> zone fails to boot because fsck returns error 32 for the filesystem
7> df -k in the global gives I/O error on the loopback mount
8> manually fsck the filesystem from the global zone -- no problems found
(commands sketched after this list)
9> zone fails to boot because fsck returns error 39 for the filesystem
10> all attempts to unmount the "stale" loopback mount fail
11> due to time constraints the physical system is rebooted, causing an outage
(performance degradation, really, since all the apps are clustered/load
balanced) for the other 8 production zones on the box
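For steps 8 and 10 above, what we ran was roughly the following (same made-up
device and paths as in the sketch earlier); the manual fsck came back clean,
and none of the unshare/umount attempts would get rid of the stale loopback
mount:

    apphost# fsck -F ufs /dev/rdsk/c1t1d0s6
    apphost# unshare /export/applogs
    apphost# umount /export/applogs
    apphost# umount -f /export/applogs

Which brings me to the questions: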
1> Is this a bug, just a really bad idea, or both?
2> Any better resolutions than a reboot of the global?
3> Is there a better way to provide read-only access to the
log/configuration files? (The dev team can have basically no write access
whatsoever to the prod system.)