Fotis,

Thanks for the input. It does seem like a good idea to use a dedicated user, and I can get away with it; it just requires a few extra questions during my campus' annual CYA security audit :)
> While at a Juelich meeting in Feb'14, we picked up the following
> concept:
> - once an installation software set is finalized, it gets "frozen" &
>   ownership goes from sw group to sys group.
> This is an interesting idea and, one step further in avoiding
> shooting yourself in the foot!
> It's very likely an explicit separation of roles, coming out of who
> knows what war story.
>
> I find these practices really wise, given that nearly any HPC build
> step implies downloading & running 3rd-party software, whereby many
> things could go awry.
> As users keep coming with software setup requests, it's easy to be
> "lured away"!
> (LOL: "please install for me the package HackTree/v3" :)
>
> At some point you need to call your setup fixed and hit the
> "production" button, which is in effect what the Juelich fellows do.
> An applause for the practice!

That is an interesting idea, though likely a bit "overkill" in my situation: I'm currently the only full-time sysadmin working on this cluster, and two other sysadmins help when they can, so there's really only one role and it's shared by three people. I can imagine how this would be extremely useful where there are well-defined groups with separate duties, though.

> I suspect that mount namespaces could be a nice way to go about it
> under linux:
> http://www.ibm.com/developerworks/linux/library/l-mount-namespaces/index.html
> (never tried it, still in the todo list)

That is a very interesting article, and it's now on my to-do list!

> If you have improvements upon the above ideas, please swap the
> subject and throw them in the list.

Well, right now my "/apps" directory lives on ZFS (exported via NFS). It's mounted read-write on only one login node, where builds are performed, and everything else mounts it read-only. I have ZFS taking hourly snapshots while we're still in development. One possible way to "freeze" the filesystem would be ZFS snapshots, which is something I already use to freeze RPM repositories.
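The current /apps setup above could be sketched roughly as below. This is a dry-run sketch, not my actual configuration: the pool name "tank", the host name "login1", and the netgroup "@cluster" are all hypothetical, and the commands are echoed rather than executed.

```shell
#!/bin/sh
# Dry-run helper: print each admin command instead of executing it.
run() { echo "+ $*"; }

# Export /apps read-write to the build login node only and
# read-only to the rest of the cluster (hypothetical names).
run zfs set sharenfs="rw=login1,ro=@cluster" tank/apps

# Hourly development snapshot, e.g. invoked from cron.
run zfs snapshot "tank/apps@hourly-$(date +%Y%m%d-%H00)"
```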
The idea would be that the filesystem where apps are installed has limited access and is not what the cluster actually uses. When apps are installed and marked production, a snapshot is taken. In ZFS, a filesystem's snapshots can be exposed by doing something like "zfs set snapdir=visible tank/apps". The snapshot would then be exported via NFS (snapshots are read-only at the filesystem level). Not exactly the easiest thing to automate, but I like the possibilities ZFS offers in situations like this. I haven't tried this exact idea, but I use something similar in production: the snapshot directory is exposed to a web server, and each frozen RPM repo snapshot directory is symlinked into the yum repo's docroot whenever the currently active repos need updating.

> enjoy,
> Fotis
>
> --
> echo "sysadmin know better bash than english" | sed s/min/mins/ \
> | sed 's/better bash/bash better/' # Yelling in a CERN forum
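Putting the pieces together, the "freeze" workflow might look something like the sketch below. Again, the dataset name "tank/apps" and the repo/docroot paths are hypothetical, and the commands are echoed rather than executed, so this is an outline of the steps rather than a tested implementation.

```shell
#!/bin/sh
# Dry-run helper: print each admin command instead of executing it.
run() { echo "+ $*"; }

freeze_apps() {
    # Tag today's state as the production software set.
    snap="prod-$(date +%Y%m%d)"
    run zfs snapshot "tank/apps@$snap"

    # Make snapshots reachable under /apps/.zfs/snapshot/...
    run zfs set snapdir=visible tank/apps

    # Export the frozen tree via NFS; the snapshot itself is
    # read-only at the filesystem level, so clients can't modify it.
    run exportfs -o ro "*:/apps/.zfs/snapshot/$snap"
}

freeze_apps

# The RPM-repo variant I use in production is analogous: symlink a
# frozen repo snapshot into the web server docroot (paths hypothetical).
run ln -sfn "/repos/.zfs/snapshot/repo-$(date +%Y%m%d)" \
    /var/www/html/yum/production
```

The snapshot name doubles as the frozen "release" identifier, so rolling back to an earlier production set is just re-pointing the export or symlink at an older snapshot.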

