Dave Lorand <[EMAIL PROTECTED]> writes:
 
> Do you install each package in a separate volume per @sys, or do you
> have a common volume containing all architectures?  This seems to be
> the major issue about which folks here have waffled in the past.  Some
> packages, like Matlab, seem to go better if there is a separate volume
> for each architecture because the authors wrote with single-platform
> installations in mind.  Others, like Perl and Sendmail, expect to mingle
> architectures within their trees, and they know how to separate
> architecture-specific files from architecture-independent ones.  I'd
> appreciate anyone's comments on this issue.

It depends on who you talk to :)  Most of the software I install goes
into a single volume, unless it's an excessively large (over 1 GB)
package.  The only benefit I see from making separate volumes for
different architectures is releasing the whole package versus releasing
the individual architectures.  If I make a change to the Solaris version
of IMSL, but not the AIX version, then I only need to release the
architecture that was modified.  There may be other benefits to
per-architecture volumes, but this is the only one that sticks out in my
head.
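
A minimal sketch of what that buys you (these IMSL volume names are
hypothetical, and I'm assuming read-only replicas already exist):

    # per-architecture volumes: only the changed tree gets released
    vos release dist.imsl.sun4x_56    # push the updated Solaris files
                                      # the AIX volume is untouched

    # one shared volume: any change means re-releasing everything
    vos release dist.imsl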

Regarding cell layout, I had nothing to do with our current layout, but
it seems to make sense to me.  We have 3 main cells, and I see us
eventually merging 2 of them, leaving one cell mainly for users and one
for software.  This is basically how the cells look:

/afs/<cell>/adm      - admin databases, logs, tools.
           /sadm
           /admin      

/afs/<cell>/contrib  - open source software
                       /emacs -> emacs1934  - symlink to default version  
                       /emacs1929           - cont.emacs1929
                       /emacs1934           - cont.emacs1934

/afs/<cell>/dist     - licensed software                                
                       /sas -> sas610       - symlink to default version
                       /sas609              - dist.sas609
                       /sas610              - dist.sas610
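
As a rough sketch of how a new default version gets wired in under the
symlink scheme above (the file server, partition, and SAS version number
here are made up, and I'm assuming the usual /afs/.<cell> read-write
path plus existing read-only replicas):

    vos create fs1 /vicepa dist.sas612
    fs mkmount /afs/.<cell>/dist/sas612 dist.sas612
    # ...install the new version into its tree...
    rm /afs/.<cell>/dist/sas
    ln -s sas612 /afs/.<cell>/dist/sas    # flip the default symlink
    vos release dist.sas612
    # also release whatever volume holds the symlink, if replicated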

/afs/<cell>/lockers  - Course, user-purchasable, and departmental lockers
                       /class/bus590        - lck.cls.bus590
                       /users/w/wilson      - lck.usr.wilson
                       /dept/pr             - lck.dpt.pr

/afs/<cell>/project  - Various software and research projects
                       /linux               - project.linux
                       (I think I would have used prj.linux or proj.linux)

/afs/<cell>/source   - Source for in-house programs, kerberos, afs
                       /krb5                - src.krb5

/afs/<cell>/system   - Installed software, symlinked on client
                       machines per architecture (/usr/local, /usr, etc.)
                       /sun4x_56/usr        - sun4x_56.usr
                       (new naming scheme; it was system.sun4x_56.usr,
                       but that ran into volume-name length limits, so
                       it was redone)
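
(To make the client side concrete: each client carries one symlink
through @sys, which the cache manager expands to the client's sysname.
The exact paths here are just illustrative:

    ln -s /afs/<cell>/system/@sys/usr/local /usr/local

A sun4x_56 client then resolves /usr/local to
/afs/<cell>/system/sun4x_56/usr/local; "fs sysname" shows what @sys
expands to on a given client.)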

/afs/<cell>/users    - User directories
                       /w/wilson            - users.wilson
                       (another cell of ours uses the naming scheme
                        users.w.wilson, so it just depends on what you
                        think looks more consistent with your mount-point
                        structure)

/afs/<cell>/www      - Web related stuff
                       /verity              - www.verity

This is our basic setup and, like I said, it seems to work well and
serve its purpose.  Our naming scheme is such that you *usually* know,
without looking, what the volume name will be for any given mount
point.  Anyway, I hope this helps.

Brian

[ Brian D. Wilson   Systems Programmer ]
[ [EMAIL PROTECTED]   Computing Services ]  
[ 919.515.5498     NC State University ]

