Hello Dave,

Some thoughts about cell layout. At /afs/@cell, we have:

    @sys-dirs architecture specific binaries and libraries
              rs_aix41 rs_aix42 rs_aix43 etc
    
    admin  -  restricted to members of system:administrators
              has scripts and data for cell administration 

    afs    -  images and downloads from /afs/transarc.com (for installing afs)

    common -  stuff that is architecture independent

    common/downloads
              fetched compressed tarballs of various sources
    common/etc
              "global" data files for whole cell (eg master CellServDB)
    common/scripts
              shell scripts, awks, perls, and other non-compiled stuff
              runnable on multiple sysnames
    common/src
              local source tree

    mirror/ftp/export
              "source" copy of outbound anonymous FTP export directory
               files placed here automagically appear in anonFTP

    mirror/ftp/import
              mirrored copy of inbound anonymous FTP dropoff directory
              files placed in anonFTP appear here automagically

    projects  project related dirs/files

    public    dirs with "system:anyuser rl" for users with private $HOMEs
              in local cell. For data exchange/sharing.
    
    tmp       handy for system:authusers 

    u         $HOMEs
    u/backups
              mountpoints for Backup volumes of $HOME volumes.

    wsadmin   AFS "package" configuration files
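As an aside on the @sys-dirs above: the AFS Cache Manager expands a
literal "@sys" path component into the client's sysname (see "fs
sysname"), so a single symlink can serve every platform. A tiny sketch
of the expansion (the usr_local and emacs names are just made-up
illustrations, not from any real cell):

```shell
#!/bin/sh
# One symlink on every client, e.g.:
#   ln -s /afs/@cell/@sys/usr_local /usr/local
# and the Cache Manager does the per-platform dispatch.
#
# Simulating the @sys expansion for an rs_aix42 client:
SYSNAME=rs_aix42
TEMPLATE="/afs/@cell/@sys/usr_local/bin/emacs"
RESOLVED=$(printf '%s' "$TEMPLATE" | sed "s/@sys/${SYSNAME}/")
echo "$RESOLVED"   # /afs/@cell/rs_aix42/usr_local/bin/emacs
```

This is why the per-architecture dirs (rs_aix41, rs_aix42, ...) sit
directly under /afs/@cell with identical internal layouts.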

Also, I believe large cells use indirection dirs for access to $HOMEs.
Eg: users' home volumes are mounted under:

        /afs/@cell/users/u${NN}/${LOGNAME}

    where ${NN} is the user's UID modulo 32, and ${LOGNAME} is the login name.
    Also, to simplify access, symbolic links are used.

    So, for example: LOGNAME=bender, UID=4719, therefore NN=15

        /afs/@cell/home/b/e/bender -> /afs/@cell/users/u15/bender

    The advantages of this are:
    a) it is easy to find a home directory by simply cd'ing to
       /afs/@cell/home/b/e/bender -- the "b" and "e" path components
       are just the first two letters of the login name

    b) the "modulo 32" mountpoints evenly distribute $HOME volume
       mounts below /afs/@cell/users/u${NN}/

    c) ${HOME} mountpoint directories (/afs/@cell/users/u${NN}/)
       and symbolic link directories (/afs/@cell/home/b/e/bender)
       do not grow too large. This is goodness since ls'ing
       large directories takes time under /afs.
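The whole scheme fits in a few lines of shell. A hypothetical sketch
(the volume name user.${LOGNAME} and the fs mkmount step are my
assumptions about local conventions, not from the example above):

```shell
#!/bin/sh
# Sketch of the modulo-32 home-volume layout described above.
LOGNAME=bender
USERID=4719                             # avoid shadowing the shell's $UID
NN=$(( USERID % 32 ))                   # 4719 = 147*32 + 15, so NN=15
A=$(printf '%s' "$LOGNAME" | cut -c1)   # "b"
B=$(printf '%s' "$LOGNAME" | cut -c2)   # "e"

MOUNT="/afs/@cell/users/u${NN}/${LOGNAME}"
LINK="/afs/@cell/home/${A}/${B}/${LOGNAME}"

# An admin script would then do roughly (assumed volume naming):
#   fs mkmount "$MOUNT" "user.${LOGNAME}"   # mount the home volume
#   mkdir -p "$(dirname "$LINK")"
#   ln -s "$MOUNT" "$LINK"                  # convenience symlink
echo "$LINK -> $MOUNT"
# -> /afs/@cell/home/b/e/bender -> /afs/@cell/users/u15/bender
```

With 32 buckets under users/ and two letters of fanout under home/,
neither layer of directories gets unwieldy as the cell grows.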

I hope this helps!
--
cheers
paul                             http://acm.org/~mpb

   "In order to make anything from scratch,
    you must first create the universe." --Carl Sagan




Dave Lorand <[EMAIL PROTECTED]> wrote:
>Hi all,
>
>I've been working with AFS for about a year at a site with a lot of legacy
>cruft built up over the last 6 or 8 years.  We're going to be building a
>new set of servers soon (as soon as Sun ships them to us) and we're looking
>forward to the chance to start from a clean slate.  I was wondering how
>different sites structure their AFS space, especially installed software
>(the main use for AFS here).  There seem to be two distinct systems here,
>and the two sites I cd'd into from here (andrew.cmu.edu and athena.mit.edu)
>both seemed to have different setups.
>
>I'd appreciate any comments on how you structure your AFS space and the
>rationale behind your choices.
>
>Thanks in advance,
>
>Dave
> ____________________________________________________________
>| Dave Lorand, System Administrator | [EMAIL PROTECTED] |
>| Social Science Research Computing | 773-702-3792           |
>| University of Chicago             | 773-702-2101 (fax)     |
>+-----------------------------------+------------------------+
> ---> finger [EMAIL PROTECTED] for my PGP key <--
