Scott wrote me to say:
> 
> > There is no way to do this.  Mount points are just "special" file names in
> 
> Run: /usr/afs/bin/salvager -showmount

Oops.  Scott is absolutely right.  It will be real slow though.  And you'll
have to do it on each fileserver.
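
For anyone who wants to try it, a rough sketch of that loop, run on
each fileserver (the flag spelling here is per Scott's note; check
your salvager's usage message, and be careful about pointing the
salvager at a partition a running fileserver is using):

        # walk every /vicep partition on this fileserver
        for part in /vicep*; do
                /usr/afs/bin/salvager $part -showmount
        done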

> 
> > points.  This still wouldn't discover foreign mount points in *other cells*.
> 
> Remote mount points have the cellname there.

Right.  Not much chance you can run the salvager on every
AFS fileserver in the universe.
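
For reference, "fs lsmount" will show you the cell name sitting in a
foreign mount point (the paths and names here are invented):

        $ fs lsmount /afs/cell.one/pub/elsewhere
        '/afs/cell.one/pub/elsewhere' is a mount point for volume '#cell.two:root.cell'

A local mount point looks the same, minus the "cell.two:" part.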

Nevertheless, this might well work for Lee Melvin, who only wants
to find mount points *within* a certain set of volumes.
It'll be ghastly slow, but probably not as bad as doing backups,
which are often done nightly already.

> 
> > Most versions of find (and almost certainly rdist) are not AFS-aware,
> > and cannot be made to do this easily.
> 
> find already stats files to find directories.  It uses the link count to
> find how many directories are there.  AFS for some reason doesn't update
> that count, but you can easily modify find to stat each file, and if it's
> a directory, do a pioctl() call to get the mount point.  If that call 
> succeeds, you know you're going into a new volume.

It's going to be really slow and inefficient.  I presume that
Lee Melvin (who posed this question) was looking for something
a bit faster.
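
If you'd rather not modify find itself, the same pioctl is exposed at
the command level as "fs lsmount", which (as far as I can tell) exits
zero only for mount points.  A slow but simple sketch, with an
invented starting directory:

        # walk the tree; find happily crosses into mounted volumes,
        # so every mount point under here gets visited
        find /afs/your.cell/some/tree -type d -print |
        while read d; do
                if fs lsmount "$d" >/dev/null 2>&1; then
                        echo "mount point: $d"
                fi
        done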

I believe the reason this question was posed was to see whether
there are tools that would provide a convenient way to replicate
part of a directory tree in AFS.  The "find" is presumably there
to look for things that changed.  If it's acceptable for the pass
that looks for changed things to be slow (or if you don't mind
saving a lot of information, such as timestamps and link counts,
so that you can optimize later traversals of the directory tree),
this will work.  If you really want to replicate AFS, though,
you're going to have to do more than just look for changed files:
you'll also have to look for ACLs that may have changed (better
cache those as well), and you'd definitely need to recognize mount
points so that you can replicate those too.  Quite doable, and
programs like synctree might be a good start, but certainly not
pretty.
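
To make the caching idea concrete, here's a rough sketch of the
bookkeeping half, with invented paths: snapshot the stat output and
the ACLs, then diff against the previous snapshot on the next pass.

        # record timestamps/link counts for everything, and ACLs
        # for directories (AFS ACLs live on directories only)
        find /afs/your.cell/tree -print |
        while read f; do
                ls -ld "$f"
                [ -d "$f" ] && fs listacl "$f"
        done > /var/tmp/tree.snap.new
        # anything that shows up here changed since the last pass
        diff /var/tmp/tree.snap /var/tmp/tree.snap.new
        mv /var/tmp/tree.snap.new /var/tmp/tree.snap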

Hm.  Another, more interesting, way you could do this would be
to have a sort of "fake" cache manager.  It would know about a list
of volumes, and for each volume in question, it would fetch
every inode in question and keep a callback.  Whenever the
callback was broken, it would check to see if the file had
changed on the fileserver.  It could look for, and traverse,
mount points as well, and add those volumes as desired, if
that was useful.  If there were a large number of inodes, it
might not be possible to keep callbacks pending on everything,
in which case the fake cache manager might have to manage some
sort of periodic traversal to look for things that have
changed.  To keep from ganging up on a fileserver that goes
down, the fake cache manager would also need some notion
of "it's down, so don't look at it right now".  Obviously,
AFS source would facilitate writing such a creature, but
there should be enough information in the include files &
libraries to allow the writing of such an animal.  It wouldn't
be easy, though, and I suspect it's really rather more work than
Qualcomm would want to invest in.

> 
> > To do this, you could do a "vos dump" and "vos restore" to copy
> > volumes over.  You can check the volume modification timestamp
> 
> How do you dump/restore across cells?  I didn't think that was possible.

Of course it is.  You just need the proper credentials in both cells.
A vos dump is just a bunch of data; the destination cell won't
care that it came from somewhere else.  You can either do
it all on one machine, with a pipeline, or you can do a vos dump
on the first machine with the proper credentials in the first cell,
transport the resulting file in any way convenient (ssh, 8mm tape,
*lots* of pigeons with little punched cards), and do a vos restore
on another machine with the proper credentials for the 2nd cell.
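
To make the one-machine pipeline version concrete: with no -file
argument, vos dump writes the dump to stdout and vos restore reads
it from stdin, so (with invented machine, cell, and volume names,
and tokens for both cells already in hand):

        vos dump -id user.fred -time 0 -cell cell.one |
                vos restore -server fs2.cell.two -partition /vicepa \
                        -name user.fred -cell cell.two

The "-time 0" asks for a full dump; pass the date of the last run
instead to get an incremental dump.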

Off-hand, if I were Qualcomm, I'd order the possibilities as
follows (from best to worst):
(1) doing "vos dump/vos restore"
        (if at all possible).  I'd make sure I was using a rational
        scheme for volume names so that a regular expression can pick
        the right volumes out of a "vos listvol" listing (and hence
        catch any new volumes added).  Best to avoid any need to
        hack xd if at all possible.  Requires some (slight) discipline
        when creating volumes and a certain level of cooperation in
        managing things such as UIDs between the two cells.  A
        simple-minded version could be a few shell scripts run out of
        cron (see the sketch after this list), and a good weekend
        project.
(2) adapting or writing something like synctree, to replicate AFS
        directory trees.  Depending on how suitable existing tools
        like synctree turn out to be, and the level of programming
        expertise available, I'd guess this could easily take a month
        to implement.
(3) writing a "fake" cache manager to catch modifications dynamically.
        Nifty project, very educational, but a *lot* of work.
        I'd plan on a year of a very good programmer's time, for
        this one.
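
For (1), the sort of simple-minded cron script I have in mind might
look like this; every name here (servers, cells, and the "proj."
volume-naming convention) is invented:

        # pick the read/write volumes matching our naming scheme
        vos listvol fs1.cell.one -cell cell.one |
                awk '$1 ~ /^proj\./ && $3 == "RW" {print $1}' |
        while read vol; do
                # full dump out of cell.one, restore into cell.two
                # (-overwrite full, if your vos has it, keeps repeat
                # runs from prompting about existing volumes)
                vos dump -id $vol -time 0 -cell cell.one |
                        vos restore -server fs2.cell.two \
                                -partition /vicepa -name $vol \
                                -overwrite full -cell cell.two
        done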

                                -Marcus Watts
                                UM ITD PD&D Umich Systems Group
