I'm bringing some new vicep partitions online, and moving volumes
from older partitions to the new ones.  While I've been a second-
string support person for AFS for many years, I haven't done much
with large-scale tasks like this, and now I also find myself the
main support person.

I did a few 'vos move's by hand to make sure I knew what was going
on.  I also wanted to fix up some odd replication decisions that
had been made many years ago, so I also wanted to remove old
readonly volumes and create new ones.  I managed to end up with
a few cases such as this:

# vos examine campus
campus                            536870967 RW     237820 K  On-line
    afsfs13.server.rpi.edu /vicepb
    RWrite  536870967 ROnly  536870968 Backup  537194436
    MaxQuota     400000 K
    Creation    Wed Sep  7 14:00:12 1994
    Copy        Thu Jun 30 23:30:24 2011
    Backup      Mon Jul 11 05:31:43 2011
    Last Update Wed Apr 14 16:41:17 2010
    544 accesses in the past day (i.e., vnode references)

    RWrite: 536870967     ROnly: 536870968     Backup: 537194436
    number of sites -> 4
       server afsfs13.server.rpi.edu partition /vicepb RW Site
       server afsfs12.server.rpi.edu partition /vicepb RO Site
       server afsfs14.server.rpi.edu partition /vicepb RO Site
       server afsfs15.server.rpi.edu partition /vicepb RO Site

Looks (mostly) reasonable, right?  (except for the odd fact that
there is no readonly volume listed on 13-b).  However, the output
of 'vos listvol afsfs13.server.rpi.edu b -long' includes:

campus                            536870967 RW     237820 K  On-line
    afsfs13.server.rpi.edu /vicepb
    RWrite  536870967 ROnly  536870968 Backup  537194436
    MaxQuota     400000 K
    Creation    Wed Sep  7 14:00:12 1994
    Copy        Thu Jun 30 23:30:24 2011
    Backup      Mon Jul 11 05:31:43 2011
    Last Update Wed Apr 14 16:41:17 2010
    897 accesses in the past day (i.e., vnode references)

While the output for 'vos listvol afsfs13.server.rpi.edu a -long'
includes (note: on vicep*A*):

campus.readonly                   536870968 RO     178012 K  On-line
    afsfs13.server.rpi.edu /vicepa
    RWrite  536870967 ROnly  536870968 Backup  537194436
    MaxQuota     400000 K
    Creation    Thu Jul 30 10:47:30 2009
    Copy        Thu Jul 30 10:47:30 2009
    Backup      Wed Jul 29 05:31:19 2009
    Last Update Thu Jul 30 10:47:23 2009
    4225 accesses in the past day (i.e., vnode references)

Now, I *did* vos move 'campus' from vicepa to vicepb on afsfs13.
And originally I did create a readonly volume on vicepb, but then
I noticed this oddity in the listvol output so I removed that.

I did all these test vos-moves at the end of June, and then I went
on vacation for a week.  So I'm not 100% sure how I got things into
this state.  But my guess is that I did some 'vos remsite's when I
should have done 'vos remove's.
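For reference, my understanding of the difference (please correct
me if this is wrong): 'vos remsite' only deletes the RO site entry
from the VLDB, leaving the on-disk volume behind, while 'vos remove'
deletes the on-disk volume and updates the VLDB.  A stray remsite
would explain exactly this kind of orphaned readonly volume:

    # deletes the RO site entry from the VLDB only; the volume
    # header and data stay on the server's partition:
    vos remsite afsfs13 vicepa campus.readonly

    # deletes the volume from the partition AND updates the VLDB:
    vos remove afsfs13 vicepa campus.readonly
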

So my question is, would it be reasonably safe for me to do:

   vos syncvldb afsfs13 vicepa campus -verbose
   vos syncvldb afsfs13 vicepa campus -verbose
   vos addsite afsfs13 vicepb campus
   vos release campus

to clear this up?  It looks like I only made this mistake on a
few volumes, but as luck would have it they are all very
important volumes and I've never tried a 'vos syncvldb' before,
so I'm a bit extra paranoid about trying things.
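(In case it helps anyone answering: before and after any fix I've
been comparing what the VLDB believes against the actual volume
headers on the partitions, along these lines:)

    # what the VLDB thinks the sites for 'campus' are:
    vos listvldb -name campus

    # the volume headers actually present on each partition:
    vos listvol afsfs13.server.rpi.edu a
    vos listvol afsfs13.server.rpi.edu b
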

--
Garance Alistair Drosehn                =     [email protected]
Senior Systems Programmer               or   [email protected]
Rensselaer Polytechnic Institute;             Troy, NY;  USA
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
