I went ahead with what I thought would work and it does! Thanks for the
help from everyone! We appreciate it very much. Now we can move off the
old servers and into the 21st century. :-)
[root@afs-vmc 2019-March]# vos volinfo root.afs
vsu_ClientInit: Could not get afs tokens, running unauthenticated.
root.afs 536901418 RW 290 K On-line
afs-vma.psc.edu /vicepea
RWrite 536901418 ROnly 0 Backup 0
MaxQuota 1000 K
Creation Thu Dec 23 09:37:33 1999
Copy Thu Mar 7 14:46:40 2019
Backup Sat Mar 2 02:32:35 2019
Last Access Tue Feb 12 13:45:30 2013
Last Update Tue Jan 9 00:24:10 2007
0 accesses in the past day (i.e., vnode references)
RWrite: 536901418 ROnly: 536903611
number of sites -> 5
server afs-vma.psc.edu partition /vicepea RW Site
server velma.psc.edu partition /vicepcb RO Site
server fred.psc.edu partition /vicepbd RO Site
server daphne.psc.edu partition /vicepdc RO Site
server afs-vma.psc.edu partition /vicepea RO Site
-Susan
On Thu, Mar 7, 2019 at 1:43 PM Susan Litzinger <[email protected]> wrote:
> I got that to work for root.cell and now I find that root.afs is in the
> same situation. Does anyone see any harm in doing the same steps for
> root.afs? Its current status is:
>
> [root@afs-vmc 2019-March]# vos volinfo -id root.afs -localauth
> root.afs 536901418 RW 290 K On-line
> daphne.psc.edu /vicepdc
> RWrite 536901418 ROnly 0 Backup 536903011
> MaxQuota 1000 K
> Creation Thu Dec 23 09:37:33 1999
> Copy Sun Jun 10 11:24:25 2012
> Backup Sat Mar 2 02:32:35 2019
> Last Access Tue Feb 12 13:45:30 2013
> Last Update Tue Jan 9 00:24:10 2007
> 0 accesses in the past day (i.e., vnode references)
>
> RWrite: 536901418 ROnly: 536903611 Backup: 536903011
> number of sites -> 5
> server daphne.psc.edu partition /vicepdc RW Site
> server velma.psc.edu partition /vicepcb RO Site
> server fred.psc.edu partition /vicepbd RO Site
> server daphne.psc.edu partition /vicepdd RO Site
> server afs-vmc.psc.edu partition /vicepgb RO Site -- Not released
>
> So I will remove the daphne /vicepdd RO volume using 'vos remove', 'vos
> release' root.afs, create a RO on daphne /vicepdc using 'vos addsite',
> then move the volume to afs-vmc partition /vicepgb using 'vos move'.
>
> BUT, should the root.afs and root.cell volumes be on different servers as
> they were in our initial implementation? I couldn't find anything in the
> doc that states this but wanted to make sure before I went ahead and did
> it.
>
> Thanks very much. -susan
>
>
> On Thu, Mar 7, 2019 at 12:29 PM Jeffrey Altman <[email protected]>
> wrote:
>
>> On 3/7/2019 11:59 AM, Susan Litzinger wrote:
>> > Hmm.. I removed the incorrect RO and created a new RO on velma,
>> > then tried to 'release' the new one prior to moving it to a different
>> > server and that doesn't work. I'm hesitant to go ahead and move it if
>> > it's not in a good state.
>>
>> "vos remsite" only modifies the location database. It does not remove
>> volumes from vice partitions. You needed to execute "vos remove" not
>> "vos remsite". You are still receiving the EXDEV error from velma
>> because there are still two vice partitions attached to velma, each of
>> which holds a volume from the same volume group.
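[To make the distinction above concrete, here is a dry-run illustration; the commands are only echoed, not executed, and the volume ID 537176385 and server velma.psc.edu are the ones from this thread.]

```shell
#!/bin/sh
run() { echo "$@"; }   # dry-run wrapper: print the command instead of running it

# VLDB-only: the RO site entry disappears from 'vos listvldb', but the
# volume data stays on /vicepcb as a stranded read-only volume.
run vos remsite -server velma.psc.edu -partition vicepcb -id root.cell

# Full removal: deletes the volume from the vice partition AND drops
# the matching VLDB site entry in one step.
run vos remove -server velma.psc.edu -partition vicepcb -id 537176385
```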
>>
>> The fact that you were able to get into this situation is due to bugs in
>> OpenAFS which were fixed long ago in AuriStorFS. To cleanup:
>>
>> vos remove -server velma.psc.edu -partition vicepcb -id 537176385
>>
>> and then
>>
>> vos release -id root.cell
>>
>> If you are still seeing errors, examine the VolserLog on velma.psc.edu
>> and use
>>
>> vos listvol -server velma.psc.edu -fast | grep 537176385
>>
>> to see if there are stranded read-only volumes left somewhere.
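[The same check can be repeated across every file server named in this thread; a small dry-run loop that prints, rather than runs, the command for each server:]

```shell
#!/bin/sh
# Print the stranded-volume check for each file server mentioned in
# this thread; drop the echo inside check() to actually run it.
check() { echo "vos listvol -server $1 -fast | grep $2"; }

for s in velma.psc.edu fred.psc.edu daphne.psc.edu afs-vmc.psc.edu; do
  check "$s" 537176385
done
```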
>>
>> Jeffrey Altman
>>
>>