Andrew Deason wrote:
] On Mon, 29 Aug 2011 14:18:14 -0500
] John Tang Boyland <[email protected]> wrote:
]
] > NB: FileLog says:
] > Mon Aug 29 13:57:10 2011 FSYNC_com_VolOff: failed to get heavyweight
] > reference to volume 536875958 (state=20, flags=0x18)
]
] 20 is VOL_STATE_DELETED. This issue was fixed in gerrit 4261, ee2811b0,
] in 1.6.0pre4. It affects a lot of things when you move volumes around,
] and then move them back to where they were originally.
]
] We're up to pre7 now, btw, with 1.6.0 imminent. If you want to be
] running prereleases, you really don't want to lag behind in them; there
] are many known issues in the older pres.
Good point. I'm running the latest openafs-server from Scientific Linux 6.
Hopefully they will release a later RPM soon.

] > admin 87 % vos zap jeremiah a fa11.cs351.backup -force -verbose
] > vos: forcibly removing all traces of volume 536875958, please wait...done.
]
] Also just by the way, 'vos zap -force' is not a very smart tool, and it
] usually doesn't make a lot of sense for non-RW volumes. For RO and BK
] volumes, all it does is remove a few header files without touching any
] of the data, which is not what you want to do.
]
] If you have a troublesome non-RW volume, and 'vos remove/zap' and
] salvaging does not help, my general advice is just to stop right there
] (don't do 'vos zap -force'). Those two should handle every possible
] case, so if they are not handling it then there's a bug. Trying more
] things might make things worse or lose valuable debugging information.

OK. That's useful for the future. I'll stay away from zap -force.
(An earlier "zap" had "error 4".)

Even if SL6 releases a later RPM for openafs-server soon, I suppose
the volume is irretrievably broken. Time to create a new volume, I guess.
I suppose then I can try to remove/zap the whole RW volume and maybe
get back into a clean state.

Thanks,
John
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
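[Editor's note: for readers following the advice quoted above, a sketch of the
conventional recovery sequence is below. The server name `jeremiah`, partition
`a`, and the volume name/ID are taken from the transcript in this thread;
verify the exact flags against your installed `vos`/`bos` versions before
running anything, since behavior differs across OpenAFS releases.]

```shell
# First choice: remove the volume cleanly. This updates the VLDB and
# deletes the on-disk data, unlike 'vos zap -force' on a non-RW volume.
vos remove -server jeremiah -partition a -id fa11.cs351.backup

# If removal fails, salvage the volume on that server/partition.
# 536875958 is the backup volume ID from the FileLog message above.
bos salvage -server jeremiah -partition a -volume 536875958

# Last resort, only if the VLDB entry is already gone: zap the on-disk
# volume by ID, WITHOUT -force, so it is removed consistently.
vos zap -server jeremiah -partition a -id 536875958
```

If both `vos remove` and salvaging fail, the advice above applies: stop and
preserve logs for debugging rather than escalating to `-force`.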
