Hi! Though client access doesn't seem to give me any problems, I run into the following kind of problem quite frequently (running 1.4.2fc3 on all four hosts mentioned below):
(This mail is ordered chronologically; the most relevant error is under "FileLog" below. For a possibly related previous post, see http://www.openafs.org/pipermail/openafs-info/2004-March/012687.html.)

olympia-vpn ~ # vos release -v homes.sderoeck
homes.sderoeck
    RWrite: 536870921     ROnly: 536870922     Backup: 536870923     RClone: 536870922
    number of sites -> 5
       server olympia-vpn partition /vicepa RW Site  -- New release
       server olympia-vpn partition /vicepb RO Site  -- New release
       server eltom partition /vicepa RO Site  -- New release
       server bubbles-vpn partition /vicepa RO Site  -- New release
       server ebu-vpn partition /vicepa RO Site  -- Old release
This is a completion of a previous release
Starting transaction on cloned volume 536870922... done
Creating new volume 536870922 on replication site ebu-vpn: done
Starting ForwardMulti from 536870922 to 536870922 on ebu-vpn (full release).
Deleting the releaseClone 536870922 ... done
updating VLDB ... done
Released volume homes.sderoeck successfully

--> So far so good.

During the release, vos examine gives the following:

olympia-vpn logs # vos examine homes.sderoeck
homes.sderoeck                    536870921 RW     223346 K  On-line
    olympia-vpn /vicepa
    RWrite  536870921 ROnly  536870922 Backup  536870923
    MaxQuota    1500000 K
    Creation    Sun Jan  9 23:48:56 2005
    Copy        Sun Jan  9 23:48:56 2005
    Backup      Tue Sep 19 12:50:54 2006
    Last Update Tue Sep 19 12:52:36 2006
    10332 accesses in the past day (i.e., vnode references)

    RWrite: 536870921     ROnly: 536870922     Backup: 536870923     RClone: 536870922
    number of sites -> 5
       server olympia-vpn partition /vicepa RW Site  -- New release
       server olympia-vpn partition /vicepb RO Site  -- New release
       server eltom partition /vicepa RO Site  -- New release
       server bubbles-vpn partition /vicepa RO Site  -- New release
       server ebu.kotnet.ulyssis.student.kuleuven.ac.be partition /vicepa RO Site  -- Old release
    Volume is currently LOCKED

--> Also quite normal.

But immediately after the release, I get the following:

olympia-vpn logs # vos examine homes.sderoeck
**** Could not attach volume 536870921 ****

    RWrite: 536870921     ROnly: 536870922     Backup: 536870923
    number of sites -> 5
       server olympia-vpn partition /vicepa RW Site
       server olympia-vpn partition /vicepb RO Site
       server eltom partition /vicepa RO Site
       server bubbles-vpn partition /vicepa RO Site
       server ebu.kotnet.ulyssis.student.kuleuven.ac.be partition /vicepa RO Site

FileLog:

Tue Sep 19 13:11:23 2006 CopyOnWrite failed: Partition /vicepa that contains volume 536870921 may be out of free inodes(errno = 2)
Tue Sep 19 13:11:41 2006 CopyOnWrite failed: Partition /vicepa that contains volume 536870921 may be out of free inodes(errno = 2)
Tue Sep 19 13:11:42 2006 Volume : 536870921 vnode = 38602 Failed to create inode : errno = 2
Tue Sep 19 13:11:42 2006 DT: inode=843153619850886, name=camera_mode.h~, errno=2
Tue Sep 19 13:11:42 2006 Volume : 536870921 vnode = 38534 Failed to create inode : errno = 2
Tue Sep 19 13:11:42 2006 CopyOnWrite failed: Partition /vicepa that contains volume 536870921 may be out of free inodes(errno = 2)
Tue Sep 19 13:11:42 2006 CopyOnWrite failed: Partition /vicepa that contains volume 536870921 may be out of free inodes(errno = 2)
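Side note on that errno: 2 is ENOENT ("No such file or directory"), not ENOSPC (28, "No space left on device"), which is what I'd expect from a partition that is genuinely out of inodes -- so the "out of free inodes" wording seems to be the fileserver's guess rather than the actual cause. Two quick cross-checks, assuming a stock Linux userland:

    # decode the reported errno: 2 -> ENOENT; a truly full partition would give 28 (ENOSPC)
    python -c 'import os; print(os.strerror(2))'
    # live inode usage on the vice partition, quicker than dumpe2fs
    df -i /vicepa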
--> But I don't seem to be out of free inodes (/dev/hdd1 is the /vicepa partition):

olympia-vpn logs # dumpe2fs /dev/hdd1
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          4cad230a-088e-4024-9bbc-536018cdbeb5
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              24428544
Block count:              48839600
Reserved block count:     2441980
Free blocks:              5313534
Free inodes:              24113310
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Fri Dec 31 01:55:45 2004
Last mount time:          Sun Sep 17 21:13:54 2006
Last write time:          Sun Sep 17 21:13:54 2006
Mount count:              8
Maximum mount count:      26
Last checked:             Sun Sep 17 01:17:45 2006
Check interval:           15552000 (6 months)
Next check after:         Fri Mar 16 00:17:45 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      b185b632-c745-4588-a643-64fae5b18d60
Journal backup:           inode blocks
Journal size:             32M

VolserLog:

Tue Sep 19 12:58:47 2006 1 Volser: ListVolumes: Volume 536870922 (V0536870922.vol) will be destroyed on next salvage
Tue Sep 19 13:05:36 2006 trans 175 on volume 536870922 is older than 300 seconds
Tue Sep 19 13:06:06 2006 trans 175 on volume 536870922 is older than 330 seconds
Tue Sep 19 13:06:36 2006 trans 175 on volume 536870922 is older than 360 seconds
Tue Sep 19 13:07:06 2006 trans 175 on volume 536870922 is older than 390 seconds
Tue Sep 19 13:07:36 2006 trans 175 on volume 536870922 is older than 420 seconds
Tue Sep 19 13:08:06 2006 trans 175 on volume 536870922 is older than 450 seconds
Tue Sep 19 13:08:36 2006 trans 175 on volume 536870922 is older than 480 seconds
Tue Sep 19 13:09:06 2006 trans 175 on volume 536870922 is older than 510 seconds
Tue Sep 19 13:09:36 2006 trans 175 on volume 536870922 is older than 540 seconds
Tue Sep 19 13:10:06 2006 trans 175 on volume 536870922 is older than 570 seconds
Tue Sep 19 13:10:36 2006 trans 175 on volume 536870922 is older than 600 seconds
Tue Sep 19 13:11:10 2006 1 Volser: Delete: volume 536870922 deleted
Tue Sep 19 13:11:56 2006 VAttachVolume: Error reading namei vol header /vicepa/V0536870921.vol; error=101
Tue Sep 19 13:11:56 2006 VAttachVolume: Error attaching volume /vicepa/V0536870921.vol; volume needs salvage; error=101
Tue Sep 19 13:11:56 2006 1 Volser: ListVolumes: Could not attach volume 536870921 (/vicepa:V0536870921.vol), error=101
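Since VolserLog says "volume needs salvage" outright, my plan for recovering (untested as I write this, so treat it as a sketch; server and partition names are from my setup above) is a targeted salvage of the broken RW volume followed by a retry of the release:

    # salvage only the affected RW volume on its fileserver
    bos salvage -server olympia-vpn -partition /vicepa -volume 536870921
    # unlock the VLDB entry in case the failed release left it locked
    vos unlock homes.sderoeck
    # then retry the release
    vos release -v homes.sderoeck

Still, I'd like to understand why the release damages the RW volume in the first place, since this keeps happening.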
Thanks for reading this far :)

Stefaan