On Fri, 2007-07-06 at 12:03 +0200, Bernd Schubert wrote:
> I just got this in dmesg:
> 
> [67100.343686] Lustre: Found inode with zero generation or link -- this may 
> indicate disk corruption (inode: 26398122/151277060, link 0, count 1)
> [67100.358660] Lustre: Found inode with zero generation or link -- this may 
> indicate disk corruption (inode: 26398122/151277060, link 0, count 1)
> [67100.368242] Lustre: Found inode with zero generation or link -- this may 
> indicate disk corruption (inode: 26398122/151277060, link 0, count 1)
> [67100.368248] Lustre: Skipped 6 previous similar messages
> [67100.386517] Lustre: Found inode with zero generation or link -- this may 
> indicate disk corruption (inode: 26398114/151277054, link 0, count 1)
> 
> So I ran e2fsck again, and among the orphaned inodes e2fsck found in the 
> journal were the ones above.
> 
> This might or might not be a bug in our additional patches for 2.6.20, but 
> it's hard to test, since I don't have access to your cvs tree for 
> lustre-1.6.1 with 2.6.18 support, and older kernels won't run on our 
> hardware.
> 
> Let me know if there's anything I can do to debug this.

Hi Bernd,

This happens because the inode generation counter is a 32-bit value that wraps 
around, so it is actually possible to get an inode with generation 0 roughly 
once every 2^32 inode allocations.

Ldiskfs needs to have this patch to skip inodes with generation = 0.

--- linux-2.6.9-34.orig/fs/ext3/ialloc.c    2007-01-03 13:30:33.000000000 +0000
+++ linux-2.6.9-34/fs/ext3/ialloc.c     2007-01-03 13:42:04.000000000 +0000
@@ -721,6 +721,8 @@ got:
        insert_inode_hash(inode);
        spin_lock(&sbi->s_next_gen_lock);
        inode->i_generation = sbi->s_next_generation++;
+       if (unlikely(inode->i_generation == 0))
+               inode->i_generation = sbi->s_next_generation++;
        spin_unlock(&sbi->s_next_gen_lock);
 
        ei->i_state = EXT3_STATE_NEW;

There also needs to be a change in mds_fid2dentry(). These patches can be found 
in bz10419.

Thanks,
Kalpak.

> 
> Cheers,
> Bernd
> 
> 

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
