fsck.reiserfs /dev/sdb1

reiserfsck 3.6.19 (2003 www.namesys.com)

*************************************************************
** If you are using the latest reiserfsprogs and  it fails **
** please  email bug reports to reiserfs-l...@namesys.com, **
** providing  as  much  information  as  possible --  your **
** hardware,  kernel,  patches,  settings,  all reiserfsck **
** messages  (including version),  the reiserfsck logfile, **
** check  the  syslog file  for  any  related information. **
** If you would like advice on using this program, support **
** is available  for $25 at  www.namesys.com/support.html. **
*************************************************************

Will read-only check consistency of the filesystem on /dev/sdb1
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --check started at Mon Jul 26 13:14:03 2010
###########
Replaying journal..
Reiserfs journal '/dev/sdb1' in blocks [18..8211]: 0 transactions replayed
Checking internal tree..bad_directory_item: block 19300354: The directory item [9874 14345 0x1 DIR (3)] has a broken entry (4)
bad_leaf: block 19300354, item 8: The corrupted item found (9874 14345 0x1 DIR (3), len 1032, location 2088 entry count 27, fsck need 0, format old)
finished
Comparing bitmaps..finished
Fatal corruptions were found, Semantic pass skipped
1 found corruptions can be fixed only when running with --rebuild-tree
###########
reiserfsck finished at Mon Jul 26 13:14:28 2010
###########
=============================================================
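The read-only check above found corruption that can only be repaired with --rebuild-tree. Before running it, the warning banner reiserfsck prints for that mode recommends making a backup, and suggests dd_rescue for copying a failing drive to a healthy one. A minimal sketch of that precaution; /dev/sdb1 is the device from this log, while the target device /dev/sdc1 and the image path are hypothetical examples:

```shell
# Image the suspect partition before any destructive repair.
# /dev/sdc1 and /mnt/backup/sdb1.img are hypothetical -- adjust
# for your own system.

# Option 1: raw image to a file on a known-good filesystem
dd_rescue /dev/sdb1 /mnt/backup/sdb1.img

# Option 2: copy directly onto an equal-or-larger partition on a
# working drive, as the reiserfsck banner itself suggests
dd_rescue /dev/sdb1 /dev/sdc1

# Only once a copy exists, proceed with the rebuild (ideally on
# the copy, so the original data stays untouched):
fsck.reiserfs --rebuild-tree /dev/sdc1
```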

fsck.reiserfs --rebuild-tree /dev/sdb1
reiserfsck 3.6.19 (2003 www.namesys.com)

*************************************************************
** Do not  run  the  program  with  --rebuild-tree  unless **
** something is broken and MAKE A BACKUP  before using it. **
** If you have bad sectors on a drive  it is usually a bad **
** idea to continue using it. Then you probably should get **
** a working hard drive, copy the file system from the bad **
** drive  to the good one -- dd_rescue is  a good tool for **
** that -- and only then run this program.                 **
** If you are using the latest reiserfsprogs and  it fails **
** please  email bug reports to reiserfs-l...@namesys.com, **
** providing  as  much  information  as  possible --  your **
** hardware,  kernel,  patches,  settings,  all reiserfsck **
** messages  (including version),  the reiserfsck logfile, **
** check  the  syslog file  for  any  related information. **
** If you would like advice on using this program, support **
** is available  for $25 at  www.namesys.com/support.html. **
*************************************************************

Will rebuild the filesystem (/dev/sdb1) tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal..
Reiserfs journal '/dev/sdb1' in blocks [18..8211]: 0 transactions replayed
###########
reiserfsck --rebuild-tree started at Mon Jul 26 13:15:17 2010
###########

Pass 0:
####### Pass 0 #######
Loading on-disk bitmap .. ok, 3050304 blocks marked used
Skipping 8807 blocks (super block, journal, bitmaps) 3041497 blocks will be read
0%....20%....40%....60%...block 15076429: The number of items (4864) is incorrect, should be (1) - corrected
block 15076429: The free space (1800) is incorrect, should be (4048) - corrected
pass0: vpf-10110: block 15076429, item (0): Unknown item type found [117440512 246031 0x8060000 ??? (15)] - deleted
.80%....pass0: block 19300354, item 9874 14345 0x1 DIR (3), len 1032, location 2088 entry count 27, fsck need 0, format old: 2 entries were deleted
100%
2 directory entries were hashed with not set hash.
271513 directory entries were hashed with "r5" hash.
    "r5" hash is selected
Flushing..finished
    Read blocks (but not data blocks) 3041497
        Leaves among those 13081
            - leaves all contents of which could not be saved and deleted 4
        Objectids found 270948

Pass 1 (will try to insert 13077 leaves):
####### Pass 1 #######
Looking for allocable blocks .. finished
0%....20%....40%....60%....80%....pass1: block 19300354, item 8, entry 4: The entry "libpixbufloader-icns.so" of the [9874 14345 0x1 DIR (3)] has hash offset 387966336 not larger smaller than the previous one 445987968. The entry is deleted.
pass1: block 19300354, item 8, entry 4: The entry "libpixbufloader-wbmp.so" of the [9874 14345 0x1 DIR (3)] has hash offset 445987968 not larger smaller than the previous one 445987968. The entry is deleted.
pass1: block 19300354, item 8, entry 5: The entry "libpixbufloader-wbmp.so" of the [9874 14345 0x1 DIR (3)] has hash offset 445987968 not larger smaller than the previous one 1044000256. The entry is deleted.
pass1: block 19300354, item 8, entry 5: The entry "io-wmf.so" of the [9874 14345 0x1 DIR (3)] has hash offset 825379072 not larger smaller than the previous one 1044000256. The entry is deleted.
pass1: block 19300354, item 8, entry 5: The entry "libpixbufloader-jpeg.so" of the [9874 14345 0x1 DIR (3)] has hash offset 1044000256 not larger smaller than the previous one 1044000256. The entry is deleted.
100%
Flushing..finished
    13077 leaves read
        12918 inserted
        159 not inserted
####### Pass 2 #######

Pass 2:
0%....20%....40%....60%....80%....100%
Flushing..finished
    Leaves inserted item by item 159
Pass 3 (semantic):
####### Pass 3 #########
vpf-10680: The directory [9874 14345] has the wrong block count in the StatData (3) - corrected to (2)
vpf-10650: The directory [9874 14345] has the wrong size in the StatData (1032) - corrected to (768)
Flushing..finished
    Files found: 206748
    Directories found: 22971
    Symlinks found: 41122
    Others: 101
Pass 3a (looking for lost dir/files):
####### Pass 3a (lost+found pass) #########
Looking for lost directories:
Looking for lost files:
Flushing..finished
    Objects without names 6
    Files linked to /lost+found 6
Pass 4 - finished
Flushing..finished
Syncing..finished
###########
reiserfsck finished at Mon Jul 26 13:22:20 2010
###########
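Pass 3a above reports 6 objects without names, now linked into /lost+found. A reasonable follow-up, sketched below, is to re-run the read-only check and then mount the filesystem to inspect what was salvaged; the mountpoint /mnt/recovered is a hypothetical example:

```shell
# Confirm the rebuilt tree now passes the read-only consistency
# check (the same invocation as at the top of this log)
fsck.reiserfs --check /dev/sdb1

# Mount and inspect the files pass 3a relinked; entries in
# lost+found are typically named by object IDs rather than their
# original filenames, so they may need manual identification.
mount -t reiserfs /dev/sdb1 /mnt/recovered
ls -l /mnt/recovered/lost+found
```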
