Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
On Tue, 22 Aug 2006 19:46:34 -0700 Clay Barnes <[EMAIL PROTECTED]> wrote:
> Perhaps I mis-recall, but shouldn't delayed writes (or something along
> those lines) make the case where two files are being incrementally
> written rare?

If we did delayed allocation, yes. But we generally don't. (Exceptions: XFS, reiser4, ext4, ext2 prototype circa 2001.)
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
On 17:11 Tue 22 Aug , Andrew Morton wrote:
> I can see that the bigalloc code as-is is pretty sad, but this is a scary
> patch. It has the potential to cause people significant performance
> problems way, way ahead in the future.
>
> For example, suppose userspace is growing two files concurrently. It could
> be that the bigalloc code would cause one file's allocation cursor to
> repeatedly jump far away from the second. ie: a beneficial side-effect.
> Without bigalloc that side-effect is lost and the two files' blocks end up
> all intermingled.

Perhaps I mis-recall, but shouldn't delayed writes (or something along those lines) make the case where two files are being incrementally written rare? It seems that this case would only occur if two processes were writing two files in small chunks and calling fsync constantly (*cough* evolution column resizing bug *cough*), PLUS the two would have to have the same offset (or close) for the file writes.

It seems that the risk of fragmentation is a lesser danger than the full-system near lock-up caused by the old behaviour.

--Clay

> I don't know if that scenario is realistic, but I bet there are similar
> accidental oddities which can occur as a result of this change.
>
> But what are they?
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
Andrew Morton wrote:
> I can see that the bigalloc code as-is is pretty sad, but this is a scary
> patch. It has the potential to cause people significant performance
> problems way, way ahead in the future.
>
> For example, suppose userspace is growing two files concurrently. It could
> be that the bigalloc code would cause one file's allocation cursor to
> repeatedly jump far away from the second. ie: a beneficial side-effect.
> Without bigalloc that side-effect is lost and the two files blocks end up
> all intermingled.
>
> I don't know if that scenario is realistic, but I bet there are similar
> accidental oddities which can occur as a result of this change.
>
> But what are they?

Bigalloc doesn't cause that effect one way or the other. You'll end up with blocks still intermingled, just in 32-block[1] chunks. It doesn't throw the cursor way out, just until the next 32-block free window. Another thread writing will do the same thing, and the blocks can end up getting intermingled in the same manner on a different part of the disk. The behavior you're describing can only be caused by bad hinting: two files that are placed too close to each other.

This patch changes the part of the allocator that is *only* responsible for finding the free bits. Where it should start looking for them is a decision made earlier, in determine_search_start(). This patch just reverts the change that Chris and I submitted ages ago as part of a number of block allocator enhancements, not as a bug fix. I think I traced it to the 2.5 days, but I can't find that particular email. Neither of us anticipated the problem that MythTV users are hitting with it. Reverting it just makes that part of the allocator behave similarly to the ext[23] allocator, where it just collects available blocks from a starting point.
For everyday use, I don't think performance should be terribly affected, and it definitely fixes the pathological case that the MythTV users were seeing.

-Jeff

[1]: For simplicity, I'll continue to reference 32 blocks as the chunk size. In reality, it can be anything up to 32 blocks.

--
Jeff Mahoney
SUSE Labs
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
I can see that the bigalloc code as-is is pretty sad, but this is a scary patch. It has the potential to cause people significant performance problems way, way ahead in the future.

For example, suppose userspace is growing two files concurrently. It could be that the bigalloc code would cause one file's allocation cursor to repeatedly jump far away from the second. ie: a beneficial side-effect. Without bigalloc that side-effect is lost and the two files' blocks end up all intermingled.

I don't know if that scenario is realistic, but I bet there are similar accidental oddities which can occur as a result of this change.

But what are they?
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
Jeff Mahoney wrote:
> Also, I think the bigalloc behavior just ultimately ends up introducing
> even more fragmentation on an already fragmented file system. It'll keep
> contiguous chunks together, but those chunks can end up being spread all
> over the disk.
>
> -Jeff

Yes, and almost as important, it makes it difficult to understand and predict the allocator, which means other optimizations become harder to do.
Re: Reiser4 stress test.
On Tuesday 22 August 2006 01:23, Hans Reiser wrote:
> Thanks Andrew, please be patient and persistent with us at this time, as
> one programmer is on vacation, and the other is only able to work a few
> hours a day due to an illness.

No problem. I'll post what I find to the list; the posts will still be there when you have time to devote to chasing bugs. They're not urgent problems for me; I just happen to have the time and interest to devote myself to solving them right now, and it appears I'll be able to muddle through the code okay.

Andrew
Re: Reiser4 stress test.
On Tuesday 22 August 2006 01:23, Hans Reiser wrote:
> Thanks Andrew, please be patient and persistent with us at this time, as
> one programmer is on vacation, and the other is only able to work a few
> hours a day due to an illness.

No problem. I'll post what I find to the list; the posts will still be there when you have the time to devote to solving bugs. The delay will do me no harm whatsoever and I may even get to the bottom of one or two bugs in the meantime. (I happen to have time to spare at the moment.)

Andrew Wade
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
David Masover wrote:
> Jeff Mahoney wrote:
>> The problem is that finding the window isn't really a direct function of
>> free space, it's a function of fragmentation. You could have a 50% full
>> file system that still can't find a 32 block window by having every
>> other block used. I know it's an extremely unlikely case, but it
>> demonstrates the point perfectly.
>
> Maybe, but it's still not a counterpoint. No matter how fragmented a
> filesystem is, freeing space can open up contiguous space, whereas if
> space is not freed, you won't open up contiguous space.
>
> Thus, if your FS is 50% full and 100% fragmented, then you wait till
> space is freed, because if nothing happens, or if more space is filled
> in, you'll have the same problem at 60% as you did at 50%. If,
> however, you're at 60% full, and 10% of the space is freed, then it's
> fairly unlikely that you still don't have contiguous space, and it's
> worth it to scan once more at 50%, and again if it then drops to 40%.
>
> So, if your FS is 90% full and space is being freed, I'd think it would
> be worth it to scan again at 80%, 70%, and so on. I'd also imagine it
> would do little or nothing to constantly monitor an FS that stays mostly
> full -- maybe give it a certain amount of time, but if we're repacking
> anyway, just wait for a repacker run. It seems very unlikely that
> between repacker runs, activity between 86% and 94% would open up
> contiguous space.
>
> It's still not a direct function of freed space (as opposed to free
> space), but it starts to look better.
>
> I'm not endorsing one way or the other without benchmarks, though.

I'd like to see benchmarks too. The goal is obviously to minimize seeks, but my feeling is that blocks that aren't entirely contiguous but are located in close enough proximity to each other that they are all in the drive's cache anyway will perform better than 128k chunks spread all over the disk.
Your solution is one possible approach, but I'd rather kill off bigalloc for reasons described below.

Also, for clarification, the 128k I keep quoting is just what reiserfs_file_write() breaks larger writes into. It seems MythTV writes in large chunks (go figure, it's a streaming media application ;), so they get split up. For smaller writes, they'll go to the allocator with a request of that many blocks. reiserfs_{writepage,prepare_write,commit_write} all operate on one page (and so one block, usually) at a time. In the end, finding a contiguous window for all the blocks in a write is an advantageous special case, but one that can be found naturally when such a window exists anyway.

>>> Hmm. Ok, I don't understand how this works, so I'll shut up.
>>
>> If the space after the end of the file has 32 or more blocks free, even
>> without the bigalloc behavior, those blocks will be used.
>
> For what behavior -- appending?

For any allocation after the first one. The allocator chooses a starting position based on the last block it knows about before the position of the write. This applies for both appends and sparse files.

>> Also, I think the bigalloc behavior just ultimately ends up introducing
>> even more fragmentation on an already fragmented file system. It'll keep
>> contiguous chunks together, but those chunks can end up being spread all
>> over the disk.
>
> This sounds like the NTFS strategy, which was basically to allow all
> hell to break loose -- above a certain chunk size. Keep chunks of a
> certain size contiguous, and you limit the number of seeks by quite a lot.

The bigalloc behavior ends up reducing local fragmentation at the expense of global fragmentation. The free space of the test file system that prompted this patch was *loaded* with 31-block chunks. All of these were skipped until we backed off and searched for single block chunks - or worse, ignored the close chunks in favor of a contiguous chunk elsewhere.
I don't think this is ideal behavior at all. Certainly it's better to have a contiguous chunk of 63 blocks and one block elsewhere. That lone block might only be a few blocks away and in the disk's cache already, but bigalloc doesn't take that into account either. The start of the allocation could be at the end of a bitmap group, leaving empty space where we naturally should have just grown the file.

Without bigalloc, we still end up getting as many blocks together as we can in a particular bitmap before moving on to another one. It will group as many free blocks together as it can, and then try to find the next window. Bigalloc just meant that two windows of 16 blocks, a block apart, wasn't good enough.

Once it's time to move on to another bitmap, the skip_busy behavior (enabled by default) will search for bitmap groups that are at least 10% free until the file system is 95% full[1]. We're already seeking anyway, so this gives us the best chance of finding a group with room to grow. It also l
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
Jeff Mahoney wrote:
> David Masover wrote:
>> Jeff Mahoney wrote:
>>> When a file system becomes fragmented (using MythTV, for example), the
>>> bigalloc window searching ends up causing huge performance problems. In
>>> a file system presented by a user experiencing this bug, the file system
>>> was 90% free, but no 32-block free windows existed on the entire file
>>> system. This causes the allocator to scan the entire file system for
>>> each 128k write before backing down to searching for individual blocks.
>>
>> Question: Would it be better to take that performance hit once, then
>> cache the result for awhile? If we can't find enough consecutive space,
>> such space isn't likely to appear until a lot of space is freed or a
>> repacker is run.
>
> The problem is that finding the window isn't really a direct function of
> free space, it's a function of fragmentation. You could have a 50% full
> file system that still can't find a 32 block window by having every
> other block used. I know it's an extremely unlikely case, but it
> demonstrates the point perfectly.

Maybe, but it's still not a counterpoint. No matter how fragmented a filesystem is, freeing space can open up contiguous space, whereas if space is not freed, you won't open up contiguous space.

Thus, if your FS is 50% full and 100% fragmented, then you wait till space is freed, because if nothing happens, or if more space is filled in, you'll have the same problem at 60% as you did at 50%. If, however, you're at 60% full, and 10% of the space is freed, then it's fairly unlikely that you still don't have contiguous space, and it's worth it to scan once more at 50%, and again if it then drops to 40%.

So, if your FS is 90% full and space is being freed, I'd think it would be worth it to scan again at 80%, 70%, and so on. I'd also imagine it would do little or nothing to constantly monitor an FS that stays mostly full -- maybe give it a certain amount of time, but if we're repacking anyway, just wait for a repacker run. It seems very unlikely that between repacker runs, activity between 86% and 94% would open up contiguous space.

It's still not a direct function of freed space (as opposed to free space), but it starts to look better. I'm not endorsing one way or the other without benchmarks, though.

>>> In the end, finding a contiguous window for all the blocks in a write is
>>> an advantageous special case, but one that can be found naturally when
>>> such a window exists anyway.
>>
>> Hmm. Ok, I don't understand how this works, so I'll shut up.
>
> If the space after the end of the file has 32 or more blocks free, even
> without the bigalloc behavior, those blocks will be used.

For what behavior -- appending?

> Also, I think the bigalloc behavior just ultimately ends up introducing
> even more fragmentation on an already fragmented file system. It'll keep
> contiguous chunks together, but those chunks can end up being spread all
> over the disk.

This sounds like the NTFS strategy, which was basically to allow all hell to break loose -- above a certain chunk size. Keep chunks of a certain size contiguous, and you limit the number of seeks by quite a lot.
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
David Masover wrote:
> Jeff Mahoney wrote:
>> When a file system becomes fragmented (using MythTV, for example), the
>> bigalloc window searching ends up causing huge performance problems. In
>> a file system presented by a user experiencing this bug, the file system
>> was 90% free, but no 32-block free windows existed on the entire file
>> system. This causes the allocator to scan the entire file system for
>> each 128k write before backing down to searching for individual blocks.
>
> Question: Would it be better to take that performance hit once, then
> cache the result for awhile? If we can't find enough consecutive space,
> such space isn't likely to appear until a lot of space is freed or a
> repacker is run.

The problem is that finding the window isn't really a direct function of free space, it's a function of fragmentation. You could have a 50% full file system that still can't find a 32 block window by having every other block used. I know it's an extremely unlikely case, but it demonstrates the point perfectly.

>> In the end, finding a contiguous window for all the blocks in a write is
>> an advantageous special case, but one that can be found naturally when
>> such a window exists anyway.
>
> Hmm. Ok, I don't understand how this works, so I'll shut up.

If the space after the end of the file has 32 or more blocks free, even without the bigalloc behavior, those blocks will be used.

Also, I think the bigalloc behavior just ultimately ends up introducing even more fragmentation on an already fragmented file system. It'll keep contiguous chunks together, but those chunks can end up being spread all over the disk.

-Jeff

--
Jeff Mahoney
SUSE Labs
Re: problem with reiser3
Marcos Dione wrote:
> On Mon, Aug 21, 2006 at 08:23:30PM -0500, David Masover wrote:
>>>> it would be better to create a backup on a spare bigger partition
>>>> using dd_rescue (pad not recoverable zones with zeroes), then run
>>>> fsck on the created image.
>>> unluckily I can't. it's a 160 GiB partition and I don't have spare
>>> space.
>> How much spare space do you have? You may be able to do some tricks
>> with dm_snapshot...
> right now, I have 45 MiB of space in my spare disk. I *could* (should?)
> make more space, but can't guarantee anything.

That won't be enough. Worst case, decide whether the data on that 160 gig partition is worth buying a cheap 200 or 300 gig drive for this backup.
Re: [PATCH] reiserfs: eliminate minimum window size for bitmap searching
Jeff Mahoney wrote:
> When a file system becomes fragmented (using MythTV, for example), the
> bigalloc window searching ends up causing huge performance problems. In
> a file system presented by a user experiencing this bug, the file system
> was 90% free, but no 32-block free windows existed on the entire file
> system. This causes the allocator to scan the entire file system for
> each 128k write before backing down to searching for individual blocks.

Question: Would it be better to take that performance hit once, then cache the result for awhile? If we can't find enough consecutive space, such space isn't likely to appear until a lot of space is freed or a repacker is run.

> In the end, finding a contiguous window for all the blocks in a write is
> an advantageous special case, but one that can be found naturally when
> such a window exists anyway.

Hmm. Ok, I don't understand how this works, so I'll shut up.
[PATCH] reiserfs: eliminate minimum window size for bitmap searching
When a file system becomes fragmented (using MythTV, for example), the bigalloc window searching ends up causing huge performance problems. In a file system presented by a user experiencing this bug, the file system was 90% free, but no 32-block free windows existed on the entire file system. This causes the allocator to scan the entire file system for each 128k write before backing down to searching for individual blocks.

In the end, finding a contiguous window for all the blocks in a write is an advantageous special case, but one that can be found naturally when such a window exists anyway.

This patch removes the bigalloc window searching, and has been proven to fix the test case described above.

Signed-off-by: Jeff Mahoney <[EMAIL PROTECTED]>

diff -ruNp linux-2.6.18-rc4.orig/fs/reiserfs/bitmap.c linux-2.6.18-rc4.orig.devel/fs/reiserfs/bitmap.c
--- linux-2.6.18-rc4.orig/fs/reiserfs/bitmap.c	2006-08-22 09:49:45.0 -0400
+++ linux-2.6.18-rc4.orig.devel/fs/reiserfs/bitmap.c	2006-08-22 10:19:35.0 -0400
@@ -1019,7 +1019,6 @@ static inline int blocknrs_and_prealloc_
 	b_blocknr_t finish = SB_BLOCK_COUNT(s) - 1;
 	int passno = 0;
 	int nr_allocated = 0;
-	int bigalloc = 0;
 
 	determine_prealloc_size(hint);
 	if (!hint->formatted_node) {
@@ -1046,28 +1045,9 @@ static inline int blocknrs_and_prealloc_
 			hint->preallocate = hint->prealloc_size = 0;
 		}
 		/* for unformatted nodes, force large allocations */
-		bigalloc = amount_needed;
 	}
 
 	do {
-		/* in bigalloc mode, nr_allocated should stay zero until
-		 * the entire allocation is filled
-		 */
-		if (unlikely(bigalloc && nr_allocated)) {
-			reiserfs_warning(s, "bigalloc is %d, nr_allocated %d\n",
-					 bigalloc, nr_allocated);
-			/* reset things to a sane value */
-			bigalloc = amount_needed - nr_allocated;
-		}
-		/*
-		 * try pass 0 and pass 1 looking for a nice big
-		 * contiguous allocation. Then reset and look
-		 * for anything you can find.
-		 */
-		if (passno == 2 && bigalloc) {
-			passno = 0;
-			bigalloc = 0;
-		}
 		switch (passno++) {
 		case 0:	/* Search from hint->search_start to end of disk */
 			start = hint->search_start;
@@ -1105,8 +1085,7 @@ static inline int blocknrs_and_prealloc_
 						  new_blocknrs + nr_allocated,
 						  start, finish,
-						  bigalloc ? bigalloc : 1,
+						  1,
 						  amount_needed - nr_allocated,
 						  hint->

--
Jeff Mahoney
SUSE Labs
Re: reiser4-2.6.18-rc2-mm1: possible circular locking dependency detected in txn_end
Hello,

On 12 August 2006 17:26, Laurent Riffard wrote:
> On 03.08.2006 17:07, Laurent Riffard wrote:
>> On 03.08.2006 08:09, Alexander Zarochentsev wrote:
>>> On Tuesday 01 August 2006 01:29, Laurent Riffard wrote:
>>>> On 31.07.2006 21:55, Vladimir V. Saveliev wrote:
>>>>> Hello
>>>>>
>>>>> What kind of load did you run on reiser4 at that time?
>>>>
>>>> I just formatted a new 2GB Reiser4 FS, then I moved a whole
>>>> ccache cache tree to this new FS (cache size was about 20~30
>>>> Mbytes). Something like:
>>>>
>>>> # mkfs.reiser4 /dev/vglinux1/ccache
>>>> # mount -tauto -onoatime /dev/vglinux1/ccache /mnt/disk
>>>> # mv ~laurent/.ccache/* /mnt/disk/
>>>
>>> I was not able to reproduce it. Can you please try the following
>>> patch?
>>>
>>> lock validator friendly locking of new atom in
>>> atom_begin_and_assign_to_txnh and locking of two atoms.
>>>
>>> Signed-off-by: Alexander Zarochentsev <[EMAIL PROTECTED]>
>>> ---
>>> fs/reiser4/txnmgr.c | 14 --
>>> fs/reiser4/txnmgr.h | 15 +++
>>> 2 files changed, 23 insertions(+), 6 deletions(-)
>> [patch snipped]
>
> I tried this patch: it's slow as hell (CPU is ~100% system) and it

Overhead of locking dependency checks? Also, disabling CONFIG_REISER4_DEBUG should help to reduce CPU usage.

> panics when syncing...

Please apply another patch:

lock validator friendly locking of new atom in atom_begin_and_assign_to_txnh and locking of two atoms.
Signed-off-by: Alexander Zarochentsev <[EMAIL PROTECTED]>
---
 fs/reiser4/txnmgr.c | 14 --
 fs/reiser4/txnmgr.h | 15 +++
 2 files changed, 23 insertions(+), 6 deletions(-)

--- linux-2.6-git.orig/fs/reiser4/txnmgr.c
+++ linux-2.6-git/fs/reiser4/txnmgr.c
@@ -397,7 +397,7 @@ static void atom_init(txn_atom * atom)
 	INIT_LIST_HEAD(ATOM_OVRWR_LIST(atom));
 	INIT_LIST_HEAD(ATOM_WB_LIST(atom));
 	INIT_LIST_HEAD(&atom->inodes);
-	spin_lock_init(&atom->alock);
+	spin_lock_init(&(atom->alock));
 	/* list of transaction handles */
 	INIT_LIST_HEAD(&atom->txnh_list);
 	/* link to transaction manager's list of atoms */
@@ -732,10 +732,12 @@ static int atom_begin_and_assign_to_txnh
 	assert("jmacd-17", atom_isclean(atom));
 
 	/*
-	 * do not use spin_lock_atom because we have broken lock ordering here
-	 * which is ok, as long as @atom is new and inaccessible for others.
+	 * lock ordering is broken here. It is ok, as long as @atom is new
+	 * and inaccessible for others. We can't use spin_lock_atom or
+	 * spin_lock(&atom->alock) because they care about locking
+	 * dependencies. spin_trylock_lock doesn't.
 	 */
-	spin_lock(&(atom->alock));
+	check_me("", spin_trylock_atom(atom));
 
 	/* add atom to the end of transaction manager's list of atoms */
 	list_add_tail(&atom->atom_link, &mgr->atoms_list);
@@ -751,7 +753,7 @@ static int atom_begin_and_assign_to_txnh
 	atom->super = reiser4_get_current_sb();
 	capture_assign_txnh_nolock(atom, txnh);
 
-	spin_unlock(&(atom->alock));
+	spin_unlock_atom(atom);
 	spin_unlock_txnh(txnh);
 
 	return -E_REPEAT;
@@ -2112,11 +2114,11 @@ static void fuse_not_fused_lock_owners(t
 	atomic_inc(&atomf->refcount);
 	spin_unlock_txnh(ctx->trans);
 	if (atomf > atomh) {
-		spin_lock_atom(atomf);
+		spin_lock_atom_nested(atomf);
 	} else {
 		spin_unlock_atom(atomh);
 		spin_lock_atom(atomf);
-		spin_lock_atom(atomh);
+		spin_lock_atom_nested(atomh);
 	}
 	if (atomh == atomf || !atom_isopen(atomh) || !atom_isopen(atomf)) {
 		release_two_atoms(atomf, atomh);
@@ -2794,10 +2796,10 @@ static void lock_two_atoms(txn_atom * on
 	/* lock the atom with lesser address first */
 	if (one < two) {
 		spin_lock_atom(one);
-		spin_lock_atom(two);
+		spin_lock_atom_nested(two);
 	} else {
 		spin_lock_atom(two);
-		spin_lock_atom(one);
+		spin_lock_atom_nested(one);
 	}
 }
--- linux-2.6-git.orig/fs/reiser4/txnmgr.h
+++ linux-2.6-git/fs/reiser4/txnmgr.h
@@ -503,6 +503,7 @@ static inline void spin_lock_atom(txn_at
 {
 	/* check that spinlocks of lower priorities are not held */
 	assert("", (LOCK_CNT_NIL(spin_locked_txnh) &&
+		    LOCK_CNT_NIL(spin_locked_atom) &&
 		    LOCK_CNT_NIL(spin_locked_jnode) &&
 		    LOCK_CNT_NIL(spin_locked_zlock) &&
 		    LOCK_CNT_NIL(rw_locked_dk) &&
@@ -514,6 +515,20 @@ static inline void spin_lock_atom(txn_at
 	LOCK_CNT_INC(spin_locked);
 }
 
+static inline void spin_lock_atom_nested(txn_atom *atom)
+{
+	assert("
Re: reiserfs v3.6 problems
mack ragan wrote:
> Hi,
>
> My root partition will not mount... strange problems have built up to
> this, like not being able to read files as root... and I get the
> following output from "debugreiserfs /dev/hdb2":
>
> debugreiserfs 3.6.19 (2003 www.namesys.com)
>
> Filesystem state: consistency is not checked after last mounting
>
> Reiserfs super block in block 16 on 0x342 of format 3.6 with standard journal
> Count of blocks on the device: 6359728
> Number of bitmaps: 195
> Blocksize: 4096
> Free blocks (count of blocks - used [journal, bitmaps, data, reserved] blocks): 6359728
> Root block: 0
> Filesystem is clean
> Tree height: 0
> Hash function used to sort names: "r5"
> Objectid map size 968, max 972
> Journal parameters:
> 	Device [0x0]
> 	Magic [0x482c3fa]
> 	Size 8193 blocks (including 1 for journal header) (first block 18)
> 	Max transaction length 1024 blocks
> 	Max batch size 900 blocks
> 	Max commit age 30
> Blocks reserved by journal: 0
> Fs state field: 0x3:
> 	FATAL corruptions exist.
> 	some corruptions exist.
> sb_version: 2
> inode generation number: 49892980
> UUID: 3eeff8ac-8c07-4a98-b126-7bcebd4480c1
> LABEL:
> Set flags in SB:
> 	ATTRIBUTES CLEAN
>
> I have run "reiserfsck --rebuild-sb" and "reiserfsck --rebuild-tree"
> based on some forum/message boards regarding similar symptoms/issues,
> but still no good. Is there any way I can recover? I'm not very
> experienced with reiserfs, so any help is very appreciated. Thanks!

It would be better at first to have a backup on a bigger spare partition using dd. Then try reiserfsck --rebuild-tree on the created image and send the output.
Re: problem with reiser3
Marcos Dione wrote:
> On Tue, Aug 22, 2006 at 03:38:40AM +0400, Edward Shishkin wrote:
>> Marcos Dione wrote:
>>> hi all. I'm having problems checking a reiser3 filesystem. reiserfsck
>>> says:
>>>
>>> bread: Cannot read the block (27887610): (Input/output error).
>>>
>>> I passed badblocks over the partition and found 61 bad sectors (which
>>> I know is bad),
>>
>> did you specify reiserfs block size (via option -b) when creating a
>> list of bad blocks?
>
> dunno which is the block size. tried to figure it out, but (from dmesg
> trying to mount the partition):

debugreiserfs /dev/sda1, then look at Blocksize in the output.

> ReiserFS: sda1: journal params: device sda1, size 8192, journal first
> block 18, max trans len 1024, max batch 900, max commit age 30, max
> trans age 30
>
> and:
>
> $ sudo reiserfstune /dev/sda1
> reiserfstune: Filesystem looks not cleanly umounted, check the
> consistency first.
>
> I assume it's the 8192 the kernel reports...

>> it would be better to create a backup on a spare bigger partition
>> using dd_rescue (pad not recoverable zones with zeroes), then run fsck
>> on the created image.
>
> unluckily I can't. it's a 160 GiB partition and I don't have spare
> space.