Re: [reiserfs-list] copying partition to partition, sector by sector, live
On Mon, Apr 22, 2002 at 11:47:55PM +0100, Matthew Toseland wrote:
| On Sun, Apr 21, 2002 at 08:39:55PM -0500, Phil Howard wrote:
| | On Mon, Apr 22, 2002 at 02:07:33AM +0100, Matthew Toseland wrote:
| | | Ummm, LVM snapshots? (man lvcreate).
| |
| | No.  Nothing to do with LVM.
|
| I was suggesting a solution.  Your problem is that reiserfs's metadata is
| so dynamic that if you copy the partition while it is active, you end up
| with metadata loss, which has to be fixed by reiserfsck.  A possible
| solution is to get a consistent snapshot.  To do this, do:
|
| lvcreate -n snapshotname -L 500M -v /dev/vgname/partition name
| dd if=/dev/vgname/snapshotname of=/usr/local/temp/snapshot.img bs=1M
| lvremove -f /dev/vgname/snapshotname
|
| (1) 500M could be anything; it is the amount of space needed to log all
| changes to the partition while the dd is going on; you can extend it
| later, but once it fills up completely, you're scr00d; presumably the dd
| will return an error

Sounds like journaling at the sector level.  How does all that change
get replayed after the snapshot is done?

I'm starting to think it might be better to go back to using rsync
on mounted filesystems.

--
| Phil Howard - KA9WGN | Dallas     | http://linuxhomepage.com/ |
| [EMAIL PROTECTED]    | Texas, USA | http://phil.ipal.org/     |
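The note above that "500M could be anything" invites a back-of-envelope
estimate. A hypothetical sizing sketch (every number below is an assumption
for illustration, not a measurement from this thread): the snapshot must be
able to hold every distinct block the live filesystem dirties while the dd
is running.

```shell
# All figures are hypothetical, for illustration only.
PART_MB=20480          # size of the partition being copied (20 GB)
DD_RATE_MB_S=20        # sustained dd read throughput
DIRTY_RATE_MB_S=2      # rate at which the live FS dirties *distinct* blocks
COPY_SECONDS=$((PART_MB / DD_RATE_MB_S))
SNAP_MB=$((DIRTY_RATE_MB_S * COPY_SECONDS))
echo "copy takes ${COPY_SECONDS}s; worst-case snapshot usage: ${SNAP_MB} MB"
```

With these assumed numbers the 500M default would overflow well before the
copy finishes, which is exactly the "once it fills up completely, you're
scr00d" failure mode described above.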
Re: [reiserfs-list] copying partition to partition, sector by sector, live
On Mon, Apr 22, 2002 at 02:58:34PM -0400, Chris Mason wrote:
| On Sun, 2002-04-21 at 21:39, Phil Howard wrote:
| | On Mon, Apr 22, 2002 at 02:07:33AM +0100, Matthew Toseland wrote:
| | | Ummm, LVM snapshots? (man lvcreate).
| |
| | No.  Nothing to do with LVM.
|
| Doing it safely will require something like lvm or evms snapshots.  You
| could do the sector by sector copy and then run reiserfsck
| --rebuild-tree.  The latest versions of reiserfsprogs are faster; the
| speed relative to a search for updated files will depend on your data
| set.
|
| More importantly, you just don't get a consistent copy, regardless of the
| FS you choose.  I wouldn't consider a sector by sector copy of a mounted
| FS a valid backup of any type of filesystem, especially not a tree based
| one.

I've never had a problem doing this with ext2 filesystems.  By comparison,
I have had ext2 filesystems totally corrupted by just a power reset.

But I'm starting to think that with reiserfs I need to go back to rsync
as the backup mechanism.  Some memory leak and stalling problems with
rsync seem to be fixed now.

--
| Phil Howard - KA9WGN | Dallas     | http://linuxhomepage.com/ |
| [EMAIL PROTECTED]    | Texas, USA | http://phil.ipal.org/     |
Re: [reiserfs-list] copying partition to partition, sector by sector, live
On Mon, 2002-04-22 at 19:24, Phil Howard wrote:
| On Mon, Apr 22, 2002 at 11:47:55PM +0100, Matthew Toseland wrote:
| | On Sun, Apr 21, 2002 at 08:39:55PM -0500, Phil Howard wrote:
| | | On Mon, Apr 22, 2002 at 02:07:33AM +0100, Matthew Toseland wrote:
| | | | Ummm, LVM snapshots? (man lvcreate).
| | |
| | | No.  Nothing to do with LVM.
| |
| | I was suggesting a solution.  Your problem is that reiserfs's metadata
| | is so dynamic that if you copy the partition while it is active, you
| | end up with metadata loss, which has to be fixed by reiserfsck.  A
| | possible solution is to get a consistent snapshot.  To do this, do:
| |
| | lvcreate -n snapshotname -L 500M -v /dev/vgname/partition name
| | dd if=/dev/vgname/snapshotname of=/usr/local/temp/snapshot.img bs=1M
| | lvremove -f /dev/vgname/snapshotname
| |
| | (1) 500M could be anything; it is the amount of space needed to log all
| | changes to the partition while the dd is going on; you can extend it
| | later, but once it fills up completely, you're scr00d; presumably the
| | dd will return an error
|
| Sounds like journaling at the sector level.  How does all that change
| get replayed after the snapshot is done?

The snapshots use a simple copy-on-write setup.  Before changing a block
on the source, the original is copied to the snapshot.  This allows for
very fast snapshot creation, and a moderate runtime cost doing the copies.
When you're done with the backup, you can just delete the snapshot without
affecting the (now modified) original.

| I'm starting to think it might be better to go back to using rsync on
| mounted filesystems.

It might.  Snapshots are very useful, but are best for databases and other
setups where you need to freeze the FS at a specific point in time, and
when you need an absolute minimum of application downtime.

-chris
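The copy-on-write behavior Chris describes can be sketched with ordinary
files standing in for disk blocks (a toy model, not how LVM is actually
implemented: `cow_write` and `snap_read` are hypothetical helper names).
Before a block on the origin changes, its old contents are preserved in
the snapshot; reads of the snapshot fall through to the origin for blocks
that never changed.

```shell
set -e
work=$(mktemp -d)
mkdir "$work/origin" "$work/snap"

# The "origin" volume: four blocks of data.
for i in 0 1 2 3; do echo "block$i-v1" > "$work/origin/$i"; done

# cow_write: before modifying a block, preserve the original in the snapshot.
cow_write() {
    blk=$1; data=$2
    [ -e "$work/snap/$blk" ] || cp "$work/origin/$blk" "$work/snap/$blk"
    echo "$data" > "$work/origin/$blk"
}

# snap_read: the snapshot returns the preserved copy if the block changed
# after snapshot creation, otherwise it falls through to the origin.
snap_read() {
    blk=$1
    if [ -e "$work/snap/$blk" ]; then
        cat "$work/snap/$blk"
    else
        cat "$work/origin/$blk"
    fi
}

cow_write 1 "block1-v2"   # origin changes while the "backup" is in progress
snap_read 1               # frozen image: still sees block1-v1
snap_read 2               # never copied: read falls through to the origin
```

Deleting the snapshot (the `lvremove` step) is then just discarding the
preserved copies; the origin keeps its new contents, which is why snapshot
removal is cheap and never touches the origin.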
Re: [reiserfs-list] copying partition to partition, sector by sector, live
From: Phil Howard [EMAIL PROTECTED]
| On Mon, Apr 22, 2002 at 02:58:34PM -0400, Chris Mason wrote:
| | Doing it safely will require something like lvm or evms snapshots.  You
| | could do the sector by sector copy and then run reiserfsck
| | --rebuild-tree.  The latest versions of reiserfsprogs are faster; the
| | speed relative to a search for updated files will depend on your data
| | set.
| |
| | More importantly, you just don't get a consistent copy, regardless of
| | the FS you choose.  I wouldn't consider a sector by sector copy of a
| | mounted FS a valid backup of any type of filesystem, especially not a
| | tree based one.
|
| But I'm starting to think that with reiserfs I need to go back to rsync
| as the backup mechanism.  Some memory leak and stalling problems with
| rsync seem to be fixed now.

More significantly, there was a security problem fixed in January.

--
 |\__/|\__/|\__  --= 8-) EHM =--  __/|\__/|\__/|
  \||  | [EMAIL PROTECTED]  PGP 8881EF59 |  ||/
   \ \ |  __| -O #include <stddisclaimer.h> O- |__ | / /
    \___\_|/ 82 04 A1 3C C7 B1 37 2A E3 6E 84 DA 97 4C 40 E6 \|_/___/
Re: [reiserfs-list] ReiserFS BUG.
Hi,

| So you ran reiserfsck --rebuild-tree, which finished properly, then
| mounted the fs, got a kernel oops, and then reiserfsck --fix-fixable
| aborted.  Right?  Could you provide us with the metadata of your
| partition, extracted with:
|
| debugreiserfs -p /dev/xxx | bzip2 -c > xxx.bz2
|
| and put it somewhere on ftp?  We would like to test reiserfsck and the
| kernel for such cases.

This is going to take a while (10-12 hours or so).  But so far I have
gotten about 30 of these lines:

BROKEN BLOCK HEAD 34531780 left 158686879, 5568 /sec
BROKEN BLOCK HEAD 34550672 left 158666522, 5568 /sec

I have put the debug.bz2 at http://www.tnonline.net/debug.bz2

Debugreiserfs finished with:

Packed 210432 blocks:
        compessed 201522
        full blocks 8910
        leaves with broken block head 145
        corrupted leaves 37
        internals 1294
        descriptors 0
        data packed with ratio 0.07

//anders
Re: [reiserfs-list] copying partition to partition, sector by sector, live
On Mon, 22 Apr 2002 23:47:55 BST, Matthew Toseland said:

| lvcreate -n snapshotname -L 500M -v /dev/vgname/partition name

You missed the -s flag.  From 'man lvcreate' (LVM 1.0.3):

       -s, --snapshot
              Create a snapshot logical volume (or snapshot) for an
              existing, so called original logical volume (or origin).
              Snapshots provide a 'frozen image' of the contents of the
              origin while the origin can still be updated.  They enable
              consistent backups and online recovery of removed/over-
              written data/files.  The snapshot does not need the same
              amount of storage the origin has.  In a typical scenario,
              15-20% might be enough.  In case the snapshot runs out of
              storage, use lvextend(8) to grow it.  Shrinking a snapshot
              is supported by lvreduce(8) as well.  Run lvdisplay(8) on
              the snapshot in order to check how much data is allocated
              to it.

| dd if=/dev/vgname/snapshotname of=/usr/local/temp/snapshot.img bs=1M
| lvremove -f /dev/vgname/snapshotname

The rest of this looks OK (although I've *NOT* tried it myself).  I'm not
sure what the granularity of the snapshot is - whether you need to
allocate space for each block that's modified, or each 4M segment, or
what...

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
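The granularity question matters because a copy-on-write snapshot copies
whole chunks, not individual dirtied bytes. A hypothetical illustration
(the 64 KB chunk size and the write pattern are assumptions, not LVM's
actual figures): four small scattered writes can consume anywhere from one
chunk to four, depending on which chunks they land in.

```shell
# Hypothetical: count distinct COW chunks touched by scattered 1 KB writes.
CHUNK_KB=64                     # assumed copy-on-write chunk size
DIRTY_BLOCKS="10 500 501 9000"  # 1 KB offsets of four writes to the origin
used_chunks=""
for b in $DIRTY_BLOCKS; do
    c=$((b / CHUNK_KB))         # which chunk this write falls into
    case " $used_chunks " in
        *" $c "*) ;;                           # chunk already copied
        *) used_chunks="$used_chunks $c" ;;    # first write: chunk is copied
    esac
done
n=$(echo $used_chunks | wc -w)
echo "$n chunks copied -> $((n * CHUNK_KB)) KB of snapshot space"
```

Here blocks 500 and 501 share a chunk, so 4 KB of actual writes costs three
64 KB chunks of snapshot space. The coarser the granularity, the faster a
"500M" snapshot fills up under scattered small writes.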
Re: [reiserfs-list] kernel loop with lseek + LFS on reiser partition.
Hello!

On Mon, Apr 22, 2002 at 07:10:16PM +0200, Dieter Nützel wrote:

I think this is because of the expanding-truncate patch.  So the real bug
is that this process cannot get interrupted until it is finished, which
opens a window for resource-eating.

| /database/db1> time ~/Entwicklung/ReiserFS/testprg
| 0.000u 545.310s 9:12.69 98.6%  0+0k 0+0io 124pf+0w

9 minutes, as expected from your HW config.

| -rw-r--r--  1 nuetzel  users  137438953485 Apr 22 17:22 seek.tmp
| /dev/sdb5  859412166264693148  20%  /database/db1
|
| Isn't this only a bad glibc speed test?

Huh?  glibc has nothing to do with this result.

| With my latest latencytest0.42-png tests I found that there are one or
| two locks remaining in the VFS or ReiserFS code (preemption and/or
| lock-break could be the culprit, too).

We have already identified a problem spot and are testing the patch before
releasing it to the public.

Bye,
    Oleg
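The 137438953485-byte seek.tmp above is a sparse file: the test program
seeks far past end-of-file and writes, so the apparent size balloons while
almost no blocks are allocated. A small sketch of the same effect (the 5 GB
offset is an arbitrary assumption, far smaller than the thread's file, and
`stat -c`/`du -k` are the GNU/Linux forms):

```shell
# Create a sparse file by writing one byte at a large offset.
OFFSET=$((5 * 1024 * 1024 * 1024))        # 5 GB: past all 32-bit limits
dd if=/dev/zero of=seek.tmp bs=1 count=1 seek="$OFFSET" 2>/dev/null
APPARENT=$(stat -c %s seek.tmp)           # apparent size: OFFSET + 1 bytes
ALLOCATED_KB=$(du -k seek.tmp | cut -f1)  # allocated blocks stay tiny
echo "apparent=${APPARENT} bytes, allocated=${ALLOCATED_KB} KB"
```

This is why the ls output can show a ~128 GB file on a partition that df
reports as only 20% used: the filesystem never allocates the unwritten
holes, but every LFS code path still has to handle the huge offsets.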