Adam Goryachev schrieb:
> Ralf Gross wrote:
> > Hi,
> >
> > I want to upgrade the backuppc data space of one of my backuppc
> > servers. /var/lib/backuppc (reiserfs) is currently a plain LVM
> > volume (1 TB, 4x 250 GB, 740 GB used) and I want to move to
> > RAID5/LVM (1.5 TB, 4x 500 GB).
> >
> > I upgraded another server, which had no LVM volume, a few weeks
> > ago. That was easy: I just copied the reiserfs partition to the new
> > system with dd and netcat and resized/grew the partition afterwards.
> >
> > What is the best way to do this with LVM? I have attached 2 external
> > USB disks (500 GB + 300 GB = 800 GB with LVM) as temporary storage
> > for the old data, because the 4 on-board SATA ports are all used by
> > the old backuppc data.
> >
> > I'm not sure if I can just dd the old LVM volume to one big file on
> > the USB disks, replace the disks, dd the file back to the LVM volume
> > and resize the reiserfs fs:
> >
> > dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
> >
> > ...replace disks, create new LVM volume...
> >
> > dd if=backuppc.dump of=/dev/mapper/bigger-lvm-volume bs=8192
> >
> > I think the dd data includes information about the LVM
> > volume/logical groups. I guess an LVM snapshot will not help much.
>
> I think if you do that, you will have problems. I would do this:
> stop backuppc and unmount the filesystem (or mount it read-only)
> resize the reiserfs filesystem to < 800 GB
I already tried this, but resize_reiserfs gives me a bitmap error. I
also realized that my first idea with dd and the backuppc.dump file
would need an additional gzip step to work, because the destination fs
is smaller than the source.

> resize the LVM partition to < 800 GB
> dd the LVM partition containing the reiserfs filesystem to your spare
> LVM partition
> replace the 4 internal HDDs
> create the new LVM/RAID/etc. setup on the new drives
> dd the USB LVM partition onto the internal LVM partition you have
> configured
> resize the reiserfs filesystem to fill the new LVM partition size

Because resize_reiserfs is not working, this is not an option :(

> I don't promise it will work, but if it doesn't, you do at least
> still have your original drives with all the data.
> The problem I see in your suggestion is that you are copying a 1 TB
> filesystem/partition into an 800 GB one, therefore if you have stored
> data at the end of the drive, it will be lost; the above should solve
> that problem.

At the moment I'm transferring the data with cp, but in the last 12
hours only 50% of the data (~380 GB) has been copied. And this is only
the cpool directory. But this is what I expected with cp.

I thought about another, fancier way:

* remove the existing VG/LV data on the USB disks
* use vgextend to expand the existing backuppc VG with the 2 USB disks
* pvmove the data from 3 of the 4 old disks to the USB disks
* remove the 3 old disks with vgreduce
* replace 3 of the 4 disks with the new ones
* create a RAID5 with 3 new disks (3x 500 GB = 1 TB)
* create a new PV on the RAID
* expand the backuppc VG with vgextend
* pvmove the last old disk and the USB disks to the RAID PV
* remove the last old disk + USB disks with vgreduce
* replace the last old disk with the new one
* grow the RAID5 (this is possible since kernel 2.6.17 or so...)
* pvresize the RAID5 PV

Sounds like a lot of fun ;)

Ralf
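For reference, the pvmove dance above could be sketched roughly as
follows. All device names (/dev/sdb1, /dev/sdc1 for the USB disks,
/dev/sd[a-h] for the SATA disks, VG "backuppc", LV "data") and the old
USB VG name are placeholders, not taken from the original post; adapt
them to the actual setup before running anything.

```shell
# Hypothetical sketch of the pvmove-based migration; device/VG names
# are placeholders. Everything here needs root and destroys data on
# the named devices -- double-check each device before running.

# Wipe the old VG on the USB disks and turn them into PVs
vgremove usbvg
pvcreate /dev/sdb1 /dev/sdc1

# Extend the backuppc VG with the USB disks, then move the data off
# 3 of the 4 old disks and drop them from the VG
vgextend backuppc /dev/sdb1 /dev/sdc1
pvmove /dev/sda2                 # repeat for each of the 3 old disks
vgreduce backuppc /dev/sda2      # likewise, once each PV is empty

# ...physically swap in 3 of the new 500 GB disks...

# Build a 3-disk RAID5 (~1 TB usable) and put a PV on it
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sde1 /dev/sdf1 /dev/sdg1
pvcreate /dev/md0
vgextend backuppc /dev/md0

# Move the last old disk and the USB disks onto the array
pvmove /dev/sdd2
pvmove /dev/sdb1
pvmove /dev/sdc1
vgreduce backuppc /dev/sdd2 /dev/sdb1 /dev/sdc1

# ...replace the last old disk with the 4th new one...

# Grow the RAID5 from 3 to 4 devices, then grow the PV, LV and fs
# (growing reiserfs may work even though shrinking it failed here)
mdadm --add /dev/md0 /dev/sdh1
mdadm --grow /dev/md0 --raid-devices=4
pvresize /dev/md0
lvextend -l +100%FREE /dev/backuppc/data
resize_reiserfs /dev/backuppc/data
```

Note that pvmove can be restarted if interrupted, and the original
data stays readable throughout, which is the main appeal of this route
over the dd-to-USB approach.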