Re: [BackupPC-users] How to move backuppc data from lvm to bigger disks/lvm?

2007-06-15 Thread Ralf Gross
Adam Goryachev schrieb:
 Ralf Gross wrote:
  Hi,
 
  I want to upgrade the backuppc data space on one of my backuppc
  servers. /var/lib/backuppc (reiserfs) is at the moment a plain lvm
  volume (1TB, 4x250GB, 740GB used) and I want to upgrade to raid5/lvm
  (1.5TB, 4x500GB).
 
  I upgraded another server which had no lvm volume a few weeks
  ago. That was easy: I just copied the reiserfs partition to the new
  system with dd and netcat and resized/grew the partition afterwards.
 
  What is the best way to do this with lvm? I have attached 2 external
  USB disks (500GB + 300GB = 800GB with lvm) as temporary storage for
  the old data, because the 4 on-board SATA ports are all used by the
  old backuppc data.
 
  I'm not sure if I can just dd the old lvm volume to one big file on
  the USB disk, replace the disks, dd the file back to the new lvm
  volume and then resize the reiserfs fs.
 
  dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
 
  ...replace disks, create new lvm volume...
 
  dd if=backuppc.dump of=/dev/mapper/bigger-lvm-volume bs=8192
 
  I think the dd data includes information about the lvm volume/volume
  groups. I guess an lvm snapshot will not help much here.

 I think if you do that, you will have problems. I would do this:
 stop backuppc and unmount the filesystem (or mount it read-only)
 resize the reiserfs filesystem to < 800G

I already tried this, but resize_reiserfs gives me a bitmap error.
I realized that my first idea with dd and the backuppc.dump file will
need an additional gzip command to work, because the destination fs is
smaller than the source.

 resize the LVM partition to 800G
 dd the LVM partition containing the reiserfs filesystem to your spare
 LVM partition
 replace the 4 internal HDDs
 create the new LVM/RAID/etc. setup on the new drives
 dd the USB LVM partition onto the internal LVM partition you have
 configured
 resize the reiserfs filesystem to fill the new LVM partition size
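
Spelled out, that sequence would be something like this (VG/LV names
and sizes are placeholders -- untested):

  resize_reiserfs -s 750G /dev/VolGroup00/LogVol00  # shrink fs below 800G
  lvreduce -L 780G /dev/VolGroup00/LogVol00         # shrink LV, margin above fs
  dd if=/dev/VolGroup00/LogVol00 of=/dev/usbvg/spare bs=8192
  # ...swap the 4 internal disks, build the new raid5/lvm...
  dd if=/dev/usbvg/spare of=/dev/newvg/backuppc bs=8192
  lvextend -l +100%FREE /dev/newvg/backuppc
  resize_reiserfs /dev/newvg/backuppc               # grow fs to fill the LV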

Because resize_reiserfs is not working, this is not an option :(
 
 I don't promise it will work, but if it doesn't, you do at least still
 have your original drives with all the data.
 The problem I see with your suggestion is that you are copying a 1TB
 filesystem/partition into an 800GB one; therefore, if you have data
 stored at the end of the drive, it will be lost. The steps above should
 solve that problem.

At the moment I'm transferring the data with cp, but in the last 12
hours only 50% of the data (~380GB) has been copied. And that is only
the cpool directory. But this is what I expected with cp.

I thought about another fancy way...

* remove the existing vg/lv data on the usb disks
* use vgextend to expand the existing backuppc vg with the 2 usb
  disks
* pvmove the data from 3 of the 4 old disks to the usb disks
* remove those 3 old disks with vgreduce
* replace 3 of the 4 disks with the new ones
* create a raid 5 with the 3 new disks (3x 500GB = 1TB usable)
* create a new pv on the raid
* expand the backuppc vg with vgextend
* pvmove the last old disk and the usb disks to the raid pv
* remove the last old disk + usb disks with vgreduce
* replace the last old disk with the new one
* grow the raid 5 (this is possible since kernel 2.6.17 or so...)
* pvresize the raid 5 pv
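
In commands, that plan would look roughly like this (all device names
are placeholders -- untested):

  pvcreate /dev/sde1 /dev/sdf1             # the two USB disks
  vgextend VolGroup00 /dev/sde1 /dev/sdf1
  pvmove /dev/sdb1                         # repeat for /dev/sdc1, /dev/sdd1
  vgreduce VolGroup00 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # ...swap in 3 of the new 500GB disks, then create the array...
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md0
  vgextend VolGroup00 /dev/md0
  pvmove /dev/sda1 /dev/md0                # repeat for /dev/sde1, /dev/sdf1
  vgreduce VolGroup00 /dev/sda1 /dev/sde1 /dev/sdf1
  # ...swap the last old disk for the fourth new one, then:
  mdadm --add /dev/md0 /dev/sda1
  mdadm --grow /dev/md0 --raid-devices=4
  pvresize /dev/md0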


Sounds like a lot of fun ;)

Ralf

Re: [BackupPC-users] Backup in progress, but never ends.

2007-06-15 Thread Craig Barratt
Regis writes:

  I tested the size of the new directory
 $ while [ 1 ]; do du -sk new; sleep 30; done
  And the size is always the same.
 
 Any ideas?
 
 Could some one help me to solve this problem ?

I'd recommend manually running the smbclient command and piping
the output into tar tvf - so you can see how far it gets
and perhaps where it hangs.
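
Something along these lines (host, share and user are placeholders;
the exact command BackupPC uses is logged in the XferLOG):

  smbclient //winclient/cshare -U backupuser -E -d 1 \
      -c 'tarmode full' -Tc - | tar tvf -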

You might try running WinXX chkdsk on the client.  You might
also try disabling your AV software to see if that makes a
difference.

Craig

Re: [BackupPC-users] How to move backuppc data from lvm to bigger disks/lvm?

2007-06-15 Thread Ralf Gross
Holger Parplies schrieb:
 Hi,
 
 Adam Goryachev wrote on 15.06.2007 at 11:28:13 [Re: [BackupPC-users] How to 
 move backuppc data from lvm to bigger disks/lvm?]:
  Ralf Gross wrote:
   [...]
   dd if=/dev/mapper/VolGroup00-LogVol00 bs=8192 of=backuppc.dump
   [...]
   I think the dd data includes information about the lvm volume/volume
   groups.
  [...]
  The problem I see with your suggestion is that you are copying a 1TB
  filesystem/partition into an 800GB one; therefore, if you have data
  stored at the end of the drive, it will be lost. The steps above
  should solve that problem.
 
 just to make it clearer:
 the device file name /dev/mapper/VolGroup00-LogVol00 means you have not been
 very imaginative when choosing VG and LV names, but aside from that, it

That's true, but with only one vg/lv I didn't care much about it.

 represents a plain block device. The filesystem is not and should not be
 aware of how the underlying block device is implemented. Reading
 /dev/mapper/VolGroup00-LogVol00 gives you the concatenation of the raw
 blocks it consists of in ascending order, just like reading /dev/sda1,
 /dev/sda, /dev/fd0 or /dev/sr0 does (/dev/sr0 is likely not writable though)
 - nothing more and nothing less. Meta-information about VG and LVs is
 stored in the PVs outside the data allocated to any LV.
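 
 (You can see this for yourself: LVM2 keeps its metadata as text near
 the start of each PV, outside any LV. Something like this -- the PV
 name is a placeholder -- will show the VG name in that header:
 
   dd if=/dev/sda1 bs=512 count=2048 2>/dev/null | strings | grep VolGroup00
 )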

Ok, then 

dd if=/dev/VolGroup00/LogVol00 bs=8192 | gzip -2 -c > backuppc.dump.gz
gzip -dc backuppc.dump.gz | dd of=/dev/fancy-vg-name/fancy-lv-name

should work. I created a file with
'dd if=/dev/zero of=/var/lib/backuppc/null' first; zeroing the unused
space should help it compress better.
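
(The usual form of that trick is to fill the free space, sync, and then
delete the file before taking the image:

  dd if=/dev/zero of=/var/lib/backuppc/null bs=1M  # runs until the fs is full
  sync
  rm /var/lib/backuppc/null
)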
 
 I agree that you will need at least as much space as your LV takes up if you
 want to copy it. I would add that copying into a file will probably give you
 more trouble than copying to a block device (provided it is large
 enough). That's simply one less layer of arbitrary file size limits
 to deal with.

Yeah, last time I did this on another system I was able to use
resize_reiserfs and it worked very well. I've no idea why
resize_reiserfs is now giving me this bitmap error.

Ralf

Re: [BackupPC-users] How to extract information on duplicate files from BackupPC

2007-06-15 Thread Krsnendu dasa
You might consider fslint. It is designed specifically for this task.
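
For example, fslint's findup script lists duplicate files (the install
path varies by distribution; this is the usual Debian location):

  /usr/share/fslint/fslint/findup /home > duplicates.txt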

On 15/06/07, Craig Barratt [EMAIL PROTECTED] wrote:
 David writes:

  Is there an easy way to extract information on what files BackupPC has found
  are duplicate? I'd like to eliminate duplicate files on my system, and since
  BackupPC has already gone to the work of identifying them, I'd like to get
  some sort of report listing the paths (relative to the original
  system(s)) to the duplicated files.
 
  I've looked in the documentation and searched the mailing list archives,
  but haven't found this. My apologies if this has been covered before;
  feel free to point me to the appropriate URL if so.

 You need to find all the files with the same inode.  You can do
 that with find, but that is expensive.
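 
 A brute-force sketch with GNU find, grouping pool files by inode
 (BackupPC top directory assumed to be /var/lib/backuppc):
 
   find /var/lib/backuppc -type f -links +1 -printf '%i %p\n' | sort -n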

 Three years ago Yaroslav Halchenko reported that he had modified
 the linux locate tool to also save the inode in its database.
 That allows very fast lookups via inode (after the fixed overhead
 of crawling the tree to update the locate db).  He posted his
 scripts here:

 http://www.onerussian.com/Linux/ilocate/ilocate.phtml

 Craig



[BackupPC-users] Automating windows clients installation

2007-06-15 Thread Rodrigo Real

Hi guys

I have been using backuppc successfully for at least 3 years. I back
up mostly linux servers, but also have some windows hosts. Some of
these linux machines are reached through a VPN, using OpenVPN.

Now I am thinking about backing up some dispersed Windows hosts over
the Internet, with the backuppc server in a safe place. The amount of
data on each machine will be something like 100-200MB.

One of the problems I see is the difficulty of installing and
configuring OpenVPN on windows, configuring the samba shares, and
setting up backuppc. Although this is not a very hard problem, it is
for end users.

My idea is to build a sort of installer for this situation, which
would do the job after asking a few questions. But I don't have any
experience with this kind of thing in the windows world. I believe it
could be done with Inno Setup (http://www.jrsoftware.org/isinfo.php),
which seems to be a nice program, but I have never used it.

Do you think it is a good idea to implement this sort of thing? 

The main question seems to be: Is this sort of thing already
available?

Best regards,
Rodrigo






Re: [BackupPC-users] Automating windows clients installation

2007-06-15 Thread Lemonbit

Rodrigo Real wrote:


My idea is to build a sort of installer for this situation, which
would do the job after asking a few questions. But I don't have any
experience with this kind of thing in the windows world. I believe it
could be done with Inno Setup (http://www.jrsoftware.org/isinfo.php),
which seems to be a nice program, but I have never used it.


You might also want to check out NSIS [0], the installer created by
the makers of Winamp and used by lots of programs.


Nils Breunese.

[0] http://nsis.sourceforge.net/






Re: [BackupPC-users] command line restore problems

2007-06-15 Thread Matt Miller
Holger Parplies wrote:

 correct. The regular expression used for checking the validity of a share
 name in BackupPC_tarCreate 2.1.1-2sarge2 (which is the closest to 2.1.2-6
 I've come across) does not allow ampersands. That's a bug which is fixed in
 BackupPC 3.0.0 (possibly earlier).
 
 Change line 128 of BackupPC_tarCreate:
 
 - if ( $opts{s} !~ /^([\w\s\.\/\$-]+)$/ && $opts{s} ne "*" ) {
 + if ( $opts{s} =~ m{(^|/)\.\.(/|$)} ) {
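 
 For reference, with the fix a share name containing an ampersand
 should then be accepted, e.g. (host and paths are placeholders):
 
   BackupPC_tarCreate -h winhost -n -1 -s 'Docs & Stuff' / > share.tar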

This worked beautifully! Thanks!


-- 
- Matt Miller -
Solutions for Progress
728 South Broad Street
Philadelphia, PA  19146
215-701-6108 (v)
215-972-8109 (f)




[BackupPC-users] Cygwin, Rsyncd running slow

2007-06-15 Thread Ski Kacoroski
Hi,

I am setting up my first rsyncd/cygwin backup of a windows client.  The
client has 60GB of data on it and it currently takes 23 hours for a
full backup (100Mbps link between the client and the server).  I thought
the problem might be the size and number of files, so I split it into
3 backups of roughly 20GB each, and it is still just as slow.

Any ideas on how to speed this up or figure out where the slowdown is
are most appreciated.

cheers,

ski

-- 
"When we try to pick out anything by itself, we find it
 connected to the entire universe" -- John Muir

Chris "Ski" Kacoroski, [EMAIL PROTECTED], 206-501-9803



Re: [BackupPC-users] Cygwin, Rsyncd running slow

2007-06-15 Thread nilesh vaghela

In my case, a 20 GB full backup takes around 98 minutes and an
incremental around 45 minutes.

What are your settings in config.pl?

On 6/15/07, Ski Kacoroski [EMAIL PROTECTED] wrote:


Hi,

I am setting up my first rsyncd/cygwin backup of a windows client.  The
client has 60GB of data on it and it currently takes 23 hours for a
full backup (100Mbps link between the client and the server).  I thought
the problem might be the size and number of files, so I split it into
3 backups of roughly 20GB each, and it is still just as slow.

Any ideas on how to speed this up or figure out where the slowdown is
are most appreciated.

cheers,

ski

--
"When we try to pick out anything by itself, we find it
connected to the entire universe" -- John Muir

Chris "Ski" Kacoroski, [EMAIL PROTECTED], 206-501-9803






--
Nilesh Vaghela
ElectroMech
Redhat Channel Partner and Training Partner
74, Nalanda Complex, Satellite Rd, Ahmedabad
25, The Emperor, Fatehgunj, Baroda.
www.electromech.info


Re: [BackupPC-users] Cygwin, Rsyncd running slow

2007-06-15 Thread Adam Goryachev
Ski Kacoroski wrote:
 Hi,

 I am setting up my first rsyncd/cygwin backup of a windows client.  The
 client has 60GB of data on it and it currently takes 23 hours for a
 full backup (100Mbps link between the client and the server).  I thought
 the problem might be the size and number of files, so I split it into
 3 backups of roughly 20GB each, and it is still just as slow.

 Any ideas on how to speed this up or figure out where the slowdown is
 are most appreciated.
   
One thing I noticed is that my anti-virus program would do a virus scan 
during the initial rsync search to build the list of files, and I assume 
again while it was actually transferring the changed files.
Turning off the AV would significantly decrease the time for rsyncd to 
build the file list.
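
(To see how much of the elapsed time is the file-list scan alone, you
can time a dry run against the rsyncd module from the server -- host,
module and user names are placeholders:

  mkdir -p /tmp/empty
  time rsync -an --stats backupuser@winclient::cshare /tmp/empty/
)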

Regards,
Adam

