Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


How can I reduce bandwidth usage for full backups?

Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.


That's not true. Only modifications are transferred over the network
when using rsync. Full backups just check more thoroughly whether
files have changed (not just comparing timestamps, but actually
checking the contents of the files).
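The difference is easy to see with a plain rsync (a hedged sketch; the paths and host below are placeholders):

```shell
# Incremental-style quick check: compares size and mtime only, so
# unchanged files are skipped without being read.
rsync -avn /source/dir/ backupserver:/dest/dir/

# Full-style content check: -c forces both ends to read and checksum
# every file. Little extra data crosses the wire for unchanged files,
# but the run takes much longer because of all the reading.
rsync -avnc /source/dir/ backupserver:/dest/dir/
```

The -n keeps both runs as dry runs, so they only report what would be sent.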


Nils Breunese.


PGP.sig
Description: This is a digitally signed message part
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Toni Van Remortel
Nils Breunese (Lemonbit) wrote:
 Toni Van Remortel wrote:
 How can I reduce bandwidth usage for full backups?

 Even when using rsync, BackupPC does transfer all data on a full backup,
 and not only the modified files since the last incremental or full.
 That's not true. Only modifications are transferred over the network
 when using rsync. Full backups just check more thoroughly whether
 files have changed (not just comparing timestamps, but actually
 checking the contents of the files).
Then I wonder what gets transferred. If I monitor a full dump, the
bandwidth usage is much higher than when I copy the data manually. If a
simple 'rsync -auv' takes 2 hours to complete against a backup from 1
day ago, then why does BackupPC take 2 days for the same action?

I'm out of ideas, and hacks.

-- 
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]




[BackupPC-users] How can I let users browse and restore their backups

2007-11-26 Thread Alexander Lenz
Hi there, BackupPC Community,

what is the easiest way to let our users (we are about 30 here) browse and
restore the backups that were made from their machines?
- Without granting admin access to them. -

We'd need some restricted accounts to access BackupPC via HTTP, which should
allow exclusive access to the user's own machine, but to none of the other
backups. Is there a way to accomplish that - e.g., adding users to
/etc/backuppc/htpasswd and htgroup?
The problem is that the whole backup directory structure is owned by the
user backuppc and the same-named group.
Maybe some Perl script could do the access control in the first place?

Hoping for some helpful ideas...
Best regards,
Alexander Lenz



Metaversum GmbH
Geschaeftsfuehrer: Jochen Hummel, Dietrich Charisius, Dr. Mirko Caspar
Rungestr. 20, D-10179 Berlin, Germany
Amtsgericht Berlin Charlottenburg HRB 99412 B
CONFIDENTIALITY NOTICE: The information contained in this communication is 
confidential to the sender, and is intended only for the use of the addressee. 
Unauthorized use, disclosure or copying is strictly prohibited and may be 
unlawful. If you have received this communication in error, please notify us 
immediately at the contact numbers or addresses noted herein.


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


Nils Breunese (Lemonbit) wrote:

Toni Van Remortel wrote:

How can I reduce bandwidth usage for full backups?

Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.

That's not true. Only modifications are transferred over the network
when using rsync. Full backups just check more thoroughly whether
files have changed (not just comparing timestamps, but actually
checking the contents of the files).

Then I wonder what gets transferred. If I monitor a full dump, the
bandwidth usage is much higher than when I copy the data manually. If a
simple 'rsync -auv' takes 2 hours to complete against a backup from 1
day ago, then why does BackupPC take 2 days for the same action?

I'm out of ideas, and hacks.


It might be because BackupPC doesn't run the equivalent of rsync -auv.
See $Conf{RsyncArgs} in your config.pl for the options used, and
remember rsync is talking to BackupPC's rsync interface, not a stock
rsync. There's much more going on: compression, checksumming, pooling,
and the nightly jobs (if your backup job really needs two days, it
probably gets in the way of the nightly jobs); none of that happens
when you run a plain rsync -auv. The traffic shouldn't be much higher
though (after the initial backup, of course), I think.
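To see exactly what your install passes to rsync (a sketch; the path assumes a Debian-style layout):

```shell
# Show the options BackupPC actually passes to rsync; they differ
# quite a bit from a plain 'rsync -auv' (block size, hard links, ...).
grep -A12 'RsyncArgs' /etc/backuppc/config.pl
```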


Could you give us some numbers? How much traffic are you seeing for a  
BackupPC backup compared to a 'plain rsync'?


Nils Breunese.

P.S. You might want to check out rdiff-backup (http://www.nongnu.org/rdiff-backup/)
if you're looking for an rsync-style incremental backup tool and
don't need the compression, pooling, rotation and web interface that
BackupPC gets you out of the box.




Re: [BackupPC-users] How can I let users browse and restore their backups

2007-11-26 Thread Nils Breunese (Lemonbit)

Alexander Lenz wrote:


Hi there, BackupPC Community,

what is the easiest way to let our users (we are about 30 here) browse and
restore the backups that were made from their machines?
- Without granting admin access to them. -

We'd need some restricted accounts to access BackupPC via HTTP, which should
allow exclusive access to the user's own machine, but to none of the other
backups. Is there a way to accomplish that - e.g., adding users to
/etc/backuppc/htpasswd and htgroup?
The problem is that the whole backup directory structure is owned by the
user backuppc and the same-named group.
Maybe some Perl script could do the access control in the first place?

Hoping for some helpful ideas…


This is all already built-in and ready to go. Just add the users to
your htpasswd file and associate the hosts with the users in your
BackupPC hosts file. See
http://backuppc.sourceforge.net/faq/BackupPC.html#step_4__setting_up_the_hosts_file.
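Concretely, that boils down to two small steps (a sketch; the paths assume a Debian-style install, and 'alice'/'alice-laptop' are made-up names):

```shell
# 1. Give the user a web login for the BackupPC CGI.
htpasswd /etc/backuppc/htpasswd alice

# 2. Make the user the owner of their host in the hosts file.
#    Columns: host  dhcp  user  moreUsers (comma-separated extras).
echo 'alice-laptop  0  alice' >> /etc/backuppc/hosts
```

After a reload, alice can browse and restore only alice-laptop through the CGI; only the users listed in $Conf{CgiAdminUsers} (or $Conf{CgiAdminUserGroup}) see every host.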


Nils Breunese.



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Les Mikesell
Toni Van Remortel wrote:

 How can I reduce bandwidth usage for full backups?

 Even when using rsync, BackupPC does transfer all data on a full backup,
 and not only the modified files since the last incremental or full.
 That's not true. Only modifications are transferred over the network
 when using rsync. Full backups just check more thoroughly whether
 files have changed (not just comparing timestamps, but actually
 checking the contents of the files).
 Then I wonder what gets transferred. If I monitor a full dump, the
 bandwidth usage is much higher than when I copy the data manually. If a
 simple 'rsync -auv' takes 2 hours to complete against a backup from 1
 day ago, then why does BackupPC take 2 days for the same action?

On fulls, BackupPC adds --ignore-times to the rsync arguments. Try a
comparison against that for the bandwidth check. This option makes both
ends read the entire contents of the filesystem, so the time will
increase correspondingly. The BackupPC side is also uncompressing the
stored copy and doing the checksum exchange in Perl code, which will
slow it down a bit. You can speed this up some with the checksum-seed
option. If you don't care about this extra data check, you could
probably edit Rsync.pm and remove that setting.
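To reproduce that comparison with a plain rsync (a sketch; the paths are placeholders, and --stats makes rsync report the actual bytes sent and received):

```shell
# Full-style run: --ignore-times defeats the size+mtime quick check,
# so both ends read and checksum everything, like a BackupPC full.
rsync -a --ignore-times --stats /data/ server:/backup/data/

# Plain run for comparison: unchanged files are skipped on
# size and mtime alone.
rsync -a --stats /data/ server:/backup/data/
```

On the checksum-seed point: if your rsync and File::RsyncP support checksum caching, adding --checksum-seed=32761 to $Conf{RsyncArgs} lets BackupPC cache block and file checksums so later fulls avoid re-reading the pool copies.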

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Les Mikesell
Toni Van Remortel wrote:

 Could you give us some numbers? How much traffic are you seeing for
  a BackupPC backup compared to a 'plain rsync'?
 Full backup, run for the 2nd time today (no changes in files):
 - BackupPC full dump: killed it after 30 mins, as it pulled all data
 again (2.8 GB)

This doesn't make any sense to me.  I run backups on some remote 
machines that could not possibly work if rsync fulls copied unchanged 
data.  How are you measuring the traffic?
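One way to put hard numbers on it (a sketch; the sysfs counter paths are Linux-specific, and `lo` below stands in for whichever interface the backup traffic actually crosses, e.g. eth0):

```shell
iface=lo   # replace with your backup interface, e.g. eth0

# Snapshot the interface byte counters before the backup run...
rx0=$(cat /sys/class/net/$iface/statistics/rx_bytes)
tx0=$(cat /sys/class/net/$iface/statistics/tx_bytes)

# ...run the backup here, then snapshot again and print the delta.
rx1=$(cat /sys/class/net/$iface/statistics/rx_bytes)
tx1=$(cat /sys/class/net/$iface/statistics/tx_bytes)
echo "received $((rx1 - rx0)) bytes, sent $((tx1 - tx0)) bytes"
```

This counts all traffic on the interface, so run it on a quiet link or during the backup window only.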

 Well, we need the web interface of BackupPC, we need the reporting
 functionality of BackupPC, but we'd like it to be more bandwidth-efficient.
 Maybe I'll write an Xfer module that uses plain rsync ...

It should work the way you expect as-is, although the rsync-in-perl that 
knows how to read the compressed archive is somewhat slower.

-- 
   Les Mikesell
[EMAIL PROTECTED]



[BackupPC-users] Backuppc and Amanda

2007-11-26 Thread Paddy Sreenivasan
I'm a developer on the Amanda (http://amanda.zmanda.com) project.  Is anyone
using Amanda and BackupPC together?

I'm interested in integrating BackupPC with Amanda.  Amanda would be
the media manager (supporting tapes and other media) and consolidator of
data from a group of BackupPC clients.  Amanda's application API
(http://wiki.zmanda.com/index.php/Application_API) can be used for
integration.  Lots of details have to be worked out.

If anyone is interested in developing such a solution, please send me an
email (paddy at zmanda dot com).  Any suggestions on what features would
be useful in an integrated solution?

Thanks,
Paddy



Re: [BackupPC-users] [BackupPC-devel] Backuppc and Amanda

2007-11-26 Thread Stephen Joyce
On Mon, 26 Nov 2007, Paddy Sreenivasan wrote:

 I'm a developer on the Amanda (http://amanda.zmanda.com) project.  Is anyone
 using Amanda and BackupPC together?

I've always thought that Bacula was a better fit to be integrated (how 
tightly or loosely is debatable) with BackupPC.

Looking at this has been on my agenda for a long time; it's just been 
back-burnered due to more pressing needs.

 I'm interested in integrating Backuppc with Amanda.  Amanda will be
 the media manager (support for tapes and other media) consolidator of data
 from a group of BackupPC clients.  Amanda's application api
 (http://wiki.zmanda.com/index.php/Application_API) can be used for 
 integration.
 Lots of details have to be worked out.

 If anyone is interested in developing such a solution, please send me an email
 (paddy at zmanda dot com).  Any suggestions on what features would be
 useful in a integrated solution?

There's nothing to stop an admin from backing up dumps from the
pc/$hostname directory now with Amanda, Bacula, or many other products. One
product doesn't have to know anything about the other--but that's not an
integrated solution. I'll also assume that you are not suggesting backing
up the pool or cpool, but rather the individual hosts' dumps...

Any tape backup application would have to not only dump and restore the
files, but (assuming you'd be backing up hosts' dumps) understand enough
about BackupPC to link them into the (c)pool and re-add the info about
the original backup to the backups file (so it is available via the
BackupPC CGI). Linking into the (c)pool is almost mandatory if you're going
to support restoring multiple hosts. It would probably also be necessary to
persuade BackupPC not to delete the newly-restored backup during its
nightly tasks. I think the ability to lock a dump so that it's only
removable by positive admin action has been discussed, but is not currently
implemented.

I'm sure there are other issues that would be encountered, but as I said, 
I've not given it more than cursory thought yet.

 Thanks,
 Paddy

Cheers, Stephen
--
Stephen Joyce
Systems Administrator                        P A N I C
Physics & Astronomy Department               Physics & Astronomy
University of North Carolina at Chapel Hill  Network Infrastructure
voice: (919) 962-7214                        and Computing
fax: (919) 962-0480                          http://www.panic.unc.edu

Don't judge a book by its movie.



Re: [BackupPC-users] Backuppc and Amanda

2007-11-26 Thread Les Mikesell
Paddy Sreenivasan wrote:
 I'm a developer on the Amanda (http://amanda.zmanda.com) project.  Is anyone
 using Amanda and BackupPC together?

I'm using them separately with some of the same hosts as targets.

 I'm interested in integrating Backuppc with Amanda.  Amanda will be
 the media manager (support for tapes and other media) consolidator of data
 from a group of BackupPC clients.  Amanda's application api
 (http://wiki.zmanda.com/index.php/Application_API) can be used for 
 integration.
 Lots of details have to be worked out.

Since neither really needs any day-to-day attention, I kind of like the
fact that they don't know anything about each other and have no common
point of failure.  I'm just hoping I'll never have to restore from the
Amanda tapes again - it's been so long that I've almost forgotten how.

 If anyone is interested in developing such a solution, please send me an email
 (paddy at zmanda dot com).  Any suggestions on what features would be
 useful in a integrated solution?

I suppose an Amanda method that could include a tar copy of some of the
BackupPC hosts into its run would be nice for systems where you are using
rsync and don't have the bandwidth to make a direct copy.  But it might
make more sense to try to glue BackupPC's pooled archive into Bacula if
you want to combine tape and on-line backups.

-- 
   Les Mikesell
[EMAIL PROTECTED]



[BackupPC-users] define destination dir

2007-11-26 Thread Holm Kapschitzki
Hello,

I have 4 older IDE drives of 160 GB each and I want to back up a few
client hosts, so I cannot use one single device for BackupPC. I read the
documentation and saw something about configuring topdir to set the path
where the data is backed up. On the other hand, I read that in the Debian
package topdir is hardcoded.

So my question is: how do I define a different directory (in the
per-host config files) for each host where I can back up the data?

greets holm




Re: [BackupPC-users] define destination dir

2007-11-26 Thread Paul Archer
Look in the config.pl file (if Debian, it's probably
/etc/backuppc/config.pl).

If you have four 160GB drives, I would suggest using MD/LVM to create one 
large logical volume. The best arrangement would probably be something 
like a RAID 5 with all four drives, and maybe an LVM volume on top of that.
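As a sketch of that layout (the device names are assumptions for four IDE drives, and all four are wiped by this):

```shell
# Software RAID 5 across the four drives: ~480 GB usable and it
# survives one drive failure.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1

# LVM on top, so the volume can be reshaped or grown later.
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -l 100%FREE -n pool backupvg

# Filesystem, mounted where the Debian package expects topdir.
mkfs.ext3 /dev/backupvg/pool
mount /dev/backupvg/pool /var/lib/backuppc
```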

Paul

Holm Kapschitzki wrote:

 Hello,

 I have 4 older IDE drives of 160 GB each and I want to back up a few
 client hosts, so I cannot use one single device for BackupPC. I read the
 documentation and saw something about configuring topdir to set the path
 where the data is backed up. On the other hand, I read that in the Debian
 package topdir is hardcoded.

 So my question is: how do I define a different directory (in the
 per-host config files) for each host where I can back up the data?

 greets holm







___
Can't you recognize bullshit? Don't you think it would be a
useful item to add to your intellectual toolkits to be capable
of saying, when a ton of wet steaming bullshit lands on your
head, 'My goodness, this appears to be bullshit'?
-- Neal Stephenson, Cryptonomicon

-10921 days until retirement!-



Re: [BackupPC-users] define destination dir

2007-11-26 Thread dan
enter the ZFS troll :)

If you run OpenSolaris or a BSD with ZFS, you can use a raidz pool and
get the benefits of LVM, RAID 5, and filesystem-level compression all in
one.

I have noticed ZFS to be very resource-friendly under heavy load, even
with compression and raidz enabled.

Search Google and you will find a demo of a dynamic raidz volume on 16
flash keys where the filesystem remains 'up' with 2 devices out and is
rebuilt when they are plugged back in, without user intervention.  Not to
mention the raidz made a pretty impressive 4 GB (256 MB flash keys)
array with smoking fast IO.
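For the curious, the equivalent setup is only a few commands (a sketch; the pool name and Solaris-style device names are assumptions):

```shell
# One raidz pool across four disks: redundancy, volume management
# and compression in a single layer.
zpool create backup raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zfs set compression=on backup
zfs set mountpoint=/var/lib/backuppc backup
```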

On Nov 26, 2007 8:39 PM, Paul Archer [EMAIL PROTECTED] wrote:

 Look in the config.pl file (if debian, it's probably
 /etc/backuppc/config.pl).

 If you have four 160GB drives, I would suggest using MD/LVM to create one
 large logical volume. The best arrangement would probably be something
 like a RAID 5 with all four drives, and maybe an LVM volume on top of
 that.

 Paul

 Holm Kapschitzki wrote:

  Hello,
 
  I have 4 older IDE drives of 160 GB each and I want to back up a few
  client hosts, so I cannot use one single device for BackupPC. I read the
  documentation and saw something about configuring topdir to set the path
  where the data is backed up. On the other hand, I read that in the Debian
  package topdir is hardcoded.

  So my question is: how do I define a different directory (in the
  per-host config files) for each host where I can back up the data?
 
  greets holm
 
 
 
 
 







Re: [BackupPC-users] define destination dir

2007-11-26 Thread Nils Breunese (Lemonbit)

Holm Kapschitzki wrote:


I have 4 older IDE drives of 160 GB each and I want to back up a few
client hosts, so I cannot use one single device for BackupPC. I read the
documentation and saw something about configuring topdir to set the path
where the data is backed up. On the other hand, I read that in the Debian
package topdir is hardcoded.

So my question is: how do I define a different directory (in the
per-host config files) for each host where I can back up the data?


I don't think this is possible, because BackupPC uses hardlinks and
these hardlinks need to all be on the same filesystem. You'll have to
combine the drives into a single volume (LVM?) and then use that to
back up to (mount the combined volume as /var/lib/backuppc).
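The constraint comes from how hardlinks work: every name for a file shares one inode, and an inode lives on exactly one filesystem (linking across filesystems fails with EXDEV). A small sketch using temp files:

```shell
# Two directory entries, one inode: this is how BackupPC's pool
# deduplicates. It can only work within a single filesystem.
dir=$(mktemp -d)
echo "pooled data" > "$dir/pool-copy"
ln "$dir/pool-copy" "$dir/pc-copy"           # hardlink, not a copy
stat -c '%i' "$dir/pool-copy" "$dir/pc-copy" # same inode number twice
rm -r "$dir"
```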


Nils Breunese.




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


And I have set up BackupPC here 'as-is' in the first place, but we saw
that the full backups, which ran every 7 days, took about 3 to 4 days to
complete, while for the same hosts the incrementals finished in 1 hour.
That's why I got digging into the principles of BackupPC, as I wanted to
know why the full backups don't work 'as expected'.


Well, I can tell you BackupPC using rsync as the Xfer method is working
just fine for us. The incrementals don't take days; all seems normal.
I hope you'll be able to find the problem in your setup.


Nils Breunese.

