Re: [BackupPC-users] Data integrity checks

2017-01-02 Thread Andreas Piening
Hi Jan,

the file-based deduplication is based on checksums: a new file with the same name and
file size is only stored as a new file if its checksum is different. If the checksum
matches an already stored file, a hard link to that existing copy is used instead.
But these checksums are used for deduplication only, and as far as I know there
is no additional integrity check, for example on restore.

Honestly, I don't think it is really needed. I'm using a ZFS volume for backuppc,
which has built-in block-level checksums for integrity.
Perhaps this is an option for you?
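
If you go that route, a periodic scrub is what actually verifies those checksums
across the whole pool. A minimal sketch, assuming the pool is simply named
„backuppc“ (adjust to your layout):

  zpool scrub backuppc        # re-reads every block and verifies it against its checksum
  zpool status -v backuppc    # shows scrub progress and lists any files with errors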

Kind regards

Andreas

> On 02.01.2017 at 12:29, Jan Stransky wrote:
> 
> Hi,
> 
> does backuppc do some data integrity checks on stored files or files 
> to-be-stored? Something like regular md5sum checks.
> 
> Jan



Re: [BackupPC-users] Multiple instances of backuppc

2016-02-22 Thread Andreas Piening
May I suggest creating virtual containers with LXC?
Proxmox, which is based on Debian, makes this very easy.

Since the containers are separated from each other, you don’t need to do any 
kind of config file hacking.
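
Creating a second container for another backuppc instance is then a one-liner on
the Proxmox host; roughly like this (container ID, template name and storage are
placeholders for whatever your node provides):

  pct create 101 local:vztmpl/debian-8-standard_8.0-1_amd64.tar.gz \
      --hostname backuppc2 --rootfs local-lvm:20
  pct start 101

Each container then has its own disk, its own config and its own backuppc
service, so nothing collides.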

> On 22.02.2016 at 16:21, Alessandro Polverini wrote:
> 
> Hello,
> I would like to use backuppc for some of my servers and I would like to 
> have separate physical disks with different instances of backuppc running.
> 
> I'm using Debian. I tried to create a different set of config files and 
> hack the init.d file, but I was unable to run a second instance: does 
> anyone have hints on how to reach that goal?
> 
> Thanks,
> Alex
> 




Re: [BackupPC-users] Retention and Archive

2016-01-17 Thread Andreas Piening
Hi Max,

could you be more precise about which part of the docs you don't understand so well?

If you want to have more than 365 days of retention, I would suggest this as a
possible configuration:
$Conf{FullAgeMax} = 373;
$Conf{FullKeepCntMin} = 53;
$Conf{FullPeriod} = 6.97;

You should adjust your incremental parameters as well. Since the full period is
roughly 7 days, multiplied by a FullKeepCntMin of 53 this gives you about 371
days, i.e. more than one year of retention.
For this specific part the documentation is quite good and offers examples:
http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_fullkeepcnt_ 
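
For the incremental side, something roughly like this keeps daily incrementals
between the weekly fulls (only a sketch; adjust the counts to how far back you
need day-level granularity):

$Conf{IncrPeriod}  = 0.97;
$Conf{IncrKeepCnt} = 6;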

> On 17.01.2016 at 14:42, Max Ricketts wrote:
> 
> I have installed BackupPC successfully and all is working well.  I am having 
> trouble understanding the retention policies for full and incremental.
> 
> I am looking at being able to restore one particular file from any day in the 
> past year.
> 
> I have read about archiving off, but not sure how to set the retention time 
> on this either.  
> 
> Some help or explanation would be great.  I have read the docs and don't 
> completely understand them.
> 
> Many thanks
> 
> Max



Re: [BackupPC-users] Creating a "read everywhere" user to backup Windows profiles

2016-01-14 Thread Andreas Piening

> On 14.01.2016 at 16:44, Les Mikesell <lesmikes...@gmail.com> wrote:
> 
> On Thu, Jan 14, 2016 at 8:43 AM, Andreas Piening
> <andreas.pien...@gmail.com> wrote:
>> 
>>> Isn't there a specific "Backup Operator" account on windows which has
>>> "super" permissions for exactly this reason? I'm not sure if that
>>> account will work over samba though?
>>> 
>> At least not by that name, there is for example a „SYSTEM“ user which seems 
>> to have access everywhere. It doesn’t have a password and does not show up 
>> in the normal users list. Maybe it is possible to „enable“ this user so that 
>> the account can be used with CIFS / SMB to get access.
>> Does anyone use SMB to backup the whole system drive or the /Users dir? Or 
>> is the only real option to install a rsyncd with cygwin or something like 
>> that?
>> I’m trying to get backups with SMB working since I want to be as less 
>> intrusive to the clients as I can. Means: No additional software installed, 
>> no additional services. Just crating a share and that’s it.
> 
> Backup Operator (and Administrator) are groups you can add to the user
> to give additional permissions.  If you are in an Active Directory
> environment you would need to join the domain and have domain
> credentials, though.   I'd recommend setting up rsync anyway because
> it does a better job of tracking renames, changed files with old
> timestamps, and deletions.
> 
> -- 
>   Les Mikesell
> lesmikes...@gmail.com

Oh, you're right, there is such a group (the name is localized …). I added a
local user to both groups, Administrators and Backup Operators, but when I back
up a share for the folder c:\Users I still get NT_STATUS_ACCESS_DENIED for a lot
of subfolders.
This was the case in my test with the Windows 7 Pro machine in a workgroup, and
again with the same machine joined to an AD domain. Same behavior.
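
For reference, adding the user to both groups can be done like this (on a
localized install the group names differ, so substitute the local names;
„backuppc“ is just an example user):

  net localgroup "Administrators"    backuppc /add
  net localgroup "Backup Operators"  backuppc /add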


Re: [BackupPC-users] Creating a "read everywhere" user to backup Windows profiles

2016-01-14 Thread Andreas Piening

> On 14.01.2016 at 00:13, Adam Goryachev <mailingli...@websitemanagers.com.au> wrote:
> 
> On 14/01/16 07:11, Andreas Piening wrote:
>> I wonder what the easiest / best way is to create a „read everywhere“ user 
>> on ms windows to create backups with via CIFS / SMBFS.
>> 
>> Ideally I would like to run a short .cmd script or do a couple of clicks to 
>> give a local windows user (let’s assume ‚backuppc‘) full read access to 
>> everything under c:\Users. Even better with write access to be able to 
>> restore in place.
>> I know that I can enable inheritance for permissions in c:\Users and 
>> overwrite all permissions on subfolders with the current one. But this would 
>> also enable read for everyone for every user on other users profiles which I 
>> don’t like. And even this does not work everywhere, even not with an 
>> administrative account. I need to take ownership recursively in order to do 
>> that and I don’t want to own other users files.
>> 
>> Is there a better way?
> 
> Isn't there a specific "Backup Operator" account on windows which has 
> "super" permissions for exactly this reason? I'm not sure if that 
> account will work over samba though?
> 
> Regards,
> Adam

At least not by that name; there is, for example, a „SYSTEM“ user which seems to
have access everywhere. It doesn't have a password and does not show up in the
normal user list. Maybe it is possible to „enable“ this user so that the account
can be used with CIFS / SMB to get access.
Does anyone use SMB to back up the whole system drive or the /Users dir? Or is
the only real option to install an rsyncd with Cygwin or something like that?
I'm trying to get backups with SMB working since I want to be as unintrusive to
the clients as I can. That means: no additional software installed, no additional
services. Just creating a share, and that's it.


[BackupPC-users] Creating a "read everywhere" user to backup Windows profiles

2016-01-13 Thread Andreas Piening
I wonder what the easiest / best way is to create a „read everywhere“ user on
MS Windows for creating backups via CIFS / SMBFS.

Ideally I would like to run a short .cmd script or do a couple of clicks to give
a local Windows user (let's assume ‚backuppc‘) full read access to everything
under c:\Users. Even better with write access, to be able to restore in place.
I know that I can enable inheritance for permissions in c:\Users and overwrite
all permissions on subfolders with the current one. But this would also give
every user read access to other users' profiles, which I don't like. And even
this does not work everywhere, not even with an administrative account: I would
need to take ownership recursively in order to do that, and I don't want to own
other users' files.

Is there a better way? 
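
For concreteness, the kind of one-liner I have in mind would be something like
this (‚backuppc‘ being the hypothetical local user; I am not sure this alone is
enough where ownership gets in the way):

  icacls C:\Users /grant backuppc:(OI)(CI)R /T /C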


Re: [BackupPC-users] BackupPC_tarCreate failed

2016-01-13 Thread Andreas Piening
Please take a look at the Server > LOG file in the web interface. The bottom
lines should give you more information on what exactly caused the failure.
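
If you prefer the shell, the same information is in BackupPC's main log and in
the per-host transfer/restore logs (the paths below are the usual Debian/Ubuntu
defaults; adjust them to your installation):

  tail -n 50 /var/lib/backuppc/log/LOG         # main server log
  ls /var/lib/backuppc/pc/localhost/           # per-host dir with the compressed logs
  /usr/share/backuppc/bin/BackupPC_zcat FILE.z | less   # BackupPC's own zcat for the .z files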

> On 13.01.2016 at 20:51, David Rotger wrote:
> 
> Hi Carl,
> 
> I use the web interface. When I restore a TAR localhost backup, in the status 
> section I see:
> 
> localhost restore 
> root <>  2016-01-13 20:00  2016-01-13 19:53
> BackupPC_tarCreate failed 



Re: [BackupPC-users] Ignore "file has vanished"

2016-01-12 Thread Andreas Piening
Not a real answer to your question about instructing BPC to ignore „file has
vanished“ errors, but I have two suggestions for avoiding the error in the first
place:

- You can exclude the directories with the frequently changing PHP files. To me
this sounds like a web developer's machine with PHP files constantly being
edited. In that case the source is probably already „backed up“ by a version
control system (git, mercurial, whatever…)
- You can use snapshots, either LVM-based or, even easier, ZFS-based. You can
create and remove the snapshots with SSH commands in your PRE and POST backup
scripts for the client (see the sketch below).
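
In the host's config that would look roughly like this (only a sketch: the helper
script, volume names and mount point are placeholders you would have to provide
on the client; $sshPath and $host are substituted by BackupPC):

$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/sbin/bpc-snapshot create';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/sbin/bpc-snapshot remove';

where "bpc-snapshot create" on the client would do something like
"lvcreate --snapshot --size 5G --name bpc-snap /dev/vg0/www" followed by a
read-only mount, and "bpc-snapshot remove" would unmount and lvremove it again;
the backup then reads from the mounted snapshot instead of the live filesystem.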

The error itself comes from rsync, I think, and I don't believe there is an rsync
option to ignore these „vanished files“ errors. Honestly, I don't even think it
would be a good idea to ignore them. 

> On 12.01.2016 at 11:48, Gandalf Corvotempesta wrote:
> 
> Would it be possible to instruct BPC to ignore "file has vanished" errors
> (error code 24)?
> I'm backing up many clients that have tons of volatile files: PHP
> sessions, maildirs used by IMAP, where each file is renamed according
> to IMAP flags or moved between IMAP folders, and so on.
> 
> Every time I make a backup, many files are vanishing. This is
> absolutely normal, but BPC marks it as an error (even though rsync marks
> it as a benign error).
> 
> Would it be possible to ignore this and mark the backup as finished properly?



[BackupPC-users] Errors on smbclient directory listing

2015-12-26 Thread Andreas Piening
Hi,

I can't get SMB-based backups to run on a fresh install.
The XferLOG gives me the following error:

cli_list: Error: unable to parse name from info level 260

When I manually execute smbclient just like BackupPC does, I get the same error
message, sometimes together with a

NT_STATUS_NO_MEMORY listing \\* 

I can connect with smbclient without problems; the error appears as soon as I do
an „ls“. I can even „cd“ into different directories, but as soon as I do an „ls“
there, the same error occurs.

So this is not exactly a BackupPC issue; something is wrong on the smbclient /
SMB side, but I have no idea how to dig into it. Hopefully someone has heard of
this problem before?
I can connect to and browse the share from Mac OS X and Windows without problems.

Additional Infos:
- Ubuntu 14.04.3 LTS
- backuppc 3.3.0-1ubuntu1
- SAMBA / smbclient Version 4.1.6-Ubuntu
- Backup-Share: Windows 10

Any ideas are welcome!
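
Two things I may try next to narrow it down (just diagnostic sketches; host and
share names are placeholders): pinning the SMB dialect and raising the
client-side debug level, to see whether the listing problem depends on the
negotiated protocol:

  smbclient //win10host/users -U backuppc -m SMB2 -c 'ls'
  smbclient //win10host/users -U backuppc -d 3 -c 'ls'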

Andreas


Re: [BackupPC-users] calculating changes to filesystem over time

2015-02-11 Thread Andreas Piening
Hi Rob,

may I suggest creating a generously sized snapshot (what ‚large‘ means depends on
your scenario; for my setup 10 GB is more than enough) and just watching it for a
while?

 watch lvs

You can run this in a screen session, check back every hour or so, and you'll see
how fast your snapshot fills up. There is no risk here: if the snapshot reaches
100% of its assigned space it simply becomes invalid, while the origin volume
stays writable.
However, once you have enough usage information you should remove the snapshot,
because it slows down writes to the origin volume.
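
Creating and later removing such a probe snapshot might look roughly like this
(volume group and LV names are placeholders for your layout):

  lvcreate --snapshot --size 10G --name usage-probe /dev/vg0/data
  watch lvs vg0                  # the Data% column shows how full the snapshot is
  lvremove /dev/vg0/usage-probe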

Hope that helps,

Andreas
 
 On 11.02.2015 at 17:56, Rob Owens row...@ptd.net wrote:
 
 I am trying to set up LVM snapshots and I need to figure out how large of a 
 space to allocate for the snapshot.  Is there a tool I can use to monitor the 
 changes to my filesystem over some period of time?  In other words, I want to 
 know how much changed data there is over a day, a week, etc.
 
 This would be helpful not only for calculating LVM snapshot space, but also 
 storage pool space.
 
 Thanks for any advice you can provide.
 
 -Rob
 




Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-14 Thread Andreas Piening
There are already two USB disks that are swapped every few days, and I copy the
nightly images onto the currently connected drive. Some weeks ago the office had
a water-pipe burst. The water was stopped early enough, so there was no damage,
but my customer asked me to set up an external backup solution. He wants me to be
able to completely restore the system, including the virtual machines, if someone
breaks into the office and steals the hardware or something like that. If that
happens, I need at least one day to buy new hardware. But the point is that I
need to be able to restore the system afterwards to a working state, just like it
was one day before the disaster happened.

As I understand rsync, the algorithm is able to do in-file incremental updates.
My problem is just that I can't figure out what prevents it from working in my
case...
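
For the local-copy-then-offsite idea Les describes below, a plain rsync of the
image with in-place delta updates might look roughly like this (host and paths
are placeholders; over a remote connection rsync uses its delta algorithm by
default, and --inplace avoids rewriting the whole destination file):

  rsync -av --partial --inplace /backup/images/vm1.img offsite:/backup/images/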

On 15.05.2012 at 00:53, Les Mikesell wrote:

 On Mon, May 14, 2012 at 5:20 PM, Andreas Piening
 andreas.pien...@gmail.com wrote:
 Hi Les,
 
 using a outdated image for restoring and manually copying things over is
 not an option for me. The Server is a domain-controller with several
 profiles, running two databases and have proprietary software installed that
 relies on registry settings and all that silly stuff.
 
 I don't see another chance then using a nightly created image atm and I can
 live with space-waste, no pooling and heavy IO but I just want this in-file
 incremental rsync to work as expected because the only thing that breaks it
 is that I can't transfer 80 GB of data through a DSL50 line every night.
 This would exceed the time-window I have for the backup.
 
 Can you park another drive nearby - preferably on a different machine
 but not absolutely necessary?   Then script a local copy during your
 backup window and dribble the offsite copy out after it completes.
 That gives you a much faster restore option unless you have a site
 disaster or it failed mid-backup, and the backuppc history will cover
 those risks.
 
 -- 
Les Mikesell
   lesmikes...@gmail.com



Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-13 Thread Andreas Piening
Hi Les,

excuse my sloppy phrasing: real-time recovery is not what I'm looking for. I
meant that I want to be able to get the system into a workable state simply by
downloading and restoring from an image. If I have to manually assemble the
system disk from a file-based backup and fiddle around with a non-bootable,
hard-to-debug Windows server, that's not a solution for me. Repairing a broken
Windows installation is an extremely time-consuming pain compared to reacting to
some "file not found" or "wrong path" error message from a Linux OS, which points
directly at the problem.

I understand your concept of restoring a (not necessarily up-to-date) image and
doing a file-based restore on top of it, but I have never tried this and don't
feel very comfortable with it.
Can you please tell me more concretely which Windows version you did this with,
and whether you used the normal NTFS file-system driver or ntfs-3g while doing
the file-based overlay restore?
How often have you done this successfully? Or rather: have you ever had problems
with file permissions, lost attributes or anything else? And did you need
additional steps to get the system drive bootable again?

Thank you very much,

Andreas

On 12.05.2012 at 16:57, Les Mikesell wrote:

 On Sat, May 12, 2012 at 9:17 AM, Andreas Piening
 andreas.pien...@gmail.com wrote:
 I want a backup that gives me the opportunity to get the server back and 
 running within a few minutes + download time of the image + restore time 
 from partimage.
 It is ok to loose the files created since the backup-run last night, I see 
 this more as a live insurance. The documents are already backed up with 
 daily and weekly  backup-sets via backuppc.
 
 If you need close to real-time recovery, you need to have some sort of
 live clustering with failover.  Just copying the images around will
 take hours.  For the less critical things, I normally keep a 'base'
 image of the system type which can be a file for a VM, or a clonezilla
 image for hardware that can be used as a starting point but I don't
 update those very often.   Backuppc just backs up the live files
 normally for everything so for the small (and more common) problems
 you can grab individual files easily and for a disaster you would
 start with the nearest image master and do a full restore on top of
 it, more or less the same as with hardware.
 
 -- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Andreas Piening
Hi Les,

I already thought about that, and I agree that the handling of large image files
is problematic in general. I need to make images of the Windows-based virtual
machines to get them back up and running when a disaster happens. If I move away
from backuppc for transferring these images, I don't see any benefit (maybe
because I just don't know of an imaging solution that solves my problems better).
As I already use backuppc for backups of the data partitions (all Linux-based), I
don't want my backups to become more complex than necessary.
I can live with the amount of disk space the compressed images will consume, and
the IO while merging the files is acceptable for me, too.
I can tell the imaging software (partimage) to cut the image into 2 GB volumes,
but I doubt that this enables effective pooling, since the system volume I image
stores temporary files, profiles, databases and so on. If every image file has
changes (even if only a few megs are altered), I expect the rsync algorithm to be
less effective than when comparing large files, where it is more likely to find a
long unchanged stretch that is not interrupted by the artificial file-size
boundaries introduced by the 2 GB volume splitting.

I hope I made my situation clear.
If anyone has experience with handling large image files that I could benefit
from, please let me know!

Thank you very much,

Andreas Piening

On 12.05.2012 at 06:04, Les Mikesell wrote:

 On Fri, May 11, 2012 at 4:01 PM, Andreas Piening
 andreas.pien...@gmail.com wrote:
 Hello Backuppc-users,
 
 I stuck while trying to identify the suitable rsync parameters to handle 
 large image file backups with backuppc.
 
 Following scenario: I use partimage to do LVM-snapshot based full images of 
 my virtual (windows-) machines (KVM) blockdevices. I want to save theses 
 images from the virtualization server to my backup machine running backuppc. 
 The images are between 40 and 60 Gigs uncompressed each. The time-window for 
 the backup needs to stay outside the working hours and is not large enough 
 to transfer the images over the line every night. I red about rsync's 
 capability to only transfer the changed parts in the file by a clever 
 checksum-algorithm to minimize the network traffic. That's what I want.
 
 I tested it by creating a initial backup of one image, created a new one 
 with only a few megs of changed data and triggered a new backup process. But 
 I noticed that the whole file was re-transfered. I waited till the end to 
 get sure about that and decided that it was not the ultimate idea to check 
 this with a compressed 18 GB image file but this was my real woking data 
 image and I expected it to work like expected. Searching for reasons for the 
 complete re-transmission I ended in a discussion-thread where they talked 
 about rsync backups of compressed large files. The explanation made sense to 
 me: The compression algorithm can cause a complete different archive file 
 even if just some megs of data at the beginning of the file hast been 
 altered, because of recursion and back-references.
 So I decided to store my image uncompressed which is about 46 Gigs now. I 
 found out that I need to add the -C parameter to rsync, since data 
 compression is not enabled per default. Anyway: the whole file was 
 re-created in the second backup run instead of just transfering the changed 
 parts, again.
 
 My backuppc-option RsyncClientCmd is set to $sshPath -C -q -x -l root 
 $host $rsyncPath $argList+ which is backup-pcs default disregarding the 
 -C.
 
 Honestly, I don't understand the exact reason for this. There are some 
 possibilities that may be guilty:
 
 - partimage does not create a linear backup image file, even if it is 
 uncompressed
 - there is just another parameter for rsync I missed which enables 
 differential file-changes-transfers
 - rsync exames the file but decides to not use differential updates for 
 this one because of it's size or just because it's created-timestamp is not 
 the same as the prior one
 
 Please give me a hint if you've successfully made differential backups of 
 large image files.
 
 I'm not sure there is a good way to handle very large files in
 backuppc.  Even if rysnc identifies and transfers only the changes,
 the server is going to copy and merge the unchanged parts from the
 previous file which may take just as long anyway, and it will not be
 able to pool the copies.Maybe you could split the target into many
 small files before the backup.  Then any chunk that is unchanged
 between runs would be skipped quickly and the contents could be
 pooled.
 
 -- 
  Les Mikesell
lesmikes...@gmail.com


Re: [BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-12 Thread Andreas Piening
I want a backup that gives me the opportunity to get the server back up and
running within a few minutes plus the download time of the image plus the restore
time from partimage.
It is OK to lose the files created since the backup run of the previous night; I
see this more as a life insurance. The documents are already backed up with daily
and weekly backup sets via backuppc.

Anyway, I think I should keep at least one additional copy of the image backup:
if the latest backup was not created correctly, I can use the image created one
night before. It is OK for me to lose 50 GB of backup space if I can sleep
better.

Your suggestion of rsyncing from the snapshot has its benefits: I can mount the
NTFS partition (aka the C:\ drive) of the Windows Server 2008 guest and do the
backup from the mountpoint. This way I would save a lot of space and get an
incremental backup set covering, say, the last two weeks almost for free.
But I asked here on the list some weeks ago whether it is reliable to do a backup
this way without losing file permissions, shadow copies, file attributes or
anything like that. I have already done this with Linux servers and know that it
works perfectly well there (OK, I need to install grub manually after the files
are restored), but I'm quite unsure about windoze. I have not yet tried to create
a new device, format it with mkfs.ntfs, sync the files back and boot from it. But
no one has told me that they have ever successfully tried this, or that it should
work at all, and I have learned that Windows boot drives can be very fragile.

But I can't believe that I'm the only one who needs to back up virtual Windows
machines over the network and who is not willing to pay 900 bucks for an Acronis
True Image licence per server! And the only difference there would be that
Acronis can store incremental diffs on top of an existing backup; after a week or
so I would need to do a full backup there, too. The performance and space
efficiency of Acronis are better than with partimage, but not so much better that
I would spend over 1000 EUR on it...

I'm already happy if I can get rsync to do differential transfers of my image
files, no matter if I waste several gigs of space...

Andreas

On 12.05.2012 at 15:28, Tim Fletcher wrote:

 On 12/05/12 11:57, Andreas Piening wrote:
 Hi Les,
 
 I allready thought about that and I agree that the handling of large image 
 files is problematic in general. I need to make images for the windows-based 
 virtual machines to get them back running when a disaster happens. If I go 
 away from backuppc for transfering these images, I don't see any benefits 
 (maybe because I just don't know of a image solution that solves my problems 
 better).
 As I already use backuppc to do backups of the data partitions (all linux 
 based) I don't want my backups to become more complex than necessary.
 I can live with the amount of harddisk space the compressed images will 
 consume and the IO while merging the files is acceptable for me, too.
 I can tell the imaging software (partimage) to cut the image into 2 GB 
 volumes, but I doubt that this enables effective pooling, since the system 
 volume I make the image from has temporary files, profiles, databases and so 
 on stored. If every image file has changes (even if there are only a few 
 megs altered), I expect the rsync algorithm to be less effective than 
 comparing large files where it is more likely to have a unchanged long 
 part which is not interrupted by artificial file size boundaries resulting 
 from the 2 GB volume splitting.
 
 I hope I made my situation clear.
 If anyone has experiences in large image file handling which I may benefit 
 from, please let be know!
 
 The real question is what are you trying to do, do you want a backup (ie 
 another single copy of a recent version of the image file) or an archive (ie 
 a series of daily or weekly snapshots of the images as they change)?
 
 BackupPC is designed to produce archives mainly of small to medium sized 
 files and it stores the full file not changes (aka deltas) and so for large 
 files (multi gigabyte in your case) that change each backup it is much less 
 efficient.
 
 To my mind if you already have backuppc backing up your data partitions and 
 the issue is that you want to back up the raw disk images from your virtual 
 machines OS disks the best thing to snapshot them as you have already setup 
 and then simply rsync that snapshot to another host which will just transfer 
 the deltas between the diskimages. This will leave you with backuppc 
 providing an ongoing archive for your data partitions and a simple rsync 
 backup for your root disks that will at worse mean you lose a days changes in 
 case of a total failure.
 
 -- 
 Tim Fletcher  t...@night-shade.org.uk


[BackupPC-users] Correct rsync parameters for doing incremental transfers of large image-files

2012-05-11 Thread Andreas Piening
Hello Backuppc-users,

I'm stuck trying to identify suitable rsync parameters for handling large
image-file backups with backuppc.

Following scenario: I use partimage to create LVM-snapshot-based full images of
the block devices of my virtual (Windows) machines (KVM). I want to save these
images from the virtualization server to my backup machine running backuppc. The
images are between 40 and 60 GB uncompressed each. The time window for the backup
has to stay outside working hours and is not large enough to transfer the images
over the line every night. I read about rsync's capability to transfer only the
changed parts of a file, using a clever checksum algorithm to minimize network
traffic. That's what I want.

I tested it by creating an initial backup of one image, then created a new image
with only a few megs of changed data and triggered a new backup run. But I
noticed that the whole file was re-transferred. I waited until the end to be sure
about that, and decided that it was not the best idea to test this with a
compressed 18 GB image file, but this was my real working data image and I
expected it to behave as expected. Searching for reasons for the complete
re-transmission, I ended up in a discussion thread about rsync backups of large
compressed files. The explanation made sense to me: the compression algorithm can
produce a completely different archive file even if just a few megs of data at
the beginning of the file have been altered, because of recursion and
back-references.
So I decided to store my image uncompressed, which is about 46 GB now. I found
out that I needed to add the -C parameter to enable compression, since it is not
enabled by default. Anyway: the whole file was re-created in the second backup
run instead of only the changed parts being transferred, again.

My backuppc option RsyncClientCmd is set to "$sshPath -C -q -x -l root $host
$rsyncPath $argList+", which is backuppc's default apart from the -C.

Honestly, I don't understand the exact reason for this. There are several
possibilities that may be to blame:

- partimage does not create a linear backup image file, even when it is
uncompressed
- there is another rsync parameter I missed which enables differential transfers
of file changes
- rsync examines the file but decides not to use differential updates for this
one because of its size, or simply because its creation timestamp differs from
the previous one
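
One way to check what rsync actually does with the file, independently of
backuppc (host and paths are only placeholders), is a manual transfer with
--stats; the "Literal data" vs. "Matched data" lines then show how much really
went over the wire:

  rsync -av --inplace --stats /backup/images/vm1.img backup-host:/backup/images/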

Please give me a hint if you've successfully made differential backups of large 
image files.

Thank you very much,

Andreas Piening


Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread Andreas Piening
Hi Jim,

I think I was not able to make myself perfectly clear: I don't use virtual disk
images.
I use LVM volumes instead (you can think of them as partitions that are presented
as block devices to my virtual machines).
One thing you stated about virtual disk image backups is that they're huge. As
soon as anything in the virtual machine changes, the complete image needs to be
stored again to keep the image backup in sync. That's what I want to avoid by
using LVM volumes: I can create an R/O snapshot of a volume and mount it into the
filesystem just like an external hard drive connected to the system. From there I
can do file-based backups, which is way more space-efficient.
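
The snapshot-and-mount step I have in mind would look roughly like this on the
host (volume names and mount point are placeholders; ntfs-3g is assumed for
mounting the guest's NTFS volume read-only):

  lvcreate --snapshot --size 5G --name winsys-snap /dev/vg0/winsys
  mount -t ntfs-3g -o ro /dev/vg0/winsys-snap /mnt/winsys-snap
  # ... back up the files under /mnt/winsys-snap ...
  umount /mnt/winsys-snap
  lvremove -f /dev/vg0/winsys-snap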

As I don't have any experience restoring a virtual machine from a file-based
backup (I'm thinking of possible permission and special-attribute problems with
the NTFS filesystem/driver, a missing boot record, etc.), I'm asking whether
someone can tell me about that, or whether this is not a workable backup solution
at all.

Thank you for your response,

Andreas Piening

On 16.04.2012 at 02:56, Jim Kyle wrote:

 On Sunday, April 15, 2012, at 7:37:20 PM, Andreas Piening wrote:
 
 = I need to be able to completely restore the system including the
 = virtual machines if the machine gets lost, or damaged to a
 = non-repairable state
 
 If you do a full backup of the host, including all the KVM configuration
 and VM files together with the virtual disk files themselves, then a full
 restore should automagically take care of the virtual machines. They would
 not need to be backed up separately. You should also be able to restore
 their data by simply restoring just the virtual disk files, included in the
 full host backup.
 
 I don't do this myself, though, to back up my VirtualBox virtual machines,
 because my backup media isn't large enough to handle full host backups.
 Instead, I simply copy the virtual disk files individually to one of
 several external drives. This has worked to give me full backup on the VMs
 on the few occasions that I've needed it.
 
 -- 
 Jim Kyle
 mailto: j...@jimkyle.com
 
 




Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread Andreas Piening
Hi,

thank you for your response.

At the moment I don't use disk images. Instead I use LVM volumes which are
directly attached to my KVM machines.
One option would be to create images of the LVM volumes with an imaging tool like
partimage. These images would be around 50 GB combined when compressed: no
problem to copy to an external USB drive, but a lot of data to transfer over a
VDSL50 internet connection.
The point is that the customer wants me to back up the whole system (the physical
server including the 2 VMs) over the network to a different location. So if the
server is damaged beyond use, for instance by being flooded because of a pipe
burst, or the system gets stolen, I should be able to buy new hardware and get
back to the state of one day before the disaster occurred.

I like the efficient file-based backups BackupPC does, so I'm asking for
experiences with that. But maybe I should look for a partition-imaging tool that
supports incremental backups. I only know of Acronis True Image, but that is a
commercial (and not cheap) option.

On 16.04.2012 at 09:28, hans...@gmail.com wrote:

 Yes, I see BackupPC as a solution for what I call data archive backups, as 
 opposed to full host bare-metal.
 
 For the latter wrt physical machines I tend to do relatively infrequent 
 image snapshots of the boot and system partitions, keeping 
 frequently-changing working data on separate partitions, which are backed 
 up by BPC.
 
 I treat VM images as part of a third category, together with large media 
 files, either manually or via scripts, simply copying them to external 
 (esata) drives that get rotated offsite.
 
 For my use case, it would simply be impractical to have BPC keep so many 
 multiple copies of old versions of this third category, they're just too 
 large. The working data handled by the VMs is backed up by BPC (usually via a 
 central filer), but not the OS/boot partitions.
 




Re: [BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-16 Thread Andreas Piening
Hi Gerald,

Urbackup is completely new to me. The features sound exactly like what I need.
The website and documentation don't look that mature at first sight, at least
compared to BackupPC. But I think I just need to try the software out: the setup
looks easy, and I think I can do a backup/restore test with a bare Windows XP in
VirtualBox.

Thank you very much for the hint,

Andreas

On 16.04.2012 at 14:16, Gerald Brandt wrote:

 Hi,
 
 I use BackupPC for backups, and Urbackup for disaster recovery, imaging the 
 boot drive only.  BackupPC does the data drive.
 
 http://www.urbackup.org/
 
 Gerald
 
 
 Two completely separate backup schemes are needed here. 
 
 One for full cold-metal restores of the boot/OS level stuff, and IMO this 
 is best done with imaging style software, in your case specifically 
 targeted for windoze/ntfs systems. These don't need to be done very 
 frequently, as little is changing from day to day. BPC is not intended to 
 provide for this kind of backup, especially regarding Windows. Many Linux 
 sysadmins simply re-install their OS from automated scripts and then restore 
 config files rather than bothering to fully image their boot/OS partitions, 
 but Windows isn't suited to that approach.
 
 The type of backup is for working data, which requires the frequent 
 full/incremental/archive that BPC is designed for. Details about the 
 infrastructure under the filesystem are irrelevant to BPC, except when 
 considering how to optimize performance when a small backup window becomes 
 an issue.
 
 What you are doing with LVM snapshotting should only be necessary for certain 
 applications that keep their files open, like DB servers, Outlook and some 
 FOSS mailsystems. And then only if these services need to be kept running as 
 close to 24/7 as possible, otherwise your scripts can just close the server 
 processes down until the backup is complete and then bring them back up again.
 
 I can't advise on the NTFS-specific extended attributes and newfangled 
 security stuff, but unless you're using software that specifically leverages 
 that MS-proprietary stuff, it shouldn't IMO be an issue.
 
 



[BackupPC-users] Restoring complete virtualized Windows-Servers / Saving MBR

2012-04-15 Thread Andreas Piening
Hello BackupPC-Users,

I am planning to use BackupPC to back up a server with 2 virtual machines over
the network, and I have a few questions about that.
Following scenario:
= The server is Linux-based (ssh/rsync available)
= There are 2 virtual machines (KVM) running Windows Server 2008 R2 64 bit
= Each virtual server uses 2 LVM volumes as its system disks
= I need to be able to completely restore the system, including the virtual 
machines, if the machine gets lost or damaged beyond repair
= I can back up the Linux system via rsync. I can create R/O snapshots of the 
LVM volumes used by the virtual machines (KVM) and back those up. This way I can 
do the backup without stopping the machines. Moreover, I can use rsync instead 
of SMB, which supports compression on the network and has better performance, right?

Here are my questions:
= If I want to restore the virtualized Windows Server 2008 machines, is it 
enough to mount the LVM volumes read/write and copy all files back via 
BackupPC/rsync? Do I need an additional MBR backup to get the volume bootable 
after the restore? Might I run into file-permission issues or problems with 
special attributes, since the Windows servers use NTFS as their filesystem? How 
can I handle this?
= BackupPC normally allows me to download files or folders from the backup set, 
or to restore them to the same destination they were backed up from. Will I be 
able to restore via rsync to another location of my choice (for instance, when I 
want to re-install the system on new hardware)?

If there is documentation that gives hints on this, feel free to point me to it.

Thank you in advance,

Andreas Piening


Re: [BackupPC-users] Possible to mimic the dirvish behaviour of rsync?

2010-12-22 Thread Andreas Piening
Hi Timothy,

thank you very much for your reply. You're right: I don't want to minimize the
disk activity, only the amount of data that is transferred over the network.
I already use rsync as the backup method, so it seems I'm already fine with these
settings.

One additional question: you say that the number of incremental backups will slow
down the backup process, which makes sense to me because backuppc needs to
iterate over all incremental backups. But which settings are best if I want a
history that goes back more than 7 days?
For instance, if I want to keep the last 14 days in the backup, is it enough to
set FullKeepCnt=2 and leave FullPeriod=6.97 and IncrPeriod=0.97?
Does FullKeepCnt=2 still use pooling then? What I mean is: does it need twice the
space of FullKeepCnt=1, or are hard links still used for the unchanged files?
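
In config.pl terms I mean something roughly like this (just a sketch; IncrKeepCnt
is my guess at what would be needed to keep the dailies between the two fulls):

$Conf{FullPeriod}  = 6.97;
$Conf{FullKeepCnt} = 2;
$Conf{IncrPeriod}  = 0.97;
$Conf{IncrKeepCnt} = 13;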

The more I think about it the more I understand how much magic must be
going on in the backuppc backend!

Thank you again!

Andreas

On Tue, December 21, 2010 10:11 pm, Timothy J Massey wrote:
 Andreas Piening andreas.pien...@gmail.com wrote on 12/21/2010 03:33:18
 PM:

 Because I really like the benefits from backuppc (compression, re-
 using backups of multiple occuring single files over different
 hosts, great web-interface...) I ask myself how I can get backuppc
 to mimic this behavior from dirvish. Is it enough to screw up
 FullPeriod and FullAgeMax to let's say 999?
 Since I'm not fine with this idea which seems to conflict with the
 way backuppc operates, I ask for assistance.

 This is not necessary.  While a full backup will *read* every file on the
 target, it will not *transfer* every file from the target:  it will only
 transfer changed files (and only the changes, at that).

 In other words, unless you are trying to avoid disk activity on the target
 (which is *very* unlikely), simply using rsync as the transfer mechanism
 will give you what you're looking for.

 Don't mangle the full and incremental settings.  The longer between fulls,
 the longer the incrementals will take:  they will have more and more files
 to check every day.  A weekly full resets the changed file count, and it
 will only transfer the needed files for that time.

 Timothy J. Massey
 Out of the Box Solutions, Inc.





[BackupPC-users] Possible to mimic the dirvish behaviour of rsync?

2010-12-21 Thread Andreas Piening
Dear backuppc-users-list,

I use backuppc to back up some of my customers' servers and dirvish for some
others. The thing I really like about dirvish (apart from its easy configuration)
is that it creates only one initial snapshot and then unlimited incremental
backups based on hard links. That means there is never a second full backup after
the initial one, ever! This dramatically reduces the amount of data that has to
be transferred over the network over time. This is especially important when I
need to back up a server over a slow DSL internet connection, where a full backup
takes days.

Because I really like the benefits of backuppc (compression, re-use of identical
files across different hosts, the great web interface...) I ask myself how I can
get backuppc to mimic this dirvish behaviour. Is it enough to crank FullPeriod
and FullAgeMax up to, let's say, 999?
Since I'm not comfortable with this idea, which seems to conflict with the way
backuppc operates, I'm asking for assistance.

I'm sorry if I was not able to make my intention perfectly clear; please feel
free to ask more specific questions.

Thank you in advance!

Andreas Piening