[BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread samk-01
I may have missed something obvious, but I am unable to create an archive of 
existing backups using the CGI interface and its scheduling facilities.  All other 
normal backups are successfully created this way.  If started manually from the 
CGI interface, the archive is created successfully.

When scheduled via the CGI the Server log shows:
2008-11-17 08:00:00 Next wakeup is 2008-11-17 08:15:00


When scheduled via the CGI the Host log shows:
Blank i.e. no entries


Settings used for testing (have I missed others that are relevant?)
BackupsDisable = 0
BlackoutPeriods = Deleted
BackupZeroFilesIsFatal = Unticked


The on-line documentation at Sourceforge mentions creating an archive at the 
command line using BackupPC_archiveStart.  This method fails as the system 
cannot find BackupPC_archiveStart.


Once archiving is operating as required, the next step will be to incorporate 
ArchivePreUserCmd and ArchivePostUserCmd to enable the archive to be 
created on an external TrueCrypt disk; however, this stage will be tested 
separately and is mentioned for completeness only.

Any and all suggestions on how to get this working will be welcomed.

SamK


-
Email sent from www.virginmedia.com/email
Virus-checked using McAfee(R) Software and scanned for spam


-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Pool size graph missing legend in Centos 5 after latest rrdtool updates.

2008-11-17 Thread Nils Breunese (Lemonbit)
Pete Geenhuizen wrote:

 Nils Breunese (Lemonbit) wrote:
 I'm not using this BackupPC mod, but there was a post on the RPMForge
 users mailinglist about this problem with the latest rrdtool  
 updates: http://lists.rpmforge.net/pipermail/users/2008-November/002040.html

 Nils Breunese.


 Thanks Nils, unfortunately I'm not savvy enough with either Cacti or
 rrdtool to figure out how best to fix it.  I did find a bug on the
 Gentoo bugzilla that seems to have solved the problem, so I opened a
 bugzilla incident on CentOS 5 referencing the Gentoo bug.

Please note that those *.rf packages are not official CentOS packages,  
but packages from the third-party RPMForge repository. Filing bugs on  
the CentOS bugzilla probably won't get you much help.

Nils Breunese.



Re: [BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread Nils Breunese (Lemonbit)
SamK wrote:

 The on-line documentation at Sourceforge mentions creating an  
 archive at the command line using BackupPC_archiveStart.  This  
 method fails as the system cannot find BackupPC_archiveStart.

The BackupPC_archiveStart binary is probably not in your path. Try  
calling it using the full path, e.g. /usr/local/BackupPC/bin/BackupPC_archiveStart 
on our systems.
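If the binary lives elsewhere, a quick way to locate it (a hedged sketch; the candidate directories below are common install prefixes, not guaranteed to match your system):

```shell
# Check a few common BackupPC install prefixes for the tool; these paths
# are assumptions and vary by distribution:
for dir in /usr/local/BackupPC/bin /usr/share/backuppc/bin /usr/lib/backuppc/bin; do
  if [ -x "$dir/BackupPC_archiveStart" ]; then
    echo "found: $dir/BackupPC_archiveStart"
  fi
done
# Fallback: search the whole filesystem (slow):
# find / -name 'BackupPC_archive*' -type f 2>/dev/null
```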





Re: [BackupPC-users] Pool size graph missing legend in Centos 5 after latest rrdtool updates.

2008-11-17 Thread Pete Geenhuizen


Nils Breunese (Lemonbit) wrote:

 Please note that those *.rf packages are not official CentOS packages,  
 but packages from the third-party RPMForge repository. Filing bugs on  
 the CentOS bugzilla probably won't get you much help.

 Nils Breunese.

   
Yup, right you are; wrong place. I wasn't thinking of that, and I got the closed 
message directing me to RPMForge right after I sent my email.

-- 
Unencumbered by the thought process.  
 -- Click and Clack the Tappet brothers 




Re: [BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread samk-01
InstallDir on my system = /usr/share/backuppc
In /usr/share/backuppc/bin are various files including:
BackupPC_archive  (Note not BackupPC_archiveStart)
BackupPC_archiveHost

The on-line docs at Sourceforge do not correspond with my Ubuntu 8.04-Server 
LTS distribution.

At the command line, using the full path, both can be started.  From the CGI it 
appears that BackupPC_archiveHost is started with the following command:
$Installdir/bin/BackupPC_archiveHost $tarCreatePath $splitpath $parpath $host 
$backupnumber $compression $compext $splitsize $archiveloc $parfile *

Still, it doesn't explain why it fails to run when scheduled and fails to 
create a log entry, but works when started manually using the button in the CGI 
and creates the log entries.

SamK

 
 From: Nils Breunese (Lemonbit) [EMAIL PROTECTED]
 Date: 2008/11/17 Mon AM 09:45:41 GMT
 To: General list for user discussion,
   questions and support backuppc-users@lists.sourceforge.net
 Subject: Re: [BackupPC-users] BackupPC Working Well - Except Archiving
 
 SamK wrote:
 
  The on-line documentation at Sourceforge mentions creating an  
  archive at the command line using BackupPC_archiveStart.  This  
  method fails as the system cannot find BackupPC_archiveStart.
 
 The BackupPC_archiveStart binary is probably not in your path. Try  
 calling it using the full path, e.g. /usr/local/BackupPC/bin/BackupPC_archiveStart 
 on our systems.
 
 
 
 




Re: [BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread Adam Goryachev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

[EMAIL PROTECTED] wrote:
 
 Still, it doesn't explain why it fails to run when scheduled and fails to 
 create a log entry, but works when started manually using the button in the 
 CGI and creates the log entries.

AFAIK, you don't schedule an archive... The scheduling only applies to
full/incremental backups.

Of course, I could be wrong... but that is my experience and
understanding of the docs...

Regards,
Adam
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkkhXs4ACgkQGyoxogrTyiU/pgCfcf1yGN0gfvdiV/jPFENUO0GR
YC8AoIMqKS8llMWONOEWtDzn2RZhEKEI
=VZPV
-END PGP SIGNATURE-



[BackupPC-users] fileListReceive failed on centos 4.4

2008-11-17 Thread tilm

Hi,
I got the same problem, but I'm using the rsync command without ssh:
$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList';

This is the output from the error log:
incr backup started back to 2008-11-14 08:56:58 (backup #0) for directory /var
Running: /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids --perms 
--owner --group -D --links --hard-links --times --block-size=2048 --recursive . 
/var/
Xfer PIDs are now 31103
Got remote protocol 29
Negotiated protocol version 28
Sent exclude: /var/lib/backuppc
Sent exclude: /var/lib/tmp
Sent exclude: /var/lib/spool
Sent exclude: /var/lib/cache
Read EOF: 
Tried again: got 0 bytes
fileListReceive() failed
Done: 0 files, 0 bytes
Got fatal error during xfer (fileListReceive failed)
Backup aborted (fileListReceive failed)

FYI: The initial full backup works fine but the first incr backup fails.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--





Re: [BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread Jack Coats
Yes, the Ubuntu install is different from the 'source' install, so the 
documentation doesn't fit exactly.
After trying the Ubuntu package, I had to totally remove it and install 
from the SourceForge files (not the .deb either).
The install went nicely; even though it isn't the 'Ubuntu way', it 
worked well and the documentation was totally in sync.

[EMAIL PROTECTED] wrote:
 InstallDir on my system = /usr/share/backuppc
 In /usr/share/backuppc/bin are various files including:
 BackupPC_archive  (Note not BackupPC_archiveStart)
 BackupPC_archiveHost

 The on-line docs at Sourceforge do not correspond with my Ubuntu 8.04-Server 
 LTS distribution.

 At the command line, using the full path, both can be started.  From the CGI 
 it appears that BackupPC_archiveHost is started with the following command:
 $Installdir/bin/BackupPC_archiveHost $tarCreatePath $splitpath $parpath $host 
 $backupnumber $compression $compext $splitsize $archiveloc $parfile *

 Still, it doesn't explain why it fails to run when scheduled and fails to 
 create a log entry, but works when started manually using the button in the 
 CGI and creates the log entries.

 SamK

   
 From: Nils Breunese (Lemonbit) [EMAIL PROTECTED]
 Date: 2008/11/17 Mon AM 09:45:41 GMT
 To: General list for user discussion,
  questions and support backuppc-users@lists.sourceforge.net
 Subject: Re: [BackupPC-users] BackupPC Working Well - Except Archiving

 SamK wrote:

 
 The on-line documentation at Sourceforge mentions creating an  
 archive at the command line using BackupPC_archiveStart.  This  
 method fails as the system cannot find BackupPC_archiveStart.
   
 The BackupPC_archiveStart binary is probably not in your path. Try  
 calling it using the full path, e.g. /usr/local/BackupPC/bin/BackupPC_archiveStart 
 on our systems.



Re: [BackupPC-users] BackupPC Working Well - Except Archiving

2008-11-17 Thread samk-01
Having read the Description within the files BackupPC_archive and 
BackupPC_archiveHost it appears that BackupPC_archive begins the archive 
creation process and then passes control to BackupPC_archiveHost which uses the 
settings listed in the CGI (ArchiveClientCmd). 

This implies that BackupPC_archive could become a cron job and thereby achieve 
the desired scheduling.  Consequently, creating an archive would occur at a 
predictable date/time, which might be beneficial given that an archive will 
take longer to complete than a normal full/incremental backup.

Is this analysis correct?
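If the analysis is right, the scheduling could be done with cron. A hedged sketch (the install path is the one from this thread's Ubuntu system, and the arguments BackupPC_archive takes must be checked against your version's usage output first; nothing here is guaranteed):

```shell
# Run the tool with no arguments as the backuppc user to see its usage text
# (adjust the path to your InstallDir):
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_archive

# Then schedule it, e.g. weekly on Sunday at 02:00, via the backuppc user's
# crontab (crontab -u backuppc -e), filling in the arguments the usage text
# calls for:
# 0 2 * * 0  /usr/share/backuppc/bin/BackupPC_archive <arguments per usage text>
```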

 
 Adam Goryachev wrote:
 
 AFAIK, you don't schedule an archive... The scheduling only applies to 
 full/incremental backups.
 
 Of course, I could be wrong... but that is my experience and understanding of 
 the docs...
 
I have only recently begun to explore the use of BackupPC and may be 
investigating areas which are better understood by more experienced users.  In 
my view, if archiving is designed not to make use of the scheduling facility 
built into BackupPC, it is a little confusing for the CGI to present the same 
scheduling options as those available for full/incremental backups.

SamK





[BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Ermanno Novali
Hi everyone,
I'm a BackupPC user and I use it on several different Linux servers, with
BackupPC backing up in some cases to internal HDDs, RAID arrays or external HDDs.

I'd like to mirror the BackupPC pool. I searched through the ml archives
and found that mirroring the pool (wherever it is) with rsync onto an
external hard drive isn't efficient and doesn't scale well; I've tried it
myself and it is CPU- and time-consuming, very slow for big pools, and not
very reliable.

So I've tried to mirror the pool with rdiff-backup, and it seems a little
better, but not an optimal solution.

In this ml the recommended solutions for this task are two HDDs with the
pool on them (two external, or two in RAID maybe), or dd from the pool to
an external mirror disk, but NOT mirroring the backup with rsync or
anything like that. Right? Can you confirm that?

And is dd time-consuming like rsync but more reliable for a BackupPC pool?

Thank you so much,
have a nice day

Ermanno



[BackupPC-users] Keeping a specific full backup

2008-11-17 Thread pete davidson
Hi all

I recently restored a crashed mac harddrive from a backuppc full backup
(phew..).  I'd like to keep that particular full backup (so if I later
realize I actually needed some file I thought I didn't need in the original
restore, it's still there).  Is there a way to change my current config.pl
settings or individual [hostname].pl settings to not overwrite that
particular backup? (I'm using the default settings at the moment for how
often and how many full & incremental backups to keep.)

Many thanks in advance

Pete


Re: [BackupPC-users] moving a volume

2008-11-17 Thread Veon
Ray Todd Stevens wrote:
 We have a backuppc system setup that has been running for a while now.   We 
 are 
 expanding the office and I am going to need more storage space.  To do this I 
 will need to 
 copy the data off, reconfigure the array with more drives and then reload the 
 system.

 How is the best way to do this

cp -a is the best way imho.
I moved a storage fs a few days ago without problems.

Hope this helps (and sorry for my English).

Cya,
Veon.



Re: [BackupPC-users] moving a volume

2008-11-17 Thread Ermanno Novali
On Mon, Nov 17, 2008 at 5:54 PM, Ermanno [EMAIL PROTECTED] wrote:
 cp -a  is the best way imho.
 I moved a storage fs few days ago without problems.


Have you tried this with success? I asked something quite
similar today on this ml,
because I read that rsync (and cp, of course) are not optimal ways to
copy a BackupPC pool.

thanks

Ermanno



Re: [BackupPC-users] XP Client setup problem

2008-11-17 Thread Paul Mantz
On Sun, Nov 16, 2008 at 2:42 PM, Eric Snyder [EMAIL PROTECTED] wrote:
 I have set up XP clients before with great trouble. It was over a year
 ago. I added a new computer, copied a current client's config, and changed
 the client-specific settings, and I get errors. I changed the share
 to C$ and get the No files dumped for share C$ error message. I believe
 that it is a problem on the new client side since the other XP clients
 are backing up just fine.


Hello Eric,

If I recall correctly, you need to open up the configuration panel for
Explorer and make sure the 'Enable Simple Sharing' option is turned
off.  After that, make sure that the folder you want to back up is
shared correctly.
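One way to sanity-check the share from the BackupPC server before digging further into the client config; a sketch only, where the hostname, share and user below are hypothetical placeholders:

```shell
# List the root of the administrative share as the backup user; if this
# fails, BackupPC's smbclient-based transfer will fail too.
# "winclient" and "backupuser" are placeholders, not real names:
smbclient //winclient/C$ -U backupuser -c 'ls'
```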


-- 
Paul Mantz
http://www.mcpantz.org
BackupPC - Network Backup with De-Duplication http://www.backuppc.com
Zmanda - Open source backup and recovery http://www.zmanda.com/



Re: [BackupPC-users] Keeping a specific full backup

2008-11-17 Thread Nils Breunese (Lemonbit)
pete davidson wrote:

 I recently restored a crashed mac harddrive from a backuppc full  
 backup (phew..).  I'd like to keep that particular full backup (so  
 if I later realize I actually needed some file I thought I didn't  
 need in the original restore it's still there).  Is there a way to  
 change my current config.pl settings or individual [hostname].pl  
 settings to not overwrite that particular backup (I'm using the  
 default settings at the moment for how often and how many full   
 incremental backups to keep)?

I recommend creating an archive of that particular backup.

Nils Breunese.



Re: [BackupPC-users] XP Client setup problem

2008-11-17 Thread Rob Poe
So,

Do people think that Samba (i.e. Windows shares) is the way to 
go?  I've been using rsync (excluding .EXE, .DLL, etc.), which seems to 
make the pools smaller (this isn't a full backup, but rather an Oh, 
crap, the HD failed and I need the files on the computer backup).



Eric Snyder wrote:
 I have set up XP clients before with great trouble. It was over a year 
 ago. I added a new computer, copied a current client's config, and changed 
 the client-specific settings, and I get errors. I changed the share 
 to C$ and get the No files dumped for share C$ error message. I believe 
 that it is a problem on the new client side since the other XP clients 
 are backing up just fine.

 I have set up a user on the new client and added them to the Backup 
 Operators group. I still have the same error. Any help would be 
 appreciated.

   




Re: [BackupPC-users] Keeping a specific full backup

2008-11-17 Thread Pedro M. S. Oliveira
You can also create a new host just like that, and set the original one's 
BackupsDisable to 1 or 2. 
That way it won't keep doing backups.

cheers,
Pedro
On Thursday 06 November 2008 07:38:26 pete davidson wrote:
 Hi all
 
 I recently restored a crashed mac harddrive from a backuppc full backup
 (phew..).  I'd like to keep that particular full backup (so if I later
 realize I actually needed some file I thought I didn't need in the original
 restore it's still there).  Is there a way to change my current config.pl
 settings or individual [hostname].pl settings to not overwrite that
 particular backup (I'm using the default settings at the moment for how
 often and how many full  incremental backups to keep)?
 
 Many thanks in advance
 
 Pete
 

-- 
--
Pedro Oliveira
IT Consultant 
Email: [EMAIL PROTECTED]  
URL:   http://pedro.linux-geex.com
Telefone: +351 96 5867227
--


Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Adam Goryachev
Ermanno Novali wrote:
 Hi everyone,

 I'd like to mirror the backuppc pool - I searched through ml archives
 and found that mirroring the backuppc pool (wherever it is) with rsync
 on an external hard drive isn't efficient and doesn't scale good -
 i've tried myself and is cpu and time consuming and very very long for
 big pools - not very reliable.
 In this ml the best solutions for this task are two hdd with pool on
 them (two external, or two in raid maybe) or dd form pool to external
 mirror disk - but NOT mirroring the backup with rsync or something
 like that - right? can you confirm that?
   
Yes. RAID1 and dd are equally good, except that to use dd you must stop
BackupPC and unmount the filesystem for the duration of the dd (with
RAID1 you only need to unmount the FS just before you break the RAID to
remove the external drive).
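The stop/unmount/dd sequence might look like this; a hedged sketch where the init script, mount point and device names are placeholders that must be adapted to your layout, since dd to the wrong device destroys data:

```shell
# Stop BackupPC and quiesce the pool filesystem before the block copy.
/etc/init.d/backuppc stop
umount /var/lib/backuppc            # pool mount point: placeholder
dd if=/dev/sdb1 of=/dev/sdc1 bs=1M  # pool partition -> mirror disk (placeholders!)
mount /var/lib/backuppc
/etc/init.d/backuppc start
```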
 And dd is time consuming like rsync but more reliable for backuppc pool?
   
Generally dd and RAID1 will copy the entire pool in the time it takes to
read the entire disk and write the entire disk. The problem with cp and
rsync etc is that they need to read the filesystem structure, and make a
huge number of small reads and small writes. Especially rsync needs to
read the entire list of filenames and store them in memory before even
starting to copy the data to the destination. This is what makes rsync a
poor choice.

Of course, I've not discussed if or how the changes in rsync v3 modify
the above discussion, AFAIK, it is meant to solve or improve the
situation by starting to copy the content before reading the entire file
list, and also being less memory intensive while copying the data.

Also, if your pool is only 10G of data and your filesystem is 2TB, then
rsync or cp will work better. The above discussion mostly applies to
large pools. Although the definition of a large pool is somewhat
murky, and it differs depending on your BackupPC hardware, I would guess
something around 500G would be large...

Regards,
Adam



Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Martin Leben
Ermanno Novali wrote:
 [...]
 I'd like to mirror the backuppc pool - I searched through ml archives
 and found that mirroring the backuppc pool (wherever it is) with rsync
 on an external hard drive isn't efficient and doesn't scale good -
 i've tried myself and is cpu and time consuming and very very long for
 big pools - not very reliable.
 
 So i've tried to mirror the pool with rdiff-backup, and it seems a
 little better, but not the optimal solution.
 
 In this ml the best solutions for this task are two hdd with pool on
 them (two external, or two in raid maybe) or dd form pool to external
 mirror disk - but NOT mirroring the backup with rsync or something
 like that - right? can you confirm that?
 
 And dd is time consuming like rsync but more reliable for backuppc pool?
 
 Thank you so much,
 have a nice day
 
 Ermanno


Hi,

Yes, use dd (or even better ddrescue, which is restartable and gives a progress 
indication) for big pools. For smaller pools you might use cp -a or rsync 
-aH (restartable). You have to find out the practical upper limit for the 
latter methods depending on your requirements.


Another alternative is to use at least three disks in a rotating scheme and 
RAID1. (Those of you who have been reading the list for more than a few days 
are 
getting tired of hearing this by now, I imagine...!) Say you have three disks 
labeled 1, 2 and 3. Then you would rotate them according to the schedule below, 
which guarantees that:
- there is always at least one disk in the BackupPC server.
- there is always at least one disk in the off-site storage.
- all disks are never at the same location.

1 2 3   (a = attached, o = off-site)
a o o
a a o - RAID sync
o a o
o a a - RAID sync
o o a
a o a - RAID sync
. . .

On top of the RAID1 I recommend that you use LVM, even though it might not be 
strictly necessary right now if your backups fit on one disk. The reason is 
that when your disks become too small, you can expand with a second set of 
disks in a similar setup and add that second RAID1 to the volume group.
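A minimal sketch of this setup, with hypothetical device names (sdb/sdc currently attached, sdd the disk rotating in):

```shell
# Two-way RAID1; a rotated-out disk simply shows up as a missing member.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# LVM on top, so a second RAID1 can be added to the volume group later.
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -l 100%FREE -n pool backupvg
mkfs -t ext3 /dev/backupvg/pool

# One rotation step: remove the outgoing disk, add the incoming one,
# and wait for the resync before taking anything else off-site.
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdd1
cat /proc/mdstat   # watch the resync progress
```

Growing later then means creating a second md array from the new disk pair and extending with vgextend/lvextend, rather than replacing disks in place.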

Good luck!

/Martin




Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Ermanno Novali
 Also, if your pool is only 10G of data, and your filesystem is 2TB, then
 rsync or cp will work better. The above discussion mostly applies to
 large pools. Although the definition of large pools is somewhat
 murky, and it differs depending on your backuppc hardware, I would guess
 something around 500G would be large...

I've tested rsync and rdiff-backup
(http://www.nongnu.org/rdiff-backup/), and rdiff-backup seems to be
more responsive and seems to begin writing before rsync does...

Anyone here with experience with rdiff-backup? I think that (except
for being - maybe - less time-consuming) it's like rsync (goods and
bads included).
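For reference, the rdiff-backup invocation being compared here is of this general shape (paths are hypothetical):

```shell
# Mirror the pool; the target is a plain copy plus reverse increments
# stored under rdiff-backup-data/ inside it.
rdiff-backup --print-statistics /var/lib/backuppc /mnt/mirror/backuppc

# Restore the state as of three days ago into a scratch directory:
rdiff-backup -r 3D /mnt/mirror/backuppc /tmp/pool-restore
```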

And thanks for your reply,
Ermanno



Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Ermanno Novali
 Yes use dd (or even better dd-rescue that is restartable and gives progress
 indication) for big pools. For smaller pools you might use cp -a or rsync
 -aH (restartable). You have to find out the practical upper limit for the
 latter methods depending on your requirements.

Thanks for the dd-rescue suggestion, I'll take a look at it.


 Another alternative is to use at least three disks in a rotating scheme and
 RAID1. (Those of you who have been reading the list for more than a few days 
 are
 getting tired of hearing this by now, I imagine...!) Say you have three disks
 labeled 1, 2 and 3. Then you would rotate them according to the schedule 
 below,
 which guarantees that:
 - there is always at least one disk in the BackupPC server.
 - there is always at least one disk in the off-site storage.
 - all disks are never at the same location.

 1 2 3   (a = attached, o = off-site)
 a o o
 a a o - RAID sync
 o a o
 o a a - RAID sync
 o o a
 a o a - RAID sync
 . . .


I'll try this too, where I have a RAID or need one.
Thanks!

Ermanno



Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread dan
I use rsync v3 on a pool of about 280GB and a few million files.  With
rsync v3, the writes start within a few seconds of starting the sync and it
traverses the entire pool in 10-20 minutes.  I only transfer about 3-4GB of
files each night, with rsync reducing that to about 1GB over a T1 at 196KB/s
in about 1.5-2 hours.  Considering the transfer itself is going to take 90
minutes just by bandwidth restrictions, this is not bad.  I used to run this
on rsync 2.x but it took at least 3 hours to complete, as the initial file
list would take ages and ages.  It also began to push memory off to swap to
make room for rsync, which ground system performance to a halt.

I think that you can use rsync v3 (on both sides) to sync pools without
issue.
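The nightly sync described above boils down to something like this (hostname and paths are hypothetical; both ends need rsync >= 3.0 for the incremental file list):

```shell
# -a preserves permissions/times, -H preserves the pool's hard links,
# --numeric-ids avoids uid/gid remapping between the two servers.
rsync -aH --delete --numeric-ids \
    /var/lib/backuppc/ mirrorhost:/var/lib/backuppc/
```

-H is what keeps the pool's hard links (and therefore its size) intact on the far side; it is also the option that made rsync 2.x so memory-hungry on large pools.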

I'm assuming that you are using Linux here also.  With *Solaris you have the
ZFS option as well.

I did some research a while back to use a cluster filesystem for the storage
pool but all cluster filesystems have much lower I/O performance than an
on-disk filesystem.

Also, I did try some software RAID mirroring over iSCSI but did not do
much more than basic testing.  The problem here is that the RAID mirroring
is synchronous, so the slow iSCSI connection will affect backup performance
quite a bit.  I couldn't find any info on making the Linux software RAID
work in async mode with the local drive being the priority drive.

Unfortunately ZFS's raidz only goes up to two redundant devices, and you
would want more redundancy to make this work.  If you could do raidz* with
any number of redundant drives, you could also put local disk cache and log
drives in place and let ZFS handle the slow link over iSCSI.

local |   remote
disk1 disk2 disk3  |disk4 disk5 disk6
disk7=log,disk8=cache  |




On Mon, Nov 17, 2008 at 4:26 PM, Ermanno Novali [EMAIL PROTECTED]wrote:

  Yes use dd (or even better dd-rescue that is restartable and gives
 progress
  indication) for big pools. For smaller pools you might use cp -a or
 rsync
  -aH (restartable). You have to find out the practical upper limit for
 the
  latter methods depending on your requirements.

 Thanks for dd-rescue suggestion, i'll take a look at it


  Another alternative is to use at least three disks in a rotating scheme
 and
  RAID1. (Those of you who have been reading the list for more than a few
 days are
  getting tired of hearing this by now, I imagine...!) Say you have three
 disks
  labeled 1, 2 and 3. Then you would rotate them according to the schedule
 below,
  which guarantees that:
  - there is always at least one disk in the BackupPC server.
  - there is always at least one disk in the off-site storage.
  - all disks are never at the same location.
 
  1 2 3   (a = attached, o = off-site)
  a o o
  a a o - RAID sync
  o a o
  o a a - RAID sync
  o o a
  a o a - RAID sync
  . . .
 

 i'll try this too, where i have a raid or i need it
 thanks!

 Ermanno




Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread tmassey
dan [EMAIL PROTECTED] wrote on 11/17/2008 09:29:19 PM:

 I use rsync v3 on a pool of about 280GB and about a few million 
 files.  With rsync v3, the writes start within a few seconds of 
 starting the sync and it traverses the entire pool in 10-20 minutes.
 I only transfer about 3-4GB of files each night with rsync reducing 
 that to about 1GB over a T1 at 196KB/s in about 1.5-2 hours. 
 Considering the transfer itself is going to take 90 minutes just by 
 bandwidth restrictions, this is not bad.

How does this work while running this simultaneously with, say, a backup 
(or link or nightly or whatever)?  Do you worry about that, or do you just 
try to make sure the two don't run simultaneously?

My biggest worry regarding these outside-of-BackupPC hacks is that when I 
need them, I'm going to find that they didn't work because the copy ran, 
say, simultaneously with an actual backup.

Don't get me wrong:  I'll take the hacks.  It's better than nothing.  I, 
like I think *most* of us, would kill (or even pay for!) a method of 
replicating a pool in a guaranteed-correct way, especially at the host or 
even backup level.  But I still worry about using these hacks for 
production.

Tim Massey


Re: [BackupPC-users] merge host from one pool into another pool

2008-11-17 Thread Msquared
On Thu, Nov 13, 2008 at 10:55:36AM +0100, Michael Kuss wrote:

 historically, I run backuppc on two servers. One runs version 2.1.2 on
 Scientific Linux 4.7, the other 3.x (can't check right now) on Fedora
 Core 7.  Both backup different sets of hosts. Some hosts are being
 backed up by both of them (primarily laptops and one of the servers).
 Now, I would like to add the FC7 server (called benz) to the pool of the
 SL4 server too, including the history of backups.
 
 How would I achieve this?

I'm not sure that it would work, but I would use rsync to copy pc/benz
from one server to the other, then copy the config files for benz then add
the benz host to the other server.  (With rsync, make sure you turn on all
the relevant options for link handling: this should preserve the tree of
links under pc/benz, but will make the transfer take longer.)
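Concretely, that copy might look like this (paths are hypothetical; -a -H are the "relevant options for link handling"):

```shell
# -H preserves hard links among the files being transferred; note that
# links into the pool outside pc/benz cannot be preserved this way, so
# the copied tree will initially be un-pooled on the target server.
rsync -aH --numeric-ids /var/lib/backuppc/pc/benz/ \
    newserver:/var/lib/backuppc/pc/benz/
```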

I expect that BackupPC would take care of everything for you during
one of its maintenance tasks, but I'm not sure this is actually the case.

If this is not the case, then you can just remove everything you copied
across and try something else.

I know this is not a great deal of help, but sometimes it pays to just go
ahead and give it a shot, especially since the amount of effort required
to try it is not that high...

If this works, let us know!

Regards, Msquared...



[BackupPC-users] Vista client unauthorized user

2008-11-17 Thread Oz Dror
I have installed the XP client unsuccessfully.

On Vista, rsyncd failed.  Any ideas why, and how to debug it?

I get the following error:
 auth failed on module home from unknown (192.168.0.4): unauthorized user

/etc/rsyncd:

gid = users
read only = false
use chroot = false
transfer logging = false
log file = /var/log/rsyncd.log
log format = %h %o %f %l %b
hosts allow = 192.168.0.4
hosts deny = 0.0.0.0/0
strict modes = false
[root]
path = /cygdrive/c/
auth users = backuppc
secrets file = /etc/rsyncd.secrets
[home]
path = /cygdrive/c/users/[my home dir]/documents/quicken/
auth users = backuppc
secrets file = /etc/rsyncd.secrets
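Two quick checks from the BackupPC server can narrow down where the auth fails (the client IP is taken from the original post; the password is whatever is in /etc/rsyncd.secrets):

```shell
# Does the daemon answer and list its modules at all?
rsync 192.168.0.4::

# Does authentication against the [home] module succeed?
RSYNC_PASSWORD='secret' rsync --list-only backuppc@192.168.0.4::home
```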

Thanks
Oz





[BackupPC-users] Question about attrib file structure

2008-11-17 Thread Jeffrey J. Kosowsky
Is the following true:
1. If a directory is *empty*, is there any reason for it to have an
   attrib file?
   
   Because in playing around with creating and deleting directory contents, I
   found that sometimes even after emptying directory contents, the
   subsequent incremental backups may sometimes still have (empty)
   attrib files.

2. If not, can I safely erase any (empty) attrib file that has no files
   associated with it?

3. Other than type=10 (delete), is there *any* reason for an attrib
   file to contain an entry for a file that is not present in the directory?

   Because, I have found some attrib files on my system in past backups that
   have file entries with type 0 (i.e *not* type 10) yet there is no
   file present in the directory.

4. If not, can I safely *remove* any non type=10 attrib entry if the
   corresponding file is not in the directory?

Thanks



Re: [BackupPC-users] Vista client unauthorized user

2008-11-17 Thread Cody Dunne
Oz Dror wrote:
 I have installed xp client unsuccessfully
 
 On vista rsyncd failed.  Any ideas why and how to debug it.
 
 I get the following error:
  auth failed on module home from unknown (192.168.0.4): unauthorized user
 
 /etc/rsyncd:

You mean /etc/rsyncd.conf, right?

 
 gid = users
 read only = false
 use chroot = false
 transfer logging = false
 log file = /var/log/rsyncd.log
 log format = %h %o %f %l %b
 hosts allow = 192.168.0.4
 hosts deny = 0.0.0.0/0

Try removing the allow and deny lines temporarily until it starts working.

 strict modes = false
 [root]
 path = /cygdrive/c/
 auth users = backuppc
 secrets file = /etc/rsyncd.secrets
 [home]
 path = /cygdrive/c/users/[my home dir]/documents/quicken/

Isn't the path case sensitive? e.g. /cygdrive/c/Users/[my home 
dir]/Documents/Quicken/

 auth users = backuppc

Also case sensitive.

 secrets file = /etc/rsyncd.secrets

backuppc is in there, right? Like this: backuppc:password

 
 Thanks
 Oz

Also, can you connect to it manually or increase the verbosity and tail 
/var/log/rsyncd.log on the client?

Cody



Re: [BackupPC-users] merge host from one pool into another pool

2008-11-17 Thread dan
I have done this on a small setup a while ago.  I just pulled over the pc
directory from one into the other and adjusted my config files for the
imported backups.  Additionally, I made an error when copying and brought
over numeric user IDs for the files and had to chown them.  I'm not sure if
the nightly run will handle the typical deduplication or not; I never
checked.

So... yes, you can merge them, but I am not sure if you get deduplication in
the next nightly run...

On Mon, Nov 17, 2008 at 8:41 PM, Msquared [EMAIL PROTECTED]wrote:

 On Thu, Nov 13, 2008 at 10:55:36AM +0100, Michael Kuss wrote:

  historically, I run backuppc on two servers. One runs version 2.1.2 on
  Scientific Linux 4.7, the other 3.x (can't check right now) on Fedora
  Core 7.  Both backup different sets of hosts. Some hosts are being
  backed up by both of them (primarily laptops and one of the servers).
  Now, I would like to add the FC7 server (called benz) to the pool of the
  SL4 server too, including the history of backups.
 
  How would I achieve this?

 I'm not sure that it would work, but I would use rsync to copy pc/benz
 from one server to the other, then copy the config files for benz then add
 the benz host to the other server.  (With rsync, make sure you turn on all
 the relevant options for link handling: this should preserve the tree of
 links under pc/benz, but will make the transfer take longer.)

 I expect that BackupPC would take care of everything for you during
 one of its maintenance tasks, but I'm not sure this is actually the case.

 If this is not the case, then you can just remove everything you copied
 across and try something else.

 I know this is not a great deal of help, but sometimes it pays to just go
 ahead and give it a shot, especially since the amount of effort required
 to try it is not that high...

 If this works, let us know!

 Regards, Msquared...




Re: [BackupPC-users] XP Client - samba vs rsync

2008-11-17 Thread dan
SMB is not ideal across any sort of WAN or slow link; SMB is very much a
LAN filesystem layer.

rsync is very functional, even on Vista.  You can create a user specifically
to run rsync and give them read access to the directories you want to back
up.

Some people do have problems with rsync on Windows, but I have set up many
machines with various Windows versions from 2000 to Vista SP2 and never had
an issue.  I use DeltaCopy
(http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp) for rsync, as it
installs easily and just works...




On Mon, Nov 17, 2008 at 9:08 PM, Msquared [EMAIL PROTECTED]wrote:

 On Mon, Nov 17, 2008 at 12:14:16PM -0600, Rob Poe wrote:

  Do people seem to think that Samba (i.e. Windows Shares) is the way to
  go?  I've been using RSync (excluding .EXE, .DLL, etc) .. which seems to
  make the pools smaller (this isn't a full backup -- but rather a Oh,
  crap, the HD failed and I need the files on the computer)..

 I've found rsync ideal for that scenario, and use it to back up the
 Windows partition on my dual-boot laptop.  (I'm backing up via a VPN
 across wireless, and SMB requires extra work in order to make it work
 across a router.)

 Regards, Msquared...




[BackupPC-users] (Improved) Routine to delete individual files from selected backups...

2008-11-17 Thread Jeffrey J. Kosowsky
I have vastly improved and completely rewritten my program
BackupPC_deleteFiles.pl. Also many bugs were fixed ;)

The routine now allows you to delete arbitrary files and directories
(or list or globs thereof) across multiple hosts and shares, and
arbitrary (contiguous) backup ranges.

Specifically, you can now delete files from either a single backup or
from a range of backups. The program then appropriately deletes and/or
moves files and attributes and correspondingly adds or removes type=10 delete
attributes so as to make sure that the files show as fully deleted from the
backup range while not affecting the files visible from subsequent
backups that were not deleted.

The only thing it can't do (and refuses to do) is to delete files that
are hard links or directories that contain hard links since I couldn't
find any easy way to find and keep track of hard links.

The program provides lots of (optional) verbosity and debugging levels
so you can be sure you are deleting what you want to (and from a
debugging perspective that the appropriate visibility and inheritance
rules are being faithfully applied).


Since the program is now 1000+ lines long, I won't post it, but I will
be happy to email it to anyone interested or post it if there is
enough demand. Instead I will just copy over the logic so people can
check it if they are so inclined (note it took me multiple attempts
before I truly understood the topology of the backup chains and how to
efficiently and accurately encode it).

I will also include a copy of the usage message and options:


usage: $0 [options] files/directories...
  NOTE: if -s option not set, then file/directory names include the share name

  Required options:
-h host Host (or - for all) from which path is offset
-n backRangeRange of successive backup numbers to delete.
N   delete files from backup N (only)
M-N delete files from backups M-N (inclusive)
-M  delete files from all backups up to M (inclusive)
    M-  delete files from all backups from M onward (inclusive)
-   delete files from ALL backups

  Optional options:
-s shareNameShare name (or - for all) from which path is offset
(don\'t include the 'f' mangle)
-l  Just list backups by host (with level noted in parentheses)
-r  Allow directories to be removed too
-H  Skip over hard links (otherwise exits without deletions if 
hard links found)
-m  Paths are unmangled (i.e. apply mangle to paths)
-q  Don\'t show deletions
-t  Trial run -- do everything but deletions
-c  Clean up pool - schedule BackupPC_nightly to run (requires 
server running)
Only runs if files were deleted
-d levelTurn on debug level



 Program logic is as follows:

 1. First construct a hash of hashes of 3 arrays and 2 hashes that
encapsulates the structure of the full and incremental backups
for each host. This hash is called:
%backupsHoHA{hostname}{key} 
where the keys are: ante, post, baks, level, vislvl
with the first 3 keys having arrays as values and the final 2
keys having hashes as values. This pre-step is done since this
same structure can be re-used when deleting multiple files and
    dirs (with potential wildcards) across multiple shares, backups,
    and hosts. The component arrays and hashes are constructed as
    follows:
 
- Start by constructing the simple hash %LevelH whose keys map
  backup numbers to incremental backup levels based on the
  information in the corresponding backupInfo file.

- Then, for each host selected, determine the list (@Baks) of
  individual backups from which files are to be deleted based on
  bakRange and the actual existing backups.
  
- Based on this list determine the list of direct antecedent
  backups (@Ante) that have strictly increasing backup levels
  starting with the previous level 0 backup. This list thus
  begins with the previous level zero backup and ends with the
  last backup before @Baks that has a lower incremental level.
  Note: this list may be empty if @Baks starts with a full (level
  0) backup. Note: there is at most one (and should in general be
  exactly one) incremental backup per level in this list starting
  with level 0.

    - Similarly, construct the list of direct descendants (@Post) of
  the elements of @Baks that have strictly decreasing backup
  levels starting with the first incremental backup after @Baks
  and continuing until we reach a backup whose level is less than
  or equal to the level of the lowest incremental backup in @Baks
  (which may or 

Re: [BackupPC-users] Vista client unauthorized user

2008-11-17 Thread Oz Dror
Thanks for responding.

I have made all the corrections that you suggested. I am still having 
the auth issue.
When I run the backup in a command line
I noticed these lines:

Negotiated protocol version 28
Got response: 73c7ac91a97cd967d94504b9a6347037
Auth: got challenge: Mkx8AVu0NSVA9mANktJ9Tg, reply:  c8eskal82WfZRQS5pjRwNw
Error connecting to module home at 192.168.0.97:873: auth failed on 
module home

What does that mean? Which user failed authorization? My user account 
on the Vista machine, or the backuppc user? Does backuppc need to be an 
administrator user of the Vista machine? In my current setup it is not 
even a user of the Vista machine.

Cody Dunne wrote:


 Oz Dror wrote:
 I have installed xp client unsuccessfully

 On vista rsyncd failed.  Any ideas why and how to debug it.

 I get the following error:
  auth failed on module home from unknown (192.168.0.4): unauthorized 
 user

 /etc/rsyncd:

 You mean /etc/rsyncd.conf, right?


 gid = users
 read only = false
 use chroot = false
 transfer logging = false
 log file = /var/log/rsyncd.log
 log format = %h %o %f %l %b
 hosts allow = 192.168.0.4
 hosts deny = 0.0.0.0/0

 Try removing the allow and deny lines temporarily until it starts 
 working.

 strict modes = false
 [root]
 path = /cygdrive/c/
 auth users = backuppc
 secrets file = /etc/rsyncd.secrets
 [home]
 path = /cygdrive/c/users/[my home dir]/documents/quicken/

 Isn't the path case sensitive? e.g.-/cygdrive/c/Users/[my home 
 dir]/Documents/Quicken/

 auth users = backuppc

 Also case sensitive.

 secrets file = /etc/rsyncd.secrets

 backuppc is in there, right? Like this: backuppc:password


 Thanks
 Oz

 Also, can you connect to it manually or increase the verbosity and 
 tail /var/log/rsyncd.log on the client?

 Cody




Re: [BackupPC-users] Vista client unauthorized user

2008-11-17 Thread Jaco Lange
Hi

I am using DeltaCopy on Windows Vista 32/64-bit and Server 2008 without any
problems:
http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp


Regards
Jaco


[BackupPC-users] Any reason why log files are not in standard gzip format?

2008-11-17 Thread Jeffrey J. Kosowsky
I understand why the cpool files are compressed using zlib with a
twist so that you can also save the checksums.

But why do the log files have to be in a format that can't be read
with standard unix tools? Especially since the system logs sit in
/var/log along with all the other files that get compressed with
either gzip or bzip2.

That would seem to be easier than being forced to either open a web
browser or enter the full path to
/usr/share/BackupPC/bin/BackupPC_zcat as it is stored on my system.

I'm sure I'm missing something here...
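In the meantime the stock tool does work for quick inspection, e.g. (install path as given in the original post; the pool location and log file name are hypothetical):

```shell
# Decompress a per-host log and page through it as usual:
/usr/share/BackupPC/bin/BackupPC_zcat \
    /var/lib/backuppc/pc/somehost/LOG.112008.z | less
```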



Re: [BackupPC-users] Backuppc mirroring with rdiff-backup or not?

2008-11-17 Thread Nils Breunese (Lemonbit)
[EMAIL PROTECTED] wrote:

 Don't get me wrong:  I'll take the hacks.  It's better than  
 nothing.  I, like I think *most* of us, would kill (or even pay  
 for!) a method of replicating a pool in a guaranteed-correct way,  
 especially at the host or even backup level.  But I still worry  
 about using these hacks for production.

You could also set up two BackupPC servers in two different locations.  
No hacks needed.

Nils Breunese.
