Re: [BackupPC-users] CmdQueueNice

2012-06-07 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 08:05:33 -0500 on Thursday, June 7, 2012:
  On Thu, Jun 7, 2012 at 3:42 AM, Tyler J. Wagner ty...@tolaris.com wrote:
   On 2012-06-06 17:45, Jeffrey J. Kosowsky wrote:
   I am able to achieve most of what you wish by doing most of my
   customization in the individual host-specific config files since
   variables set there override variables set in config.pl
  
   Alternatively, one could add a line of perl code at the end of the
   master config.pl file to source a set of customizations that would
   again overwrite the defaults. Then after an upgrade, you would only
   need to add back that sourcing line
  
   The standard behaviour is that the daemon uses a default value if no
   setting is found in the config file. Config files are for configuration
   (away from defaults). Unfortunately, this isn't the case here.
  
  But there are 'package' defaults for code/configs as shipped and
  updated in a distribution and 'local' defaults that the admin usually
  wants to keep - and which will conflict with updates of the
  package.

Of course, but separating out your changes at least makes updating
easier, since you only need to check those variables to see whether
they have changed... So I find that separating out my changes both
keeps everything 'cleaner' and makes upgrades easier. YMMV.



Re: [BackupPC-users] CmdQueueNice

2012-06-06 Thread Jeffrey J. Kosowsky
Adam Goryachev wrote at about 21:14:52 +1000 on Wednesday, June 6, 2012:
  Even better would be if backuppc could support reading config file
  snippets from a directory; that way all the local changes could be
  stored in separate files, and the package could upgrade the config.pl
  file without overwriting local changes.

I am able to achieve most of what you want by doing most of my
customization in the individual host-specific config files, since
variables set there override variables set in config.pl.

Alternatively, one could add a line of Perl code at the end of the
master config.pl file to source a file of local customizations that
would again override the defaults. Then, after an upgrade, you would
only need to add back that sourcing line.
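
For example (just a sketch -- the file name and path are made up, and
it relies on config.pl being ordinary Perl that the server reads, as it
is in the stock setup), the sourcing line could be as simple as:

    # last line added to the distribution config.pl:
    do "/etc/BackupPC/local.pl" if -f "/etc/BackupPC/local.pl";

    # /etc/BackupPC/local.pl then contains nothing but your own settings, e.g.:
    #   $Conf{FullPeriod} = 13.97;    # illustrative value only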



Re: [BackupPC-users] Parity (par2) command running on archives even though set to 0

2012-05-31 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 12:05:02 -0400 on Thursday, May 31, 2012:
  Pascal Mosimann pascal.mosim...@alternatique.ch wrote on 05/31/2012 
  11:12:54 AM:
  
 Why does BackupPC run the parity command if I've told it not to by 
   passing
 it a 0?  And how do I return to the 3.1 behavior?  According to the
 documentation, setting it to 0 should disable it, not cause it to run 
  
   with
 a parameter of 0...  And if for some reason we would *want* the 
   parity to
 run with a parity of 0, could we have a parameter that disables it? 
   Maybe
 -1?  How do you calculate *negative* parity!  :)
   
   Hi Tim,
   
   Same situation here: I don't want to run par2, because the archive is 
   done on USB drives and it takes too long.
   
   It looks like between version 1.16 and 1.17 of BackupPC_archiveHost (see 
   http://backuppc.cvs.sourceforge.net/viewvc/backuppc/BackupPC/bin/
   BackupPC_archiveHost?r1=1.16&r2=1.17), 
   the condition to run par2 changed on line 157
From
  ...
  if ( $parfile != 0 ) {
  ...
   
   to
  ...
  if ( length($parfile) ) {
  ...
   
   I've modified the condition back to if ( $parfile != 0 ) and now it 
   skips the par2 execution.
  
  Thank you for the followup!  I actually changed it to:
  
   if ( length($parfile) && $parfile != 0 ) {
  
  (and sent a patch to the list that was seemingly ignored).
  
  My perl-fu is not the greatest;  the reason why I changed it to the above 
  is to be able to handle two conditions:  parfile set to 0, *and* the 
  parfile parameter unset (which is the default, by the way) or set to an 
  empty string.  Perl considers unset variables equal to an empty string, 
  which also is equal to 0 in a comparison, but I don't like depending on 
  such assumptions.  I'd rather the code be explicit about what it's trying 
  to accomplish.  Maybe that makes me inelegant.

Well, Perl purposely does that casting to simplify exactly the kind of
test you are trying to write: it is often awkward to test separately
whether a variable is undefined, an empty string, or zero, so relying
on the cast is not only legal but, in a sense, the right way to do things.

Indeed, your code is in a sense not only *inelegant* but *wrong*, since
it will spit out a nasty warning about $parfile being uninitialized if
$parfile is unset and you use the (recommended) '-w' flag -- neither
'length' nor a numeric comparison is meant to operate on an
uninitialized variable. Thus, your code is almost certainly less
preferable to the simpler:
   if ( $parfile )

I think most Perl coders would understand and expect the above to
return true only if the variable $parfile is set to a non-zero,
non-empty value.
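
To see the difference concretely, here is a minimal standalone sketch
(my own illustration, not BackupPC code):

    #!/usr/bin/perl -w
    use strict;

    my $parfile;              # undef, as when the parameter was never set

    # A bare numeric comparison against an undefined value triggers a -w
    # warning: "Use of uninitialized value $parfile in numeric ne (!=)"
    if ( $parfile != 0 ) { print "par2 would run\n"; }

    # The simple truth test is silent, and is false for undef, '' and 0:
    if ( $parfile )      { print "par2 would run\n"; }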

  
  Of course, if you want elegant you could use:
  
   if ( $parfile ) {

Not just elegant but *working* --- your code spits out warnings under
the '-w' flag, and it's sloppy programming to write Perl code that
doesn't run cleanly with -w (indeed, the perl manpage goes so far as to
say that not making '-w' mandatory is a *bug* in Perl -- i.e., all
Perl code should be written to pass the '-w' (warning) checks).

  
  because Perl will evaluate an undefined value as an empty string, an 
  empty string as false, and a zero as false.  It's certainly concise...
  
  But like I said, my perl-fu is not the greatest, and I prefer explicit to 
  implicit.  If this is somehow offensive to the sense of style of Perl 
  hackers, then so be it.

It's not about explicit vs. implicit; it's about code that works, is
portable, and is (relatively) understandable. I think just about any
Perl programmer beyond the total novice understands how undefined
variables are evaluated (and when they give errors) -- it's one of the
key benefits of Perl.
  
  And let me know:  I'd truly like to know the right way to construct an 
  if statement that handles a variable that is unset, set to a null string 
  or set to zero.  Bonus points for avoiding the warning when using
  -w... 

As above, simpler is actually better code and avoids the -w warnings;
i.e., just use:
 if ( $parfile )

  The only way I've been able to handle this in other projects is something 
  like:
  
  $variable=0 if not defined $variable;
  if ( $variable != 0 ) {
  
Not sure this is necessary, and it will give a warning if $variable is
non-numeric, i.e., a string.


Re: [BackupPC-users] Deleting certain files not working

2012-05-23 Thread Jeffrey J. Kosowsky
See below for comments...
RYAN M. vAN GINNEKEN wrote at about 15:45:40 -0600 on Wednesday, May 23, 2012:
  I'm bumping this thread again as it seems deleting must be
 possible, but I keep getting this error for these files 

Well, I am the author of that script...
  
  $ /etc/backuppc/BackupPC_deleteFile.pl -h mx1.computerking.ca -r -n - -d 4 
  f%2fusr%2flocal%2fbackups%2fzimbra 
  Error: Can't delete root share directory: f%2fusr%2flocal%2fbackups%2fzimbra

As the error message states, the program disallows removing any (root)
share directory. As the comment in the code explains, that is done
because, frankly, it seemed a bit dangerous to allow, and I assumed
that removing an entire share would be a rare case.

The idea is that 'shares' are in general rarely changed and are part
of the basic, typically static config file. My assumption was that the
typical use case is a user wanting to delete a specific file or folder
that was inadvertently backed up, rather than going back and deleting
an entire share that had been intentionally and permanently included
for backup.

That being said, I think the program would still probably work if the
above error message (and the perl 'die') were removed, but I haven't
tested it to make sure there are no edge-case gotchas.


  backuppc@venus:~/pc$ /etc/backuppc/BackupPC_deleteFile.pl -h 
  mx1.computerking.ca -r -n - -d 4 
  f%2fusr%2flocal%2fbackups%2fzimbra/fsessions 
  [mx1.computerking.ca][][0 1][] 
  ANTE[mx1.computerking.ca]: 
  BAKS[mx1.computerking.ca]: 0 1 
  POST[mx1.computerking.ca]: 
  [mx1.computerking.ca][0] f%2fusr%2flocal%2fbackups%2fzimbra/fsessions: 
  Directory contains hard link: 
  /ffull-20120421.070005.402/faccounts/f9de/f086/f9de08691-ab2f-406e-a105-56c35c702928/fldap_latest.xml
   ... ABORTING... 

Files that are hard links, or folders that contain hard links, cannot
be safely deleted if there are other hard links to the same file
outside the directory being deleted, due to the way that BackupPC
represents hard links in the pc tree. In particular, in order to
delete a hard link you would, in a sense, need to find all the other
files representing that same hard link, which would require searching
through all the attrib files in the backup itself and in preceding
backups in the hierarchy -- and that could truly take a *long* time...

So this is not a bug so much as a limitation based on how BackupPC
represents hard links.

If you had read the usage instructions (-u), you would see that there
are 3 options:
   -H action   Treatment of hard links contained in deletion tree:
0|abort  abort with error=2 if hard links in tree [default]
1|skip   Skip hard links or directories containing them
2|force  delete anyway (BE WARNED: this may affect backup
 integrity if hard linked to files outside
 tree)

Just as an editorial comment, it boggles my mind that anyone would use
a low level program like this that destructively removes files from a
backup tree without having CAREFULLY read the 'usage'
documentation. Programs like this are powerful and if used wrong can
obediently remove entire directory trees from all your backups. In
fact, it is for reasons like this that I purposely chose to disallow
removing shares so that someone wouldn't inadvertently do so. More
generally, why would you not want to take the time to understand what
this program does?

  See below if you think you might be able to help with my other deleting of 
  file problems 
  
  
  - Original Message -
  
  From: RYAN M. vAN GINNEKEN r...@computerking.ca 
  To: General list for user discussion, questions and support 
  backuppc-users@lists.sourceforge.net 
  Sent: Saturday, 21 April, 2012 1:44:16 PM 
  Subject: Re: [BackupPC-users] Deleting certain files not working 
  
  
  Hello, anyone on this list got this script working? 
  
  - Original Message -
  
  From: RYAN M. vAN GINNEKEN r...@computerking.ca 
  To: backuppc-users@lists.sourceforge.net 
  Sent: Thursday, 19 April, 2012 4:37:33 PM 
  Subject: [BackupPC-users] Deleting certain files not working 
  
  
  Hello all, I am having some problems deleting files from my backups using the 
  instructions found here 
  http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_DeleteFile
   
  
  I have gotten the script installed and working for some directories like this 
  one 
  
  backuppc@venus:~$ /var/lib/backuppc/BackupPC_deleteFile.pl -h 
  t420s-1.solar.lan -r -n - -d 4 /f%2fcygdrive%2fc/fIntel 
  [t420s-1.solar.lan][][8 11 12 13 14 15 16][] 
  ANTE[t420s-1.solar.lan]: 
  BAKS[t420s-1.solar.lan]: 8 11 12 13 14 15 16 
  POST[t420s-1.solar.lan]: 
  LOOKING AT: [t420s-1.solar.lan] [][8 11 12 13 14 15 16][] 
  f%2fcygdrive%2fc/fIntel 
  BAKS[8](0) t420s-1.solar.lan/8/f%2fcygdrive%2fc/fIntel [X][-][-] 
  BAKS[11](1) 

Re: [BackupPC-users] encrypted pc and pool directory

2012-05-18 Thread Jeffrey J. Kosowsky
Gerry George wrote at about 16:38:47 -0400 on Thursday, May 17, 2012:
  Actually this coincides with an idea I had for using BackupPC for use as a
  backup service.  It would have to operate differently to the standard
  configuration, though.  The system I envisioned was as follows:
  
 - rather than the BackupPC Server polling clients, the clients would be
 responsible for initiating the connection to the BackupPC server.
 - The BackupPC server would need to run Rsyncd in order to listen for
 connections and expose the backup store location to the client, based on
 the authentication and other defined criteria (alloted space, compression,
 encryption, authorization)
 - the clients would run rsync (or some other process) which will send
 the data across to the BackupPC server, over SSH (for example), which 
  would
 utilize encryption for the SSH path.
 - Optionally, the data can (possibly) be encrypted BY THE CLIENT, and
 sent across as raw bits to be stored on the Rsync store.  This would mean
 that, as was suggested  by John's boss, the server does not have access to
 the unencrypted data, as the client could choose their own password which
 the server/service provider would not have.  This would mean, though that
 data recovery from failed disks would be a royal pain
  
  Issues:
  
 - Client access to the data - the web interface would become much more
 complex, as it would now need to be accessed over a WAN or Internet in
 order to check or manipulate clients backups and restores.
 - Client would now need to keep backup state information
 - WAN link becomes issue - Internet connection speeds will determine
 backup duration.
 - Backing up of clients may be limited to the use of Rsync and SSH.
  
  
  Other Considerations:
  
 - Client can optionally have a staging server which offers a web
 interface for local consumption, interacts directly with the backup 
  server
 (as a sort of gateway), keeps backup state and status, and stores commonly
 accessed info (backup details, file lists, etc), and would be responsible
 for requesting files for restore from the backup server.  This could aid
 with system security, as the Backup Service will have less interfaces to
 expose to the public.
 - Secure encrypted communications can then happen between staging server
 and BackupPC server(s), with on-disk encryption, if needed, being done by
 the staging server before shipping files over.
  
  
  This means that BackupPC would need to be changed from a pull backup
  system (by the server), to  push backups (by the clients).  It would also
  change the way the web interface operated (if clients now access from the
  server), or the structure and relationship between systems if the option of
  a gateway or staging server is utilized.
  
  While I am not a programmer, and would not be able to even begin to provide
  any assistance in this, I think such an option would not just put BackupPC
  over the top (as it is already there), but would place it in a completely
  new class of software (BaaS - Backups as a Service), and open up a whole
  new realm of options for OSS fans.
  
  
  Any criticisms (or dissecting, correcting, whatever) of the above is
  welcomed.  Does anyone think this may be feasible?

Yeah -- why would anyone ever want to do this?
The whole beauty/simplicity of BackupPC is that it does not need any
specialized client to install, manage and run -- it simply uses
existing ssh/rsync/smb/tar/ftp etc. applications. Nor is there
anything to run or break on the client.

Plus, any encryption on the client side that is hidden from the server
would completely destroy BackupPC's pooling/deduplication feature,
which is perhaps one of its strongest and most unique features.

Plus, this would require a near-complete rewrite of BackupPC.

So, why the heck would anyone want to do this?



Re: [BackupPC-users] encrypted pc and pool directory

2012-05-18 Thread Jeffrey J. Kosowsky
Gerry George wrote at about 10:27:04 -0400 on Friday, May 18, 2012:
  On Fri, May 18, 2012 at 9:55 AM, Jeffrey J. Kosowsky
  backu...@kosowsky.orgwrote:
  
   Gerry George wrote at about 16:38:47 -0400 on Thursday, May 17, 2012:
 Actually this coincides with an idea I had for using BackupPC for use
   as a
 backup service.  It would have to operate differently to the standard
 configuration, though.  The system I envisioned was as follows:

- rather than the BackupPC Server polling clients, the clients would
   be
responsible for initiating the connection to the BackupPC server.
- The BackupPC server would need to run Rsyncd in order to listen for
connections and expose the backup store location to the client,
   based on
the authentication and other defined criteria (alloted space,
   compression,
encryption, authorization)
- the clients would run rsync (or some other process) which will send
the data across to the BackupPC server, over SSH (for example),
   which would
utilize encryption for the SSH path.
- Optionally, the data can (possibly) be encrypted BY THE CLIENT, and
sent across as raw bits to be stored on the Rsync store.  This would
   mean
that, as was suggested  by John's boss, the server does not have
   access to
the unencrypted data, as the client could choose their own password
   which
the server/service provider would not have.  This would mean, though
   that
data recovery from failed disks would be a royal pain

 Issues:

- Client access to the data - the web interface would become much
   more
complex, as it would now need to be accessed over a WAN or Internet
   in
order to check or manipulate clients backups and restores.
- Client would now need to keep backup state information
- WAN link becomes issue - Internet connection speeds will determine
backup duration.
- Backing up of clients may be limited to the use of Rsync and SSH.


 Other Considerations:

- Client can optionally have a staging server which offers a web
interface for local consumption, interacts directly with the backup
   server
(as a sort of gateway), keeps backup state and status, and stores
   commonly
accessed info (backup details, file lists, etc), and would be
   responsible
for requesting files for restore from the backup server.  This could
   aid
with system security, as the Backup Service will have less
   interfaces to
expose to the public.
- Secure encrypted communications can then happen between staging
   server
and BackupPC server(s), with on-disk encryption, if needed, being
   done by
the staging server before shipping files over.


 This means that BackupPC would need to be changed from a pull backup
 system (by the server), to  push backups (by the clients).  It would
   also
 change the way the web interface operated (if clients now access from
   the
 server), or the structure and relationship between systems if the
   option of
 a gateway or staging server is utilized.

 While I am not a programmer, and would not be able to even begin to
   provide
 any assistance in this, I think such an option would not just put
   BackupPC
 over the top (as it is already there), but would place it in a
   completely
 new class of software (BaaS - Backups as a Service), and open up a whole
 new realm of options for OSS fans.


 Any criticisms (or dissecting, correcting, whatever) of the above is
 welcomed.  Does anyone think this may be feasible?
  
   Yeah -- why would anyone ever want to do this?
   The whole beauty/simplicity of BackupPC is that it does not need any
   specialized client to install, manage and run -- it simply uses
   existing ssh/rsync/smb/tar/ftp etc. applications. Nor is there
   anything to run or break on the client.
  
   Plus, any encryption on the client side hidden to the server would
   completely destroy BackupPC's pooling/deduplication feature which is
   perhaps one of its strongest and most unique features.
  
   Plus, this would require a near-complete rewrite of BackupPC.
  
   So, why the heck would anyone want to do this?
  
  
  Well, the data de-duplication issue has been conceded.
  
  However, why would one wish to have a push backup server which waits for
  the clients to send backups - easy, to run a remote backup service for
  disparate clts on separate (and remote) networks, whose systems are all
  separate, distinct and unrelated to each other.

So? BackupPC has no problem dealing with disparate systems now. It
does not care what the systems are. Plus, a server-initiated (pull)
system allows for queuing to manage network and server bandwidth.

If you wish to avoid the central scheduler and initiate backups from
your

Re: [BackupPC-users] Compare backups without restoring

2012-03-30 Thread Jeffrey J. Kosowsky
One simple possibility would be to use backuppc-fuse to mount the pc
tree and then use normal *nix routines like diff or cmp to find
differences.

If you only care to know which files differ (rather than how), and you
are using the rsync/rsyncd transfer method, you could write a custom
Perl routine that compares the md4 rsync checksums to look for
differences. Note that md4 checksums are appended to cpool files only
after a file has been backed up a second time, so the routine would
need to be smart enough to fall back to comparing the entire file when
no checksums exist.

More generally, if you are looking for maximum speed/robustness and
you want to take advantage of the pool structure, write a routine that
recurses down the 2 backups doing the following:

1. If file exists only in one backup -- different (VERY FAST)
2. Otherwise, if both files link to the same pool file -- same (VERY FAST)
3. Otherwise, if files link to pool files with different stem -- different 
(VERY FAST)
4. Otherwise, if files link to same pool stem with different suffix:
   First compare file size (from attrib file) (FAST)
   If same and rsync checksums exist on both, then compare checksums (FAST)
   Otherwise, compare the *compressed* payload (STILL FASTER than decompressing)

Assuming that most files are unchanged, you will rarely need to do any
actual file compares. Even when you do, it will be sped up either by
using rsync checksums or by comparing the compressed files directly.
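
Here is a rough sketch of what such a routine might look like (my own
illustration, not an existing BackupPC tool; it only implements cases 1
and 2 above plus the compressed-payload compare, and all paths and
names are assumptions):

#!/usr/bin/perl
# Rough sketch: walk two backup trees (e.g. .../pc/host/123 and
# .../pc/host/145) and report files that differ, exploiting the pool
# hard links. The attrib-size and rsync-checksum shortcuts (cases 3 and
# 4) are left out, and an appended rsync-checksum digest can make
# identical content look different here, so treat "differs" as "needs a
# closer look".
use strict;
use warnings;
use File::Find;

my ($old, $new) = @ARGV;
die "usage: $0 old-backup-dir new-backup-dir\n"
    unless @ARGV == 2 && -d $old && -d $new;

find(sub {
    return unless -f $_;                        # plain (mangled) files only
    my $rel = $File::Find::name;
    $rel =~ s/^\Q$old\E//;                      # path relative to the old backup
    my $other = "$new$rel";

    if ( ! -e $other ) {                        # case 1: in one backup only
        print "only in old: $rel\n";
        return;
    }
    my @a = stat($_);
    my @b = stat($other);
    return if $a[0] == $b[0] && $a[1] == $b[1]; # case 2: same inode => same pool file

    print "differs:     $rel\n" if compressed_differs($_, $other);
}, $old);
# (A second pass over $new would catch files present only in the new backup.)

sub compressed_differs {
    my ($f1, $f2) = @_;
    return 1 if -s $f1 != -s $f2;               # different compressed sizes
    open(my $h1, '<', $f1) or die "$f1: $!";
    open(my $h2, '<', $f2) or die "$f2: $!";
    binmode $h1; binmode $h2;
    my ($b1, $b2);
    while ( read($h1, $b1, 65536) ) {
        read($h2, $b2, 65536);
        return 1 if $b1 ne $b2;
    }
    return 0;
}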



N.Trojahn wrote at about 14:09:06 +0200 on Friday, March 30, 2012:
  Hello list,
  
  I'd like to find the differences between two backups of a certain host
  without restoring the two backups (way too large) and diff 'em.
  
  Anyone has an idea how to achieve this using a script or something like
  that which runs over the BackupPC pool?
  
  Thanx in advance!
  Falko
  



Re: [BackupPC-users] Extracting Checksums from Backuppc Quickly?

2012-03-01 Thread Jeffrey J. Kosowsky
See the archives or the wiki - I have written routines that check the
embedded md4sum checksums (available with the rsync transfer method
after the second time a file is backed up) against the file contents.
This checks integrity.

I have also written a routine that adds the md4 checksum to files that
do not yet have it (e.g., files that have only been backed up once
under rsync).

Kyle Anderson wrote at about 09:52:22 -0500 on Thursday, March 1, 2012:
  List,
  To do audits, I like to get checksums from files and compare them to
  make sure that I have files backed up properly.
  
  I've done cli restores piped to md5sums and simply md5sum'ing files
  using the fuse filesystem. Both methods are ok
  
  But I feel like I'm taking the long way around. Backuppc does have the
  md4? checksum already encoded in the pool filename right? Is there a
  quick way to go from filename - checksum using the information backuppc
  already knows ?
  
  Kyle
  



Re: [BackupPC-users] Extracting Checksums from Backuppc Quickly?

2012-03-01 Thread Jeffrey J. Kosowsky
Kyle Anderson wrote at about 11:10:26 -0500 on Thursday, March 1, 2012:
  I have BackupPC_digestVerify.pl, but I don't understand if this can do
  what I'm asking.
  
  This tool looks like it adds and verifies the sums like you say, but can
  it actually tell me what the sum is from a known filename?
Well, there is no one sum. It verifies both the md4 block checksums and
the uncompressed full-file md4 checksum. I'm not sure why you would want
to print them out for hundreds of thousands or more files, since they
are pretty meaningless on their own and their format is pretty specific
to rsync.

   Also, it looks like the filename it wants might be the cpool
  filename, is that right?  What kind of filenames is it expecting?

Not sure what version you have, but it's pretty clear from the usage
message that depending on the flag, you can verify a cpool directory
tree, an individual cpool file or a pc directory/file.

Here is the latest version:


#!/usr/bin/perl
#
#
# BackupPC_digestVerify.pl
#   
#
# DESCRIPTION

#   Check contents of cpool and/or pc tree entries (or the entire
#   tree) against the stored rsync block and file checksum digests,
#   including the 2048-byte block checksums (Adler32 + md4) and the
#   full file md4sum.

#   Optionally *fix* invalid digests (using the -f flag).
#   Optionally *add* digests to compressed files that don't have a digest.

#
# AUTHOR
#   Jeff Kosowsky (plus modified version of Craig Barratt's digestAdd code)
#
# COPYRIGHT
#   Copyright (C) 2010, 2011  Jeff Kosowsky
#   Copyright (C) 2001-2009  Craig Barratt (digestAdd code)
#
#   This program is free software; you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation; either version 2 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
#
#
#
# Version 0.3, released February 2011
#
#

use strict;
use warnings;
use Getopt::Std;

use lib "/usr/share/BackupPC/lib";
use BackupPC::Xfer::RsyncDigest;
use BackupPC::Lib;
use File::Find;

use constant RSYNC_CSUMSEED_CACHE  => 32761;

my $default_blksize = 2048;
my $dotfreq=1000;
my %opts;
if ( !getopts("cCpavft:dVQ", \%opts) || @ARGV != 1
 || (defined($opts{v}) + defined($opts{f}) > 1)
 || (defined($opts{c}) + defined($opts{C}) + defined($opts{p}) > 1)
 || (defined($opts{Q}) + defined($opts{V}) > 1)) {
print STDERR <<EOF;
usage: $0 [-c|-C|-p] [-v|-f] [-a][-V|-Q] [-d] [-t] [File or Directory]
  Verify Rsync digest in compressed files containing digests.
  Ignores directories and files without digests (firstbyte = 0xd7) unless
  -a flag set.
  Only prints if digest inconsistent with file content unless verbose flag.
  Note: zero length files are skipped and not counted.

  Options:
-c   Consider path relative to cpool directory
-C   Entry is a single cpool file name (no path)
-p   Consider path relative to pc directory
-v   Verify rsync digests
-f   Verify  fix rsync digests if invalid/wrong
-a   Add rsync digests if missing
-t   TopDir
-d   Print a '.' to STDERR for every $dotfreq digest checks
-V   Verbose - print result of each check
 (default just prints result on errors/fixes/adds)
-Q   Don\'t print results even with errors/fixes/adds

In non-quiet mode, the output consists of 3 columns.
  1. inode number
  2. return code:
   0 = digest added
   1 = digest ok
   2 = digest invalid
   3 = no digest
   <0 other error (see source)
  3. file name

EOF
exit(1);
}

#NOTE: BackupPC::Xfer::RsyncDigest->digestAdd opens files O_RDWR so
#we should run as user backuppc!
die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new) );
#die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new("", "", "", 1)) ); #No user check

my $Topdir = $opts{t} ? $opts{t} : $bpc->TopDir();
$Topdir = $Topdir . '/';
$Topdir =~ s|//*|/|g;

my $root = '';
my $path;
if ($opts{C}) {
$path = $bpc->MD52Path($ARGV[0], 1, "$Topdir/cpool");
$path =~ m|(.*/)|;
$root = $1; 
}
else {
$root = $Topdir . "pc/" if $opts{p};
$root = $Topdir . "cpool/" if $opts{c};
$root =~ s|//*|/|g;
$path = $root . $ARGV[0];
}

my $add = $opts{a};
my $verify = $opts{v};
my $fix = $opts{f};
my $verbose = $opts{V};
my $quiet = $opts{Q};
my 

Re: [BackupPC-users] Migrating backuppc (yes, that again...:)

2012-02-27 Thread Jeffrey J. Kosowsky
Brad Alexander wrote at about 14:57:41 -0500 on Friday, February 24, 2012:
  Hey all,
  
  I'm running into a problem migrating my /var/lib/backuppc pc
  directory. I got cpool, log, pool, tmp, and trash migrated via rsync,
  and I am attempting to migrate the pc directory.

It's really not possible to *separately* migrate the pc directory,
since rsync then has no way of knowing about, and hence preserving,
the links into the cpool/pool. So you will have duplicate copies of
the pool files (rather than links) in your pc tree, even if you use
the -H flag. If you do use the -H flag, duplicate copies within the pc
tree will be linked to each other but not to the pool. Then, when
BackupPC_nightly runs, all the pool files will be erased (unless a new
backup has been done in the interim), since there will be no links
from the pc tree.

There really is no magical way to migrate a BackupPC archive other
than one of the following:
1. Use rsync -H on the entire /var/lib/BackupPC tree which becomes
   increasingly slow if not unworkable for large archives
2. Copy over the raw filesystem using some notion of disk copy (e.g.,
   dd) or filesystem snapshots. This requires you to be using the same
   type of filesystem and to copy over the entire filesystem.
3. Use either BackupPC_tarCopy or my program BackupPC_copyPcPool to
   copy over the archive preserving links. These are typically much
   slower than #2 but they scale better than #1



Re: [BackupPC-users] Scheduling Options, Scripts, and starting over

2012-02-23 Thread Jeffrey J. Kosowsky
Brad Morgan wrote at about 16:31:09 -0700 on Thursday, February 23, 2012:
  I've also seen a couple of useful scripts (BackupPC_copyPcPool.pl,
  BackupPc_deleteFile.pl) and a jLib.pm but I haven't seen any documentation
  about how and where to install these files. Could someone point me in the
  right direction?

I am guilty of writing all of the above.
jLib.pm should be put in the same place as Lib.pm, which on
Fedora is:
   /usr/share/BackupPC/lib/BackupPC
The executables (.pl) can be put anywhere you typically store your
executables.
One option is /usr/local/bin
Another would be ~/bin
Or anywhere you want, as long as you specify the path to them



Re: [BackupPC-users] I've Tried Everything

2012-02-16 Thread Jeffrey J. Kosowsky
PLEASE DON'T TOP-POST - it makes it nearly impossible to follow the
thread, especially with multiple people chiming in.
Zach Lanich wrote at about 02:01:53 -0500 on Thursday, February 16, 2012:
  This is all the Log has in it when I try rsync:
  
  2012-02-15 21:59:02 full backup started for directory /Users/zlanich/Sites
  2012-02-15 21:59:35 Got fatal error during xfer (Child exited prematurely)
  
  
  
  On Thu, Feb 16, 2012 at 12:30 AM, Les Mikesell lesmikes...@gmail.comwrote:
  
   On Wed, Feb 15, 2012 at 11:20 PM, Zach Lanich zlan...@gmail.com wrote:
  
   Hey guys i've been trying for like 12 hrs to get BackupPC to work. im
   not Amazing at linux, but i have tried Tar, Rsync, etc. each thing has a
   reason it fails. i got ssh set up and tested it and it works fine via
   ssh keys. using Tar, it errors (no files to dump) if i leave the
   BackupZeroFilesIsFatal box checked, it errors if there's less files in
   the attempted backup than the prev one, and even if i un-check it, it
   errors (65280
   ) on a random .png file that seems to be fine. can sum1 plz help me. i
   know BackupPC is supposed to be one of the best so i must be missing
   something
  
  
  
   What error did you get with rsync?
  
   --
 Les Mikesell
   lesmikes...@gmail.com
  
  
  
  
  
  



Re: [BackupPC-users] oh my ... changing backupdirectory

2012-02-13 Thread Jeffrey J. Kosowsky
Ingo P. Korndoerfer wrote at about 11:04:27 +0100 on Monday, February 13, 2012:
  hello,
  
  i have been going around in circles and pretty much grazed all i could
  find on google and then finally found a way to
  get this to work, and though it might be worth communicating this, so it
  can maybe get included in the wiki ?

Why would we want to include such an absolutely brain-dead and
dangerous workaround in the wiki?  All you did was find a way to
destroy all notion of security and permissions on the backup
partition: 'Hey, I can't make a link, so let me just run chmod ugo+rwx
on the entire backup tree.' That has to be perhaps the stupidest and
most dangerously ignorant idea I have ever seen.
Please don't go near the wiki, and please don't suggest workarounds
that you yourself don't understand, lest you confuse some other newbie.

  
  i have succesfully installed backuppc under ubuntu and can connect and
  all is fine.
  
  except, i want my backups on a usb mounted external disk and then could
  not start the backuppc server anymore
  here is what i did:
  
  i followed the instructures here:
  
  http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory#Changing_the_storage_location_of_the_BackupPC_pool
  
  i.e.
  
  sudo /etc/init.d/backuppc stop
  
  cp -dpR /var/lib/backuppc/. /mnt/backups1/backuppc

Why not just do cp -a?

  mv /var/lib/backuppc /var/lib/backuppc.orig
  ln -s /mnt/backups1/backuppc /var/lib/backuppc
  
  sudo /etc/init.d/backuppc start
  
  and get the famous can not create test hardlink error.

One potential issue might be that the external drive was not mounted
by root and hence carries only your own user's permissions -- so, in
particular, the backuppc user cannot access it.


  the solution after a lot of trying was to
THE FOLLOWING IS NOT THE SOLUTION in any intelligent sense of the word!
  
  cd /mnt/backups1
  sudo su
  chmod -R ugo+rwx backupppc

This is in general a *very* bad idea.
You have just made your backups readable, writable, and executable to
all. All the files are now executable.

Why don't you try to figure out the problem rather than just hitting
it all with a sledgehammer by setting all perms to maximally
permissive?

Why don't you try making a test link yourself as user backuppc and see
where and why it fails?
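
For example, a tiny test along these lines (my own sketch -- the path
is the one from your post, everything else is illustrative) will show
exactly which step fails and why:

    #!/usr/bin/perl
    # Minimal hard-link test in the new pool location. Run it as the
    # backuppc user, e.g.:  sudo -u backuppc perl linktest.pl
    # The error text ($!) tells you why it fails (permissions, read-only
    # mount, a filesystem without hard-link support, ...).
    use strict;
    use warnings;

    my $dir = "/mnt/backups1/backuppc";
    my $src = "$dir/linktest.src";
    my $dst = "$dir/linktest.dst";

    open(my $fh, '>', $src) or die "cannot create $src: $!\n";
    close($fh);
    link($src, $dst)        or die "cannot hard-link $src -> $dst: $!\n";
    print "hard link created OK\n";
    unlink($src, $dst);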

  
  and then finally
  
  sudo /etc/init.d/backuppc start
  
  would work.
  
  i am still lacking a grasp of what the real problem was and what would
  have been the proper way to go about this.
You totally lack an understanding of Linux security permissions.

  the permissions of the original installation and the copied installation
  (which i am using now via soft link) look like this:
  
  original install in /var/lib:
  
  drwxr-x---  7 backuppc backuppc 4096 Feb 13 09:37 .
  drwxr-xr-x 63 root root 4096 Feb 13 10:23 ..
  drwxr-x---  2 backuppc backuppc 4096 Feb 13 10:10 cpool
  drwxr-x---  2 backuppc backuppc 4096 Feb 13 10:13 log
  drwxr-x---  3 backuppc backuppc 4096 Feb 13 10:10 pc
  drwxr-x---  2 backuppc backuppc 4096 Jun 30  2011 pool
  drwxr-x---  2 backuppc backuppc 4096 Jun 30  2011 trash
  
  after copy to external disk (could not start backuppc server with this,
  but why. the permissions are all the same):
  
  drwxr-x--- 7 backuppc backuppc 4096 Feb 13 09:37 .
  drwxrwxrwx 5 ingo ingo 4096 Feb 13 10:59 ..
  drwxr-x--- 2 backuppc backuppc 4096 Feb 13 10:10 cpool
  drwxr-x--- 2 backuppc backuppc 4096 Feb 13 10:13 log
  drwxr-x--- 3 backuppc backuppc 4096 Feb 13 10:10 pc
  drwxr-x--- 2 backuppc backuppc 4096 Jun 30  2011 pool
  drwxr-x--- 2 backuppc backuppc 4096 Jun 30  2011 trash
  
  after chmod (now working)
  
  drwxrwxrwx 7 backuppc backuppc 4096 Feb 13 09:37 .
  drwxrwxrwx 4 ingo ingo 4096 Feb 13 09:44 ..
  drwxrwxrwx 2 backuppc backuppc 4096 Feb 13 10:54 cpool
  drwxrwxrwx 2 backuppc backuppc 4096 Feb 13 10:54 log
  drwxrwxrwx 3 backuppc backuppc 4096 Feb 13 10:54 pc
  drwxrwxrwx 2 backuppc backuppc 4096 Jun 30  2011 pool
  drwxrwxrwx 2 backuppc backuppc 4096 Jun 30  2011 trash
  
  is it the parent directory permissions that are messing the thing up ?
  
  thanks for any comments
  
  cheers
  
  ingo
  
  
  --
  
  +--+
  | Ingo Korndoerfer korndoer...@crelux.com Work: +49 89 700760210 |
  | Head of Crystallography   Fax:  +49 89 700760222 |
  | Crelux GmbH; Crystallography |
  | Am Klopferspitz 19a  |
  | Martinsried, 82152 Germany   |
  +--+
  

Re: [BackupPC-users] different hosts, different schedules, different directories

2012-02-13 Thread Jeffrey J. Kosowsky
Ingo P. Korndoerfer wrote at about 13:19:59 +0100 on Monday, February 13, 2012:
  hello,
  
  here comes the next question i could not find answered anywhere. please
  fee free to just point me to older posts
  if this has been discussed a 1000 times already ...
  
  so i have different directory trees on my host that i originally wanted
  back-upped with different schedules.
  it seems that does not really work. well, probably no harm done just
  backing up everything with the same schedule.
  although, just in case, if anybody knows how to do this ...

I'm not sure I see or understand the question here. I know English is
not your first language but please try to be a little clearer in
stating what you are really trying to do and what your question is.

  
  also ... how do i go about different directory trees on different
  computers with different schedules.
Again, I'm not sure what you mean by 'schedules'. Are you talking
about when backups run, or about what gets backed up?

  so on one computer i may want /home/data backed-up. but on another there
  is /home/clients.
  
  it seems this is possible by going rather unelegantly directly into each
  hosts config files. is that the way it is done,

Why is this inelegant? That is exactly the purpose of host-specific
config files: they let you do things (e.g., scheduling) differently
for different hosts.
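
For example, a per-host override file (in 3.x it normally lives under
the pc/ subdirectory of the config directory, e.g.
/etc/BackupPC/pc/HOST.pl, though the exact path depends on your
packaging) only needs the settings that differ from config.pl. The
variables below are standard BackupPC settings; the values are purely
illustrative:

    $Conf{RsyncShareName}  = ['/home/clients'];  # different tree on this host
    $Conf{FullPeriod}      = 13.97;              # fulls roughly every two weeks
    $Conf{IncrPeriod}      = 1.97;               # incrementals every other day
    $Conf{BlackoutPeriods} = [
        { hourBegin => 8, hourEnd => 18, weekDays => [1, 2, 3, 4, 5] },
    ];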
  or is there a way to accomplish this from the web interface ?

I never use the web interface (I find *that* inelegant :P ), but I'm
pretty sure it is possible.

  1000 thanks
  
  ingo
  
  
  
  
  --
  
  +--+
  | Ingo Korndoerfer korndoer...@crelux.com Work: +49 89 700760210 |
  | Head of Crystallography   Fax:  +49 89 700760222 |
  | Crelux GmbH; Crystallography |
  | Am Klopferspitz 19a  |
  | Martinsried, 82152 Germany   |
  +--+
  



Re: [BackupPC-users] Network Backup of backuppc

2012-02-13 Thread Jeffrey J. Kosowsky
Fred Warren wrote at about 09:13:43 -0800 on Monday, February 13, 2012:
  I would like to run backup-pc on site and keep a duplicate copy offisite.
   So I want 2 backup-pc servers. One onsite and one offsite. With the
  offsite copy not running, but the data being synced with the onsite copy.
  If there is some kind of failure with the onsite copy of backup-pc. I could
  then start the offsite-copy and restore files from there.
  
  What I have discovered so far is that even if I stop the backuppc service on
  the onsite server, I can't keep the offsite server updated via rsync. The
  first time I rsync it works fine. But then the next time I do an update,
  with all the new data, deduplication, and hard links created and changed on
  the onsite server, the next rsync is a total disaster.
  
  Are there some magic sync settings that will allow me to keep a backup of
  the backup-pc data by rsyncing it to another system?
  

This has been discussed hundreds of times - please see the archives...



Re: [BackupPC-users] XferLog.z : How do I read this thing?

2012-02-10 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 16:51:34 -0500 on Thursday, February 9, 2012:
  Hello!
  
  I've set up a new backup server, and for the first time I haven't disabled 
  compression.  BackupPC is now creating log files in (what it is claiming 
  is) .z format.  How do I read these?  I've tried zcat (not in gzip 
  format), uncompress (no error, but no file), and unzip (end of central 
  directory not found).  file XferLog.0.z says that this file is data 
  (which doesn't help...).
  
  So, how do I read these files, and even better:  

/usr/share/BackupPC/bin/BackupPC_zcat
(exact location may be different on your system)
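
If you would rather get at a compressed log programmatically, something
along these lines should work -- it is essentially what BackupPC_zcat
does, using BackupPC's own FileZIO module (the library path below is an
assumption for your system):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use lib "/usr/share/BackupPC/lib";
    use BackupPC::FileZIO;

    my $file = shift or die "usage: $0 /path/to/XferLog.0.z\n";
    # open(file, write=0, compress=1): read an existing compressed file
    my $z = BackupPC::FileZIO->open($file, 0, 1)
        or die "can't open $file\n";
    my $data;
    while ( $z->read(\$data, 65536) > 0 ) {
        print $data;
    }
    $z->close();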

  how do I go back to my plaintext logs?
I don't believe you can. The logs are pooled and hence when
compression is turned on they are compressed and stored in the cpool.

  (I *knew* there were great reasons why I have always immediately disable 
  compresson on all of my backup servers!  :)  )

It's really not a big deal...

  
  Tim Massey
  
   
  Out of the Box Solutions, Inc. 
  Creative IT Solutions Made Simple!
  http://www.OutOfTheBoxSolutions.com
  tmas...@obscorp.com 
   
  22108 Harper Ave.
  St. Clair Shores, MI 48080
  Office: (800)750-4OBS (4627)
  Cell: (586)945-8796 
  



Re: [BackupPC-users] XferLog.z : How do I read this thing?

2012-02-10 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 17:40:43 -0500 on Thursday, February 9, 2012:
  Bowie Bailey bowie_bai...@buc.com wrote on 02/09/2012 05:01:32 PM:
  
   On 2/9/2012 4:51 PM, Timothy J Massey wrote:
Hello!
   
I've set up a new backup server, and for the first time I haven't
disabled compression.  BackupPC is now creating log files in (what it
is claiming is) .z format.  How do I read these?  I've tried zcat
(not in gzip format), uncompress (no error, but no file), and unzip
(end of central directory not found).  file XferLog.0.z says that
this file is data (which doesn't help...).
   
So, how do I read these files, and even better:  how do I go back to
my plaintext logs?
   
(I *knew* there were great reasons why I have always immediately
disable compresson on all of my backup servers!  :)  ) 
   
   Backuppc uses a special compression format.  You can read it with the
   BackupPC_zcat program.  On my machine, it is located in
   /usr/local/BackupPC/bin.
  
  I love BackupPC, but that is the *dumbest* thing *ever*.  (Sorry, Craig! 
  :)  ).

Why? 

BackupPC compresses and pools the log files which is consistent with
the handling of all other files in the pc tree (except for the
'backups' info file). This keeps everything streamlined and
consistent.

Of course one could have made another design choice, but calling this
the dumbest thing ever is neither very helpful nor intelligent.

  And even if you for some reason thought that was a *brilliant* idea, why 
  wouldn't you change the extension?  Would bpz be so hard?  :)

Because it uses zlib compression and I believe .Z is a common
extension for that. This is not the DOS/Windows world, where you just
willy-nilly make up a new three-letter extension for every program
under the sun.



Re: [BackupPC-users] XferLog.z : How do I read this thing?

2012-02-10 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 09:52:03 -0500 on Friday, February 10, 2012:
  Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 02/10/2012 08:55:34 
  AM:
  
   Timothy J Massey wrote at about 17:40:43 -0500 on Thursday, February 9, 
  2012:
  At the price of making it, at a very minimum, very awkward to deal with. I 
  can't easily cat, grep, tail, etc.  I have to perform jumping-jacks to do 
  any of these things, just so the files can be compressed.

The log files can be quite long and they contain a lot of redundancy,
allowing for very high compression ratios. Compression is a good idea
here!

  And why would *log* files necessarily be handled consistently with backup 
  data files?  Do image manipulation programs store their log files in 
  .GIF's?

I think it is very *clever* to use the same pooling and compression
scheme for both the backup files and the metadata -- whether attrib
files or log files. You may think differently, but simply calling it
the dumbest idea ever shows a lack of both understanding and
appreciation of the work done by Craig, your smiley notwithstanding.
 
   Because it uses zLib compression and I believe .Z is a common
   extension for that. This is not the DOS/Windows world where you just
   willy-nilly make up new 3 letter extensions for every program under
   the sun.
  
  No.  .Z is *not* simply an extension for "I used zLib on this."  .Z is the 
  extension used for files created by the compress command.  Try using the 
  compress command on a BackupPC log file...

The point is that both 'compress' and BackupPC use 'zlib' compression,
hence rather than creating some non-standard new suffix, Craig chose
to use an existing standard to signal that it similarly uses
zlib. Again, you may have different tastes, but what Craig did makes
perfect sense to me by reminding me that the files are indeed compressed.

  And whether UNIX or DOS, having two *incompatible* file types share the 
  same extension is just a really bad idea.  Would you expect to find a .gz 
  file that couldn't be handled by gzip?!?

As many people have already told you, the log files are meant to be
read by the GUI. If you want to manually read and parse them yourself,
you are welcome to, but since they are not intended to be accessed
directly by users there was no need to be pedantic in the suffix
naming convention.

 Oh, excuse me, was that neither helpful nor intelligent?

You are the one who called this the dumbest idea. It's not dumb at
all, it just doesn't do what you want or expect it to do. If you don't
like it and would like an alternative option to have non-compressed
log files, then stop griping and either write your own code to add the
new functionality or pay someone else to write it. I'm really sick of
people who spend their time complaining rather than constructively
contributing back.

  
  Tim Massey
  
   
  Out of the Box Solutions, Inc. 
  Creative IT Solutions Made Simple!
  http://www.OutOfTheBoxSolutions.com
  tmas...@obscorp.com 
   
  22108 Harper Ave.
  St. Clair Shores, MI 48080
  Office: (800)750-4OBS (4627)
  Cell: (586)945-8796 
  



Re: [BackupPC-users] XferLog.z : How do I read this thing?

2012-02-10 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 09:41:53 -0500 on Friday, February 10, 2012:
  I usually monitor backups (especially when I've just created a new guest 
  to back up or when I'm having problems) by tail -f /path/to/XferLog.  I 
  can't do that with these compressed log files (or, I can't figure out how 
  to, anyway).
  
  GUI's are great and all (and I use BackupPC's a *lot*), but it would be 
  nice to not have them break the command line, too!  :(
  
  Or, at least, outsmart me.  Does someone have a way to simulate a tail 
  -f with the compressed logs?

Seems trivial to me (plus or minus some buffering delay)...

tail -n +0 -f /path/to/XferLog.Z | BackupPC_zcat



Re: [BackupPC-users] XferLog.z : How do I read this thing?

2012-02-10 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 11:27:36 -0500 on Friday, February 10, 2012:
  Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 02/10/2012 10:33:38 
  AM:
   The point is that both 'compress' and BackupPC use 'zlib' compression,
   hence rather than creating some non-standard new suffix, Craig chose
   to use an existing standard to signal that it similarly uses
   zlib. Again, you may have different tastes, but what Craig did makes
   perfect sense to me by reminding me that the files are indeed 
  compressed.
  
  Hey, JAR files and ODT files use LZW compression, just like ZIP!  Why 
  don't we use ZIP as an extension for them?  It would be so... elegant!
  
  The point of an extension is to tell the poor, lowly human what type of 
  data is contained in that file in a glance. 

This is truly getting tiresome. You have been told by multiple people,
multiple times that the poor, lowly human is not supposed to be
reading these files directly. Poor, lowly humans are expected to use
the documented GUI.

For those who want to delve into the internal file structure of
BackupPC, Craig has been kind enough to *signal* compression using the
.Z extension. If anybody is too confused by this or does not understand the
structure of BackupPC well enough to know how to read such files, then
that person should probably stick with the GUI.

The developer, and in particular an open source developer, has no
obligation to make all the inner workings coincide with your idea of
how internal files should be named.

When I first started playing with BackupPC it took me all of about 10
seconds to figure out that cat/gzip/compress etc. did not work on
these files and that I had to use the *included* BackupPC_zcat
utility. That took orders of magnitude less time than this thread has
spent flaming Craig over "the dumbest idea."

  If someone creates a random new file format (kind of like
  BackupPC's compressed log format!), they should *NOT* recycle a
  very well understood extension so that the poor, lowly human won't
  be able to figure out why the canonical tool for working with that
  format doesn't work (which it does *not*) without some sort of
  magic knowledge.  The fact that zLib was (ab)used to create this
  new format does not mean you should use .Z, any more than
  OpenOffice (or Java or any of the dozens of new formats that ZIP
  their multi-file contents) should call their files .ZIP.
  
  And at least in the case of OpenOffice, et. al, the canonical ZIP 
  management tool *WILL* *ACTUALLY* do something productive, unlike 
  BackupPC's .Z log files and the canonical tool for managing .Z files.

It's an *internal* file -- not meant to be accessed by everyday
users. Those who want to delve in have never had any problem figuring
it out before. It's truly not worth complaining about.

Personally, the last thing I want is to have another three letter
extension I need to remember. YMMV. If it truly bothers you so much
then submit a patch or fork the code and use your own naming
convention. There are many more important things to worry about and
much better uses of limited and valuable programming time for
BackupPC.

--
Virtualization  Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC integration with Tarsnap

2012-02-10 Thread Jeffrey J. Kosowsky
Rob Hasselbaum wrote at about 11:37:49 -0500 on Friday, February 10, 2012:
  On Fri, Feb 10, 2012 at 10:46 AM, Les Mikesell lesmikes...@gmail.comwrote:
  
   I don't know anything about tarsnap but it looks like it has its own way
   of tracking incremental changes.  Is there some reason you can't just run
   it independently from the original source?Someone has mentioned a fuse
   filesystem that works on top of the backuppc archive on the list before -
   that might work if you have to use backuppc's copy.
  
  
  One of the things I like about Tarsnap compared to other off-site backup
  services is that it's nicely scriptable so I can manage what gets backed up
  and when from one central server just like BackupPC itself. By the same
  token, though, Tarsnap is not very convenient to run from the individual
  PCs because some of them are Windows and I'd need to deploy scripts to run
  it periodically through Cygwin.
  
  I'll take a look at the FUSE filesystem. Thanks for the tip! I'm guessing,
  though, that it won't be any more efficient than just exploding the archive
  to a /tmp directory and having Tarsnap walk through it there.

Be aware that the BackupPC FUSE filesystem, while incredibly useful and
slick, is also quite slow, since the directories and file attributes
need to be read from the compressed attrib files.

It's great for browsing backups manually using *nix tools and it's
good for short scripts. It will likely be unbearably slow for tarring
an entire backup filesystem. It would still be instructive to try it
and see how much of a performance penalty it introduces.
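
If you want to put a number on that penalty, something as crude as the
following gives a ballpark, assuming the FUSE view is already mounted
(the mount point, host name, and backup number are illustrative):

  time tar -cf - /mnt/backuppc/somehost/123 | wc -c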

--
Virtualization  Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Next release

2012-02-09 Thread Jeffrey J. Kosowsky
Lars Tobias Skjong-Børsting wrote at about 17:41:05 +0100 on Thursday, February 9, 2012:
  Hi,
  
  On 3/27/09 7:44 PM, Paul Mantz wrote:
   On Thu, Mar 26, 2009 at 8:01 PM, o...@jltechinc.como...@jltechinc.com  
   wrote:
   Is it possible to get a CVS copy?
  
   I tried: cvs -z3.2
   -d:pserver:anonym...@backuppc.cvs.sourceforge.net:/cvsroot/backuppc co
   BackupPC
  
   ...but received the dreaded __CONFIGURE_BIN_LIST__ error when I ran
   the ./configure.pl
   
   I've also started following the CVS repository on github.  You can
   find that at:
  
   http://github.com/pcmantz/backuppc-cvs/tree/master
  
  Where is the upstream repository these days? It seems the one on SF.net 
  is no longer in use as it has no commits for a long time. I can't find 
  any info on any other repository than the SF.net CVS-repository.
  
  Paul: where do you sync your backuppc-cvs repo on github from?
  
  -- 
  Regards, Lars Tobias

I may not have this exactly right, but basically Craig Barratt is the
primary and in many cases sole developer of BackupPC, although other
users certainly contribute bug fixes and even entire modules.

Hence, unlike other multi-developer open source projects, there
really hasn't been a widely used repository with frequent new
contributions and builds. Basically, when Craig has a new release
ready, he distributes it. Often, there is an initial beta release to
shake out the last bugs, followed weeks to months later by an official
release.

Craig has been relatively quiet lately, but last we heard he was working
on a new 4.x version that is a radical rewrite of the current way of
storing the metadata and file hierarchy of each backup. As far as I
know, Craig is working on this alone and none of it has been released
to any public server. I assume there will be beta releases when it
gets closer to completion.

Meanwhile, since Craig is focused on 4.x and 3.x is quite stable,
there really has been no recent activity on the 3.x tree except for a
bug fix release about a year or so ago. Other than collecting bug
fixes, there is no official work on changing or adding features to 3.x.
--
Virtualization  Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc is Hammering My Clients

2012-02-01 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 07:40:20 -0600 on Wednesday, February 1, 2012:
  On Wed, Feb 1, 2012 at 2:30 AM, Kimball Larsen quang...@gmail.com wrote:
  
    Do any
   have local time machine backups that might be included?
  
   No, time machine is on external drives, specifically excluded from backups.
  
  It might be worth checking that the excludes work and the links that
  make it show on the desktop aren't being followed.
  
   Or
   directories with very large numbers of files?
  
   This I can check on.  What is considered very large numbers of files?  
   More than 1024?  More than 102400?
  
  It would be relative to the amount of RAM available - probably millions.

I have no trouble backing up half a million files on a system with
just 512MB. Surely, if you have high-powered, relatively new PCs, they
will have 4+ gigabytes of RAM, so it is unlikely that swapping will
be the problem. Plus, any 3.x version of rsync uses RAM quite
efficiently.

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [SOLVED] Backuppc is Hammering My Clients

2012-02-01 Thread Jeffrey J. Kosowsky
Kimball Larsen wrote at about 09:30:38 -0700 on Wednesday, February 1, 2012:
  I just wanted to follow up with a description of what I changed to solve 
  this: 
  
  First off, the users with performance problems on their machines
  during backups all had a copy of Parallels (Windows emulation
  software) that was either running or had been run in the last day.
  Parallels stores a virtual hard drive in a single file that is
  quite large - 14GB in one case and nearly 30 in another.  These
  files were being included in the backups, and I suspect are the
  main culprit of my problem.

This would certainly explain the behavior. Since those files likely
change each day, rsync needs to read through them and calculate
rolling MD4 checksums for each several-kilobyte block, along with a
full-file MD4 checksum. The checksums then need to be aligned with
(and possibly also calculated on) the server side. When the checksums
don't match and can't be aligned, the differing blocks need to be
transferred. Finally, the new file is reconstructed on the server side
and compressed. Clearly, this is not a trivial task for a 30GB file.
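
One way to avoid that cost entirely is to exclude the virtual disks
from the backup. A minimal sketch of the kind of entry I mean, in
config.pl or the per-host config -- the path is only an example of
where Parallels typically keeps its .pvm bundles, so adjust it to
wherever yours live:

$Conf{BackupFilesExclude} = {
    '*' => [
        '/Users/*/Documents/Parallels',   # Parallels *.pvm bundles / *.hdd disks
    ],
};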

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] shadowmountrsync not mount correctly the Shadow copy device

2012-01-28 Thread Jeffrey J. Kosowsky
Flako wrote at about 11:47:40 -0300 on Saturday, January 28, 2012:
  Hello
  I'm trying to use shadowmountrsync 0.4.5.3 on Windows XP SP3 and  cygwin 
  2.763.
  Sshd and rsyncd services are working properly.
  Commands: shadowmountrsync-u 2 and shadowmountrsync-d work
  properly if run locally (not run as remote command via ssh)
  
  When I run shadowmountrsync-u 2 as a remote command via ssh, I
  sometimes vshadow.exe generates the snapshot and not others, it
  generates when I have not gained access to this using dosdev and
  mount.
  
  the link with the log of execution:
  http://www.fileupyours.com/view/309484/shadowmountrsync_local.log
  (shows when running locally) and
  http://www.fileupyours.com/view/309484/shadowmountrsync_remote.log
  (shows when I run it via ssh)

I looked at the remote log. It all looks fine. 'At' runs, the shadows
are created, dosdev runs, and rsyncd is started.
Looks to me like shadowmountrsync is running perfectly.

Note that I seem to recall that when run via ssh, the cygwin mount
command doesn't show the mounted shadowmounts. This is probably a
protection issue since ssh runs as the SYSTEM user.

My guess is that the problem is due to a mistake in your rsyncd
configuration. Perhaps the password or read/write configurations or
stanzas are set up incorrectly.

Try to see if you can get rsyncd working directly and manually on your
original mounts (C:, D:). I.e., first launch rsyncd remotely on your
Windows XP client:
 cygrunsrv -S rsyncd
Then manually try to use rsyncd from your BackupPC server, with
something like: rsync user@winhost::c destination
Then try using BackupPC with the direct rsyncd transfer method
(without shadowmountrsync).

My guess is that you will find a problem in either your rsyncd.conf or
in your BackupPC config file for the rsyncd transfer method. Once you
fix that, shadowmountrsync should work.
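
For reference, a minimal rsyncd.conf stanza of the sort I have in mind
looks roughly like this -- the module name, user, and secrets-file path
are examples only, so adapt them to your setup:

  use chroot = false
  [cDrive]
      path = /cygdrive/c
      read only = true
      auth users = backuppc
      secrets file = /etc/rsyncd.secrets

The matching BackupPC-side settings are $Conf{RsyncShareName} (here
'cDrive'), $Conf{RsyncdUserName}, and $Conf{RsyncdPasswd}.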

If you can get all the above working without shadowmountrsync, but
shadowmountrsync still fails, then we can see what, if anything, might
be wrong with your shadowmountrsync configuration and how to
troubleshoot it.

  I think the problem is in the task that runs via at, because when I
  see a job fails  C:\cygwin\bin\bash.exe -c
  /usr/local/bin/shadowmountrsync -2-u 2 that never ends ..

Probably not. The very fact that Pass #2 is running means that the
'at' command ran successfully. The shadowmountrsync process never ends
because rsyncd is still running and has not finished.

  But do not know what else to look .. I'm sick to debug the code ..

I don't think anything is wrong with the code :P

  If I try to mount the shadow device, it does not work; for example:
  the Shadow copy device name generated by shadowmountrsync is
  \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
  
  I try to mount it by hand:
  $ /bin/dosdev Z: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
  Z:: The operation completed successfully.
  
  $ mount -f Z: /shadow/Z
  
  $ ls /shadow/Z
  ls: cannot access /shadow/Z: No such file or directory

You can't mount global roots directly under cygwin (see cygwin user
list archives). This is the reason that I had to use dosdev to create
the mounts.

  $ mount
  C:/cygwin /bin on /usr /bin type ntfs (binary, auto)
  C:/cygwin /lib on /usr /lib type ntfs (binary, auto)
  C:/cygwin on / type ntfs (binary, auto)
  Z: on /shadow / unknown type Z (binary, user)
  C: on /cygdrive /c type ntfs (binary, posix = 0, user, noumount, auto)
  D: on /cygdrive /d type iso9660 (binary, posix = 0, user, noumount, auto)

When you are running via ssh, I think mount doesn't show the
shadowmounts even though they are there.

  
  The truth is that I'm giving up on using shadowmountrsync not know
  what else to look ..
  
  any idea?

Give up if you want or spend some time figuring it out -- in
particular, start by following my suggestions above.

If you decide to give up, that's your choice...

--
Try before you buy = See our experts in action!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-dev2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc incremental taking up a lot of bandwidth with no additional changes

2012-01-20 Thread Jeffrey J. Kosowsky
smallpox wrote at about 07:15:07 -0800 on Friday, January 20, 2012:
  i was under the impression that rsync does the comparison with little or 
  no bandwidth.

First, PLEASE DON'T TOP-POST - it makes following and responding
to a thread near impossible.

Second, what makes you think the issue is a bandwidth issue? You just
said that it takes 10 minutes. Have you determined that the time is
due to bandwidth bottlenecks and not just disk reads and rsync
computations?

Third, why would you expect rsync -- even without BackupPC -- to take
significantly less than 10 minutes to crawl a directory tree of
a hundred thousand files? That includes reading in the inode
information for each file, transmitting it across the network, and
comparing it to the corresponding information stored in the BackupPC attrib
files. Plus, any changed files will require reading, decompressing, and
computing rolling MD4 checksums on each of the target files as part of
the rsync algorithm. Then the full directory tree needs to be
constructed on the BackupPC server, including attrib files and any
changed files (this is for incrementals). Do you expect all of this to
happen instantaneously?

Fourth, are you sure that you want to do continuous (your words)
backups? That means that you are creating an ever-increasing (and
theoretically infinite) number of incremental backups, each requiring a
full directory tree parallel to the source plus attrib files and
changed files. Additionally, any time a file changes by even one byte,
a whole new copy is saved to the pool (think of log files or system
files that may change multiple times per second).

In summary, I think you are trying to solve a problem that may not
need to be solved, using a tool that is not meant to solve it, without
understanding what is causing your problems and without knowing how
the tool actually works in the first place :)



  On 1/20/2012 5:50 AM, Michael Stowe wrote:
   BackupPC 3.2.1
   Windows 7 rsyncd over ssh, west coast
   the server is in the east.
  
   my goal is to have it updated every 15 minutes, i've gotten that to work
   but for incremental, with no changes, it's still taking about 10 minutes
   most of the time and it is doing traffic.
   I would expect most of that traffic would be checking to see if there are
   changes to the files.
  
   i do not understand why incremental with absolutely no changes is taking
   so long ?  ultimately i'd like to have a hundred thousand files being
   backed up constantly, is this just a dream ?
   How would you propose that changes be recognized, if not through rsync's
   mechanisms of comparison?
  
   You can always go to cmd/cifs and use time stamps to determine what's
   changed, I suppose.
  
   thanks in advance
  
  
  

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] moving per-pc log files

2012-01-20 Thread Jeffrey J. Kosowsky
Till Hofmann wrote at about 16:23:56 +0100 on Friday, January 20, 2012:
  Hello everybody,
  
  since my backup partition is on a RAID5 which doesn't do anything but
  keeping my backups, I want the hard drives to automatically spin down
  (standby) when there is nothing to do.
  It's working properly, I only have one problem: backuppc writes to the
  per-pc log file in $topdir/pc/$host/LOG. every times it wakes up and
  tries to ping (or nmblookup) that host.
  It does nothing but writing one line in the log file that it couldn't reach
  the host.
  
  As I'm trying to save energy and protect my hard drives from too many spin
  downs/spin ups, I want to prevent these unnecessary spin ups. I already
  moved the general log file to a different hard drive, but there is no
  option to move the per-pc log file (or I haven't found it).

Are you really concerned about O(24) spin-ups per day? I wouldn't
think that a once-an-hour spin-up would add much to the wear and tear
on your drive. And if you are worried about energy, just decrease the
time to spin-down to, say, 1 minute. That will save more than 98% of the
spin energy... Plus, if you have multiple machines to back up, you are
probably spreading the backup load across much of the day anyway...

  My question: Is there a way to
  1) either move the per-pc log file (just like the general log
  file)?
Not really possible without much hacking since the LOG files are
pooled and hence must be on the same filesystem as the pool and pc
tree.

  2) or prevent backuppc from logging that it couldn't reach the host?
The code is open source and interpreted: grep for the line that logs the
message and comment it out (plus or minus any other lines that go with it).
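
Something along these lines should locate it -- the install path is
illustrative, and I believe the message text is 'no ping response'
(grep for whatever your per-pc LOG actually shows if that turns up
nothing on your version):

  grep -rn 'no ping response' /usr/share/BackupPC/bin /usr/share/BackupPC/lib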

  I already tried to cache the log file, without success. if you know a way
  to do this (without caching everything which is not what I want with a
  RAID5) that would solve the problem, too.
  

All that being said, preventing the logging still likely won't prevent
the hourly wake-up spin-up (at least in the absence of caching) since
at each wake up, BackupPC looks at the pc tree to determine aging of
backups... And reading spins up the disks as much as writing...

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc incremental taking up a lot of bandwidth with no additional changes

2012-01-20 Thread Jeffrey J. Kosowsky
Stefan Peter wrote at about 21:00:22 +0100 on Friday, January 20, 2012:
  On 01/20/2012 08:49 PM, Jeffrey J. Kosowsky wrote:
   In summary, I think you are trying to solve a problem that may not
   need to be solved, using a tool that is not meant to solve it, without
   understanding what is causing your problems and without knowing how
   the tool actually works in the first place :)
  
  can I use this sentence for my own purposes or do you have a copyright 
  on it?

Sure, use it to your heart's content... but if anyone asks you where it
came from, I just made it up on the spur of the moment...

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Laptops with multiple ethernet cards/MAC addresses

2012-01-16 Thread Jeffrey J. Kosowsky
The laptops I back up have both a wired and wireless Ethernet
connection with (different) MAC addresses.

I use static DNS so that when the laptops are attached at home they
are given a fixed (known) IP address so that BackupPC can find them
using my /etc/hosts file.

On my old D-Link router, I used to assign the same static DNS address
to both MAC addresses so that no matter which connection was used, I
had the same fixed IP address.

The problem is that my new Verizon router does not allow the same IP
address to be correlated with different MAC addresses.

So now it seems that I can only match the laptop name (used by
BackupPC) against only one of the IP addresses so that it will only
get backed up on one of the two interfaces?

Is there any simple way to overcome this problem?

For example would it be possible to match 2 IP/names addresses against
the same host backup so that if one fails then it tries the other?
(this is in a sense the opposite of ClientNameAlias that allows you to
map multiple hosts to one IP address)

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't fork

2012-01-16 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 17:13:44 -0500 on Monday, January 16, 2012:
  Peter Thomassen m...@peter-thomassen.de wrote on 01/16/2012 12:31:05 AM:
  
   On 01/11/2012 08:00 PM, Timothy J Massey wrote:
I would add this: 45 GB and 185,000 files is, in my opinion, far from 
  big.
I have a number of servers backing up hosts that are 5 to 10 times as 
  big,
and bigger. and that is with 1 GHz anemic processors and 512 MB RAM.

I think the answers that you're getting are correct: you're probably 
  short
of some sort of resource. But this is far from a normal situation. 
  Tiny
little backup servers are able to do much much bigger hosts. There's
something fundamentally weird about your setup.
   
   Upgrading RAM from 128 to 512 MB solved that problem. However, not
   another one occurs with the same host. I'll look into it and start
   another thread, if necessary.
  
  Wow:  you were trying to do backups with 128MB RAM?  I thought *I* was 
  mean with only using 512MB RAM!  :)

Wow: you must live in luxury to have 128MB... I have done it with 64MB
and a 500MHz CPU on my NAS device... and it works just fine...
 

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Laptops with multiple ethernet cards/MAC addresses

2012-01-16 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 17:26:43 -0500 on Monday, January 16, 2012:
  Jeffrey J. Kosowsky backu...@kosowsky.org wrote on 01/16/2012 05:00:45 
  PM:
  
   The problem is that my new Verizon router does not allow the same IP
   address to be correlated with different MAC addresses.
   
   So now it seems that I can only match the laptop name (used by
   BackupPC) against only one of the IP addresses so that it will only
   get backed up on one of the two interfaces?
   
   Is there any simple way to overcome this problem?
  
  You could:
  
  1) Use a better (real) Firewall
  
  That would be, by *FAR*, the best solution.  Other solutions:
  
  2) Use a better (real) DNS server with client-update capabilities (See #1)
  3) Use a better (real) DHCP server (See #1)

You certainly have a good point in theory and that would be the
correct approach for a larger/more formal network. But for me, it's
just a simple home network... I really don't want to start adding a
new/separate firewall and/or DNS and/or DHCP layer just to get
BackupPC to run when everything runs just fine using my
Verizon-provided router as router/firewall/gateway/dns/dhcp etc. (I do
have a software firewall also on each machine).


  These are really just more specific details embodied by #1.  Now that I've 
  beat that dead horse...
  
  4) Use NetBIOS name resolution.  (This merely lets you substitute a 
  completely *different* and hopefully less broken name server 
  infrastructure.  See #1...  :) )

Same reasoning as above... And I really would like to avoid NetBIOS
since I don't even have it running on my Linux machines...

   For example would it be possible to match 2 IP/names addresses against
   the same host backup so that if one fails then it tries the other?
   (this is in a sense the opposite of ClientNameAlias that allows you to
   map multiple hosts to one IP address)
  
  Why fix an IP routing/name resolution issue at the BackupPC level?  Fix it 
  where the problem is, rather than paper over it.  While 'another layer of 
  indirection can fix anything' (http://en.wikipedia.org/wiki/Indirection , 
  second paragraph), it doesn't mean that it *should*!  :)  If you want to 
  go that way, there's always

Well, while you are right in theory and from an 'elegance' perspective,
since the only problem is BackupPC, I actually would prefer a simpler
and more targeted solution that doesn't require me to redo or change
my network -- even if it is a bit kludgey...

  
  5) Create two BackupPC hosts, one for each of the IP addresses your 
  Verizon DHCP server will be assigning to the client...
  
  And then, knowing your ability to create great little Perl utilities to 
  manipulate the BackupPC pool:
  
  6) Create a tool to merge the backups of one host into the other...  ;)
  

Well, I appreciate your confidence, but that would be a real PITA.
That being said, I do at some point plan on writing just such a
utility that, at a more general level, allows for the merging of two
different pools... but I haven't had the time.

  
  P.S.:  In case it doesn't come across in text:  I give you honest, 
  serious, mad props for your Perl code that manipulates the pool, though I 
  personally am very reluctant to use them.  However, I still think you need 
  to fix this problem where the problem actually *is*, rather than paper 
  over it with BackupPC mangling.
  T.J.M.

Thanks.
Again, if I were running a large network or a production server, I
would totally agree with you.

But it might just be simpler, easier, and cleaner for me to hack the
BackupPC code so that if one host IP name/number fails, it tries
another...


--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Laptops with multiple ethernet cards/MAC addresses

2012-01-16 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 17:00:45 -0500 on Monday, January 16, 2012:
  The laptops I back up have both a wired and wireless Ethernet
  connection with (different) MAC addresses.
  
  I use static DNS so that when the laptops are attached at home they
  are given a fixed (known) IP address so that BackupPC can find them
  using my /etc/hosts file.
  
  On my old D-Link router, I used to assign the same static DNS address
  to both MAC addresses so that no matter which connection was used, I
  had the same fixed IP address.
  
  The problem is that my new Verizon router does not allow the same IP
  address to be correlated with different MAC addresses.
  
  So now it seems that I can only match the laptop name (used by
  BackupPC) against only one of the IP addresses so that it will only
  get backed up on one of the two interfaces?
  
  Is there any simple way to overcome this problem?
  
  For example would it be possible to match 2 IP/names addresses against
  the same host backup so that if one fails then it tries the other?
  (this is in a sense the opposite of ClientNameAlias that allows you to
  map multiple hosts to one IP address)
  

I was just thinking that the following simple hack should probably
work...

1. Set up two different static IPs, one for each network interface.
2. Enter both IPs (or equivalent names as defined in /etc/hosts) in
   the BackupPC/hosts file -- call them 'hostA' and 'hostB'
3. Create a *symlink* from TopDir/pc/hostB to TopDir/pc/hostA
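
For step 3, something like this works, assuming the stock TopDir of
/var/lib/BackupPC (adjust to your install), run as the backuppc user
before BackupPC has created a real hostB directory of its own:

  ln -s /var/lib/BackupPC/pc/hostA /var/lib/BackupPC/pc/hostB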

Then whenever the *common* backup host ages, BackupPC will launch a
new backup -- either hostA or hostB depending on which NIC is
currently active (note: I am assuming that only one interface is
active at a time). That way, if one IP address didn't back up, the
other would back up in its place using the same history of numbered full
and incremental backups. Once either backup completed, both hosts
would look updated, since they both point to the same common pc host
subdirectory of backups. Note that the host name doesn't appear at all
except as the name of the top-level directory in the pc tree, so it
makes no difference whether the backup is initiated as hostA or
hostB.

The only potential issue would be collisions, but even that typically shouldn't
happen, since I assume that only one IP address is ever active at a
time. Potentially there might be issues if the user switched interfaces in
the middle of a backup, before the first one completed or timed out,
but hopefully the new backup would catch the other as a partial backup
and either continue from there or erase it... Still, I imagine there
could be weird edge cases, though they would only occur if you switched
network interfaces mid-backup...


--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Laptops with multiple ethernet cards/MAC addresses

2012-01-16 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 17:57:27 -0600 on Monday, January 16, 2012:
  On Mon, Jan 16, 2012 at 5:32 PM, Jeffrey J. Kosowsky
  backu...@kosowsky.org wrote:
  
     1) Use a better (real) Firewall
    
     That would be, by *FAR*, the best solution.  Other solutions:
    
     2) Use a better (real) DNS server with client-update capabilities (See 
   #1)
     3) Use a better (real) DHCP server (See #1)
  
   You certainly have a good point in theory and that would be the
   correct approach for a larger/more formal network. But for me, it's
   just a simple home network... I really don't want to start adding a
   new/separate firewall and/or DNS and/or DHCP layer just to get
   BackupPC to run when everything runs just fine using my
   Verizon-provided router as router/firewall/gateway/dns/dhcp etc. (I do
   have a software firewall also on each machine).
  
  Have you checked all the capabilities?  Lots of consumer routers have
  local DNS service with update capability - or at least the ability to
  specify a name with the IP for DHCP.

I believe this is what I was talking about - static DHCP.
However, it only allows one IP address per name and one MAC address
per name.

  
  Also, do you plug the laptop in regularly anywhere else?  Maybe you
  could do a static assignment on the wired port but don't activate it
  anywhere else - and then keep the DCHP assignment for the wifi
  side.

True - but I would like the backups to 'just' work whether using the
wired or wireless ports. For now, I have the static assignment set for
the wireless NIC since that is what people used most... but sometimes
you actually prefer to use the wired NIC, especially for backup, since
the link speed is faster.

  
     4) Use NetBIOS name resolution.  (This merely lets you substitute a
     completely *different* and hopefully less broken name server
     infrastructure.  See #1...  :) )
  
   Same reasoning as above... And I really would like to avoid NetBIOS
   since I don't even have it running on my Linux machines...
  
  You could probably run nmbd without smbd.  Nothing would depend on it
  except fallback name resolution.

I will look into this. I'm not very familiar with nmbd/smbd except for
use with Samba -- I haven't used them for name resolution...
  
     6) Create a tool to merge the backups of one host into the other...  ;)
    
  
   Well, I appreciate your confidence, but that would be a real PITA.
   That being said, I do at some point intent plan on writing just such a
   utility that a more general level allows for the merging of two
   different pools... but I haven't had the time
  
  Wonder what would happen if you symlinked two pc names.  I don't think
  I want to find out on my system, but...

Good thought - exactly what I detailed in my other follow up email.

  
   Again, if I were running a large network or a production server, I
   would totally agree with you.
  
   But it might just be simpler, easier, and cleaner for me to hack the
   BackupPC code so that if one host ip name/number then it tries
   another...
  
  One other approach that would work would be to set up a VPN between
  the backuppc server and targets using OpenVPN or something similar.
  Normally you'd only need that to poke through firewalls or route
  between private networks, but you can end up with a known IP at the
  tunnel interface regardless of the networks/interfaces the packets
  carrying the tunnel traverse.
  

Interesting thought but probably overkill, especially since the
laptops are pretty slow...

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] tar is needed, but deleted files not needed

2012-01-07 Thread Jeffrey J. Kosowsky
Daniel wrote at about 21:25:02 +0100 on Saturday, January 7, 2012:
  Only this deletion problem... This should be top priority, and should
  already be in the software :-) (I mean I know it is open source etc
  and that I did not develop it, but it's a feature I think many of us
  would like and that would really make it
  a much heavier software for the whole world.

No problem. As you said, it's open source. So make it *your* top
priority and write the code to 'fix' the 'problem' that you believe
exists. If it were a top priority for the lead developer (Craig) or for
any of the other many users and contributors, they would surely have
written the code themselves by now. Alternatively, you can of course offer to
pay someone to write the code for you...

Meanwhile, the rest of us have been quite content with 'rsync' for
tracking deleted files or with tar without deleted file coverage...

  I will have to use dar, which is a solution that never let me down,
  but needs a BackupPC-like interface too :-)
  It uses catalogs, thus incrementals are real fast, everything is nice
  with it, just it's a bit more time consuming to set up nicely and in
  my case, needs software on client and server side.

Feel free to write a module to handle 'dar'. Other users have
contributed modules in the past (I believe the 'ftp' interface was
written that way). Beyond that, I haven't seen anyone else mention
'dar', let alone express an interest in using it, so I doubt you will
find much interest in anyone else developing it.

  Thanks for taking the time, I will be watching BackupPC features in
  the future, I hope it will handle this tar problem soon :-)

Don't hold your breath unless you plan to contribute yourself...
 

--
Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex
infrastructure or vast IT resources to deliver seamless, secure access to
virtual desktops. With this all-in-one solution, easily deploy virtual 
desktops for less than the cost of PCs and save 60% on VDI infrastructure 
costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Restores

2012-01-04 Thread Jeffrey J. Kosowsky
Ralph Weaver wrote at about 17:31:19 -0500 on Wednesday, January 4, 2012:
  We ran into an issue with a backuppc restore where we had unknown files 
  causing issues due to existance.  We tried to restore thinking it would 
  remove the extraneous files as a normal rsync with the --delete option 
  does, but it didn't do this (and the files are mixed in with files 
  essential for running the system.)  It left the extra files and replaced 
  the other files with the correct version that was backed up.  We tried 
  to add --delete to the restore args but of course since File::RsyncP 
  doesn't support that it didn't work.

The behavior of BackupPC is the normal and correct behavior of
standard backup programs. It would be quite dangerous for a backup
restore program to delete files, since typically that would cause all
files created since the last backup to be deleted, which is not usually the
desired result.

 
  Is there any way around this issue aside from restoring directly to a 
  different directory and then rsyncing those files over with a --delete 
  to the main system?
  

If you want to delete all files other than those on the backup that
you are restoring, then the solution is trivial. Just delete the
entire directory tree before restoring so that you are restoring into
an empty directory.

If the issue is that you are restoring system files on a running
system, then although that often works, it is not recommended since
bad things can happen. In that case, it would be better to mount the
system root on another live system, delete the system root, and restore
there.

If you really need to restore a system directory, then you can do as
you are doing now and then use 'find' after the restore to find and
delete any files created between the time of the backup and the time
of the restore. Of course, there still could be 'race' condition types
of edge cases but this would happen when using rsync --delete on a
system tree too.
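
A rough sketch of that cleanup, assuming GNU find; the timestamp and
path are only placeholders, and you should review the file list before
deleting anything:

  touch -t 201201040300 /tmp/backup-start     # time the backup started
  find /restored/tree -newer /tmp/backup-start -print    # review first...
  find /restored/tree -newer /tmp/backup-start -delete   # ...then delete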

--
Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex
infrastructure or vast IT resources to deliver seamless, secure access to
virtual desktops. With this all-in-one solution, easily deploy virtual 
desktops for less than the cost of PCs and save 60% on VDI infrastructure 
costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] restore from trash?

2012-01-03 Thread Jeffrey J. Kosowsky
upen wrote at about 13:10:17 -0600 on Tuesday, January 3, 2012:
  Hi,
  I find some of expired backups in Trash directory. What is the correct
  way to restore data from directories under 'trash' I found
  
  For example, I found directory '1325037949_12970_0/f%2fexport%2fhome'
  in trash which I believe is for /export/home partition on a remote
  server. Do I move this to someplace so that data can then be restored
  using backuppc admin gui?

Restoring from trash seems like a very poor and unreliable
idea. Specifically, the purpose of moving old backups to the trash is
to allow the background BackupPC_trashClean process to recursively
delete all the files and folders in the old backup. So there is no
guarantee that your backup is at all intact; indeed, if BackupPC is
working properly, it is probably already missing some file trees. The
longer the backup has been sitting in the trash, the more likely it is
to be corrupted.

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC recovery from unreliable disk

2011-12-22 Thread Jeffrey J. Kosowsky
JP Vossen wrote at about 21:50:29 -0500 on Wednesday, December 21, 2011:
  I'm running Debian Squeeze stock backuppc-3.1.0-9 on a server and I'm
  getting kernel messages [1] and SMART errors [2] about the WD 2TB SATA
  disk.  Fine, I RMA'd it and have the new one...  Now what?  I know I can 
  either 'dd' or start fresh.  But...
  
  
  If I start fresh, I know everything will be work and be valid, but I
  lose my historical backups when I wipe the bad disk and RMA it.
  
  
  If I 'ddrescue' BAD -- GOOD, I'll worry about the integity of the
  BackupPC store.  As I understand it, the incoming files are hashed and
  stored, but the store itself is never checked (true?).  So when I do
  backups, if an incoming file hash matches a file already in the store,
  the incoming file is de-duped and dropped.  But what if the file
  actually in the store is corrupt due to the bad disk?

If the hash of a new file matches the hash of an existing pool file
then the contents are compared since there is always the possibility
of a hash collision since the file hash is a partial file md5sum that
is based on the first and last 128K slice plus the filesize.

  
  Am I correct?  If so, is there a way to have BackupPC validate that the
  files in the pool actually match their hash and weren't mangled by the disk?

Of course, there is no guarantee that the pool files themselves are
not corrupt. Checking the files against their pool file name hash can
rule out some file corruption, but if the file size is unchanged and
the corruption is not in the first or last 128K slice, then the hash
will be unchanged, so that corruption won't be detectable.
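
For concreteness, here is a rough sketch of that kind of partial-file
hash -- a hand-rolled illustration of "file size plus first and last
128K slices", not the exact code BackupPC uses (the real routine lives
in BackupPC::Lib and works on the uncompressed contents of each pool
file):

use Digest::MD5;

# Illustrative only: digest the file length plus the first and last
# 128KB of the (already uncompressed) contents.
sub partial_md5 {
    my ($path) = @_;
    my $size = -s $path;
    open(my $fh, '<', $path) or die "open $path: $!";
    binmode($fh);
    my $md5 = Digest::MD5->new;
    $md5->add($size);
    read($fh, my $head, 131072);
    $md5->add($head);
    if ($size > 2 * 131072) {
        seek($fh, -131072, 2);           # seek to the last 128KB
        read($fh, my $tail, 131072);
        $md5->add($tail);
    }
    close($fh);
    return $md5->hexdigest;
}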

That being said, I have written several routines to both check and fix
the pool for corruption relative to the partial file md5sum pool file
name hash. Please search the archives where I have discussed and
posted the code...

Note that there have been bugs in BackupPC itself and also in various
pool libraries (specifically on arm5 processors) that cause relatively
innocuous errors in the pool file names relative to the actual
intended partial file md5sum hash.

  
  
  Any other solution I'm missing?
  
  Thanks,
  JP
  ___
  [1] Example kernel errors:
  
  Security Events for kernel
  =-=-=-=-=-=-=-=-=-=-=-=-=-
  kernel: [4020993.728571] end_request: I/O error, dev sda, sector 81203507
  kernel: [4021009.712952] end_request: I/O error, dev sda, sector 81203507
  
  System Events
  =-=-=-=-=-=-=
  kernel: [4020983.471256] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0
  action 0x0
  kernel: [4020983.471290] ata3.00: BMDMA stat 0x25
  kernel: [4020983.471315] ata3.00: failed command: READ DMA
  kernel: [4020983.471347] ata3.00: cmd
  c8/00:18:33:11:d7/00:00:00:00:00/e4 tag 0 dma 12288 in
  kernel: [4020983.471351]  res
  51/40:07:33:11:d7/40:00:28:00:00/e4 Emask 0x9 (media error)
  kernel: [4020983.471424] ata3.00: status: { DRDY ERR }
  kernel: [4020983.471446] ata3.00: error: { UNC }
  kernel: [4020983.501157] ata3.00: configured for UDMA/133
  
  
  [2] Example SMART error:
  
  Error 1704 occurred at disk power-on lifetime: 10149 hours (422 days +
  21 hours)
 When the command that caused the error occurred, the device was
  active or idle.
  
 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 40 45 66 01 e0  Error: UNC 64 sectors at LBA = 0x00016645 = 91717
  
 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --    
 c8 00 40 3f 66 01 e0 08  46d+13:36:50.242  READ DMA
 ec 00 00 00 00 00 a0 08  46d+13:36:50.233  IDENTIFY DEVICE
 ef 03 46 00 00 00 a0 08  46d+13:36:50.225  SET FEATURES [Set transfer
  mode]
  
  |:::==|---
  JP Vossen, CISSP|:::==|  http://bashcookbook.com/
  My Account, My Opinions |=|  http://www.jpsdomain.org/
  |=|---
  Microsoft Tax = the additional hardware  yearly fees for the add-on
  software required to protect Windows from its own poorly designed and
  implemented self, while the overhead incidentally flattens Moore's Law.
  

Re: [BackupPC-users] Scary problem with USB3...

2011-12-15 Thread Jeffrey J. Kosowsky
Mark Maciolek wrote at about 13:37:27 -0500 on Thursday, December 15, 2011:
  On 12/15/2011 1:31 PM, Zach La Celle wrote:
   We just upgraded our backup machine and are using an external USB3 hard
   drive for backups.
  
   Last night, something went wrong, and when I got in this morning I saw
   the following errors on the backup machine:
  
   [88921.670598] usb 1-1: device not accepting address 0, error -71
   [88921.670665] hub 1-0:1.0: cannot disable port 1 (err = -32)
   [88921.674631] usb 1-1: Device not responding to set address.
   [88921.880971] usb 1-1: Device not responding to set address.
  
   Could this be caused by BackupPC?  When I unplugged and replugged the
   USB hard drive, it started working, but I'm worried that BackupPC is
   corrupting the drive somehow.

What? Why in the world would you think BackupPC, which is really
just a fancy Perl script, would be responsible for hardware/driver problems?
Even if BackupPC were responsible, you give absolutely no information other
than a hardware error to help anyone troubleshoot your problem.

Just because you run backups on that drive doesn't mean that backuppc
is the cause of your problems. I mean, you probably have hundreds if
not thousands of programs on your system. Do you ask the same question
about every program on your system on the random chance that one of
those programs might be causing some totally unrelated hardware/driver
error?

The only even remotely possible connection with BackupPC is that
backups are disk intensive so it is possible that BackupPC brings out
instability in your hardware more than say just plugging in the disk
and reading a file or two.


--
10 Tips for Better Server Consolidation
Server virtualization is being driven by many needs.  
But none more important than the need to reduce IT complexity 
while improving strategic productivity.  Learn More! 
http://www.accelacomm.com/jaw/sdnl/114/51507609/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Are these folder/file names normal?

2011-12-02 Thread Jeffrey J. Kosowsky
Arnold Krille wrote at about 18:10:03 +0100 on Friday, December 2, 2011:
  On Friday 02 December 2011 17:33:41 Igor Sverkos wrote:
   Hi,
   
   today I browsed through the backup data folder. Is it normal that
   folders look like
   
 /var/lib/BackupPC/pc/foo.example.org/252/f%2f/fetc
 ^
   This is the backuped /etc folder from the foo.example.org (linux) host.
   
   Every folder/file is prefixed with a f char and I don't understand the
   folder name f%2f. Doesn't look right to me.
   
   Every backed up host shows that...
  
  Thats perfectly normal. You will notice that file attributes are wrong 
  too. 
  That is because the attributes are stored separat. thus the f-prefix notes 
  that 
  this is a backuppc-thing. and f%2f is the notion of / in backuppc's own 
  language.

The f-prefix is called f-mangling in BackupPC language.
The f%2f is really the f-mangling prefix plus %2f, which is just standard
percent-encoding for '/' -- other special characters are similarly encoded...

  Of course this looks strange directly on the file-system. But you are not 
  supposed to use these file without the help of backuppc anyway.

backuppc-fuse mounts the BackupPC backups using the FUSE filesystem,
which allows you to browse backups without the f-mangling, with the
proper file attributes, and with incremental backups properly filled in
using previous fulls/incrementals. You can then use standard *nix
tools to browse/access/manipulate the corresponding files.

The only downside is that it is a bit slow (though still faster than the
BackupPC web interface), since each directory listing requires the
corresponding attrib file to be read, decompressed, and decoded.

--
All the data continuously generated in your IT infrastructure 
contains a definitive record of customers, application performance, 
security threats, fraudulent activity, and more. Splunk takes this 
data and makes sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-novd2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc and excluding ip ranges?

2011-11-16 Thread Jeffrey J. Kosowsky
Tim Fletcher wrote at about 21:21:45 + on Thursday, November 10, 2011:
  On Thu, 2011-11-10 at 15:56 -0500, SSzretter wrote:
  
   
   It would be great if a flag could be set to tell backuppc to only
   backup a machine if it is in a specific subnet range (192.168.2.x) and
   to skip it if not.
  
  You can write a ping script that will test the ip the client has, or you
  can tweak the latency test down a bit as I am guessing the T1 connection
  has higher latency.
  

You could also write a DumpPreUserCmd script to check the subnet...

The point is that there is no need (and in fact it would be a bad
idea) to code such a specific case into BackupPC. Rather, BackupPC
already has several 'hooks' that let you write simple scripts to
handle this case and many other one-off use cases that individual
users might find interesting.
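
As a rough illustration (a sketch only -- the script path and subnet
are made up; $hostIP is one of the variables BackupPC substitutes into
DumpPreUserCmd):

    # In the host's .pl config -- run a check before each dump and honour its exit code:
    $Conf{DumpPreUserCmd}     = '/usr/local/bin/check_subnet.sh $hostIP';
    $Conf{UserCmdCheckStatus} = 1;

    # /usr/local/bin/check_subnet.sh:
    #!/bin/sh
    case "$1" in
        192.168.2.*) exit 0 ;;   # desired subnet: go ahead with the backup
        *)           exit 1 ;;   # anywhere else (e.g. over the T1): skip this run
    esac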



Re: [BackupPC-users] Unattended off-site replication

2011-10-25 Thread Jeffrey J. Kosowsky
Steve M. Robbins wrote at about 22:19:44 -0500 on Monday, October 24, 2011:
  Hi,
  
  One thing that all these methods have in common is that they scan the 
  entire pool filesystem.  I accept that I will have to do that at
  least initially.  However, to send daily updates, it seems unnecessary
  to re-scan the filesytem again when backuppc itself already computes the
  information needed:
  
  * the set of files added to the pool
  * the set of hardlinks in __TOPDIR__/pc/$host/$backup
  * the set of files expired
  
  It strikes me that backuppc could be taught to write all this out to
  one or more journal files that could be replayed on the remote system
  after the new files are transferred.
  
  Does this make sense?  Has anyone investigated this approach?

I and others have considered such approaches before.  The problem is
that this would require modifying the BackupPC program itself to record
such journals. You also left out pool chain renumbering, which is a
consequence of file/pool expiry.

Most of us have been reluctant to modify BackupPC other than to
diagnose/fix bugs because the program itself is both quite stable and
critical. So, the inclination has been to do things outside of
BackupPC to avoid unintended consequences that could potentially
destabilize the program. Additionally, this would in a sense 'fork'
backuppc itself unless Craig would buy into such changes.

Also, the v4.x version that Craig is working on will reportedly make a
lot of these archive issues moot.



Re: [BackupPC-users] requirement `DAEMON' in file `.../backuppc.sh' has no providers

2011-10-22 Thread Jeffrey J. Kosowsky
Gail Hardmann wrote at about 14:11:40 +0200 on Thursday, October 20, 2011:
  Hi experts
  
  I am not a Linux expert, but I've succeeded in installing BackupPC 3.2.1 on 
  a DNS-323 NAS (running an ARM GNU Linux distribution).
  
  BackupPC seems to be working - I can also access it from the CGI web 
  interface -  but this message appears in ffp.log (it's part of system start):
  
   fun_plug script for DNS-323 (2008-08-11 tpatfonz.de) 
  Day Mon dd 20:59:30 GMT 2011
  ln -snf /mnt/HD_a2/ffp /ffp
  * Running /ffp/etc/fun_plug.init ...
  * Running /ffp/etc/rc ...
  rcorder: requirement `DAEMON' in file `/ffp/start/backuppc.sh' has no 
  providers.

I would start by looking at the startup script 'backuppc.sh'
referenced above.

The specific error message cited seems to be more related to Fonz's
fun_plug for the DNS-323, so you might want to post to that
group. Nothing you mention gives any clue about BackupPC itself...
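
For what it's worth, rcorder sorts the scripts in /ffp/start by the
PROVIDE/REQUIRE comment tags at the top of each one; the message means
backuppc.sh says 'REQUIRE: DAEMON' but no other script there says
'PROVIDE: DAEMON'. A rough illustration of the kind of header involved
(illustrative only -- check your actual script):

    #!/ffp/bin/sh
    # PROVIDE: backuppc
    # REQUIRE: DAEMON
    # ^ either some other start script must contain "# PROVIDE: DAEMON",
    #   or this REQUIRE line should be changed/removed to quiet the warning.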



Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-06 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 17:54:05 +0200 on Thursday, October 6, 2011:
  Hi,
  
  Tim Fletcher wrote on 2011-10-06 10:17:03 +0100 [Re: [BackupPC-users] Bad 
  md5sums due to zero size (uncompressed) cpool files - WEIRD BUG]:
   On Wed, 2011-10-05 at 21:35 -0400, Jeffrey J. Kosowsky wrote:
Finally, remember it's possible that many people are having this
problem but just don't know it,
  
  perfectly possible. I was just saying what possible cause came to my mind 
  (and
  many people *could* be running with an almost full disk). As you (Jeffrey)
  said, the fact that the errors appeared only within a small time frame may or
  may not be significant. I guess I don't need to ask whether you are *sure*
  that the disk wasn't almost full back then.

Disk was *less* full then...

  To be honest, I would *hope* that only you had these issues and everyone
  else's backups are fine, i.e. that your hardware and not the BackupPC 
  software
  was the trigger (though it would probably need some sort of software bug to
  come up with the exact symptoms).
  
since the only way one would know would be if one actually computed the
partial file md5sums of all the pool files and/or restored  tested ones
backups.
  
  Almost.
  
Since the error affects only 71 out of 1.1 million files it's possible
that no one has ever noticed...
  
  Well, let's think about that for a moment. We *have* had multiple issues that
  *sounded* like corrupt attrib files. What would happen, if you had an attrib
  file that decompresses to nothing in the reference backup?
  
It would be interesting if other people would run a test on their
pools to see if they have similar such issues (remember I only tested
my pool in response to the recent thread of the guy who was having
issues with his pool)...
   
   Do you have a script or series of commands to do this check with?
  
  Actually, what I would propose in response to what you have found would be to
  test for pool files that decompress to zero length. That should be
  computationally less expensive than computing hashes - in particular, you can
  stop decompressing once you have decompressed any content at all.

Actually this could be made even faster since there seem to be 2
cases:
1. Files of length 8 bytes with first byte = 78 [no rsync checksums]
2. Files of length 57 bytes with first byte = d7 [rsync checksums]

So all you need to do is stat the size and then test the first byte.
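
A rough way to flag candidates (a sketch, assuming a compressed pool
under /var/lib/BackupPC/cpool; anything it prints should still be
verified, e.g. with BackupPC_zcat):

    # 8-byte files starting 0x78 (no cached checksums) and 57-byte files
    # starting 0xd7 (with cached rsync checksums) are the two signatures.
    find /var/lib/BackupPC/cpool -type f -size 8c | while read -r f; do
        [ "$(od -An -tx1 -N1 "$f" | tr -d ' ')" = 78 ] && echo "suspect (no csums): $f"
    done
    find /var/lib/BackupPC/cpool -type f -size 57c | while read -r f; do
        [ "$(od -An -tx1 -N1 "$f" | tr -d ' ')" = d7 ] && echo "suspect (csums): $f"
    done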



Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-06 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 02:45:56 +0200 on Friday, October 7, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-10-06 19:28:38 -0400 [Re: [BackupPC-users] 
  Bad md5sums due to zero size (uncompressed)?cpool files - WEIRD BUG]:
   Holger Parplies wrote at about 17:54:05 +0200 on Thursday, October 6, 2011:
   [...]
 Actually, what I would propose [...] would be to
 test for pool files that decompress to zero length. [...]
   
   Actually this could be made even faster since there seem to be 2
   cases:
   1. Files of length 8 bytes with first byte = 78 [no rsync checksums]
   2. Files of length 57 bytes with first byte = d7 [rsync checksums]
   
   So, all you need to do is to stat the size and then test the
   first-byte
  
  I'm surprised that that isn't faster by orders of magnitude. Running both
  BackupPC_verifyPool and the modified version which does exactly this in
  parallel, it's only about 3 times as fast (faster, though, when traversing
  directories currently in cache). An additionally running 'find' does report
  some 57-byte files, but they don't seem to decompress to nothing. Let's see how
  this continues. I still haven't found a single zero-length file in my pool
  so far (BackupPC_verifyPool at 3/6/*, above check at 2/0/*).
  

Do those 57 byte files have rsync checksums or are they just
compressed files that happen to be 57 bytes long?

Given that the rsync checksums have both block and file checksums,
it's hard to believe that a 57 byte file including rsync checksums
would have much if any data. Even with no blocks of data, you have:
- 0xb3 separator (1 byte)
- File digest which is 2 copies of the full 16 byte MD4 digest (32 bytes)
- Digest info consisting of block size, checksum seed, length of the block 
digest and the magic number (16 bytes)

The above total 49 bytes which is exactly the delta between a 57 byte
empty compressed file with rsync checksums and an 8 byte empty
compressed file without rsync checksums. The common 8 bytes is
presumably the zlib header (which I think is 2 bytes) and the trailer
which would then be 6 bytes.
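
Adding that up as a sanity check:

     1 byte    0xb3 separator
    32 bytes   two copies of the 16-byte MD4 file digest
    16 bytes   digest info (block size, seed, block-digest length, magic)
    ---------
    49 bytes   checksum trailer
   + 8 bytes   zlib header/trailer of an empty compressed stream
    ---------
    57 bytes   total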

Note: If you have any data, then you would have 20 bytes (consisting of
a 4-byte Adler32 and a 16-byte MD4 digest) for each block of data.




Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-06 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 18:58:51 -0400 on Tuesday, October 4, 2011:
  After the recent thread on bad md5sum file names, I ran a check on all
  my 1.1 million cpool files to check whether the md5sum file names are
  correct.
  
  I got a total of 71 errors out of 1.1 million files:
  - 3 had data in it (though each file was only a few hundred bytes
long)
  
  - 68 of the 71 were *zero* sized when decompressed
29 were 8 bytes long corresponding to zlib compression of a zero
length file
  
39 were 57 bytes long corresponding to a zero length file with an
rsync checksum
  
  Each such cpool file has anywhere from 2 to several thousand links
  
  The 68 *zero* length files should *not* be in the pool since zero
  length files are not pooled. So, something is really messed up here.
  
  It turns out though that none of those zero-length decompressed cpool
  files were originally zero length but somehow they were stored in the
  pool as zero length with an md5sum that is correct for the original
  non-zero length file.
  
  Some are attrib files and some are regular files.
  
  Now it seems unlikely that the files were corrupted after the backups
  were completed since the header and trailers are correct and there is
  no way that the filesystem would just happen to zero out the data
  while leaving the header and trailers intact (including checksums).
  
  Also, it's not the rsync checksum caching causing the problem since
  some of the zero length files are without checksums.
  
  Now the fact that the md5sum file names are correct relative to the
  original data means that the file was originally read correctly by
  BackupPC..
  
  So it seems that for some reason the data was truncated when
  compressing and writing the cpool/pc file but after the partial file
  md5sum was calculated. And it seems to have happened multiple times
  for some of these files since there are multiple pc files linked to
  the same pool file (and before linking to a cpool file, the actual
  content of the files are compared since the partial file md5sum is not
  unique).
  
  Also, on my latest full backup a spot check shows that the files are
  backed up correctly to the right non-zero length cpool file which of
  course has the same (now correct) partial file md5sum. Though as you
  would expect, that cpool file has a _0 suffix since the earlier zero
  length is already stored (incorrectly) as the base of the chain.
  
  I am not sure what is going on with the other 3 files since I have yet
  to find them in the pc tree (my 'find' routine is still running)
  
  I will continue to investigate this but this is very strange and
  worrying since truncated cpool files means data loss!
  
  In summary, what could possibly cause BackupPC to truncate the data
  sometime between reading the file/calculating the partial file md5sum
  and compressing/writing the file to the cpool?
  

OK... this is a little weird maybe...

I looked at one file which is messed up:
 /f%2f/fusr/fshare/fFlightGear/fTimezone/fAmerica/fPort-au-Prince

On all (saved) backups, up to backup 82, the file (and the
corresponding cpool file e/f/0/ef0bd9db744f651b9640ea170b07225a) is
zero length decompressed.

My next saved backup is #110 which is non-zero length and has the
correct contents. This is true for all subsequent saved backups. The
corresponding pool file is as might be expected:
e/f/0/ef0bd9db744f651b9640ea170b07225a_0
which makes sense since PoolWrite.pm sees that while the partial file
md5sum is the same as the root, the contents differ (since the root is
empty) so it creates a new pool file with the same stem but with index
0.
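
(Spelling out the chain naming:

    e/f/0/ef0bd9db744f651b9640ea170b07225a      chain root -- here, the bogus empty file
    e/f/0/ef0bd9db744f651b9640ea170b07225a_0    first collision member -- the correct content
    e/f/0/ef0bd9db744f651b9640ea170b07225a_1    next collision, and so on)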

Note the original file itself was unchanged between #82 and #110.

BUT WHAT IS INTERESTING is that both pool files have the same
modification time of: 2011-04-27 03:05
which according to the logs is during the time at which backup #110
was backing up the relevant share.

I don't understand this: why would backup #110 change the mod time of
the chain-root file, which was created during an earlier backup?

Why would PoolWrite.pm change the mod time of a pool file that is not
in the actual backup?

Could it be that this backup somehow destroyed the data in the file?
(but even so, what would cause this to happen)

Also, the XferLOG entry for both backups #82 and #110 have the line:
 pool 644   0/0 252 
usr/share/FlightGear/Timezone/America/Port-au-Prince

But this doesn't make sense since if the new pool file was created as
part of backup #110, shouldn't it say 'create' and not 'pool'?

None of this makes sense to me but somehow I suspect that herein may be a
clue to the problem...


Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-06 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 05:46:36 +0200 on Friday, October 7, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-10-06 22:54:44 -0400 [Re: [BackupPC-users] 
  Bad md5sums due to zero size (uncompressed) cpool?files - WEIRD BUG]:
   OK... this is a little weird maybe...
   [...]
   On all (saved) backups, up to backup 82, the file (and the
   corresponding cpool file e/f/0/ef0bd9db744f651b9640ea170b07225a) is
   zero length decompressed.
   
   My next saved backup is #110 which is non-zero length and has the
   correct contents. This is true for all subsequent saved backups.
   [...]
   BUT WHAT IS INTERESTING is that both pool files have the same
   modification time of: 2011-04-27 03:05
   which according to the logs is during the time at which backup #110
   was backing up the relevant share.
  
  you'll hate me asking this, but: do any of your repair scripts touch the
  modification time

None of them set the modification time (except the pool/pc copy script
where it sets the mod time to the original mod time). But I didn't run
the repair scripts during this time period and both files are modified 
*exactly* during the time of backup #110.

  
  Also: can you give a better resolution on the mod times, i.e. which one is
  older?

OK...
#82: Modify: 2011-04-27 03:05:04.551226502 -0400
#110: Modify: 2011-04-27 03:05:19.813321479 -0400

So #110 was modified 15 seconds after #82. Hmmm
Note both of those files have rsync checksums.

When I looked at a couple of files without the rsync checksums, the
mod times differed by a day.

As an aside, when I looked at a version without the rsync checksum, I
noticed that the corrected version also doesn't have an rsync checksum,
even after having been backed up many times since. I thought the rsync
checksum was supposed to be added after the 2nd or 3rd time the file is
read... This makes me wonder whether there is potentially an issue with
the rsync checksum caching itself...

  
   Why would PoolWrite.pm change the mod time of a pool file that is not
   in the actual backup?
  
  PoolWrite normally wouldn't, unless something is going wrong somewhere (and 
  it
  probably wouldn't use utime() but rather open the file for writing).
  
  This is an rsync backup, right?

Yes...

   Could it be that this backup somehow destroyed the data in the file?
   (but even so, what would cause this to happen)
  
  Hmm, let's see ... a bug?

   Also, the XferLOG entry for both backups #82 and #110 have the line:
pool 644   0/0 252 
   usr/share/FlightGear/Timezone/America/Port-au-Prince
   
   But this doesn't make sense since if the new pool file was created as
   part of backup #110, shouldn't it say 'create' and not 'pool'?
  
  Considering the mtime, yes. If it's rsync, an *identical* file should be
  'same', if it's tar, an identical file would be 'pool'. Could this be an
  indication that it wasn't BackupPC that clobbered the file?

Well, it is 'rsync'...
And we know BackupPC *thinks* it's a new file, since it creates a new
pool file chain member. But what clobbered the original file just
before then, and why?

  
   None of this makes sense to me but somehow I suspect that herein may be a
   clue to the problem...
  
  Xfer::Rsync opening the reference file?

But what would cause it to truncate the data portion?

Maybe it's something with rsync checksum caching/seeding when it tries
to add a checksum? I'm just guessing here...




Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-05 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 17:41:48 +0200 on Wednesday, October 5, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-10-04 18:58:51 -0400 [[BackupPC-users] Bad 
  md5sums due to zero size (uncompressed) cpool files - WEIRD BUG]:
   After the recent thread on bad md5sum file names, I ran a check on all
   my 1.1 million cpool files to check whether the md5sum file names are
   correct.
   
   I got a total of 71 errors out of 1.1 million files:
   [...]
   - 68 of the 71 were *zero* sized when decompressed
   [...]
   Each such cpool file has anywhere from 2 to several thousand links
   [...]
   It turns out though that none of those zero-length decompressed cpool
   files were originally zero length but somehow they were stored in the
   pool as zero length with an md5sum that is correct for the original
   non-zero length file.
   [...]
   Now it seems unlikely that the files were corrupted after the backups
   were completed since the header and trailers are correct and there is
   no way that the filesystem would just happen to zero out the data
   while leaving the header and trailers intact (including checksums).
   [...]
   Also, on my latest full backup a spot check shows that the files are
   backed up correctly to the right non-zero length cpool file which of
   course has the same (now correct) partial file md5sum. Though as you
   would expect, that cpool file has a _0 suffix since the earlier zero
   length is already stored (incorrectly) as the base of the chain.
   [...]
   In summary, what could possibly cause BackupPC to truncate the data
   sometime between reading the file/calculating the partial file md5sum
   and compressing/writing the file to the cpool?
  
  the first and only thing that springs to my mind is a full disk. In some
  situations, BackupPC needs to create a temporary file (RStmp, I think) to
  reconstruct the remote file contents. This file can become quite large, I
  suppose. Independant of that, I remember there is *at least* an incorrect
  size fixup which needs to copy already written content to a different hash
  chain (because the hash turns out to be incorrect *after*
  transmission/compression). Without looking closely at the code, I could
  imagine (but am not sure) that this could interact badly with a full disk:
  
  * output file is already open, headers have been written
  * huge RStmp file is written, filling up the disk
  * received file contents are for some reason written to disk (which doesn't
work - no space left) and read back for writing into the output file 
  (giving
zero-length contents)
  * trailing information is written to the output file - this works, because
there is enough space left in the already allocated block for the file
  * RStmp file gets removed and the rest of the backup continues without
apparent error
  
  Actually, for the case I tried to invent above, this doesn't seem to fit, but
  the general idea could apply - at least the symptoms are correct content
  stored somewhere but read back incorrectly. This would mean the result of a
  write operation would have to be unchecked by BackupPC somewhere (or handled
  incorrectly).
  
  So, the question is: have you been running BackupPC with an almost full disk?

Nope - disk has plenty of space...

  Would there be at least one file in the backup set, of which the
  *uncompressed* size is large in comparison to the reserved space (->
  DfMaxUsagePct)?

Nothing large by today's standards - I don't back up any large
databases or video files.

  
  For the moment, that's the most concrete thing I can think of. Of course,
  writing to a temporary location might be fine an reading could fail (you
  haven't modified your BackupPC code to use a signal handler for some 
  arbitrary
  purposes, have you? ;-). Or your Perl version could have an obscure bug that
  occasionally trashes the contents of a string. Doesn't sound very likely,
  though.
  
  What *size* are the original files?

About half are attrib files of normal directories, so they are quite
small. One I just checked was a kernel Documentation file of under 20K.

  
  Ah, yes. How many backups are (or rather were) you running in parallel? Noone
  said the RStmp needs to be created by the affected backup ...

I don't run more than 2-3 in parallel.
And again my disk is far from full (about 60% of a 250GB partition)
and the files with errors so far all seem to be small.

I do have the partition mounted over NFS but I'm now using an updated
kernel on both machines (kernel 2.6.32) so it's not the same buggy
stuff I had years ago with an old 2.6.12 kernel.

But still, I would think an NFS error would trash the entire file, not
just the data portion of a compressed file...

Looking at the timestamps of the bad pool files, the errors occurred in
the Feb-April time frame (note this pool was started in February) and
there have been no errors since then. But the errors are sprinkled
across ~10 different days during that time period.

[BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-04 Thread Jeffrey J. Kosowsky
After the recent thread on bad md5sum file names, I ran a check on all
my 1.1 million cpool files to check whether the md5sum file names are
correct.

I got a total of 71 errors out of 1.1 million files:
- 3 had data in them (though each file was only a few hundred bytes
  long)

- 68 of the 71 were *zero* sized when decompressed
 29 were 8 bytes long corresponding to zlib compression of a zero
 length file

 39 were 57 bytes long corresponding to a zero length file with an
 rsync checksum

Each such cpool file has anywhere from 2 to several thousand links

The 68 *zero* length files should *not* be in the pool since zero
length files are not pooled. So, something is really messed up here.

It turns out though that none of those zero-length decompressed cpool
files were originally zero length but somehow they were stored in the
pool as zero length with an md5sum that is correct for the original
non-zero length file.

Some are attrib files and some are regular files.

Now it seems unlikely that the files were corrupted after the backups
were completed since the header and trailers are correct and there is
no way that the filesystem would just happen to zero out the data
while leaving the header and trailers intact (including checksums).

Also, it's not the rsync checksum caching causing the problem since
some of the zero length files are without checksums.

Now the fact that the md5sum file names are correct relative to the
original data means that the file was originally read correctly by
BackupPC..

So it seems that for some reason the data was truncated when
compressing and writing the cpool/pc file but after the partial file
md5sum was calculated. And it seems to have happened multiple times
for some of these files since there are multiple pc files linked to
the same pool file (and before linking to a cpool file, the actual
content of the files are compared since the partial file md5sum is not
unique).

Also, on my latest full backup a spot check shows that the files are
backed up correctly to the right non-zero length cpool file which of
course has the same (now correct) partial file md5sum. Though as you
would expect, that cpool file has a _0 suffix since the earlier zero
length is already stored (incorrectly) as the base of the chain.

I am not sure what is going on with the other 3 files since I have yet
to find them in the pc tree (my 'find' routine is still running)

I will continue to investigate this but this is very strange and
worrying since truncated cpool files means data loss!

In summary, what could possibly cause BackupPC to truncate the data
sometime between reading the file/calculating the partial file md5sum
and compressing/writing the file to the cpool?



Re: [BackupPC-users] BUG SOLUTION: Can't call method getStats on an undefined value

2011-10-02 Thread Jeffrey J. Kosowsky
Gail Hardmann wrote at about 12:24:00 +0300 on Saturday, October 1, 2011:
  [BackupPC-users] BUG  SOLUTION: Can't call method getStats on an 
  undefined value
  Jeffrey: I have encountered exactly the same problem.  Thank you for your 
  bug solution. 
  
  I am trying to run BackupPC on a DNS-323 NAS machine (it's a linux ARM 
  machine) usinf Rsync and an rsync daemon.
  
  But firstly, it seems to me I need to solve the famous File::RsyncP - 
  module doesn't exist problem, which crops up here. (BTW, can you direct me 
  to a good link in that respect)?
  

I have run BackupPC on a DNS-323 but I actually run it under debian --
if you look on the DNS-323 forum, you can see that I have posted on
how one can reboot the machine into a debian kernel (not just a change
root although that would work too). You can then use standard debian
BackupPC packages. 

Note, however, that I found (and fixed) two bugs (in md5sum due to
32-bit alignment issues and also one in the Adler32 checksum
computation in File::RsyncP) -- see the archives for the fixes. I
would imagine that these bugs would also occur in the native DNS-323
kernel versions. The bugs are not fatal and don't destroy data but
they do give the wrong md5sum names and checksums.



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-10-02 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 05:39:15 +0200 on Sunday, October 2, 2011:
  Mike Dresser wrote on 2011-09-29 14:11:20 -0400 [[BackupPC-users] Fairly 
  large backuppc pool (4TB) moved with backuppc_tarpccopy]:
   [...] Did see a few errors, all of them were related to the attrib files,
   similar to Can't find xx/116/f%2f/fvar/flog/attrib in pool, will copy 
   file
   [...]
   Out of curiosity, where are those errors (the attrib in pool ones) 
   coming from?
  
  (which is a question, and a good one).
  
  I can't promise that this is the correct answer, but it's a possibility: 
  prior
  to BackupPC 3.2.0, *top-level* attrib files (i.e. those for the directory
  containing all the share subdirectories) were linked into the pool with an
  incorrect digest, presuming there was more than one share. This would mean 
  that
  BackupPC_tarPCCopy would not find the content in the pool, because it would
  look for a file with the *correct* digest (i.e. file name). Please note that
  your quote above does *not* reference a *top-level* attrib file (that would 
  be
  xx/116/attrib), and, beyond that, you don't seem to have multiple shares,
  so it might well be a different problem.
  
  According to the ChangeLog, Jeffrey should have pointed this out, because he
  discovered the bug and supplied a patch ;-).
  
  I notice this problem on my pool when investigating where the longest hash
  collision chain comes from: it's a chain of top-level attrib files - all for
  the same host and with different contents and thus certainly different 
  digests.

As Holger points out, the bug I reported and suggested a patch for
involved top-level attribs where you have more than one share. This
has been fixed in 3.2.0.

That being said, in the past I did find a couple of broken attrib file
md5sums out of many 100's of thousands, but I assumed at the time that
it was an artifact of some other messing around I may have been doing.

If you are finding missing pooled attrib files not in the top-level,
then it would be interesting to figure out what is causing it since
there may be a real bug somewhere (though again I haven't seen the
problem recently but I haven't really checked recently either).

If you want to troubleshoot, I would do the following:
- Look up the inode of the bad attrib file in the pc tree
- Check how many links it has
- Assuming it has nlinks > 1, search for that inode in the pool using,
  say: find topdir/cpool -inum <inode number>
- If the file is indeed hard-linked into the pool, calculate the
  actual partial md5sum of the file (not the *nix md5sum) using say
  one of my routines. Check to see if the calculated partial file
  md5sum matches the pool file name. Presumably it should be
  different.
- If the file is not there, then that is another issue.

- Also, look back through your logs to see when the attrib file was
  actually created and written to the pool. See if anything is
  weird/wrong there
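
A minimal sketch of the first couple of checks (hypothetical attrib
path; substitute your own TopDir, host and backup number):

    attrib=/var/lib/BackupPC/pc/somehost/116/f%2f/fvar/flog/attrib
    stat -c 'inode=%i links=%h' "$attrib"
    # If the link count is greater than 1, check that the other link(s)
    # really are under cpool:
    find /var/lib/BackupPC/cpool -inum "$(stat -c %i "$attrib")"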

Assuming that you do have a real issue with non-top-level attrib file
md5sums or pool links, it would be interesting to see if anybody has
encountered the same problem in versions >= 3.2.0.

  
   I still have the old filesystem online if it's something I 
   should look at.
  
  I don't think it's really important. If the attrib file was not in the pool
  previously, then that may simply have wasted a small amount of space. As I
  understand the matter, the file will remain unpooled in the copy. You could
  fix that with one of Jeffrey's scripts or just live with a few wasted bytes.
  If you are running a BackupPC version < 3.2.0, pooling likely won't work for
  those attrib files anyway.
  
  It might be interesting to determine whether the non-top-level attrib files
  you got errors for are also, in fact, pooled under an incorrect pool file
  name, though that would involve finding the pool file by inode number and
  calculating the correct pool hash (or ruling out the existance of a pool file
  due to a link count of 1 :-).

Agreed - see my suggestion above



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-10-02 Thread Jeffrey J. Kosowsky
Mike Dresser wrote at about 10:51:14 -0400 on Sunday, October 2, 2011:
  Can you point me to those?  My method was to rsync the everything but the 
  pc dir, and then used backuppc_tarpccopy to create a tar file of the 
  hardlinks.. I probably could have saved a few days by directly extracting 
  it to the final dir, but rsync was still finishing up a few files.
  

See:
https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_CopyPcPool



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-10-02 Thread Jeffrey J. Kosowsky
Mike Dresser wrote at about 15:44:32 -0400 on Sunday, October 2, 2011:
  On Sun, 2 Oct 2011, Jeffrey J. Kosowsky wrote:
  
   If you want to troubleshoot, I would do the following:
  
  I'm currently running 3.1.0, so that probably answers why I'm seeing 
  these.  Thought I was on 3.2 for some reason, I might try dpkg -i'ing in 
  3.2.1 from wheezy(testing) on a test system and see if it works on 
  squeeze(stable).  Looks like it requires a newer libc than what's 
  available in stable though.
  
  Anyways, i picked a file at random, and got
  
  -rw-r-  1 backuppc backuppc   35 Sep 18 01:56 attrib
  
  so there's only one nlink.

2 follow-ups would be helpful:
1. Is this true for the other non-top-level attrib files?
2. Do the other f-mangled files in the directory also have only a
   single link?
   This would happen if there is a problem in the linking stage. In
   fact, it's harder for me to understand how you would get unlinked
   attrib files while having the f-mangled data files linked properly.

Still, would be interesting for you to check both to make sure we know
the source of the problem and more importantly for you to make sure
you are not having a more general pool file linking issue...
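
Both are quick to check with find (substitute your TopDir and host;
per-backup metadata such as backupInfo and the XferLOG files
legitimately have a single link and can be ignored):

    # attrib files that are not linked into the pool (link count of 1):
    find /var/lib/BackupPC/pc/somehost -name attrib -links 1
    # any other mangled files in the same state:
    find /var/lib/BackupPC/pc/somehost -type f -name 'f*' -links 1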



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-10-02 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 04:48:10 +0200 on Monday, October 3, 2011:
  You might want to use parts of either Jeffrey's or my script. Jeffrey builds 
  a
  pool of information which files use which inode. I build a file with pretty
  much the same information. Both don't need to be stored on the pool FS they
  refer to (because they're not links to content, they're files with
  information). That way, you could iterate over the pool *once* and reuse the
  information multiple times.
  
  My method has the downside that you need to sort a huge file (but the 'sort'
  command handles huge files rather well). Jeffrey's method has the downside
  that you have an individual file per inode with typically probably only a few
  hundred bytes of content, which might end up occupying 4K each - depending on
  file system. Also, traversing the tree should take longer, because each file
  is opened and closed multiple times - once per link to the inode it 
  describes.
  Actually, a single big file has a further advantage. It's rather fast to look
  for something (like all non-pooled files) with a Perl script. Traversing an
  ipool is bound to take a similar amount of time as traversing pool or cpool
  will.

Holger, have you ever compared the time on actual data?

Just one nit, I do allow for caching the inode pool so frequently
referenced pool files do not require the corresponding inode pool file
to be opened repeatedly.

Also, I would think that with a large pool, the file you construct
would take up a fair bit of memory and that the O(n log n) to search it
might take more time than referencing a hierarchically structured pool,
especially if the file is paged to disk.  Of course, the above would
depend on things like size of your memory, efficiency of file system
access, and cpu speed vs. disk access. Still would be curious though...

Finally, did you ever post a working version of your script? I have
heard you mention it (and we both discussed the approach several years
ago), but I don't remember seeing any actual code...



Re: [BackupPC-users] Search for File

2011-10-01 Thread Jeffrey J. Kosowsky
Tim Connors wrote at about 11:15:31 +1000 on Thursday, September 29, 2011:
  On Wed, 28 Sep 2011, Timothy J Massey wrote:
  
   Arnold Krille arn...@arnoldarts.de wrote on 09/28/2011 11:20:57 AM:
  
 I'm sure someone with more shell-fu will give you a much better
   command
 line (and I look forward to learning something!).
   
Here you are:
   
find path_where_to_start -iname string_to_search
  ...
Using find you will realize that its rather slow and has your disk
   rattling
away. Better to use the indexing services, for example locate:
   
locate string_to_search
  
   Yeah, that's great if you update the locate database (as you mention).  On
   a backup server, with millions of files and lots of work to do pretty much
   around the clock?  That's one of the first things I disable!  So no
   locate.
  
  Hmmm.
  
  When I want to search for a file (half the time I don't even know what
  machine or from what time period, so I have to search the entire pool), I
  look at the mounted backuppcfs fuse filesystem (I mount onto /snapshots):
  https://svn.ulyssis.org/repos/sipa/backuppc-fuse/backuppcfs.pl

I too would recommend backuppc-fuse - though the one disadvantage is
that it is a lot slower than a native search through the pc tree, since
the directories need to be reconstructed from the relevant partials and
fulls (which is a *good* thing, but slow).



Re: [BackupPC-users] Search for File

2011-10-01 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 10:30:18 -0400 on Wednesday, September 28, 2011:
  Gerald Brandt g...@majentis.com wrote on 09/28/2011 10:15:12 AM:
  
   I need to search for a specific file on a host, via backuppc.  Is 
   there a way to search a host backup, so I don't have to manually go 
   through all directories via the web interface?
  
  The easiest, most direct way of doing that would be:
  
  cd /path/to/host/pc/directory
  find . | grep ffilename
  

I think it would generally be faster to do:
  find . -name ffilename

This still may have a problem in that the f-mangling *also* converts
non-printable ascii characters (and also whitespace and /) into %hex
codes. So, if your filename contains any of those characters, you need
to write the search term in that encoded form.
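
For example, '%' itself is one of the escaped characters (it becomes
%25), so a file literally named 50%off.txt (a hypothetical name) would
be found with:

    find . -name 'f50%25off.txt'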

Also, you need to be careful about incrementals vs. fulls since incrementals
will include only the most recently changed files while fulls might
not include the latest version if there are subsequent incrementals.

You can avoid both of the above problems by using backuppc-fuse as
pointed out by another respondent, though it may be slower.



Re: [BackupPC-users] How to run BackupPC_copyPCPool.pl without a BackupPC installation

2011-10-01 Thread Jeffrey J. Kosowsky
Matthias Meyer wrote at about 00:04:50 +0200 on Saturday, October 1, 2011:
  Hi (Jeff ;-)
  
  I would like to try your BackupPC_copyPCPool.pl to backup my BackupPC 
  storage to another server.
  
  Unfortunately this other server have no BackupPC installed.
  I've copied FileZIO.pm, Lib.pm, jLib.pm, Attrib.pm and Storage.pm from 
  /usr/share/backuppc/lib/BackupPC as well as Text.pm from 
  /usr/share/backuppc/lib/BackupPC/Storage onto this server.
  
  ~# sudo -u backuppc /usr/share/backuppc/bin/BackupPC_copyPCPool.pl
  No language setting
  BackupPC::Lib->new failed
  
  Is it possible to set the language without installing the whole BackupPC 
  package?
  

Well, if you look in Lib.pm, the call to set the language is in
ConfigRead, which is called from BackupPC::Lib->new.

I suppose one could hack Lib.pm but there are probably other hidden
gotchas so I think a minimal install would be worthwhile (and is very
easy).




Re: [BackupPC-users] How does BackupPC rsync over ssh work?

2011-09-26 Thread Jeffrey J. Kosowsky
maar...@tepaske.net wrote at about 10:29:16 +0200 on Monday, September 26, 2011 
 So I am currently writing some scripts for my backup needs. Which made
  me wonder, BackupPC essentially starts a backup like this:
  
  /usr/bin/ssh -4 -q -l backuppc host sudo /usr/bin/rsync --server --sender 
  --numeric-ids --perms --owner --group -D --links --hard-links --times 
  --block-size=2048 --recursive . directory
  
  But how does this really work? I understand rsync starts in server mode
  on the host that is to be backed up, but I don't see a rsync process
  being started on the BackupPC server.
  

Well, remember that BackupPC uses its own Perl-based client
implementation of rsync, perl-File-RsyncP, which implements a subset of
rsync functionality and talks the rsync protocol to the rsync process
on the remote end.

The line you quote above ssh's into the remote client and starts up a
rsync process there. But of course that ssh line doesn't start the
local rsync process that you need to communicate with the remote
process.

If you are trying to run rsync standalone over ssh without using
BackupPC, then you should invoke rsync over ssh the *normal* way which
starts up the process locally and then uses ssh to start the remote
rsync process.

Specifically, if you want all the same flags as BackupPC uses, try
something like:

sudo /usr/bin/rsync --numeric-ids --perms --owner --group -D --links \
    --hard-links --times --block-size=2048 --recursive \
    backuppc@host:directory .

But this is really just standard rsync usage. Also, manually setting
the block-size doesn't make sense if not using BackupPC (where it's
set that way for checksum caching and actually ignored) since the
automatic default setting will generally be more efficient. There may
be other options you want to add or delete.

But this has nothing really to do with BackupPC...



Re: [BackupPC-users] How does BackupPC rsync over ssh work?

2011-09-26 Thread Jeffrey J. Kosowsky
Tim Fletcher wrote at about 19:53:25 +0100 on Monday, September 26, 2011:
  On Mon, 2011-09-26 at 15:37 +0200, Maarten te Paske wrote:
  
   OK, I will read a bit more into the rsync documentation. I thought this
   way I wouldn't be able to limit the privileges through sudo, but maybe
   I'm wrong.
  
  I use the following line in /etc/sudoers to allow the user backups to
  call rsync as root. Note this line does break rsync based recovery, but
  this is part of the plan as it prevents the backuppc server writing to
  the client being backed up.
  
  backups ALL=NOPASSWD: /usr/bin/rsync --server --sender *
  
  You will also need to remove the requiretty option from /etc/sudoers
  too.

You can do this more *selectively* by instead *adding* the following
line:
Defaults:backups !requiretty
where I am assuming 'backups' is the backuppc user name for you.
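
So the relevant sudoers fragment (edit it with visudo) ends up looking
something like this -- just Tim's rsync line plus the per-user default
above, still assuming the user is called 'backups':

    Defaults:backups !requiretty
    backups ALL=NOPASSWD: /usr/bin/rsync --server --sender *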



Re: [BackupPC-users] archiving request backuppc process always fails

2011-09-22 Thread Jeffrey J. Kosowsky
Markus Fröhlich wrote at about 18:43:01 +0200 on Thursday, September 22, 2011:
  BackupPC processes run as user wwwrun - this is the apache user -
  because of the permissions needed to edit the configuration via the
  web interface. The archive request gets started by a cronjob and a
  small script once a week:
    sudo -u wwwrun /usr/local/BackupPC/bin/BackupPC_archiveStart \
        archive-tape xadmin $XALL_HOSTS
  where the variable XALL_HOSTS contains all the hosts of the BackupPC server.
  

As has been pointed out several times before on the list, making
BackupPC run as the apache user is potentially a HUGE security hole
since it may end up allowing anybody to have permission to read any of
the backups...
BackupPC should be run as a *separate*, secure user.



Re: [BackupPC-users] Backups very slow after upgrade to squeeze

2011-09-13 Thread Jeffrey J. Kosowsky
James L. Evans wrote at about 13:50:38 -0400 on Monday, September 12, 2011:
  After further experimenting, it appears that with squeeze (unlike lenny) 
  that running BackupPC_nightly and BackupPC_dump at the same time is the 
  problem. After changing the schedule so that BackupPC_nightly runs hours 
  before the backups start that everybody is happy.

Not sure why that would be true. Do they use the same versions of BackupPC?



Re: [BackupPC-users] md4 doesn't match even though VSS is used

2011-09-13 Thread Jeffrey J. Kosowsky
Matthias Meyer wrote at about 00:21:25 +0200 on Wednesday, September 14, 2011:
  Jeffrey J. Kosowsky wrote:
  
   Matthias Meyer wrote at about 15:29:09 +0200 on Sunday, September 11,
   2011:
 Dear all,
 
 I've a problem by backing up a large file (9GB) because the internet
 connection of the client interrupts every 24 hours.
 
 BackupPC (V3.1.0) can rsync this file once with status:
 md4 doesn't match: will retry in phase 1; file removed
 With lsof /var/lib/backuppc I can see this phase 1 transfer some
 minutes later.
 But the internet connection will interrupt shortly before this second
 transfer were finished :-(
 
 I am sure that the source file (Windows client) is on a volume shadow
 copy and rsync is using this source because:
 - /etc/rsyncd.conf contains only one path = /cygdrive/Z/
 - ls /cygdrive/ shows only the drives C and D
 - ls /cygdrive/Z lists the same files as ls /cygdrive/C
 
 So it should not possible that the source was changed.
 
 Did /usr/share/backuppc/lib/BackupPC/Xfer/RsyncFileIO.pm compare the
 md4 diggest from the begin of a transfer with a recalculated md4
 diggest at the end of the transfer?
 
 Somebody else have a similiar problem?
 
 Is there any known solution to solving my problem?
 
 What happens if I patch the RsyncFileIO.pm so that it will ignore the
 md4 doesn't match?
 
 I know I should try it instead asking for it. But I'm not sure what the
 meaning of md4 is and hopefully someone can give me a hint.
 
   
   I would not ignore the md4 mismatch. md4sums are used on block sums
   and file sums both to ensure the transfer was errorless and also as
   part of rsync's delta matching algorithm that allows it to only
   transfer over changed blocks.
   
   I'm not sure what is the cause of your problem. But I would first try
   naked rsync (without BackupPC) and with protocol manually set to
   protocol 28 so that md4 sums are used rather than md5sums. See what
   happens when you try to transfer the file...
   
   
  I've got the file with native rsync --protocol=28 from the client without an 
  error.
  In the next step I backup this file with BackupPC onto one machine within my 
  LAN and cp -al the directory into the last backup set from the original 
  client. Thats work. The original client run the next backup without an 
  error.
  
  Maybe my previous error was that I copied the file without the attrib-file 
  into the backup set of the original client.
  

Well, officially one should never manually add/change/delete
individual files in past backups.

As many people know, I wrote a program to allow for manual deletions
(BackupPC_deleteFile.pl), but that involves all the trickiness of
changing attrib files and dealing with incremental inheritance. Adding
or moving files has the potential for similar issues.

Just manually copying/moving/deleting files in the pc-tree is a
*really* bad idea. 

That being said, I'm not exactly sure why such a change would cause
the error you report except that if you have checksum caching turned
on, the filesize mentioned in the attrib file is used to determine
where the file contents end and the md4 checksums are stored. I know
in my routines if the filesize is missing, I manually calculate the
size of the file but I'm not sure if that happens also in all instances
in BackupPC. There is also the possibility that the attrib file
contains a record for a similarly named file but with a different size
-- this could very well cause md4sum errors since the location of the
md4sums and their values themselves may be different...



Re: [BackupPC-users] md4 doesn't match even though VSS is used

2011-09-11 Thread Jeffrey J. Kosowsky
Matthias Meyer wrote at about 15:29:09 +0200 on Sunday, September 11, 2011:
  Dear all,
  
  I have a problem backing up a large file (9 GB) because the client's
  internet connection drops every 24 hours.
  
  BackupPC (V3.1.0) can rsync this file once with status:
  md4 doesn't match: will retry in phase 1; file removed
  With lsof /var/lib/backuppc I can see this phase 1 transfer some minutes
  later.
  But the internet connection drops shortly before this second transfer is
  finished :-(
  
  I am sure that the source file (Windows client) is on a volume shadow copy
  and rsync is using this source because:
  - /etc/rsyncd.conf contains only one path = /cygdrive/Z/
  - ls /cygdrive/ shows only the drives C and D
  - ls /cygdrive/Z lists the same files as ls /cygdrive/C
  
  So it should not be possible that the source has changed.
  
  Does /usr/share/backuppc/lib/BackupPC/Xfer/RsyncFileIO.pm compare the md4
  digest from the beginning of a transfer with a recalculated md4 digest at
  the end of the transfer?
  
  Does somebody else have a similar problem?
  
  Is there any known solution to my problem?
  
  What happens if I patch RsyncFileIO.pm so that it ignores the "md4 doesn't
  match" error?
  
  I know I should try it instead of asking, but I'm not sure what the meaning
  of md4 is and hopefully someone can give me a hint.
  

I would not ignore the md4 mismatch. md4sums are used on block sums
and file sums, both to ensure the transfer was error-free and as
part of rsync's delta-matching algorithm that allows it to transfer
only the changed blocks.

I'm not sure what the cause of your problem is. But I would first try
naked rsync (without BackupPC) and with the protocol manually set to
28 so that md4 sums are used rather than md5 sums. See what
happens when you try to transfer the file...
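
For reference, the test could be as simple as the following sketch (the
host, module and file names are placeholders, not your actual setup):

#!/usr/bin/perl
# Pull the problem file with plain rsync, forcing protocol 28 so that
# md4 sums are used (matching what BackupPC does), and report the
# rsync exit status.
use strict;
use warnings;

my @cmd = ("rsync", "-av", "--protocol=28",
           "client::share/path/to/bigfile", "/tmp/");
system(@cmd) == 0
    or die "rsync failed: exit code ", $? >> 8, "\n";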



Re: [BackupPC-users] Round-tripping config.pl between hand-coding and the web interface

2011-09-09 Thread Jeffrey J. Kosowsky
hans...@gmail.com wrote at about 20:09:02 +0700 on Friday, September 9, 2011:
  I realize that, and thought my posting details on my precautionary
  procedures would sufficiently demonstrate my awareness of the fact
  that I'm living on the edge.
  
Please take this as constructive criticism, but perhaps as an admitted
noobie to BackupPC, Perl, and Linux (combined with the fact that you
are not a big fan of structured problem solving), you might be better
off stepping back from the edge a tad until you are at least
comfortable on stable ground... It certainly would give the list and
poor Holger in particular a little bit of a rest and breathing room :P




Re: [BackupPC-users] BackupPC_copyPcPool error

2011-09-09 Thread Jeffrey J. Kosowsky
Tyler J. Wagner wrote at about 11:58:00 +0100 on Friday, September 9, 2011:
  On 2011-09-05 17:38, Jeffrey J. Kosowsky wrote:
   You probably want to read the documentation under --help (and also
   perhaps at the head of the executable). But you probably want to use
   the --fixlinks|-f option which will create links between any pc file
   that is not linked to the pool to either the appropriate existing pool
   element (if it's there but not linked) or it will create a new
   properly named pool element linked to the pc file if the pool element
   doesn't already exist.
  
  Thanks for your help, Jeffrey. I think the problem is that the ipool wasn't
  fully created on a previous run. I assumed that the ipool would be updated,
  but it isn't. Does it have to be completely recreated with every run if the
  pool has changed since the last run?

Yes - there is no other way to know since BackupPC_nightly can
renumber chains and since pool files can be added/deleted in between
which can reuse old inode numbers.

I modified the run-time warning to be more explicit that you need to
recreate the Ipool if the pool has changed. Also, I added a note to
this effect in the --help documentation.

  
  Also, running the latest version produces this output:
  
  Argument 0.4.0 isn't numeric in subroutine entry at
  /usr/local/bin/BackupPC_copyPcPool.pl line 139.
  
  This is on backuppc 3.2.0-3ubuntu4~maverick1 and also backuppc
  3.2.0-3ubuntu4~lucid1, with jLib 0.4.0.
  

Interesting, you must have a version of Perl that is more dogmatic
about how v-strings are specified. But I'm not sure why you are first
encountering the problem now, since the versioning code didn't change
from the previous version. I did make a change to specify it
explicitly as a v-string, so hopefully that will help.
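
For illustration only (this is not necessarily what jLib does, just the
general shape of the problem): using the string "0.4.0" in a numeric
comparison is what triggers the "isn't numeric" warning, whereas the
core version module compares cleanly.

#!/usr/bin/perl
# "0.4.0" used as a plain number triggers "Argument ... isn't numeric";
# version objects (core since Perl 5.10) compare without complaint.
# The values below are placeholders.
use strict;
use warnings;
use version;

my $have = version->parse("0.4.0");     # what the module reports
my $need = version->parse("v0.4.0");    # minimum required version
print "version OK\n" if $have >= $need; # overloaded comparison, no warning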

I will send you the code in a separate email.
I would appreciate it if you could test it out on your system to see
if the versioning now works...



Re: [BackupPC-users] Round-tripping config.pl between hand-coding and the web interface

2011-09-09 Thread Jeffrey J. Kosowsky
hans...@gmail.com wrote at about 21:28:24 +0700 on Friday, September 9, 2011:
  Anyway, I'll try to keep quiet for a while, in particular not
  discussing my ideas to have bootable HDDs implementing BackupPC-based
  per-client personal Time Machines. 8-)
  
  Can anyone suggest a more suitable list/forum for discussing such
  speculative topics, just tossing ideas around, as opposed to specific
  break/fix situations?

Well it depends... there is a development version of this list to
discuss development issues though it is low traffic and may not have a
lot of subscribers.

But MORE IMPORTANTLY, I think the point is that this list is staffed
by volunteers who have limited time and resources. We are eager to
help people who have specific, well-thought out, and BackupPC-related
questions. Also, people who contribute bug fixes or (working & tested)
new code are welcome.

However, what is much less welcome are people who just suggest
ill-defined or loosely-defined ideas that may have only narrow
relevance and then expect the list to flesh-out and translate their
ideas into something that actually makes sense and is doable.

In other words, if you want to invest the effort in fleshing-out,
developing, and testing an extension to BackupPC or a new usage case,
that's great and in most cases very welcome. But if your intention is
to just loosely formulate and throw-out some vague ideas and then
expect the list to do the hard work for you of fleshing it out and
implementing it, such contributions are typically less welcome --
unless perhaps you are addressing a topic that is of clear broad
interest to many people.




[BackupPC-users] OFFTOPIC Re: WINS server (nmbd) puzzle

2011-09-06 Thread Jeffrey J. Kosowsky
Kenneth Porter wrote at about 14:35:08 -0700 on Monday, September 5, 2011:
  My client Windows XP boxes are failing to register with my WINS server
  (running nmbd from Samba). I'm puzzled how to figure out what I'm doing
  wrong.
  
  I'm setting up BackupPC to back up my Windows clients using rsync. I've 
  installed cwRsync to the clients. BackupPC uses nmblookup to find the 
  client's IP address given its Windows NETBIOS name.
  
  I'm distributing the WINS server address via DHCP and see it on the client
  using ipconfig /all. I can run tcpdump on the server (filtering for this
  client and the NETBIOS port) and see the register/response sequence at UDP
  137:
  
  MULTIHOMED REGISTRATION; REQUEST; UNICAST
  REGISTRATION; POSITIVE; RESPONSE; UNICAST
  REGISTRATION; REQUEST; UNICAST
  REGISTRATION; POSITIVE; RESPONSE; UNICAST
  
  If I signal the nmbd process with SIGHUP to make it dump its table to
  nmbd.log, I don't see client in the list. I do see the server and my
  Windows Active Directory server's records in the list, but no clients.
  
  nmblookup finds the client by broadcast (the client responds with the
  record) but nmblookup -U 127.0.0.1 says there's no record of it. (A query
  for the server's own record finds it, so I know the WINS service is at
  least working to that degree.)
  
  So why is nmbd not remembering client records?
  
  One thing I realized is that the Samba server is configured for one 
  workgroup, and the client in question is in a different workgroup. Will 
  nmbd not record records for workstations outside its workgroup?

What does this have to do with BackupPC?
Seems like you have a Windows and/or SAMBA problem - I suggest you
post to the relevant lists.



Re: [BackupPC-users] BackupPC_copyPcPool error

2011-09-05 Thread Jeffrey J. Kosowsky
Tyler J. Wagner wrote at about 09:46:39 +0100 on Monday, September 5, 2011:
  On 2011-09-02 18:06, Jeffrey J. Kosowsky wrote:
   Why do you assume something is wrong with how you are using the
   program?
   
   The error message is saying that you have a bunch of files in the pc
   tree that are linked to each other but not to the pool (actually there
   is a small bug in my program in that the attrib file should say "VALID
   pc file", not "INVALID pc file" - I will fix that). Such cases can
   happen when a backup fails to link to the file. Alternatively, it
   could mean the attrib file is broken (in an upcoming version, I will
   separate these cases...)
  
  I assumed it was in error because that was the first backup of the first
  host, and I'd never received any indication that it failed to link.
  
   Can you confirm if those files are indeed in the pool?
  
  The files are not in the pool. How can I manually force them to link?

You probably want to read the documentation under --help (and also
perhaps at the head of the executable). In short, you want the
--fixlinks|-f option: for any pc file that is not linked to the pool,
it either links the file to the appropriate existing pool element (if
it's there but not linked), or it creates a new, properly named pool
element linked to the pc file if no pool element exists yet.
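
Schematically, the decision --fixlinks makes for a single unlinked file
looks like this (a simplified sketch; the real code also verifies that
the contents actually match and walks md5sum collision chains):

#!/usr/bin/perl
# Simplified illustration of relinking one pc-tree file to the pool.
# $pcFile and $poolFile are placeholders supplied on the command line.
use strict;
use warnings;

my ($pcFile, $poolFile) = @ARGV;
if (-f $poolFile) {
    # A pool copy already exists but the backup isn't linked to it:
    # replace the pc file with a hard link to the pool copy.
    unlink($pcFile)          or die "unlink $pcFile: $!\n";
    link($poolFile, $pcFile) or die "link: $!\n";
} else {
    # No pool copy yet: make this very file the new pool element.
    link($pcFile, $poolFile) or die "link: $!\n";
}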

I am attaching a slightly updated version of the executable that makes the
error reporting a little more clear for non-linked pc files:


BackupPC_copyPcPool.pl
Description: Binary data

Good luck and please let me know how this works out for you...



Re: [BackupPC-users] first full never completes

2011-09-03 Thread Jeffrey J. Kosowsky
hans...@gmail.com wrote at about 14:18:41 +0700 on Saturday, September 3, 2011:
  On Sat, Sep 3, 2011 at 11:09 AM, Timothy J Massey tmas...@obscorp.comwrote:
  
   But would probably be a very good idea.  What would be an even better idea
   would be to grab a spare PC (or a virtual guest) and test it from a
   completely clean installation.  And document the *heck* out of what you do:
you *will* be doing it again (and again and again).
  
  
  Well the whole thing is a test system, and I'm not that concerned with
  figuring out what went wrong vs moving forward, so I guess I'll just wipe
  and restart with a clean OS.
  
  Since I want to use the BackupPC 3.1 package (eventual production system
  will be on CentOS5), while I'm at it I'll use the Ubuntu version it's
  designed for, Lucid 10.04, rather than the latest Natty 11.04.
  
  Hopefully will eliminate the problems I'm seeing un/re- installing from
  the package system.
  
  I plan to keep the pool folders and of course my long-tweaked config.pl, but
  will start off from the clean install with as close to defaults as possible
  with a small static target share to test with, then make the changes a
  little at a time only after I've got the basics working right.
  
  Which as you say I should've done from the start. . .
  
  In the meantime there are a few unanswered questions in the thread above; if
  anyone has the information to contribute more detailed responses, I'm sure it
  will help others googling later on. . .

Just a piece of friendly advice... you seem to have posted dozens of
posts in the past 24 hours or so... you keep making multiple, often
non-standard or nonsensical changes to a standard
configuration... and are asking multiple questions as you dig yourself
deeper.

Why don't you pursue this in a rational and organized approach? Get
the basic system working with no modifications. Verify that it works,
play with it, and get comfortable with the default setup and
behaviors. Then step-by-step make one change at a time. If the change
works as expected, then move on to the next change. If it doesn't then
you know the exact source of the problem and can either troubleshoot
it yourself (ideal) or ask a specific question to the list.

What you are doing now is confusing yourself and probably most of the
readers of the list. Pretty soon people will get tired of answering
you or will lose track of all the questions and changes you have made
meaning that they won't be around to help you when you really need it.

Thanks



Re: [BackupPC-users] BackupPC_copyPcPool error

2011-09-02 Thread Jeffrey J. Kosowsky
Tyler J. Wagner wrote at about 17:40:16 +0100 on Friday, September 2, 2011:
  Hi all (well, Jeff, really),
  
  I'm trying to copy a very large pool (1 TB, 70 hosts, about 20 backups
  each) to another server with a bigger disk array. I'm using
  BackupPC_copyPcPool. However, on running it I get a lot of errors.
  
  cd /var/local/backuppc
  su - backuppc -c /usr/local/bin/BackupPC_copyPcPool.pl -o copypooloutput
  
  ...
  
  ERROR:
  pc/server1/620/f%2f/flib/fmodules/f2.6.24-27-generic/fkernel/fdrivers/fvideo/fnvidia/fnvidiafb.ko
  (inode=48614396, nlinks=70) VALID pc file NOT LINKED to pool
  ERROR:
  pc/server1/620/f%2f/flib/fmodules/f2.6.24-27-generic/fkernel/fdrivers/fvideo/fnvidia/attrib
  (inode=48614398, nlinks=70) INVALID pc file and UNLINKED to pool
  ERROR:
  pc/server1/620/f%2f/flib/fmodules/f2.6.24-27-generic/fkernel/fdrivers/fvideo/ftridentfb.ko
  (inode=12629504, nlinks=83) VALID pc file NOT LINKED to pool
  
  Yes, my TopDir is /var/local/backuppc, but I have a symlink from
  /var/lib/backuppc as well.
  
  Can anyone tell me what's going wrong? Am I invoking BackupPC_copyPcPool
  incorrectly?

Why do you assume something is wrong with how you are using the
program?

The error message is saying that you have a bunch of files in the pc
tree that are linked to each other but not to the pool (actually there
is a small bug in my program in that the attrib file should say "VALID
pc file", not "INVALID pc file" - I will fix that). Such cases can
happen when a backup fails to link to the file. Alternatively, it
could mean the attrib file is broken (in an upcoming version, I will
separate these cases...)

Can you confirm if those files are indeed in the pool?

You can use the following script to find the MD5sum name and then look
in the pool tree to see if the file exists and the inodes are the
same.



#!/usr/bin/perl
use strict;
use warnings;

use lib "/usr/share/BackupPC/lib";
use BackupPC::Lib;
use BackupPC::jLib;
use Digest::MD5;

my $bpc = BackupPC::Lib->new("", "", "", 1)     # no user check
    || die("BackupPC::Lib->new failed\n");

my $md5 = Digest::MD5->new;
my @result = $bpc->File2MD5($md5, $ARGV[0]);
print "$result[0]\n";
#print "$result[0]\t$result[2]\n";



Re: [BackupPC-users] Linux backups with rsync vs tar

2011-09-02 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 10:43:37 -0400 on Friday, September 2, 2011:
 
  Your old backups should be 100% fine.  They will remain in the pool just 
  fine, etc.  I do not believe that files transferred by rsync will pool 
  with files transferred by tar (due to the attribute issue you mention); 
  however, for you that's a moot point:  90% of your files don't pool, 
  anyway.

Why do you think they won't pool?
Pooling is based on file *content*. Attributes are stored in
separate 'attrib' files. Even so, I'm not sure why the basic file
attributes would differ between rsync and tar -- but even if they did,
it would only mean that the attrib files wouldn't pool with
old attrib files, and that's typically a small proportion of the pool
by volume.

The only issue I can imagine is with rsync checksums. I'm not sure
what happens with such files when you move from rsync to tar. I would
hope that it would still pool them properly either by ignoring or
deleting the checksums at the end of the file. Again, the actual file
contents (which don't include the checksums obviously) are the same
between rsync and tar.
  
  This is not a *bad* thing.  Every single one of my backup servers is based 
  on BackupPC, and all but maybe 2 shares are backed up using rsync.  (The 
  only exceptions I can think of are where I'm backing up data on a NAS, and 
  I can't or won't run rsyncd on the NAS so I have to use SMB).  Whether 
  it's an advantage or disadvantage, that's the setup I use.  I vastly 
  prefer consistency over performance.  But I can live with 8 hour backup 
  windows.

Why not run rsyncd on a NAS? It works fine and is reasonably fast even
on low end arm-based devices with minimal memory (e.g., 64MB).



Re: [BackupPC-users] Feature request

2011-08-25 Thread Jeffrey J. Kosowsky
Carl Wilhelm Soderstrom wrote at about 07:51:10 -0500 on Thursday, August 25, 
2011:
  On 08/25 07:50 , Brad Alexander wrote:
   Really a small thing, but when doing a restore, and you save as a .zip or
   .tar, instead of defaulting to a generic and non-descriptive filename of
   restore.{tar|zip}, how about something more descriptive, such as
   hostname-filesystem-date.{tar|zip}?
  
  I second this request!
  I believe filenames should always be as descriptive as is reasonable.
  Unfortunately my perl-fu is pretty weak as well.

I would make it consistent with the hierarchy:
hostname-backup#-share
I'm not sure what date adds since the date is irrelevant unless you
are referring to the date of the snapshot in which case it would be an
alternative to backup#.
Also, share should be optional in case the restore is done at the
host level.



Re: [BackupPC-users] Feature request

2011-08-25 Thread Jeffrey J. Kosowsky
Carl Wilhelm Soderstrom wrote at about 08:31:01 -0500 on Thursday, August 25, 
2011:
  On 08/25 09:23 , Jeffrey J. Kosowsky wrote:
   I would make it consistent with the heirarchy:
   hostname-backup#-share
   I'm not sure what date adds since the date is irrelevant unless you
   are referring to the date of the snapshot in which case it would be an
   alternative to backup#.
  
  I would prefer date, since the 'backup number' is only relevant within
  BackupPC's universe; whereas the date is relevant to the rest of the world.

My point was more that the date should refer to the time of the backup
not of the restore.

  
   Also, share should be optional in case the restore is done at the
   host level.
  
  Even when restoring an entire host, the share must be specified, correct?
  One problem is that '/' is a character with special meaning on the command
  line; and so we should avoid putting it in filenames. Suggestions?

I think one should use the encoding used for the share name in the pc
tree. There are characters other than '/' that could cause problems
such as special or foreign characters on one client that might not be
allowed or present on the server.
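
For illustration, one way to build such a name, mirroring the pc-tree
encoding for the share part (the sprintf pattern and the .tar suffix
are just assumptions for this sketch):

#!/usr/bin/perl
# Encode the share the way the pc tree does (%xx-escape '%', '/',
# newline and carriage return) and splice it into the archive name.
use strict;
use warnings;

my ($host, $num, $share) = ("myhost", 123, "/var/www");   # placeholder values
(my $enc = $share) =~ s{([%/\n\r])}{sprintf("%%%02x", ord($1))}ge;
my $name = sprintf("%s-%d-f%s.tar", $host, $num, $enc);
print "$name\n";    # myhost-123-f%2fvar%2fwww.tar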



Re: [BackupPC-users] BackupPC on client machine running on Fedora13

2011-08-21 Thread Jeffrey J. Kosowsky
jiahwei wrote at about 20:27:09 -0700 on Sunday, August 21, 2011:
  But somehow it doesn't backup my Home directory, even though I tried 
  indicate /home to the backup/

WHAT??? This sentence fragment means nothing... What is 'it'? What are
you talking about?

Are you replying to some other comment? What is the context?

  +--
  |This was sent by jiahwei_cvs...@hotmail.com via Backup Central.
  |Forward SPAM to ab...@backupcentral.com.
  +--

How many times do people need to be told that this is a mailing list
and not a forum? Unless you cut and paste the context, we have no clue
what you are talking about.

If you want to get help here then join the mailing list rather than
using Backup Central which is some type of hack that tries to pretend
the mailing list is a forum.

Craig: Is it possible to block mail originating from Backup Central
since it seems to just cause endless problems. Perhaps there could be
an auto-reply that asks the user to sign up on the mailing list...



[BackupPC-users] OFFTOPIC: Re: Windows7 vshadow error

2011-08-12 Thread Jeffrey J. Kosowsky
Jonathan Schaeffer wrote at about 15:17:15 +0200 on Friday, August 12, 2011:
  Hello,
  
  this is not direcly connected to backuppc itself, but I thought I would
  find people here with a good experience of my problem.
  
  I am backing up windows clients to a central server using the shadow
  copy system. It's alright for windows XP machines.
  
  But I have an error on Windows 7. To sum up the whole thing, it's just
  like :
  
  $ vshadow.exe C:
  
  Error during the last asynchronous operation.
  - Returned HRESULT = 0x80042318
  - Error text: VSS_E_WRITER_INFRASTRUCTURE
  - Please re-run VSHADOW.EXE with the /tracing option to get more details
  
  
  Of course, /tracing is not telling anything relevant (but I might be wrong).
  
  Besides, the service Volume Shadow Copy starts normally on the windows
  machine.
  
  I have to admit that I don't understand much about how windows is doing
  all this shadow copy stuff.
  
  Did someone come across this problem and know how to repair it?
  

Windows 7 uses a different version of vshadow than XP.
Also, the command-line arguments differ.



Re: [BackupPC-users] Backup to Multiple NAS

2011-08-11 Thread Jeffrey J. Kosowsky
David Uhlmann wrote at about 11:55:57 +0200 on Thursday, August 11, 2011:
  Dear all,
  
  I want to run a Backup to 2 NAS-Systems, attached via NFS. Because BackupPC 
  can do backups to only one folder, my idea is this:
  
  Create Directory /backup
  
  Run a cron job to mount the NAS via NFS at the /backup folder. After
  the job, umount that NAS and mount the other one.
  
  What do you think about that solution? 

That may not work properly unless your cron job also starts/stops the
backuppc daemon, since when backuppc starts it tries to create a hard
link in TopDir. So you at least need something mounted when you start
the process...

Also, how will you handle BackupPC_nightly?

Also, unless you also run the dumps from cron, you need to make sure
that each NAS is mounted long enough for the hourly wakeup
process...

Also, how will you know when all your dumps are done? Sometimes they
can take longer than you anticipate... especially since the dumps are
driven by aging, so it's possible the dumps won't even start when you
expect, let alone end.

Your method is certainly *possible* but it breaks the normal paradigm
and has at least the above issues to be concerned about.

It probably would be a lot simpler to just run BackupPC on a second
pc.
Alternatively, I believe there was a thread a couple years ago about
hacking the BackupPC code to allow a second concurrent BackupPC
process. It would be a cleaner approach too.



Re: [BackupPC-users] Migration/merge BPC hosts questions

2011-08-11 Thread Jeffrey J. Kosowsky
ft oppi wrote at about 13:05:07 +0200 on Thursday, August 11, 2011:
  Hello list,
  
  I've read the wiki and part of the list, but the solutions described there
  don't satisfy me completely, so I'm looking for something else.
  
  I have two old Linux servers running BackupPC 3.1.0 and I need to
  migrate/merge them to a single new server.
  Compression is enabled on both BPC. Cpools sizes are 500Gbytes and
  1200Gbytes for a hundred backuped hosts.
  FullKeepCnt is set to 8,0,12 (56 weeks) so I can't just put the new server
  and keep the others around for that time, I need to migrate everything to
  get rid of the old ones.
  I don't need to migrate all at once but I can't skip a day of backup
  (customer demands).
  
  The new host would be relying on ZFS with compression, deduplication and
  remote replication.
  
  Currently I plan to do this:
  1) fresh install of BackupPC + mimic config (ssh keys, schedules, etc) on
  new server,
  2) for each backuped host,
a) disable backup of host (BackupsDisable = 2),
b) plain tar pc/hostname from old to new server,
c) copy hostname.pl from old to new server,
d) add hostname to hosts file of new server,
  3) shutdown old server when all hosts have been migrated.
  
  It's basically the same process described in the wiki without:
  1) pre-copying the pool (it would take ages, one of the server only has a
  100Mbps internet connection)

I'm confused: copying the pool is too slow, yet you plan to plain-tar
the pc directory, which may be tens to hundreds of times larger (due
to pool deduplication)?

  2) using BackupPC_tarPCCopy (it does hard links against the pool, which
  wouldn't exist)
  
  My reasoning about this is I don't need hard links nor the pool anymore,
  thanks to ZFS deduplication.

That sounds right theoretically, though you would need to test it in
practice. 

  
  What would happen if I did that ?
  Would BackupPC regenerate the pool over time (with new backups
  coming in) ?

A *new* pool would be generated based upon any new backups. Old files
that exist only in the pc directory would obviously not be added to
the pool. The backups copied over from the old pc directory would
remain *unlinked*.

You could run a script to crawl through your pc directory to relink
the old backups. 
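
As a starting point, finding the affected files is straightforward: a
pc-tree file with a link count of 1 is definitely not in the pool.
(Files that are linked only to each other, but not to the pool, need
the fuller check that BackupPC_copyPcPool performs.) A minimal sketch,
with the pc path as a placeholder:

#!/usr/bin/perl
# Report pc-tree files whose link count is 1, i.e. files that are not
# linked into the pool at all.  Relinking them is what the --fixlinks
# option of BackupPC_copyPcPool automates.
use strict;
use warnings;
use File::Find;

my $pcdir = shift || "/var/lib/backuppc/pc";   # adjust TopDir/pc for your site
find(sub {
    my @st = lstat($_) or return;
    return unless -f _;            # regular files only (uses the lstat buffer)
    print "UNLINKED: $File::Find::name\n" if $st[3] == 1;
}, $pcdir);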

  Would I still be able to restore files, browse backups, etc ?
Yes.
  And finally, what would happen if I disabled compression on the new server ?
  I remember reading it would only affect new backups, and I would still be
  able to access old ones.
True.



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-09 Thread Jeffrey J. Kosowsky
friedmann wrote at about 06:19:35 -0700 on Tuesday, August 9, 2011:
  
  I also noticed a good clue pointing out the number of subdirectories.
  Hence, I have counted the number of subdirectories on the backed-up
  drive, using: find . -type d | wc -l, which returned:
  82779

Well, this by itself doesn't mean anything, since find looks at the
entire tree of subdirectories, not just one level (you would need
to use -maxdepth to limit the find).

  
  And, maybe more insightful, I have 32002 subdirectories in the same
  directory, i.e. in the most populated one (the one on which the
  backup process fails):
  /var/lib/backuppc/pc/192.168.2.105/new//f%2fmedia%2fRhenovia/fVBox_Data_until_January_2011/fSBML_221010_IP3R_ForArnaud/fsbml2java_distrib/fDATE27octobre2010TIME09h17_huge9)

That is the problem: as you tested, the number of links on your
system is limited to 32000, so you can't have more than that many
subdirectories in a single directory either.
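
Incidentally, a directory's link count on ext3 is just (number of
immediate subdirectories + 2), so a single stat() of the suspect
directory tells you how close you are to the limit -- no recursive
find needed. A small sketch:

#!/usr/bin/perl
# Report the link count of each directory given on the command line.
# For a directory, nlink = (immediate subdirectories + 2), which is the
# per-directory figure the ~32000 ext3 limit actually applies to.
use strict;
use warnings;

for my $dir (@ARGV) {
    my @st = stat($dir) or do { warn "stat $dir: $!\n"; next };
    printf "%-50s nlink=%d (~%d subdirs)\n", $dir, $st[3], $st[3] - 2;
}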
  
  I found out that, using ext3, a limit for the number of
  subdirectories in one directory could be 31998
  (http://superuser.com/questions/66331/what-is-the-maximum-number-of-folders-allowed-in-a-folder-in-linux).

I posted that limitation yesterday in response to your OP.

  I checked this with a shell script the following shell script : 
  SCRIPT
  #!/bin/bash
  for i in {1..32010}
  do
 mkdir test$i
  done
  END SCRIPT
  
  It actually fails after 31998 created subdirectories on the ext3 partition.
  Tested on the host's NTFS partition, the test passes. I even tried with
  a value of 50 : I stopped it after 296000 subdirectories were created.
  
  So I consider the problem as identified: it is relatively not related
  to backuppc itself, but deals with a sort of filesystem
  incompatibility.

Well, it seems to me it is not just "relatively" unrelated -- it is
not related to backuppc at all.

  And finally, the only question that remains is: how can I increase
  the number of subdirectories tolerated by ext3?

You really can't/shouldn't unless you recompile your own custom kernel
with a different hard-coded value, but even so I would strongly
discourage doing so for the following reasons:
1. There is no guarantee that the resulting filesystem would be stable
   or efficient since it likely hasn't been (extensively) tested with
   other values. In particular, using extra bits to store or count
   links in  something as low as a filesystem structure could cause
   overflow issues...
2. Your non-standard ext3 filesystem would be incompatible with normal
   implementations and could cause problems, including corruption, if
   you ever mistakenly mounted it on a normal version.

I think there are only 2 reasonable alternatives:
1. Use another filesystem to store your BackupPC data -- as was
   pointed out in the thread ext4 typically seems to have a 65000 limit
   while other filesystems like zfs have no such limit.

2. Break up the offending subdirectories on your client machines into
   another level of subdirectories so that the number of
   subdirectories is below HardLinkMax

  Admit that it was quite tricky: everything deals with a limit of 32000
  (hardlinks, subdirectories ...)  :o 
  

Not sure what was so tricky. It seemed pretty easy to figure out:
even though you provided essentially no specific debugging information,
the responders to your thread suggested that (1) the issue is in rsync
and (2) it might be a subdirectory limitation.



Re: [BackupPC-users] NFS woes

2011-08-08 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 12:21:54 -0500 on Monday, August 8, 2011:
  On 8/8/2011 9:30 AM, Phil K. wrote:
   Look in to using the nolock option when mounting the device. Rrdtool
   can't get a file lock, which is likely causing your performance issues.
   I have the same device, and had similar performance issues.
  
  The rrd graph is a recent addition to backuppc and not really necessary 
  for backup functionality.  You could probably just disable it in the 
  code somewhere.
  

I run 3.2.0 on a plugcomputer without rrd without any problem... I
compiled it from the perl sources and did not need to disable rrd
either in the 'make' stage or in config.pl.



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 19:34:27 +0200 on Monday, August 8, 2011:
  Hi,
  
  as Jeffrey said, we'll need meaningful information to give meaningful 
  answers.

Oh my goodness, did Holger Top Quote? say it isn't so :P

  One thing I can answer, though:
  
  friedmann wrote on 2011-08-08 09:19:51 -0700 [[BackupPC-users]  Too many 
  links : where is the problem ?]:
   [...]
   the backup machine is 32bits, ubuntu 10.04.1, operating on ext3 partitions 
   :
   What is then the maximum value for this parameter ?
  
  figure it out for yourself:
  
   % # go to a directory on the partition you are interested in, make sure 
  you have write access
   % mkdir linktest
   % cd linktest
   % touch 1
   % perl -e 'for (my $i = 2; ; $i++) { link "1", $i or die "Cant create ${i}th link: $!\n"; }'
  
  (this answer is also valid for any other (UNIX type) OS/FS).
  

FYI, On 32-bit Fedora 12/Linux 2.6.32:
   Ext2/3: MAX=32000
   Ext4: MAX=65000

This presumably should be true more generally for any relatively
non-ancient & unhacked version of Linux...



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 16:59:45 -0500 on Monday, August 8, 2011:
  On 8/8/2011 3:28 PM, Jeffrey J. Kosowsky wrote:
  
  
   FYI, On 32-bit Fedora 12/Linux 2.6.32:
   Ext2/3: MAX=32000
   Ext4: MAX=65000
  
   This presumably should be true more generally for any relatively
   non-ancient & unhacked version of Linux...
  
  Is exceeding that a fatal error?  I thought it was just supposed to add an 
  entry like a hash collision and go on.

Well the link command in Holger's perl script fails and I imagine a
similar bash script would fail too...



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 23:59:49 +0200 on Monday, August 8, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-08-08 16:28:28 -0400 [Re: [BackupPC-users] 
  Too many links : where is the problem ?]:
   Holger Parplies wrote at about 19:34:27 +0200 on Monday, August 8, 2011:
 Hi,
 
 as Jeffrey said, we'll need meaningful information to give meaningful 
   answers.
   
   Oh my goodness, did Holger Top Quote? say it isn't so :P
  
  well, I suppose it is *possible* to define top posting that way, but does
  anyone? I didn't reply to anything and quote the question (or the whole mail)
  afterwards. I was *trying* to acknowledge that you had given a more
  comprehensive reply and I was only going to reflect on a minor aspect.
  
I was just teasing (hence the :P emoticon)...

   FYI, On 32-bit Fedora 12/Linux 2.6.32:
  Ext2/3: MAX=32000
  Ext4: MAX=65000
   
   This presumably should be true more generally for any relatively
   non-ancient & unhacked version of Linux...
  
  Well, yes, probably, so you only have to figure out whether you have a
  non-ancient and unhacked version of Linux (or just leave HardLinkMax at the
  default and not worry about it). In general, I agree that you shouldn't
  determine things by experiment that can be figured out by reading the
  documentation. However, almost every Linux distribution seems to add their
  own patches to the vanilla Linux kernel, and for *this* matter, the 
  particular
  limit whichever component of your system may be imposing is quite easy to
  determine by experiment.

I actually used your perl code to verify the source code. :)

I was pleasantly surprised at how fast it ran relative to file creation
but that is understandable. Interestingly, it seemed to run
significantly faster on ext4 than ext3 though deleting the directory
was faster on ext3 than ext4 -- this was not a scientific experiment
though.

That being said, I hope you would agree that the default of 32000
seems reasonable, given that ext2/ext3 is pretty common and seems to
have the lowest 'max' of any commonly used hard-link-allowing
filesystem. Given that exceeding the max is typically an infrequent
special case, and given that BackupPC seamlessly handles 'overflows' by
creating a second pool instance, there doesn't seem to be any good
reason for a user to change this number unless you (a) know what you
are doing and (b) have a special case where the savings from avoiding
a second pool chain instance would be substantial...

Finally, out of curiosity, I grepped the BackupPC code base for the
error language "too many links" cited verbatim by the OP and found that
such a phrase only occurs in the comments and hence is not even a valid
error code... so while this thread has been interesting regarding the
general nature of max fs links, the OP really hasn't given us anything
to help him address his specific problem -- a point that we have both
made originally!



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 17:43:41 -0500 on Monday, August 8, 2011:
  On 8/8/2011 5:33 PM, Jeffrey J. Kosowsky wrote:
   Les Mikesell wrote at about 16:59:45 -0500 on Monday, August 8, 2011:
   On 8/8/2011 3:28 PM, Jeffrey J. Kosowsky wrote:
   
   
 FYI, On 32-bit Fedora 12/Linux 2.6.32:
 Ext2/3: MAX=32000
 Ext4: MAX=65000
   
 This presumably should be true more generally for any relatively
  non-ancient & unhacked version of Linux...
 
   Is exceeding that a fatal error?  I thought it was just supposed to add
   an entry like a hash collision and go on.
  
   Well the link command in Holger's perl script fails and I imagine a
   similar bash script would fail too...
  
  I meant as far as backuppc is concerned.  I thought it was fairly common 
  to have a bazillion copies of the same file scattered over your clients 
  and backuppc did something reasonable - but I'm not quite sure what.
  

As I mentioned in my previous post, before creating a link, BackupPC
(and my routines too, btw) checks to see if HardLinkMax is exceeded;
if so, the pool element is duplicated (actually, typically moved) and
new elements are linked to it. This just adds an element to the md5sum
chain. This all happens automatically and sensibly in BackupPC.



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 16:28:28 -0400 on Monday, August 8, 2011:
  Holger Parplies wrote at about 19:34:27 +0200 on Monday, August 8, 2011:
Hi,

as Jeffrey said, we'll need meaningful information to give meaningful 
  answers.
  
  Oh my goodness, did Holger Top Quote? say it isn't so :P
  
One thing I can answer, though:

friedmann wrote on 2011-08-08 09:19:51 -0700 [[BackupPC-users]  Too many 
  links : where is the problem ?]:
 [...]
 the backup machine is 32bits, ubuntu 10.04.1, operating on ext3 
  partitions :
 What is then the maximum value for this parameter ?

figure it out for yourself:

  % # go to a directory on the partition you are interested in, make sure 
  you have write access
  % mkdir linktest
  % cd linktest
  % touch 1
  % perl -e 'for (my $i = 2; ; $i++) { link "1", $i or die "Cant create ${i}th link: $!\n"; }'

(this answer is also valid for any other (UNIX type) OS/FS).

  
  FYI, On 32-bit Fedora 12/Linux 2.6.32:
 Ext2/3: MAX=32000
 Ext4: MAX=65000
  

Interestingly, experimentally on ntfs-3g, I hit a limit at 8190, so if
anyone is using ntfs-3g, then they should be aware of that lower than
default limit!

Under cygwin ntfs, I got a limit of 1025, though I'm not sure BackupPC
can run under cygwin...



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 02:03:42 +0200 on Tuesday, August 9, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-08-08 19:41:18 -0400 [Re: [BackupPC-users] 
  Too many links : where is the problem ?]:
   Les Mikesell wrote at about 17:43:41 -0500 on Monday, August 8, 2011:
 On 8/8/2011 5:33 PM, Jeffrey J. Kosowsky wrote:
  Les Mikesell wrote at about 16:59:45 -0500 on Monday, August 8, 2011:
  On 8/8/2011 3:28 PM, Jeffrey J. Kosowsky wrote:
Ext2/3: MAX=32000
Ext4: MAX=65000

  Is exceeding that a fatal error?  [...]
 
 I meant as far as backuppc is concerned.  I thought it was fairly 
   common 
 to have a bazillion copies of the same file scattered over your clients 
 and backuppc did something reasonable - but I'm not quite sure what.
   
   As I mentioned in my previous post, before creating a link BackupPC
   (and also my routines too btw) check to see if HardLinkMax is exceeded
   and if so then the pool element is duplicated (actually moved
   typically) and new elements are linked to it. This just adds an
   element to the md5sum chain. This all happens automatically and
   sensibly in BackupPC.
  
  actually, as I read the code, hitting HardLinkMax will simply make the code
  disregard a pool file as possible candidate for matching. If linking to an
  identified match actually fails later on, a new pool file is written. So, an
  incorrect (i.e. too low) value for HardLinkMax should just make things
  slightly less efficient for files actually at the real maximum (because an
  additional comparison is done which could be skipped), but BackupPC should
  continue to work correctly anyway.
  

Yes - that is what I actually meant -- in fact the new pool file isn't
really written; rather, the match is *moved* from the pc/../new directory
to the pool, where it joins the md5sum chain, and then the pc entry is
linked to it. Then when the next duplicate occurs, it passes over the
first match in the partial md5sum chain (since HardLinkMax is
exceeded) and proceeds to match against the first match that has
nlinks < HardLinkMax. Sorry if I wasn't clear.
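
In pseudo-Perl, the chain walk amounts to something like this
(schematic only -- the real logic lives in BackupPC's pool-matching
code, and the chain handling there is more involved):

# Given the existing pool files that hold this content (the md5sum
# "chain"), pick the first one whose link count is still below
# $Conf{HardLinkMax}; if none qualifies, the new file has to become an
# additional chain member instead.
sub pick_linkable {
    my ($hardLinkMax, @chain) = @_;   # @chain = existing pool paths for this digest
    for my $cand (@chain) {
        my @st = lstat($cand) or next;
        return $cand if $st[3] < $hardLinkMax;  # still room: link against this copy
    }
    return undef;                     # all at the limit: add a new chain element
}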



Re: [BackupPC-users] How do I configure DumpPreUserCmd to have multiple commands

2011-08-08 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 02:58:57 +0200 on Tuesday, August 9, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-08-05 12:08:47 -0400 [Re: [BackupPC-users] 
  How do I configure DumpPreUserCmd to have?multiple?commands]:
   You are exactly right that 'bash -c blah blah blah' fails since the
   quote mechanism doesn't work without a shell. It's too bad that there
   isn't an option that allows a bash -c-like construction in which
   everything that follows (up to perhaps a delimiter) is interpreted as
   part of the command line (though white space would need to be escaped).
  
  If you're really desperate, there is a way, though I'm *not* going to suggest
  $sshPath -q -x localhost ... that would be a really ugly hack :).
  

Don't tempt me... LOL...

Remember, I'm the guy that uses the following DumpPreUserCmd to
avoid having to keep a cleartext version of RsyncdPasswd in my
BackupPC config file... (and thanks to you again for helping me figure
out how to get it finally working)...
 $Conf{DumpPreUserCmd} = ["&{sub {
     chomp(\$bpc->{Conf}{RsyncdPasswd} =
         `$Conf{SshPath} -q -x -l $Conf{RsyncdUserName} \$args[0]->{hostIP} /usr/local/bin/shadowmountrsync $shadowmountparams`);
     return(\$? . '\n'); }}"];

Then again my config.pl files have lots of perl code that takes
advantage of the undocumented/unrecommended fact that the file is
executable perl...



Re: [BackupPC-users] NFS woes

2011-08-08 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 03:06:23 +0200 on Tuesday, August 9, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-08-08 16:15:38 -0400 [Re: [BackupPC-users] 
  NFS woes]:
   Les Mikesell wrote at about 12:21:54 -0500 on Monday, August 8, 2011:
 On 8/8/2011 9:30 AM, Phil K. wrote:
  Look in to using the nolock option when mounting the device. Rrdtool
  can't get a file lock, which is likely causing your performance 
   issues.
  I have the same device, and had similar performance issues.
 
 The rrd graph is a recent addition to backuppc and not really necessary 
 for backup functionality.  You could probably just disable it in the 
 code somewhere.
   
   I run 3.2.0 on a plugcomputer without rrd without any problem... I
   compiled it from the perl sources and did not need to disable rrd
   either in the 'make' stage or in config.pl.
  
  firstly, I believe it's an addition only present in the Debian/Ubuntu 
  packages
  of BackupPC (and as far as I've heard, it *does* create the NFS problems
  described).
  Secondly, you *compiled* the Perl sources of BackupPC? You mentioned that
  before, so at some point I have to ask for more details :). How exactly?

"Compile" in the imprecise (and I suppose wrong) sense of running
"perl configure.pl" for BackupPC itself ... and in the more precise
sense that I compiled the module File::RsyncP, which has binary
libraries associated with it and is truly compiled.



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 19:42:41 -0400 on Monday, August 8, 2011:
  Jeffrey J. Kosowsky wrote at about 16:28:28 -0400 on Monday, August 8, 2011:
Holger Parplies wrote at about 19:34:27 +0200 on Monday, August 8, 2011:
  Hi,
  
  as Jeffrey said, we'll need meaningful information to give meaningful 
  answers.

Oh my goodness, did Holger Top Quote? say it isn't so :P

  One thing I can answer, though:
  
  friedmann wrote on 2011-08-08 09:19:51 -0700 [[BackupPC-users]  Too 
  many links : where is the problem ?]:
   [...]
   the backup machine is 32bits, ubuntu 10.04.1, operating on ext3 
  partitions :
   What is then the maximum value for this parameter ?
  
  figure it out for yourself:
  
 % # go to a directory on the partition you are interested in, 
  make sure you have write access
 % mkdir linktest
 % cd linktest
 % touch 1
 % perl -e 'for (my $i = 2; ; $i++) { link "1", $i or die "Cant create ${i}th link: $!\n"; }'
  
  (this answer is also valid for any other (UNIX type) OS/FS).
  

FYI, On 32-bit Fedora 12/Linux 2.6.32:
   Ext2/3: MAX=32000
   Ext4: MAX=65000

  
  Interestingly, experimentally on ntfs-3g, I hit a limit at 8190, so if
  anyone is using ntfs-3g, then they should be aware of that lower than
  default limit!
  
  Under cygwin ntfs, I got a limit of 1025, though I'm not sure BackupPC
  can run under cygwin...
  

Here is another one... on an arm-based plugcomputer, the rootfs (which
is flash-based) continues until it runs out of disk space (there is
only about 512MB total), which in my case happened after 790566 links...
Of course, this is again moot since TopDir is on an attached USB disk
with an ext4 filesystem.

The behavior is similar to the tmpfs behavior that Holger noted...



Re: [BackupPC-users] Too many links : where is the problem ?

2011-08-08 Thread Jeffrey J. Kosowsky
Steve wrote at about 19:15:20 -0400 on Monday, August 8, 2011:
  On Mon, Aug 8, 2011 at 6:39 PM, Jeffrey J. Kosowsky
  backu...@kosowsky.org wrote:
   Finally, out of curiosity, I grepped the BackupPC code base for the
   error language too many links cited verbatimu by the OP and found that 
   such
   a phrase only occurs in the comments and hence is not even a valid
   error code...
  
  That error is almost certainly directly from rsync, not backuppc.  I
  saw it many times trying to use rsync to duplicate backuppc.  I never
  figured out what caused it, since I never exceeded the number of
  links.  I just gave up duplicating that way :)
  
  Evets

Not sure if this helps, but googling showed that several reported
cases of rsync giving that error were due to having too many
subdirectories: each subdirectory requires a link back to its parent
directory, so the filesystem's max-link limit also caps the number of
subdirectories a single directory can hold. So, in your case, is it
possible that some directory had more subdirectories than the
allowable max links?
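
A quick way to check (just a sketch -- the path below is only an
example, so point it at whatever directory rsync was working on): on
ext3/ext4 and most traditional Unix filesystems a directory's
hard-link count is 2 plus the number of immediate subdirectories, so
comparing it against the filesystem limit tells you how close you are:

    perl -le 'print((stat("/var/lib/backuppc/pc"))[3])'   # prints the directory's nlink count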



Re: [BackupPC-users] DumpPreUserCmd commands only on full backups?

2011-08-06 Thread Jeffrey J. Kosowsky
Brad Alexander wrote at about 16:41:44 -0400 on Friday, August 5, 2011:
  As the subject posits, is it possible to issue a dump pre- or post-command
  only on certain types of backups? For instance, we run bacula at work, and
  apparently the director states what kind of backup is running, either Full
  or Incr. So in your scripts, you can do something like
  
  if [ "$1" != "Full" ] ; then
exit 0
  fi
  
  ...Rest of script...
  
  We use it to clear some specialized logs after they have been backed up in a
  Full. Is something like this possible in backuppc?
  

This should be possible... according to the config.pl inline
documentation (you have read it, right?), the following variable is
available to $Conf{DumpPreUserCmd}, $Conf{DumpPostUserCmd},
$Conf{DumpPreShareCmd} and $Conf{DumpPostShareCmd}:
 $type type of dump (incr or full)

So just write a script with the above logic and pass it $type on the
command line when called from one of the above 'Cmd' variables.
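
For example (an untested sketch -- the script path and the
log-clearing action are made up, adjust to taste), config.pl could
point at a small wrapper that bails out unless the dump was a full:

    $Conf{DumpPostUserCmd} = '/usr/local/bin/clear_logs.pl $type $host';

    #!/usr/bin/perl
    # /usr/local/bin/clear_logs.pl (hypothetical): only act after full backups
    use strict;
    use warnings;
    my ($type, $host) = @ARGV;
    exit 0 unless defined($type) && $type eq 'full';
    $host //= 'unknown-host';
    # stand-in for clearing your specialized logs on $host:
    system('/usr/bin/logger', '-t', 'backuppc',
           "would clear logs for $host after a full backup") == 0
        or exit 1;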



Re: [BackupPC-users] How do I configure DumpPreUserCmd to have multiple commands

2011-08-05 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 15:01:45 +0200 on Friday, August 5, 2011:
  Hi,
  
  Jeffrey J. Kosowsky wrote on 2011-08-04 01:37:23 -0400 [Re: [BackupPC-users] 
How do I configure DumpPreUserCmd to have multiple commands]:
   Rory Toma wrote at about 18:15:32 -0700 on Wednesday, August 3, 2011:
 [...]
 In my case, I want to do something like:
 
 rsh -n machine command; rsh -n machine command ; ls sharename; 
 ls sharename2
   
   As the config.pl inline documentation *clearly* states:
   # Note: all Cmds are executed directly without a shell, so the prog name
   # needs to be a full path and you can't include shell syntax like
   # redirection and pipes; put that in a script if you need it.
   
   Chaining together multiple commands separated by a ; is a *shell*
   feature.
   
   So either package it all into an explicit shell command directly, e.g.,
   bash -c "rsh -n machine command; rsh -n machine command ; \
   ls sharename; ls sharename2"
  
  I'm not sure that works. As you know, *quoting* is also a shell feature. Now,
  bash just *might* reparse its command line arguments, remove quotes, and
  recombine arguments, but I don't feel that is very likely, because whitespace
  between arguments will have been lost and can only be replaced by single 
  blanks
  (which may or may not be what was previously there). Quoting is simply
  something that needs to be done before invoking bash to get it right. 
  However,
  there might be a special case for the argument to -c starting with a quote.
  
  So, if bash *doesn't* recombine arguments, according to the man page, only 
  the
  first one will be interpreted as a command ('rsh'). This would mean that
  there would in fact be no way to make this work without changing the BackupPC
  code.

All I can say is that the form bash -c "code..." does work.
I use the following 'monstrosity' to query if rsyncd is running and
start cygwin rsyncd on my Windows machine if it isn't.

$Conf{RestorePreUserCmd} =
  "\$sshPath -q -x -l $Conf{RsyncdUserName} \$hostIP bash -c '/bin/cygrunsrv
-Q rsyncd | /bin/egrep -q \"Current State *: *Running\" || ( /usr/bin/rm -f
/var/run/rsyncd.pid; /bin/cygrunsrv -S rsyncd ; sleep 20)'";

While the code is ugly, I did it to avoid having an extraneous shell
script hanging around that I would have to remember to always copy
over and update. The reality is I almost never use BackupPC for
restores, so I preferred doing it this way (I usually do restores
manually via BackupPC_zcat for single files if I just want the content
or via backuppc-fuse when I want the permissions and timestamps)

  You *could*, however, use Perl code to achieve the same thing.
  
   $Conf{DumpPreUserCmd} = 'sub{ system (...);}';
  
  where ... is your 'rsh; rsh; ls; ls' sequence (system() uses a shell if it
  finds shell metacharacters - such as a semicolon - in the command).

As I mentioned in my recent post, your perl code *won't* work due to 2
bugs in the current BackupPC code.

First, due to inconsistent handling of strings vs. arrays in the
Lib.pm routines like cmdVarSubstitute, perl code must be passed
wrapped in an arrayref rather than as a bare string. (The problem is
that the routine converts plain command strings to arrayrefs but
returns a perl-code command (a coderef) unmodified, so subsequent
routines that expect an arrayref fail with the error:
  Can't use string ("&{sub { blah blah blah ;}}") as an ARRAY ref while
  "strict refs" in use at /usr/share/BackupPC/bin/BackupPC_dump

So, unless the code is fixed, one would need to pass something like:
$Conf{DumpPreUserCmd} = ["&{sub{ system (...); }}"];

Second, when you do pass an arrayref, you will get an error since
there is a bug in Lib.pm (it occurs twice) where the join used to
concatenate the elements of the array into a string doesn't properly
dereference the array -- i.e. it passes the arrayref rather than the
array itself. Specifically, the following
$cmd = join(" ", $cmd) if ( ref($cmd) eq "ARRAY" );
needs to be changed to:
$cmd = join(" ", @$cmd) if ( ref($cmd) eq "ARRAY" );


  Disclaimer: both that the 'bash -c' version doesn't work and that the Perl
  version does are untested.

Well, 'bash -c' *does* work and given the above caveats and bug fixes,
the perl code will work (I have tested it!), but you will need to
patch Lib.pm first...

  
   Or package it all together into a shell shell secret
  
  That is probably meant to read shell script ;-). That is without doubt the
  most simple solution (and the one recommended in the config.pl comment).

Well, I keep a lot of my shell scripts secret :P does that count?
But yes it should have read 'script'.

That being said, shell scripts are easiest to use and debug.


Re: [BackupPC-users] How do I configure DumpPreUserCmd to have multiple commands

2011-08-05 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 11:36:13 -0400 on Friday, August 5, 2011:
  Holger Parplies wrote at about 15:01:45 +0200 on Friday, August 5, 2011:
  All I can say is that the form bash -c "code..." does work.
  I use the following 'monstrosity' to query if rsyncd is running and
  start cygwin rsyncd on my Windows machine if it isn't.
  
  $Conf{RestorePreUserCmd} =
    "\$sshPath -q -x -l $Conf{RsyncdUserName} \$hostIP bash -c
  '/bin/cygrunsrv -Q rsyncd | /bin/egrep -q \"Current State *: *Running\"
  || ( /usr/bin/rm -f /var/run/rsyncd.pid; /bin/cygrunsrv -S rsyncd ; sleep
  20)'";
  


Correction - I just realized that I am using 'bash -c' in a different
way: it is passed to ssh and executed on the client side, where a
shell does the parsing.

You are exactly right that 'bash -c blah blah blah' fails when run
directly, since the quoting mechanism doesn't work without a shell.
It's too bad that there isn't an option that allows a 'bash -c'-like
construction where everything that follows (up to perhaps a delimiter)
is interpreted as part of the command line (though whitespace would
need to be escaped).



Re: [BackupPC-users] How do I configure DumpPreUserCmd to have multiple commands

2011-08-03 Thread Jeffrey J. Kosowsky
Rory Toma wrote at about 18:15:32 -0700 on Wednesday, August 3, 2011:
  It appears that if I feed it a list of ';'-separated commands, it
  executes the first command and assumes everything else is an argument.
  
  In my case, I want to do something like:
  
  rsh -n machine command; rsh -n machine command ; ls sharename; 
  ls sharename2
  
  What is the syntax for this?
  
  thx

As the config.pl inline documentation *clearly* states:
# Note: all Cmds are executed directly without a shell, so the prog name
# needs to be a full path and you can't include shell syntax like
# redirection and pipes; put that in a script if you need it.

Chaining together multiple commands separated by a ; is a *shell*
feature.

So either package it all into an explicit shell command directly, e.g.,
   bash -c "rsh -n machine command; rsh -n machine command ; \
   ls sharename; ls sharename2"
Or package it all together into a shell shell secret
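
For the record, the script route is just a few lines; here is a sketch
in perl (the rsh targets and share names are your placeholders, and
the script path is made up):

    #!/usr/bin/perl
    # /usr/local/bin/pre_dump.pl: run the chained commands, stop on first failure
    use strict;
    use warnings;
    for my $cmd ('rsh -n machine command',
                 'rsh -n machine command',
                 'ls sharename',
                 'ls sharename2') {
        system($cmd) == 0 or die "pre-dump step failed: $cmd\n";
    }
    exit 0;

and then point the config at it:

    $Conf{DumpPreUserCmd} = '/usr/local/bin/pre_dump.pl';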



Re: [BackupPC-users] *BUMP* *BUMP* Re: BackupPC perl code hacking question... (Craig any chance you might have a suggestion?)

2011-08-01 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 03:29:00 +0200 on Wednesday, July 20, 2011:
  Hi,
  
  sorry for not replying earlier. In case you're still wondering (otherwise for
  the archives) ...
  
  Jeffrey J. Kosowsky wrote on 2011-02-07 14:15:05 -0500 [[BackupPC-users] 
  *BUMP* *BUMP* Re: BackupPC perl code hacking question... (Craig any chance 
  you might have a suggestion?)]:
   Let me rewrite my earlier posting to be more clear so maybe someone
   can help me.
  
  Well, let's rearrange your message so it makes sense (did I ever mention that
  I don't like top posting? ;-).
  
 Jeffrey J. Kosowsky wrote at about 12:53:28 -0500 on Monday, December 
   13, 2010:
   For reasons I can explain later, I am trying to set
   $Conf{RsyncdPasswd} in the main routine of BackupPC_dump (I am
   actually trying to do something a bit more complex but this is easier
   to understand).
   
   Now since %Conf = $bpc->Conf(),
  
  You are aware that this is a hash copy operation, right?
  
   I would have thought that for example
   setting $Conf{RsyncPasswd} = "mypasswd" would then be pushed down to
   all the routines called directly or indirectly from BackupPC_dump.
  
  Well, it is as long as they use the my %Conf from BackupPC_dump and not
  $bpc->Conf(). You modified the copy, not the original hash in the
  BackupPC::Lib object.
  
   However, in Rsync.pm where the value of $Conf{RsyncPasswd} is 
   actually
   used, the value remains at ''.
  
  Yes, because Rsync.pm gets a reference to the unmodified BackupPC::Lib 
  object's
  hash (BackupPC::Xfer::Protocol, line 59, sub new, conf => $bpc->{Conf}).
  
   (Of course setting the paramter the normal way within a config file
   works and shows up as set in Rsync.pm)
  
  That is because the code in BackupPC::Lib that reads the config file saves 
  the
  values.
  
   I'm sure I must be missing something about how perl inherits and/or
   overwrites variables... but I am stumped here...
  
  It's really simple. If you get a reference to a hash ($conf = \%conf), then
  you modify the original, if you get a copy (%conf = %conf_orig), you don't.
  What makes things complicated here is that you need to follow the code around
  through various modules and subs, and that each copy of %conf is named the
  same :-).
  
   Here is a simplified version of my actual command
   
   $Conf{DumpPreUserCmd} = "&{sub {\$args[1]{RsyncdPasswd} = `ssh
   -x mybackuypclient get_rsyncd_secret`}}";
   
   This uses the fact that $args[1]=$Conf
  
  Actually, it's \%Conf (a reference, not a copy), so modifying 
  *BackupPC_dump's
  copy* works. $Conf would be a scalar, which might coincidentally contain a
  reference to a hash. Using an element of a hash is $Conf{Foo}, using an
  element of a hash a scalar is pointing to is $Conf->{Foo}. You might even
  have both visible at the same time, but only if you are either a bad
  programmer or enjoy confusing people.
  
    my $Conf = $bpc->Conf();
    my %Conf = (XferMethod => 'snailmail');
    print $Conf{XferMethod}, "\n";   # probably prints rsync
    print $Conf->{XferMethod}, "\n"; # prints snailmail
  
   So, that \$args[1]{RsyncdPasswd} is equivalent to
  
  ... a syntax error? ;-)
  
  $args [1] should be a reference to BackupPC_dump's %Conf, and 
  $args [1] -> {RsyncdPasswd} should reference the corresponding entry (as an
  lvalue, so you can assign to it). "->" between braces []/{} is implied and
  may be left out. I'm not sure how the reference operator binds in your 
  example.
  As it seems to work, let's ignore it for now.
  
   $Conf{RsyncdPasswd}.
  
  ... in BackupPC_dump (because that is what was passed in).
  
   [...]
   So, my question is is there any way to dynamically set Conf parameters
   along the lines I am trying to do?
  
  Well, you'd have to modify the original hash.
  
   $bpc->{Conf}->{RsyncdPasswd} = `ssh ...`;
  
  That is not strictly legal, but it should work. Note that this change would
  *not* propagate to any copies previously made (like the my %Conf in
  BackupPC_dump). Using references to $bpc->{Conf} everywhere instead of
  copies would probably make things much easier, but on the downside, it
  would mean you can't make local modifications to $Conf->{...} that are
  not meant to propagate
  to the rest of the code (or, put differently, you could more easily
  accidentally clobber BackupPC::Lib's state). You'll notice that
  BackupPC::Xfer::Protocol (as quoted above) actually *does* use a reference.
  
  Hope that helps.

H... very helpful...
Got it to work after correcting a pair of bugs in Lib.pm
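
For the archives, here is the copy-vs-reference point in miniature
(standalone perl, nothing BackupPC-specific, names chosen to mirror
the discussion above):

    my %orig = ( RsyncdPasswd => '' );  # stands in for the hash inside $bpc->{Conf}
    my %Conf = %orig;                   # a copy, like "my %Conf = $bpc->Conf()" in BackupPC_dump
    my $ref  = \%orig;                  # a reference, like the one BackupPC::Xfer::Protocol keeps

    $Conf{RsyncdPasswd} = 'secret';
    print "after writing the copy:      '$orig{RsyncdPasswd}'\n";  # still ''

    $ref->{RsyncdPasswd} = 'secret';
    print "after writing the reference: '$orig{RsyncdPasswd}'\n";  # now 'secret'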


[BackupPC-users] BUGS in Lib.pm with CORRECTION

2011-08-01 Thread Jeffrey J. Kosowsky
Lib.pm has a bug that appears twice:
The 2 occurrences of:
$cmd = join(" ", $cmd) if ( ref($cmd) eq "ARRAY" );
should be replaced by:
$cmd = join(" ", @$cmd) if ( ref($cmd) eq "ARRAY" );
Otherwise you are joining the array ref itself rather than its
elements, which just stringifies the reference instead of building the
command string.
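
A two-line illustration of the difference (standalone, nothing
BackupPC-specific):

    my $cmd = [ 'rsh', '-n', 'machine', 'command' ];
    print join(" ", $cmd),  "\n";    # prints the stringified ref, e.g. ARRAY(0x1234abcd)
    print join(" ", @$cmd), "\n";    # prints: rsh -n machine command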


Also, I would suggest that the function cmdVarSubstitute always return
an arrayref. Currently, it returns an arrayref except in the case when
it is passed a single perl coderef. As a result, if one sets
DumpPreUserCmd (or its equivalents) to a coderef then BackupPC_dump
barfs with an error like:
  Can't use string ("&{sub { blah blah blah ;}}") as an ARRAY ref while
  "strict refs" in use at /usr/share/BackupPC/bin/BackupPC_dump

Alternatively, one could also test for ref($cmd) eq "ARRAY" in
BackupPC_dump, but it is probably cleaner and more consistent to do
the conversion here in Lib.pm so that one always has an arrayref.

Unless and until such a correction is made, the perl code can only be
passed wrapped in an arrayref, i.e., if you are using perl code for
DumpPreUserCmd, it must be of the form:
$Conf{DumpPreUserCmd} = ["&{sub { blahblahblah }}"];

Well, the cool thing is that it works!



Re: [BackupPC-users] Backing up of Windows Client Machine

2011-07-28 Thread Jeffrey J. Kosowsky
Alain Péan wrote at about 17:41:40 +0200 on Wednesday, July 27, 2011:
   Cygwin is another method to access the data, using rsync or rsyncd. In
   fact, BackupPC can use four methods to backup the PCs : smb (for
   windows), rsync, rsyncd, or FTP. You can configure the method you want
   for each host using the web interface.
  
   Did you read the documentation ?
   http://backuppc.sourceforge.net/faq/BackupPC.html#step_5__client_setup
  
   Alain
  
  
  Just a correction, the fourth method is tar, FTP method is just planned, 
  if I remember correctly.

Actually, the FTP method is now included, at least in 3.2.0.



[BackupPC-users] BUG SOLUTION: Can't call method getStats on an undefined value

2011-07-28 Thread Jeffrey J. Kosowsky
Whenever you are doing a full backup and DumpPreShareCmd fails, I get the 
following error in my log:
Can't call method "getStats" on an undefined value at 
/usr/share/BackupPC/bin/BackupPC_dump line 1160.

I posted a similar bug report back in December, but now I believe I
have figured out the problem. 


IF DumpPreShareCmd or DumpPreUserCmd fails, then the routine
BackupFailCleanup is called. In the case of a *full* backup, the
routine checks to see if there is a partial backup that needs to be
saved. In particular, the following 'if' statement (line 1160) is
executed: 
  if ( $nFilesTotal == 0 && $xfer->getStats->{fileCnt} == 0 ) {

This gives the above error since $xfer is only defined *after* the pre
commands are executed and hence remains undefined when a pre command
fails.

A potential solution would be to change line 1159 from:
  if ( $type eq "full" ) {
to:
  if ( $type eq "full" && defined($xfer) ) {
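
To see why the added guard is enough, here is the failure in miniature
(standalone perl, nothing BackupPC-specific):

    my $xfer;                      # stays undef when a pre command aborts the dump
    # $xfer->getStats;             # would die: Can't call method "getStats" on an undefined value
    print "skipping partial-backup check\n" unless defined($xfer);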

I am actually surprised that no one else has encountered this bug; if
my analysis is correct, it will occur *every* time one of the pre
commands fails on a full backup.



Re: [BackupPC-users] Poor BackupPC Performance

2011-07-28 Thread Jeffrey J. Kosowsky
Richard Shaw wrote at about 16:40:10 -0500 on Wednesday, July 27, 2011:
  On Wed, Jul 27, 2011 at 2:42 PM, C. Ronoz chro...@eproxy.nl wrote:
   Depending on how comfortable you are building your own packages,
   Fedora has 3.2.1 almost ready to go. We had to package two perl
   modules for the added FTP support.
  
   If you are willing to try them but don't want to build yourself I
   could probably build them for you.
   Sure, I'll try NetBackup 3.2.1. I have not build packages before myself, 
   although I wouldn't mind first using Fedora to see if the performance will 
   actually be good for me.
  
  It hasn't been released yet as the two perl modules in question are
  making their way through QA before they'll make it to the stable
  repos.
  
  I have built packages x86_64 packages, let me know if you need 32 bit
  packages instead.
  
  
  Richard

Any chances of backporting this to older Fedora versions? (I still run
Fedora 12)

That being said, I did build a 3.2.0 rpm package for Fedora 12 -- I
didn't add FTP dependencies so it works as is without any extra Perl
packages. I would be happy to share it if anybody is interested
(either the source or the binary rpm).

One of these days I will upgrade that to 3.2.1...



Re: [BackupPC-users] Poor BackupPC Performance

2011-07-28 Thread Jeffrey J. Kosowsky
Arch 32
Thanks!
Richard Shaw wrote at about 19:24:03 -0500 on Thursday, July 28, 2011:
  On Thu, Jul 28, 2011 at 4:44 PM, Jeffrey J. Kosowsky
  backu...@kosowsky.org wrote:
   Any chances of backporting this to older Fedora versions? (I still run
   Fedora 12)
  
  It wouldn't take me very long but it's very likely that the current
  F14 version (once it hits the stable repos) would work, i.e.:
  
  yum --releasever=14 update BackupPC
  
  As long as it's not a critical server it might be worth a try. If
  you'd rather have F12 native packages, which arch (32 or 64bit) do you
  need?
  
  Richard
  



Re: [BackupPC-users] Poor BackupPC Performance

2011-07-28 Thread Jeffrey J. Kosowsky
Sure...
Would love to see the src rpm too because if worst comes to worst I
could modify/recompile for FC12.
I seem to remember that even in the 3.1.0 versions, a number of
changes were made to the src rpm post FC12 -- some of which, if I
recall correctly, caused me problems when compiling for FC12 -- so I
think in my 3.2.0 version I might have used the FC12 or FC13 version
rather than FC14+, though my memory is a bit foggy there...

Richard Shaw wrote at about 19:33:25 -0500 on Thursday, July 28, 2011:
  On Thu, Jul 28, 2011 at 4:44 PM, Jeffrey J. Kosowsky
  backu...@kosowsky.org wrote:
   Any chances of backporting this to older Fedora versions? (I still run
   Fedora 12)
  
  Just checked, the oldest version of Fedora I have a mock configuration
  for is F13... Let me know if you want to try it.
  
  Richard
  



Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-12 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 00:34:56 +0200 on Tuesday, July 12, 2011:
  Well I hope you don't have many files ... how about either
  'chown -R backuppc:backuppc /archive' (assuming that's TopDir) - there are no
  files under TopDir *not* belonging to backuppc, or at least there shouldn't 
  be,
  and there shouldn't be any files belonging to backuppc elsewhere (check with
  find) - or 'find / -uid 102 -print0 | xargs -0 chown backuppc:backuppc'. Just
  be careful about what you are doing.

I believe that in many distros the /etc/BackupPC dir (or equivalent)
is also owned by the backuppc user.


