Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Holger Parplies
Hi,

I agree with Les:

Les Mikesell wrote on 2011-07-11 17:18:07 -0500 [Re: [BackupPC-users] Different 
UID numbers for backuppc on 2 computers]:
 On 7/11/2011 4:55 PM, Timothy Murphy wrote:
  I want to archive backuppc on machine A to machine B.
  (Both are running CentOS-5.6 .)
  The problem is that backuppc has different UIDs on the 2 machines:
  on A it is 101, on B it is 102.
 
 What do you mean by 'archive'?

If you could describe what you are trying to do (in a way people can
understand), that would help giving meaningful answers.

Do you want to
- create an archive of a host (BackupPC_archive) and store that on an NFS
  export of machine B?
- copy your pool from machine A to machine B?
- use an NFS export of machine B as pool FS for a BackupPC instance running
  on machine A?
- tar together your BackupPC installation on machine A and store the tar
  file on machine B, in case you might decide to use it again?
- do something completely different?

In none of the first four cases is it a problem that the backuppc user has
different UIDs on the two machines, so it must be the last?

  Now when I NFS mount /archive on machine B on /archive on machine A
  I am told that /archive belongs to avahi-autoipd ,

Uninstall avahi-autoipd. That's the only thing I believe it is good for,
except for frustrating admins who don't want it installed, yet don't want to
reinvent their packaging system's dependency mechanism.

Seriously, 'mount' gives you informational output about the owner of a
directory?

 It shouldn't matter to the machine exporting the nfs directory whether 
 there is a local user with the same uid or not.  Or are you trying to 
 access the files from both machines?

Well, the only point would be that avahi-autoipd would have access to the
pool, which might not be a good idea.

  This seems to prevent backuppc from archiving onto /archive .
 
 All you should need is write access (which might be from having the same 
 owner at the top of the tree).  If you permit root nfs access from the 
 backuppc client you can arrange the proper permissions from there.

I'm guessing (and I *hate* to do that) that you set up permissions
incorrectly. In fact, I don't see why you have a backuppc user on the NFS
server at all.

  Is there any simple way of changing a UID
  (together with all the files it owns)?
 
 You can't do both at once.  You can change the uid in the passwd file 
 but your real problem is that some other package took the uid you want.

Well, you could change the UID of backuppc on the client (assuming the UID the
server uses is free), or you could change UIDs on both client and server to
a common value, that is free on both. Or you could use NIS. Or you could set
up UID mapping for NFS (I've never needed to do that, but I believe it is
possible). Or you could forget about the backuppc user on the NFS server,
though there actually *is* a point in having that user, namely to prevent
something else from allocating the same UID and thus gaining access to the
pool.
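
Just for completeness, changing the UID on one machine might look roughly like
this (untested sketch; 101/102 are the UIDs from your mail, and the init script
name and mount points depend on your distribution):

/etc/init.d/backuppc stop
usermod -u 102 backuppc      # updates /etc/passwd and chowns the home directory
groupmod -g 102 backuppc     # only if the GID needs to change as well
# re-own everything still carrying the old numeric IDs; add further mount
# points (e.g. /archive) if the pool lives on a separate file system
find / -xdev -uid 101 -print0 | xargs -0 chown -h backuppc
find / -xdev -gid 101 -print0 | xargs -0 chgrp -h backuppc
/etc/init.d/backuppc start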

Carl Wilhelm Soderstrom wrote on 2011-07-11 17:19:09 -0500 [Re: 
[BackupPC-users] Different UID numbers for backuppc on 2 computers]:
 On 07/11 11:55 , Timothy Murphy wrote:
  Is there any simple way of changing a UID
  (together with all the files it owns)?
 
 vipw then vigr to edit the UIDs in /etc/passwd and /etc/group. You will need
 to do vipw -s and vigr -s to change the /etc/shadow and /etc/gshadow as
 well.

There is no UID information in the shadow files, and unless you're also
worried about the group, you don't need /etc/group either ;-). If you are, you
probably need to change the group of the files, too.

 Then use a command like 'find / -uid 102 -exec chown backuppc: {} \;' to
 change the ownership of all the files owned by UID 102 to whatever UID
 backuppc is. 

Well, I hope you don't have many files ... How about either
'chown -R backuppc:backuppc /archive' (assuming that's TopDir) - there are no
files under TopDir *not* belonging to backuppc, or at least there shouldn't be,
and there shouldn't be any files belonging to backuppc elsewhere (check with
find) - or 'find / -uid 102 -print0 | xargs -0 chown backuppc:backuppc'? Just
be careful about what you are doing. Is 102 the previous UID of avahi-autoipd?
Are there any files owned by that UID that are *not* part of BackupPC? Whenever
something is messed up and you are trying to clean up the mess, try not to
make the mess bigger in the process ;-).

  Alternatively, is there a way of telling backuppc to ignore the UIDs?

BackupPC doesn't really care about UIDs at this point; the kernel does. And I
don't think you're asking whether there is a way to tell the kernel to ignore
file system permissions.

Regards,
Holger


Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Holger Parplies
Hi,

Christian Völker wrote on 2011-07-12 00:17:57 +0200 [Re: [BackupPC-users] My 
Solution for Off-Site]:
 On 11/07/2011 21:58, Les Mikesell wrote:
  The way my 'take a disk off RAID1' works is that there are 3 spare 
  disks, with at least one always offsite in the rotation [...]

 I'm aware of the rotation there- it's just the same and only a question
 on levels you do it. You have three disks and swap them at some time. I
 take snapshots instead. In both cases it can happen a filesystem error
 gets copied over, too.

so, you're saying that you don't trust your file system, but you trust LVM to
keep 4 snapshots accurate for up to four weeks? I think I understand Les'
point (if he's making it) that a hardware-based "don't do anything" approach
is more reliable than a software-based "accumulate the information needed to
undo all my changes" one. But I also understand your point of "as long as it
works, it gives me three previous states to go back to".

 I think I might move it to the garage, though :)

I hope your data is well enough protected against theft in your garage.

  [...] to understand how you can drbd to the live partition while keeping 
  snapshots of old copies.  I wouldn't have expected that to work.  Are 
  they really layered correctly so the lvm copy-on-write business works?

Why shouldn't it work? An LVM LV is just a block device. Why should the
snapshotting be in any way dependent on the type of data you have on top?

 Yes, this works absolutely fine.  [...] Taking a snapshot of the LVM volume
 doesn't affect the drbd device at all.

I'm just wondering whether you're unmounting the pool FS before the snapshot,
or if you're relying on it to be in a consistent state by itself. How much
testing have you done?
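
For reference, a snapshot taken with the FS quiesced might look roughly like
this (just a sketch; VG/LV names, mount point and snapshot size are
placeholders):

/etc/init.d/backuppc stop                    # no writers while the snapshot is taken
fsfreeze -f /var/lib/backuppc                # or simply umount the pool FS
lvcreate -s -L 20G -n pool-snap /dev/vg0/pool
fsfreeze -u /var/lib/backuppc
/etc/init.d/backuppc start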

 The only thing I have to evaluate is to have the proper size of the
 snapshot.

Which, in itself, doesn't sound practical. Effectively, you are estimating
how much new data your backups for a week (or four weeks?) will contain.

I just hope you don't decide to implement a BackupPC fork with deduplication
implemented through LVM snapshots ;-).

Regards,
Holger



Re: [BackupPC-users] BackupPC makes server unusable

2011-07-08 Thread Holger Parplies
Hi,

first of all, I'm disappointed that Sourceforge's Spamassassin failed to sum
up our opinion regarding the original post with the concise description
"UNWANTED_LANGUAGE_BODY".

Secondly, sending an automatic translation of your reply is a great idea :-).

Michael Stowe wrote on 2011-07-08 09:28:37 -0500 [Re: [BackupPC-users] Backuppc 
legt Server lahm]:
 Detlef Sch?el *should have* written:
  Hello,
 
  I've got a root server [...] with 100GB backup space via FTP.
 
  The root server runs a VM with VMware server. The FTP backup space is
  mounted on the (physical) root server via CIFS.
 
  The VM is backed up with BackupPC, however, some process appears to
  use up so much of the resources of the virtual server, that it turns out
  to be next to unusable. Even ssh runs into a timeout at this point.
 
  Is this type of setup a structural problem in itself, or can the issue
  be solved by configuration? I wasn't able to find any mention of such
  a problem with Google. Shouldn't it be possible to lower the priority
  of a backup in such a way that the server stays more or less unaffected?

 If I understand you correctly, the backup works, but is overwhelming the
 VM, via ftp, thus making the VM unusable.
 
 Are you connecting TO the VM via FTP?

No, as far as I understand it (and there are a lot of details missing!),
TopDir is effectively mounted via CIFS, which should in itself be a problem,
because hardlinks shouldn't work, should they?

I can't find any mention of the XferMethod used - there's a mention of 'ssh',
but it's not even clear whether that is part of the backup transfer or of an
attempt to investigate what is going on in the VM.

So, the questions I'd like to ask at this point are:

1.) How is your BackupPC pool accessed? Is it true that this is a CIFS mount?
    If so, what has that got to do with FTP?
    Are you able to access it in a way that will allow hardlinks to work
    correctly? If not, you can stop here. BackupPC won't work without hardlink
    support.
    Is your access to the backup space fast? BackupPC is designed for fast
    local pool access and potentially slow client access, not the other way
    around.

2.) Is your client (the VM) running Linux (or some other Unix) or Windoze?
3.) What XferMethod are you using to access the client?
4.) Are you using a virtual disk for the VM, or does the VM have access to
    a physical disk/partition?
5.) How much data do you want to back up?
6.) Are backups currently working, even though they slow down the virtual
    server unacceptably, or do they abort with a timeout? If they work, how
    long do they take?
7.) Have you tried simulating a backup without BackupPC, i.e. rsync the
    files from the VM to your backup space (presuming you're using rsync)?
    Does that work?
8.) Does your VM have enough memory assigned to it? How much memory does your
    root server have? The problem here is that both the client and the
    server need enough memory, and you're basically splitting up the physical
    memory of your root server between both.
9.) Ah, yes. The version of BackupPC you are using might be relevant. Any
    other specs of your root server you can provide might also help.

In any case, backing up a VM running on the BackupPC server itself doesn't
sound too promising. If you *have* to do it that way, you should think about
how you can minimize resource requirements. Turning compression off if you
don't need it should help. If you're using ssh, use a faster cipher (is it
possible to turn off encryption altogether?), or consider completely avoiding
ssh (-> rsyncd). tar probably needs fewer resources than rsync (and you don't
need the bandwidth savings), but you'd be sacrificing backup exactness
(probably nothing to worry about if you can run full backups regularly).
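
For example, something along these lines (just a sketch; whether a cheap cipher
like arcfour is still available depends on your ssh version, and the rest of
the command should match what your installation currently uses):

$Conf{RsyncClientCmd} = '$sshPath -q -x -c arcfour -o Compression=no -l root $host $rsyncPath $argList+';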

If you need to use compression, things might speed up considerably after the
first backup has completed, presuming you have a lot of data that doesn't
change regularly. This is because files already existing in the pool don't
need to be re-compressed. If you're really desperate, rsync the data from your
VM to the physical server and back it up with BackupPC from there (use a
different host in BackupPC; the point is just that it is compressed into the
pool). Then you can see if subsequent backups work out or not.
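
That could be as simple as something like (host name and paths are only
examples):

# on the physical root server, pull a plain copy out of the VM first
rsync -aH --delete --numeric-ids \
      --exclude /proc --exclude /sys --exclude /dev \
      root@vm:/ /srv/vm-copy/
# then define /srv/vm-copy as the share of a separate BackupPC host entry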

Hope that helps.

Regards,
Holger


Re: [BackupPC-users] wakeup command?

2011-07-07 Thread Holger Parplies
Hi,

gregrwm wrote on 2011-07-07 17:40:33 -0500 [Re: [BackupPC-users] wakeup 
command?]:
   On 6/23/2011 3:59 PM, gregrwm wrote:
is there a command that triggers the equivalent of a wakeup? normally i
only want 1 wakeup per day, yet for special circumstances i often find
myself editing in a wakeup a couple minutes hence and triggering a 
reload.
 [...]
 generally i want backups to run at one specific time only, but i want
 specifically requested backups to be allowed anytime.

just for the record, backups requested via the web interface should be started
immediately, independent of WakeupSchedule, IncrPeriod or FullPeriod
(they may be delayed subject to MaxBackups and MaxUserBackups, though).
That's the whole point of manually initiated backups. There may be reasons to
change the client configuration and want an automatic backup to be run, though
the only one I can think of right now would be to test the configuration, and
even that doesn't seem to be that much different from running a manual backup.
Perhaps you should just run a manual backup?

 On Thu, Jun 23, 2011 at 21:15, Holger Parplies wb...@parplies.de wrote:
 [...]
 BackupPC_serverMesg backup all

Actually, the web interface uses a very similar invocation of
BackupPC_serverMesg to start a manual backup. If you don't want to go through
the web interface, you could use that variant from the command line:

BackupPC_serverMesg backup <hostip> <host> <user> <type>

where <host> is the host name you assigned within BackupPC (usually the name
of the host), <hostip> is its DNS name or IP (in fact, automatic backups use
the same value as <host> here; not sure why you can specify something
different with BackupPC_serverMesg and how that interacts with
ClientNameAlias, for example), <user> is the name of the user requesting the
backup (though I don't think it's used for anything except logging) and <type>
is 1 for full, 0 for incremental or -1 for whatever is due, though you can
apparently also use the values doFull, doIncr, auto, autoIncr or autoFull (if
you have at least BackupPC 3.2.0, that is). So, to request a full backup of
a host named 'foobar', you'd normally use

BackupPC_serverMesg backup foobar foobar me 1

Of course, if all your hosts simply missed their backups for some reason, the
'backup all' variant is simpler, though if this happens often, you should
probably consider adding a second wakeup, as I outlined previously. For
reference, I use

$Conf {WakeupSchedule} = [2, 22, 23, 0];
$Conf {IncrPeriod} = 0.8;
$Conf {FullPeriod} = 6.8;

That way, BackupPC_nightly is run at 2:00, backups are normally run at 22:00,
but are retried at 23:00, 0:00 and 2:00 if necessary. Even if a backup turns
out to be run at 2:00, the next one should again be run at 22:00 (because more
than 0.8 days will have passed). Don't ask me why I left out 1:00 - probably
to avoid collisions between BackupPC_nightly and my longest backup.

Regards,
Holger



Re: [BackupPC-users] Yet another filesystem thread

2011-07-03 Thread Holger Parplies
Hi,

C. Ronoz wrote on 2011-06-30 12:54:44 +0200 [Re: [BackupPC-users] Yet another 
filesystem thread]:
 [...]
  - How stable is XFS?

unless I missed something, I'd say XFS is perfectly stable - more stable than
reiserfs in any case. The only thing that makes me hesitate with that statement
is Les' remark "XFS should also be OK on 64-bit systems" - why only on 64-bit
systems? [Of course, for really large pools, a 64-bit system would be
preferable with XFS.]

 [Bowie Bailey asked on 2011-06-29 10:43:28 -0400:]
 How much memory do you have on the backup server?  What backup method
 are you using?
 The server has 1GB memory, but a pretty powerful processor.

A powerful processor doesn't help even marginally with memory problems.
See http://en.wikipedia.org/wiki/Thrashing_(computer_science)

 I found out that BackupPC is ignoring my Excludes though, [...]

This is because your syntax is wrong.

 $Conf{BackupFilesOnly} = {};
 $Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};

While the comments in config.pl state

# This can be set to a string, an array of strings, or, in the case
# of multiple shares, a hash of strings or arrays.

that is actually incorrect. A hash of strings makes no sense. In fact, Perl
would turn your example into a hash with /proc and /pub as keys and
/blaat and /tmp as respective values - certainly not what you want.
Turn your config value into an array (use '[]' instead of '{}'), and you
should be fine. You'll notice that the examples correctly don't include a
hash of strings.

Better yet, use a full hash of arrays. That is easier to read and maintain,
because it's explicit on which shares you want which excludes to apply to:

$Conf {BackupFilesExclude} = { '/' => [ '/proc', '/blaat', '/pub', '/tmp' ] };

The leading '/' on your excludes is just fine, contrary to what has been said.
It anchors them to the transfer root. Without the slashes, you would also be
excluding e.g. /home/user/pub and /home/user/tmp, just as two examples of
things you might *not* want to exclude (well, you might even want to exclude
/home/user/tmp, but really *any* file or directory named tmp? It's your
decision; you can do whatever you want, even things like 'tmp/' (only
directories) or '/home/**/tmp/' (only directories somewhere under /home) or
'/home/*/tmp/' (only directories immediately in some user's home directory) -
see the rsync man page for details). Just note that if your share name is
*not* '/', you'll need to remove that part from the excludes (e.g. for a share
name '/var', to exclude /var/tmp you'll need to specify '/tmp' as the exclude,
not '/var/tmp', which would try to exclude /var/var/tmp).
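
To make that last point concrete, a per-share setup for a client with shares
'/' and '/var' could look like this (share names and patterns are only
examples):

$Conf{BackupFilesExclude} = {
    '/'    => [ '/proc', '/tmp', '/home/*/tmp/' ],
    '/var' => [ '/tmp', '/cache' ],     # i.e. /var/tmp and /var/cache
};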

 This could explain why the run takes longer, but it should still finish
 within an hour?

On the first run (or whenever something is added that does not yet exist in
the pool), compression might slow down things considerably, especially if your
exclude of /proc is not working. Just consider how long compressing a large
file (say 1 GB) takes in comparison to how long reading the file takes. The
host status page should tell you more about how much data your backups
contain and how much of that was already in the pool.

 You can just delete the directory and remove the test host from your
 hosts file.
 That will only remove the hardlinks, not the original files in the pool?

What you mean is correct, but you should note that there is nothing more
original about the hardlinks from the pool to the content than those from
the pc directory to the same content. They are all hardlinks and are
indistinguishable from each other. Every normal file on your Linux system is
a hardlink to some content in the file system, just for files with only a
single hardlink we don't usually think much about it (and for files with more
than one hardlink we don't usually *need* to think much about it - it just
works as intended).

 The space should be released when BackuPC_Nightly runs.  If you want to
 start over quickly, I'd make a new filesystem on your archive partition
 (assuming you did mount a separate partition there, which is always a
 good idea...) and re-install the program.

I believe you don't even need to reinstall anything. BackupPC creates most of
the directories it needs, probably excluding $TopDir, which will exist in your
case, because it's the mount point, but which will need to have the correct
permissions (user=backuppc, group=backuppc perms=u=rwx,g=rx,o= - but check
your installation values before unmounting the existing FS). Reinstalling
BackupPC may or may not be the easier option, depending on your preferences.
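
In other words, after creating the new FS and mounting it, something like the
following should suffice (check the values your installation actually uses;
/var/lib/backuppc is just a common default for $TopDir):

chown backuppc:backuppc /var/lib/backuppc
chmod u=rwx,g=rx,o= /var/lib/backuppc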

 I ran backuppc nightly /usr/share/backuppc/bin/BackupPC_nightly 0 255

You shouldn't have. Hopefully, there were no BackupPC_link processes running
during that time. BackupPC_nightly *should* contain a comment something like

# *NEVER* RUN THIS BY HAND WHILE A BackupPC DAEMON IS RUNNING. IF YOU NEED AN
# IMMEDIATE NIGHTLY RUN, TELL THE BackupPC DAEMON TO LAUNCH ONE INSTEAD:
#
# BackupPC_serverMesg BackupPC_nightly run

Re: [BackupPC-users] Recompressing individual files in pool

2011-07-03 Thread Holger Parplies
Hi,

Kelly Sauke wrote on 2011-07-01 09:21:28 -0500 [[BackupPC-users] Recompressing 
individual files in pool]:
 I have a need to modify certain files from backups that I have in 
 BackupPC.  My pool is compressed and I've found I can decompress single 
 files using BackupPC_zcat.  I can then modify those files as needed, 
 however I cannot figure out how to re-compress those modified files to 
 be put back into the pool.  Is there a tool available that can do that?  

no. It's not a common requirement to be able to modify files in backups.
Normally, a backup is intended to reflect the state a file system was in at
the time the backup was taken, not the state the file system *should have*
been in or the state *I'd like it* to have been in. I sure hope you have
legitimate reasons for doing this.

If you are modifying files, you'll need to think about several things.

* Do you want to modify every occurrence of a specific content (i.e. all
  files in all backups linked to one pool file) or only specific files,
  while other files continue to contain the unmodified content?

* If you are modifying every occurrence of a specific content, you'll either
  have to find out which files link to the pool file (hard, with a reasonably
  sized pool) or ensure you're updating the content without changing the inode
  (i.e. open the file for write, not delete and re-create it). If you do that,
  there is not much you can do for failure recovery. Your update had better
  succeed.

* Does your update change the partial file md5sum? If so, you'll need to move
  the pool file to its new name and location. Presuming the new content
  already exists, you should probably create a hash collision. That may be
  less efficient than linking to the target pool file, but it should be legal
  (when the maximum link count is exceeded, a second pool file with identical
  content is created; later on the link count on the first file may drop due
  to expiring backups), and it's certainly simpler than finding all the files
  linked to your modified pool file and re-linking them to the pre-existing
  pool file.

* If you're only changing individual files in a pc/ directory, the matter is
  far more simple. You'll need to take some code from the BackupPC sources
  for compressing anyway, so you might as well take the part that handles
  pooling as well (see BackupPC::PoolWrite and note that you'll be coding in
  Perl ;-).

 Is there a better way to go about modifying certain files within my 
 backups?

Once the contents you want to have in the files are in the pool (probably from
a recent backup), you can just figure out the pool files and link to them. If
you want that to be really easy, ask Jeffrey about his hpool directory ;-).

Regards,
Holger



Re: [BackupPC-users] ssh goes defunct but BackupPC is waiting for it

2011-07-03 Thread Holger Parplies
Hi,

Aleksey Tsalolikhin wrote on Fri, 1 Jul 2011 11:56:42 -0700:
 I just noticed one of my servers has not had a successful full backup
 for over 60 days.  Incremental backups still succeed.

have you only got a single full backup for that host, or did full backups work
up to some point in time, when they started failing? And why does BackupPC do
an incremental backup after a failed full backup? Shouldn't the full be
retried until it succeeds? That doesn't sound right.

Les Mikesell wrote on 2011-07-01 21:05:03 -0500 [Re: [BackupPC-users] ssh goes 
defunct but BackupPC is waiting for it]:
 On 7/1/11 6:53 PM, Aleksey Tsalolikhin wrote:
   On Fri, Jul 1, 2011 at 3:16 PM, Les Mikesell lesmikes...@gmail.com wrote:
  Is there a NAT router, stateful firewall, or similar device between them?
 
  Yes, there is!
 
  Backups with few changes can let the connection time out leaving both ends
  waiting for the other.

Would that lead to a zombie ssh process? See the ssh_config man page for how
to send keepalive messages (TCPKeepAlive or ServerAliveInterval) to keep your
firewall happy.
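
For example, in the backuppc user's ~/.ssh/config on the server (host pattern
and values are only an illustration):

Host client.example.com
    ServerAliveInterval 60      # send a keepalive after 60 seconds of inactivity
    ServerAliveCountMax 10      # give up after 10 unanswered keepalives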

What is your $Conf{RsyncClientCmd}, or rather, what commands show up in the
XferLOG files for incremental and for full backups? Have you looked at the
XferLOG for a failing full backup? Does it always seem to fail at the same
point?

  Interesting.  I'm doing a full backup... I don't think that would
  qualify as a backup
  with few changes?  I thought full backup means copy everything, hey?
 
 An rsync full means read everything at both ends but only send the
 differences. 
   There can be a lot of dead air in the process.

I'm not sure how that exchange happens. Is it one large list with all the
details at the start (i.e. file list + checksums for all files)? Does the
receiver really need to compute checksums for *all* files (with
--ignore-times), even those the sender side would send anyway due to changed
attributes? Wouldn't that mean reading those files twice? That would certainly
explain why checksum caching makes such a difference (maybe I should switch it
on ;-).

Regards,
Holger



[BackupPC-users] BackupFilesExclude syntax (was: Re: Yet another filesystem thread)

2011-07-03 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2011-07-03 20:18:07 -0400 [Re: [BackupPC-users] 
Yet another filesystem thread]:
 Holger Parplies wrote at about 19:31:14 +0200 on Sunday, July 3, 2011:
   While the comments in config.pl state
   
   # This can be set to a string, an array of strings, or, in the case
   # of multiple shares, a hash of strings or arrays.
   
   that is actually incorrect. A hash of strings makes no sense. In fact, Perl
   would turn your example into a hash with /proc and /pub as keys and
   /blaat and /tmp as respective values - certainly not what you want.
   [...]
 
 I think by hash of strings, the following is meant:
 $Conf {BackupFilesExclude} = { 'share1' => 'exclude-path1',
                                'share2' => 'exclude-path2',
                                ...
                              }
 
 This is just a simpler case of the hash of arrays that you illustrate
 below.

in fact, it's the same as what I meant to illustrate above, and in fact, it's
wrong.

Rereading the code for BackupPC::Lib::backupFileConfFix yet again, I'm quite
sure it doesn't do anything meaningful for a hash of strings. For the case
of BackupFiles{Exclude,Only} already being a hash, the values are, in fact,
*not* promoted to arrays if they are scalars. Later on (at least in
Xfer::Rsync - I didn't check the other XferMethods), they are used as array
refs:

foreach my $file ( @{$conf->{BackupFilesExclude}{$t->{shareName}}} )

In the absence of 'strict refs', this appears to yield an empty array, which is
preferable to a syntax error, but effectively (and silently) behaves as if
the excludes (or includes) were not specified at all. So, yes, you *can*
specify a hash of strings, but, no, they won't be used as excludes (or
includes).

Before we get into a discussion whether this should be fixed, and because I
had already written this before I realized that it doesn't currently work this
way:

The point is that it's not a "hash of strings", it's a hash of key-value
pairs. "A hash of strings" suggests just what the original poster understood:
several strings of the same meaning - excludes, probably. I was first going to
say that it's actually a string of share names (with empty exclude lists,
which makes no sense), when I realized that that's not true. Unless you write
it with '=>' (which, in Perl, is simply a synonym for ','), it's anything but
obvious. Perl won't mind if you write

$Conf {BackupFilesExclude} = { 'share1',
                               'exclude-path1' => 'share2',
                               'exclude-path2', ...
                             };

but it's misleading, just as it is with commas only. The interpretation is
key-value pairs, so it should be stated explicitly that that's what you
[would] need to supply.

I wonder if the case of only a single exclude pattern per share for all shares
is common enough to warrant advocating its usage [or rather implementing it].
Of course, a mixed usage [would then] also work:

$Conf {BackupFilesExclude} = { 'share1' => 'exclude1',
                               'share2' => [ 'exclude2a', 'exclude2b' ],
                               'share3' => 'exclude3',
                             };

But who would write that or encourage others to do so? What's the point in
leaving out the '[]' here? Allowing one single array of excludes to apply to
all shares (array value) makes sense, and extending that to allow one single
exclude pattern to apply to all shares (scalar value) also. Both are
simplifications for people not familiar with Perl syntax. Allowing a single
hash of strings would not be a simplification (in my opinion, at least).
Well, your opinion may vary ;-).

 While I have not tried that syntax, I imagine that is what the
 documentation refers to.

I haven't tried it either, obviously.

 Of course, the wording is not terribly clear except maybe to those who
 already know what is going on (and understand perl)...

Well, it would seem to confuse even those ;-).

Either the wording or the implementation needs to be changed. My suggestion is
to change the documentation to:

# This can be set to a string, an array of strings, or, in the case
# of multiple shares, a hash of (string, array) pairs.

I can't see the need for supporting a hash of strings variant.

Regards,
Holger


Re: [BackupPC-users] Recompressing individual files in pool

2011-07-03 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2011-07-03 20:29:48 -0400 [Re: [BackupPC-users] 
Recompressing individual files in pool]:
 Holger Parplies wrote at about 20:10:20 +0200 on Sunday, July 3, 2011:
   Kelly Sauke wrote on 2011-07-01 09:21:28 -0500 [[BackupPC-users] 
 Recompressing individual files in pool]:
   * Does your update change the partial file md5sum? If so, you'll need to
 move the pool file to its new name and location. [...]
 
 Yes - unless you are just changing content between the first and last
 chunks (keeping the file size the same), the partial file md5sum will
 change.

not quite exact ;-), but close enough for this thread.

 That being said, while it is technically correct and advisable to
 rename the file with the correct partial file md5sum (including
 adjusting the suffix for potential collisions), it is not strictly
 necessary.

True, of course. You don't even really need the pool file. Actually, I'd
consider *removing* the pool file rather than leaving it in the pool with
an incorrect name (of course that includes adjusting collision chains).
Incorrectly named pool files just add work for BackupPC when trying to
match new content to existing pool files. Am I missing any side effects
or end cases? Statistics will be inaccurate?

 Another perhaps more important issue is that you really need to change
 the attrib file.

And here, again, you need to note that attrib files are also pooled, so you
can't simply change the attrib file in place.

Note that this also rules out simply modifying the pool file - you need all
the pc/ files linking to it, because you need to modify all their attrib files
(presuming you *are* changing the file size). But I somehow have the
impression you weren't going to do that anyway.

 The bottom line is that editing existing files is possible (and indeed
 I do a lot more 'messy' things in my BackupPC_deleteFile routine)
 *but* you need to think of all the side-effects and end cases to make
 sure you won't be messing anything else up.

And as you see, it can get quite complicated. We haven't even touched on
the matter of backup dependencies yet. Supposing you only want to change a
file for a specific (range of) backup(s), you may need to add the file with
its previous contents to the next backup after the one(s) you want to change.

Well, any of that may or may not apply to what you actually want to do. Could
you perhaps be a bit more specific?

Regards,
Holger



Re: [BackupPC-users] Backup to distant FTP Server

2011-06-23 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-06-23 14:20:57 -0500 [Re: [BackupPC-users] Backup to 
distant FTP Server]:
 On 6/23/2011 6:41 AM, Arnaud Forster wrote:
 
   I need to use backuppc with a distant FTP server as destination. So, I
   used curlftpfs to mount my FTP connection to a local folder and chose
   it as the root of my destination. [...]
 
 Are you saying you use a remote ftp based server as your backuppc 
 archive storage?  I wouldn't expect that to work at all unless it 
 supports hardlinks through the fuse interface - which doesn't seem possible.
 
 In any case, linux distributions usually have an automounter that will 
 mount filesystems on demand as they are accessed and unmount after a 
 timeout. [...]

... but only if they are unused!

That said, I believe $TopDir is the BackupPC daemon's working directory (hmm,
no, that doesn't seem to be true, but it usually has log files open under
$TopDir/log, though the location may be changed). In any case, BackupPC doesn't
really handle $TopDir disappearing too well, so you're bound to get into
trouble if it does (actually, I'd consider it a bug if anything emulating a
*file system* through FTP would not transparently reconnect if the server
disconnects - such a severe bug, in fact, that I'd consider the software
experimental at best and avoid it for any serious work).

In any case, you should verify your assumption that this is working fine.
* Are your link counts accurate, i.e. does pooling work, or do you in fact
  have N+1 independent copies of each file that should be pooled? (See the
  quick check sketched below.)
* I'd expect BackupPC to be *extremely slow* on an FTP-based pool. What might
  seem to work on an empty pool with a small test backup set will almost
  certainly degrade over time with a growing pool and hash collisions.
  BackupPC is optimized for fast pool access and potentially slow client
  access, not the other way around.
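
A quick plausibility check for the first point (assuming TopDir is
/var/lib/backuppc; a handful of per-host bookkeeping and log files legitimately
have a link count of 1, real backup content shouldn't):

find /var/lib/backuppc/pc -type f -links 1 | head
# any actual file *content* showing up here was never pooled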

Why are you using an FTP server as destination? How distant is it (i.e. how
fast or slow is the link)? What amount of data are you planning to back up?
What are you trying to protect against?

Chances are BackupPC is not the right tool for the task you have in mind, or
at least isn't being used in the correct way.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] wakeup command?

2011-06-23 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-06-23 16:19:52 -0500 [Re: [BackupPC-users] wakeup 
command?]:
 On 6/23/2011 3:59 PM, gregrwm wrote:
  is there a command that triggers the equivalent of a wakeup?  normally i
  only want 1 wakeup per day, yet for special circumstances i often find
  myself editing in a wakeup a couple minutes hence and triggering a reload.
 
 Normally you'd have moderately frequent wakeups where the actual 
 scheduling of the runs is controlled by other settings (which are 
 checked at each wakeup).  Is there some reason that is a problem?

there might be, depending on what you abuse your PingCmd etc. for ;-). And you
might want to allow the disk to spin down if it is unused most of the day.
This is also the most effective way to enforce a global blackout. If you
*never* want automatic backups to run during a particular part of the day,
there is no point in scheduling wakeups.

However, I find two or three wakeups more useful than just one. If that one
wakeup is missed (because IncrPeriod is a few minutes too long), you don't get
a backup that day (with only one wakeup). Also, BackupPC_nightly is run on the
first wakeup (i.e. first entry in $Conf{WakeupSchedule}, whatever time that
is), and I don't want that running in parallel with backups.

In any case, no, there is no command that triggers the equivalent of a wakeup,
but the part you are probably interested in - running backups which are due -
can be triggered with

BackupPC_serverMesg backup all

If it's the nightly cleanup you're interested in, that would be

BackupPC_serverMesg BackupPC_nightly run

In any case, BackupPC_serverMesg needs to be run as the backuppc user.
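
For example (the installation path is a guess - adjust it to wherever your
BackupPC binaries live):

su -s /bin/sh backuppc -c '/usr/share/backuppc/bin/BackupPC_serverMesg backup all'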
 
Regards,
Holger



Re: [BackupPC-users] Problems rsyncing WinXP

2011-06-20 Thread Holger Parplies
Hi,

Daniel Spannbauer wrote on 2011-06-20 17:27:54 +0200 [Re: [BackupPC-users] 
Problems rsyncing WinXP]:
 [775KB deleted]

775KB to a mailing list? Congratulations.

Regards,
Holger



Re: [BackupPC-users] Problem with the CGI interface

2011-06-17 Thread Holger Parplies
Hi,

Max León wrote on 2011-06-17 15:23:10 -0600 [[BackupPC-users] Problem with the 
CGI interface]:
 Hello,
 I have BackupPC-3.2.1 running under Centos 5.6, it has been working like a
 charm for the last couple of months.
 However we had a server crash yesterday, it came back and everything is good
 but the CGI interface that is showing as if it was a clean install.
 
 Can anyone point me where I can fix this?

yes, very simple. You need to mount the partition where you store your pool.
Best set it up to be automatically mounted on boot (/etc/fstab).
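
I.e. an /etc/fstab entry along these lines (device, mount point and file
system type are placeholders - use whatever your pool partition really is):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/lib/backuppc  ext4  defaults  0  2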

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] 18 hour incremental backups?

2011-06-10 Thread Holger Parplies
Hi,

Boris HUISGEN wrote on 2011-03-02 15:53:10 +0100 [Re: [BackupPC-users] 18 hour 
incremental backups?]:
 The compression is disabled (level 0 = no compression)

just for the archives: that's complete nonsense. What *should* be disabled
is top-posting. Level 0 means full backup, not compression disabled. And
disabled compression would usually not make the backups slower, not by orders
of magnitude in any case. In fact, if there are a lot of files that are not
yet to be found in the pool, disabling compression should make the backup run
*faster*.

 On 28/02/11 18:09, Rob Morin wrote:
  Hello all? I cannot seem to wonder why incremental backups would take 18
  hours?
  
  2011-02-26 23:00:10 incr backup started back to 2011-02-21 20:00:01
  (backup #0) for directory /etc
  2011-02-26 23:02:15 incr backup started back to 2011-02-21 20:00:01
  (backup #0) for directory /home
  2011-02-27 16:34:08 incr backup started back to 2011-02-21 20:00:01
  (backup #0) for directory /usr/local/src
  2011-02-27 16:35:41 incr backup started back to 2011-02-21 20:00:01
  (backup #0) for directory /var/lib/mysql/mysql_backup
  2011-02-27 18:01:46 incr backup 5 complete, 12825 files, 28192345844
  bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
  
  The home dir is under 50 gigs, and its incremental?

[I hope you've solved the problem in the mean time, but I thought I'd add
 something for the benefit of anyone finding this in the archives. If you
 found out what the problem was, you could share your results.]

Well, you see that it *is* the home dir that is taking 17.5 of 19 hours, and
you seem to be transferring 28 GB in total, so I'd guess there are a lot of
changes. In fact, the next incremental backup, based on the same full backup,
ran significantly faster. You could look into the XferLOG for that backup
to get an idea of what was actually happening (lots of small temporary
files?).

   Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days  Server Backup Path
   0        full  yes     0      2/21 20:00  202.1          6.7       /var/lib/backuppc/pc/d9.interhub.local/0
   [...]
   5        incr  no      1      2/26 23:00  1141.6         1.5       /var/lib/backuppc/pc/d9.interhub.local/5
   6        incr  no      1      2/27 23:00  566.9          0.5       /var/lib/backuppc/pc/d9.interhub.local/6

Still, the fact that your full backup is considerably faster does seem
strange. From the information you've given, I can't even begin to guess
what might be wrong.

Regards,
Holger



Re: [BackupPC-users] backuppc and Win7

2011-06-08 Thread Holger Parplies
Hi,

Tom Brown wrote on 2011-06-08 12:25:55 -0400 [Re: [BackupPC-users] backuppc and 
Win7]:
 The $Conf{RsyncdPasswd} below is an example. Use the password you assign to
 your backuppc user on the client PC. The password should be the same as in
 the c:\your_path\rsync.secrets file.

the stupid thing about top-posting (apart from it being extremely annoying)
is that it's not obvious to anyone except you what your point might be,
presuming you have one. Were you replying to some unquoted mail? Had you
just noticed that you forgot to erase your password and were trying to undo
that? Did the lack of direct response seem to prompt further explanation?
Did you just want to repeat a part of the thread you think we should all
read at least twice, and were ashamed of reposting it without adding
something arbitrary? Should I quote it again, so we can read it a third
time?

Just wondering ...

Regards,
Holger



Re: [BackupPC-users] Backup of VM images

2011-06-07 Thread Holger Parplies
Hi,

Boniforti Flavio wrote on 2011-06-07 11:00:24 +0200 [Re: [BackupPC-users] 
Backup of VM images]:
 [...]
 So I'm right when thinking that rsync *does* transfer only the bits of a
 file (no matter how big) which have changed, and *not* the whole file?

usually that's correct. Presuming rsync *can* determine which parts have
changed, and presuming these parts *can* be efficiently transferred. For
example, changing every second byte in a file obviously *won't* lead to a
reduction of transfer bandwidth by 50%. So it really depends on *how* your
files change.

 [...]
 Well, size is a critical parameter, because I can suppose that VM images
 are quite *big* files.
 But if the data transfer could be reduced by using rsync (over ssh of
 course), there's no problem because the initial transfer would be done
 by importing the VM images from a USB HDD. Therefore, only subsequent
 backups (rsyncs) would transfer data.
 
 What do you think?

First of all, you keep saying VM images, but you don't mention from which VM
product. Nobody says VM images are simple file based images of what the virtual
disk looks like. They're some opaque structure optimized for whatever the
individual VM product wants to handle efficiently (which is probably *not*
rsyncability). Black boxes, so to say. There are probably people on this list
who can tell you from experience how VMware virtual disks behave (or VirtualBox
or whatever), and it might even be very likely that they all behave in similar
ways (such as changing roughly the same amount of the virtual disk file for the
same amount of changes within the virtual machine), but there's really no
guarantee for that. You should try it out and see what happens in your case.

Secondly, you say that the images are already somewhere, and your
responsibility is simply to back them up. Hopefully, your client didn't have
the smart idea to also encrypt the images and simply forget to tell you.
Encryption would pretty much guarantee 0% rsync savings.

Thirdly, as long as things work as they are supposed to, you are probably
fine. But what if something malfunctions and, say, your client mistakenly
drops an empty (0 byte) file for an image one day (some partition may have
been full and an automated script didn't notice)? The backup of the 0-byte
file will be quite efficient, but I don't want to think about the next
backup. That may only be a problem if the 0-byte file actually lands in a
backup that is used as a reference backup, but it's an example meant to
illustrate that you *could* end up transferring the whole data set, and you
probably won't notice until it congests your links. Nothing will ever
malfunction? Ok, a virtual host is probably perfectly capable of actually
*changing* the complete virtual disk contents if directed to (system update,
encrypting the virtual host's file systems, file system defragmentation
utility, malicious clobbering of data by an intruder ...). rsync bandwidth
savings are a fine thing. Relying on them when you have no control over the
data you are transferring may not be wise, though.
And "within BackupPC" may not be the best place to handle problems. For
instance, if you first made a local copy of the images and then backed up
that *copy*, you could script just about any checks you want to, use bandwidth
limiting, abort transfers of single images that take too long, use a
specialized tool that handles your VM images more efficiently than rsync,
split your images after transferring ... it really depends on what guarantees
you are making, what constraints you want (or need) to apply, how much effort
you want to invest (and probably other things I've forgotten).
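
As a rough sketch of that idea (host name, paths and the bandwidth limit are
invented):

# stage the images locally, with a bandwidth cap and resumable transfers
rsync -a --inplace --partial --bwlimit=10000 \
      client:/var/lib/vm-images/ /srv/vm-stage/
# run whatever sanity checks you like here (sizes, image headers, ...),
# then let BackupPC back up /srv/vm-stage as its own host/share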

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Restore Files Newer Than Date

2011-06-07 Thread Holger Parplies
Hi,

Gene Cooper wrote on 2011-06-07 16:28:01 -0700 [[BackupPC-users] Restore Files 
Newer Than Date]:
 [...]
 I had a server fail today, but there was a full backup done last night. 
  It's many gigabytes over a WAN connection.
 
 I had a separate local-disk backup system, which I refer to as Level 1, 
 which I used to restore the server to 'the day before yesterday'.  But I 
 need to restore 'yesterday' from BackupPC over the WAN.
 [...]
 I can't help but think there is some clever command line that will do 
 this for me...

after writing a rather complicated reply I find myself wondering whether a
plain restore won't do just what you want, presuming the backup is configured
as an rsync(d) backup, which it almost certainly is. As you are using rsync as
the transfer method, you should be transferring only file deltas over the
WAN, though you'll probably be reading all files on both sides in the style of
a full backup.

Presuming that is, for some obscure reason, not the case, here are my original
thoughts:

If you've got enough space, you could do a local restore to a temporary
directory on the BackupPC server (or any other host on the BackupPC server's
local network) and then use rsync to transfer exactly the missing changes over
the WAN (remember the --delete options!). If you don't, you could restore only
the files changed after a certain time to a temporary directory on the BackupPC
server and then rsync that over (note that you won't be able to get rid of
files deleted yesterday, though, so you won't get *exactly* the state of the
last backup). That would be an invocation of BackupPC_tarCreate, piped into
tar with the '-N' option ('--newer=date'). If you don't have the disk space
even for that, you could play around with doing it on an sshfs mount of the
target host, though that will obviously lose any rsync savings for the files
you are restoring.
I don't know of any filter that would reduce a tar stream to only files newer
than a specific date (and remember, you want the deletions from yesterday,
too).

The first option [referring to the local restore + rsync] is both simpler and
less error-prone, so use that if [the plain restore doesn't do what you want
and] you have the space available. If you need help on the syntax of
BackupPC_tarCreate, feel free to ask.
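
In case it helps, the skeleton of the first option would be something like this
(run as the backuppc user; host name, share and paths are placeholders, and I
believe '-n -1' picks the most recent backup):

BackupPC_tarCreate -h myhost -n -1 -s /data . | tar -x -C /srv/restore-tmp
# then push only the differences over the WAN
rsync -aH --delete /srv/restore-tmp/ root@failed-server:/data/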

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] backuppc and Win7

2011-06-06 Thread Holger Parplies
Hi,

higuita wrote on 2011-06-05 22:10:28 +0100 [Re: [BackupPC-users] backuppc and 
Win7]:
 On Sun, 5 Jun 2011 13:15:16 -0700, ow...@netptc.net wrote:
  all other machines on the LAN) and pings work fine.  Backuppc fires off
   the error "Get fatal error during xfer (no files dumped for share C$)".
  Any suggestions?
 
   Yes... share the c drive... depending on your config, you might
 not have the c$ share enabled by default (or restricted by the firewall
 and network settings).

shouldn't that lead to a different error? As I read the source, "no files
dumped for share" is produced for a dump where no other error was detected,
yet no files were read - perhaps due to insufficient permissions to access
any files (oh, and it's "Got fatal error ...", so we know it's only an
approximation of the actual error message ;-).

More context from the XferLOG would probably help in determining the actual
problem. Are there error messages which BackupPC couldn't interpret leading
up to the "no files dumped ..."? Have you tried running the exact same
command BackupPC is trying from the command line? Can you connect to the
client machine with 'smbclient' (something like
'smbclient //host/C\$ -U backup-user', if 'backup-user' is what BackupPC is
using; see the man page for details) and list the contents of the share?

Regards,
Holger



Re: [BackupPC-users] ping failures: dhcp clients and different subnets

2011-06-01 Thread Holger Parplies
Hi,

sorry for the delay, I just found your mail in the depths of my mailbox.

Tobias Mayer wrote on 2011-05-19 10:50:49 +0200 [Re: [BackupPC-users] ping 
failures: dhcp clients and different subnets]:
 [...]
 Can anyone with knowledge of the code please have a look at this?

You don't really need any knowledge of the code.

 On 17.05.2011 20:17, Tobias Mayer wrote:
  [...]
  To circumvent this, i simply removed the else { from line 1718.

That most certainly breaks things. Whatever was in the "else" clause was meant
to happen under some circumstances. Now it only happens for your "if" or
"elsif" case.


I don't use DHCP clients myself, so I'd have to take a closer look, but I'm
rather confident that it's a configuration issue rather than a problem in the
code. The issue of what the DHCP flag in the hosts file means seems to be
rather easy to misunderstand. If I can find the time, I'll take a look and
follow up. Isn't there a wiki page on this topic? There should be ...

Regards,
Holger



Re: [BackupPC-users] How to Cancel a Restore

2011-06-01 Thread Holger Parplies
Hi,

Tyler J. Wagner wrote on 2011-06-01 16:29:47 +0100 [Re: [BackupPC-users] How to 
Cancel a Restore]:
 On Wed, 2011-06-01 at 11:20 -0400, Long V wrote:
  I would make it even more explicit by renaming it to Stop/Dequeue 
  Backup/Restore.
 
 I personally prefer Whatever you are doing, just stop. :)

actually, when a restore is running, there should be a second button,
"Stop Restore" (and if no restore is running, the button shouldn't be
there). What do you do if you concurrently have a backup *and* a
restore running? Stop both? Only the wrong one? ;-) If you're stopping
a restore, does the backoff apply?

Regards,
Holger



Re: [BackupPC-users] BlackoutPeriods seemingly ignored

2011-06-01 Thread Holger Parplies
Hi,

martin f krafft wrote on 2011-05-30 14:28:23 +0200 [Re: [BackupPC-users] 
BlackoutPeriods seemingly ignored]:
 also sprach martin f krafft madd...@madduck.net [2011.05.19.0925 +0200]:
  [...]
  It seems almost like BackupPC is just ignoring BlackoutPeriods, [...]
 
 To answer my own question:

sorry that you didn't get a reply.

 It is really such that BlackoutPeriods only work once and as long as
 BlackoutGoodCnt pings succeed in succession.

Correct. (Well, one bad ping won't end it, BlackoutBadPingLimit successive
bad pings will.)

 To get my desired behaviour, I set BlackoutGoodCnt to 0 globally and
 deleted all global BlackoutPeriods.

That's a good thing to keep in mind. Strangely, I can't remember anyone
suggesting that yet, even though it is rather obvious (and documented, as
I see).

Personally, I only back up servers automatically, so I simply set
WakeupSchedule to not interfere with normal operation, much like blackouts
are meant to work (which also turns out to be documented).

 The only disadvantage is that if a host never comes online during
 this period, it won't be backed up, but I can live with that.

With per-host blackouts it seems reasonable to assume that they are meant
as they were specified. For a global blackout it seems reasonable to assume
that it doesn't apply to hosts that are never (or seldom) online outside
the blackout window.

As I understand the concept of blackouts in BackupPC, I think it is meant to
automagically adapt backup strategy to avoid the (global) blackout window
(working hours) for all hosts that can be backed up outside this window.
That would be the global blackout case above.

Actually, I would modify your suggestion in that I'd set BlackoutGoodCnt to 0
*locally* in the pc config file, together with a local BlackoutPeriods
setting. In the global config, BlackoutGoodCnt should be != 0. That way, you
would get expected behaviour for both cases.
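
A rough sketch of that, assuming the Debian config layout and a host called
'laptop' (the hours and weekdays are examples, adjust to your own window):

cat >> /etc/backuppc/laptop.pl <<'EOF'
# per-host override: always honour this window, regardless of ping history
$Conf{BlackoutGoodCnt} = 0;
$Conf{BlackoutPeriods} = [
    { hourBegin => 8, hourEnd => 18,
      weekDays  => [1, 2, 3, 4, 5] },   # Mon-Fri working hours
];
EOF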

 It wouldn't hurt if BackupPC had some sort of MaxAgeBeforeIgnoringBlackout
 setting to force-ensure some backups.

If you don't have a backup yet, are you supposed to ignore the blackout window
or not? You can't really say "respect it for three days, then ignore it if
there is still no backup", at least not without defining a concept of when a
host was added to the BackupPC configuration (timestamp of the host.pl file?).
In any case, you have the email reminders that are supposed to alert you to
the fact that there is no recent backup (or no backup at all). If such a case
happens, you will need to consider whether your blackout window needs
adjustment anyway. Automatically degrading from one backup per IncrPeriod to
one backup per MaxAgeBefore... doesn't seem like a sufficient resolution.

Regards,
Holger



Re: [BackupPC-users] moving temporarily PC and pool to an external drive

2011-06-01 Thread Holger Parplies
Hi,

Philippe Rousselot wrote on 2011-06-01 19:47:44 +0200 [[BackupPC-users] moving 
temporarily PC and pool to an external drive]:
 [...]
 My hard drives are full and need to add one to have a new and bigger 
 /var partition.
 
 In the mean time I would like to continue  backing up things using an 
 external drive

you could simply vgextend your VG onto the external drive. Once you get your
new internal drive, you can pvmove the data from the external drive to the
internal one(s). If external means USB, you'll probably want to be
careful, though, because that isn't exactly famous for its stability. But
that warning applies *in any case*.
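
For the record, a rough sketch of that route ('vg0', the LV name and the
device names are examples; grow the file system with the tool matching it):

pvcreate /dev/sdc1                    # the external disk
vgextend vg0 /dev/sdc1
lvextend -L +500G /dev/vg0/backuppc
resize2fs /dev/vg0/backuppc           # or xfs_growfs for an XFS pool FS
# later, once the new internal disk has been added to the VG the same way:
pvmove /dev/sdc1                      # migrate extents off the external disk
vgreduce vg0 /dev/sdc1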

Ah, you're not using LVM. Bad luck. When you get your new disk, you should
consider changing that. In fact, if your external disk is large enough (larger
than your current pool partition), and you have a bit of time, you could even
consider doing that now. Then again, if your external disk really was large
enough, you would probably just 'dd' and resize and not try anything
complicated.

Before we go on, how much time are we talking about? When I think of getting
a new disk, I think of a few hours - 3 days at most, if there's a weekend in
the way. Is setting up an interim solution really worth the trouble?

 things are actually archived using standard path  /var/lib/backuppc/pc/ 
 and /var/lib/backuppc/pool to the external drive

I don't understand what you mean by that.

 [...]
 I want to replace /var/lib/backuppc/pc/localhost  and 
 /var/lib/backuppc/pool by /media/USBDISK/pool and 
 /media/USBDISK/pc/localhost

Nope. You want to mount your USBDISK on /var/lib/backuppc ... presuming you
trust usb-storage enough to proceed. Well, you probably don't have much to
lose.
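
A rough sketch, assuming a Debian-style layout ('/dev/sdc1', the service name
and the TopDir path are examples):

/etc/init.d/backuppc stop
mount /dev/sdc1 /var/lib/backuppc
mkdir -p /var/lib/backuppc/pc /var/lib/backuppc/pool /var/lib/backuppc/cpool
chown -R backuppc:backuppc /var/lib/backuppc
/etc/init.d/backuppc start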

The problem is really that you'll start from scratch. No reference backups
(the next backup will be a full for all hosts, transferring all data without
any rsync savings), no history, no good way to combine that with your present
backup history onto the new disk, no pool files (meaning everything will need
to be compressed and written to disk, rather than comparing with present files
read from disk).

You *could* try copying part of your present pool FS onto the external disk,
but that really depends on a lot of things (size, xferMethods, reference
backup (is the last backup a full?), ...). You most likely won't get anything
that will do more than back up files for a few days and be slightly
inconsistent (pooling probably won't work correctly, but restores should be
fine), but you'll need to do something like this if you have large backup sets
and slow links involved.

In any case, consider that as a temporary safeguard against data loss only,
not as something you'll keep when you move to your new disk.


Due to the way BackupPC (pooling, in particular) works, you *can't* move just
one part of the pool FS to another file system. If you try just symlinking
pc/localhost and pool/ to the external disk, you'll only end up messing up
pooling for the other hosts you are backing up.

If you need more help, you'll need to explain in much more detail what you
want to achieve (rather than what steps you are considering), but don't
hesitate to ask.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] How to Cancel a Restore

2011-06-01 Thread Holger Parplies
Hi,

Tyler J. Wagner wrote on 2011-06-01 17:44:47 +0100 [Re: [BackupPC-users] How to 
Cancel a Restore]:
 On Wed, 2011-06-01 at 18:09 +0200, Holger Parplies wrote:
 [...]
   What do you do if you concurrently have a backup *and* a
  restore running? Stop both? Only the wrong one? ;-)
 
 It stops everything. Uh, I think. Who wants to break a backup and find
 out?

well, I don't. I just wanted to point out that one button is not enough,
because you might want to stop either one, but probably not both.

Regards,
Holger



Re: [BackupPC-users] Rsync vs. tar, full vs. incremental

2011-05-31 Thread Holger Parplies
Hi,

Pavel Hofman wrote on 2011-05-31 15:24:56 +0200 [[BackupPC-users] Rsync vs. 
tar, full vs. incremental]:
 Incremental backup of a linux machine using tar (i.e. only files newer
 than...) is several times faster than using rsync.

that could be because it is missing files that rsync catches. Or perhaps I
should rather say: yes, tar is probably more efficient, but it is less exact
than rsync, because it only has one single timestamp to go by, whereas rsync
has a full file list with attributes for all files. One very real consequence
is that tar *cannot* detect deleted files in incremental backups while rsync
will.
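
To illustrate with the underlying idioms (these are not the exact commands
BackupPC runs; the paths and the date are examples):

# tar-style incremental: everything is reduced to one cutoff timestamp
tar -cf /tmp/incr.tar --newer-mtime='2011-05-30 01:00' /home
# rsync-style incremental: the full file list is compared against a reference
# copy, so deletions and old-mtime files that (re)appeared are both detected
rsync -a --delete /home/ /backup/reference/home/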

My understanding is that the concept of incremental backups, way back in the
days when we did backups to tapes, was introduced simply to make daily backups
feasible at all. Something along the lines of "it's not great, but it's the
best we can do, and it's good enough to be worthwhile".

Nowadays, incremental backups still have their benefits, but we really need
to shake the habit of making compromises for no better reason than that we
haven't yet realized that there is an alternative.

If you determine that incremental tar backups are good enough for you (e.g.
because the cases it doesn't catch don't happen in your backup set), or that
your server load forces you to make a compromise, then that's fine. But if
it's only "tar is faster than rsync" and "faster is better", then you should
ask yourself why you are doing backups at all (no backups is an even faster
option).

 On the other hand, full backup using tar transfers huge amount of data over
 network, way more than the efficient rsync.

There are also other factors to consider like CPU usage. Where exactly is your
bottleneck?

 Is there a way to use rsync for full backup and tar for the incremental
 runs?

No. Actually, *the other way around*, it would make sense: full backups with
tar (probably faster than rsync over a fast local network - depending on your
backup set) and incremental backups with rsync (almost as exact as a full
backup).

 I do not even know whether the two transfer modes formats produce
 mutually compatible data in the pool.

No. There is (or was?) a slight difference in the attribute files, leading to
retransmission of all files on the first rsync run after a tar run (because
RsyncP thinks the file type has changed from something to plain file).
The rest is, of course, compatible. It would be a shame if pooling wouldn't
work between tar and rsync backups, wouldn't it? :)

Regards,
Holger



Re: [BackupPC-users] hard links again

2011-05-31 Thread Holger Parplies
Hi,

Scott wrote on 2011-05-31 19:57:11 -0400 [Re: [BackupPC-users] hard links 
again]:
 I noticed the backup data appears to be stored in the pc directory.   Does
 that mean the hardlinks are in the pool directory and not in the pc
 directory?

ah, you don't understand what hardlinks are. Hardlinks are simply different
names (directory entries) pointing to the same content (inode, file). They're
indistinguishable from each other. There's no primary name, they are all
equal. The content you access is identical, no matter which of the names you
use.
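
A quick way to see that for yourself, in any scratch directory:

echo hello > name-one
ln name-one name-two        # second directory entry for the same inode
ls -li name-one name-two    # same inode number, link count 2
rm name-one                 # the content survives while at least one name remains
cat name-two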

If you need more explanation, google is your friend.

 So would it be possible to store the pc directory in a different directory
 (in my flexraid pool) but store the pool directory in a normal ext file
 system?   What about the cpool ?

No. Hardlinks can obviously only work within one file system.

Regards,
Holger



Re: [BackupPC-users] Best FS for BackupPC

2011-05-26 Thread Holger Parplies
Hi,

Carl Wilhelm Soderstrom wrote on 2011-05-26 06:05:48 -0500 [Re: 
[BackupPC-users] Best FS for BackupPC]:
 On 05/26 12:20 , Adam Goryachev wrote:
  BTW, specifically related to backuppc, many years ago, reiserfsck was
  perfect as it doesn't have any concept or limit on 'inodes'... Same for
  mail and news (nntp) servers. Do XFS/JFS have this feature? I'll look
  into these things another day, when I have some time :)
 
 There are indeed 'inodes' listed in the 'df -i' output of XFS filesystems.
 However, I've never heard of anyone hitting the inode limit on XFS, unlike
 ext3.

of course XFS *has* inodes, and I wondered about the 'df -i' output, too, when
I tried it yesterday. I don't remember reiserfs giving any meaningful
information for 'df -i' ... nope, '0 0 0 -'. I sincerely hope that XFS doesn't
have *static inode allocation*, meaning I have to choose the number of inodes
at file system creation time and waste any space I reserve for them but do not
turn out to need. That was one main concern when choosing my pool FS.
Actually, mkfs.xfs(8) explains a parameter '-i maxpct=value':

This  specifies  the  maximum percentage of space in
the filesystem that can be allocated to inodes.  The
default  value  is 25% for filesystems under 1TB, 5%
for filesystems under 50TB and  1%  for  filesystems
over 50TB.

The further explanation says this is achieved by the data block allocator
avoiding lower blocks, which are needed for obtaining 32-bit inode numbers.
It leaves two questions unanswered (to me, at least):

1.) Is this a hard limit, or will inodes continue to be allocated in excess
of this percentage, (a) if more space happens to be free in the lower
blocks, or (b) generating inode numbers exceeding 32 bits, provided the
kernel supports them (probably only 64-bit kernels)?
2.) Will the data block allocator use these blocks up once no other blocks
are available any more, or is your XFS full, even though you've got
another 249GB(!) free on your 1TB FS, that are reserved for inodes?

The answer to (2) is most likely the data block allocator will use them,
because the man page goes on:

Setting the value to 0 means that essentially all of
the filesystem can become inode blocks,  subject  to
inode32 restrictions.

(however, it could be a special case for the value 0). In fact, the very
concept of allocating inodes rather than reserving fixed blocks for them
strongly suggests some flexibility in deciding how much space will be used
for them and how much for data.
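
For reference, a quick way to play with that (the device name and mount point
are examples, and mkfs.xfs destroys whatever is on the device):

mkfs.xfs -i maxpct=5 /dev/sdb1      # cap the space usable for inodes at 5%
mount /dev/sdb1 /mnt/test
df -i /mnt/test                     # the inode figures XFS reports are derived, not preallocated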

In any case, the default percentage seems to allow for far more inodes than
with ext[23], which might explain why you hit the boundary later (if at all).

Regards,
Holger



Re: [BackupPC-users] Best FS for BackupPC

2011-05-25 Thread Holger Parplies
Hi,

first of all, my personal experience with reiserfs is also that it lost a
complete pool FS (apparently, the cpool directory disappeared and was
re-created by BackupPC *several times* before I noticed the problem).
Rebuilding the tree obviously gave me a state that is next to impossible
to fix properly (lots of directories in lost+found named by inode - any
volunteers for finding out where and within which pc/ directory to put
them? ;-), let alone verify the results.

My decision was to move to a different FS. I didn't go the scientific way, I
just chose xfs, which apparently was a good choice - at least up to now.

So I certainly don't disagree with your results, but I do partly disagree with
your reasoning and interpretations.

Michael Stowe wrote on 2011-05-25 08:40:10 -0500 [Re: [BackupPC-users] Best FS 
for BackupPC]:
 [Adam wrote:]
  On 24/05/2011 11:25 PM, Michael Stowe wrote:
  [...] The high level results:
 
  jfs, xfs:  quick, stable
  reiserfs:  not stable
  ext4:  slow
  ext3:  very slow

While that is a nice summary, I, personally, wouldn't base any decisions
solely on a summary without having any idea how the results were obtained,
because the methods could be flawed or simply not take my main concerns into
account (e.g. if I have my BackupPC server on a UPS, power loss is not my
primary concern (though it may still be one); long term stability is). For
other people, speed may be vital, while the ability to survive a power failure
is not. You explain in a followup (see below) how you obtained your results.

  The "not stable" designation comes from power-off-during-write tests.
  [...]
 
  Just a couple of my own personal comments on reiserfs:
  1) It does usually handle random power-offs on both general servers and
  backuppc based servers.
 
 "Usually" doesn't really do it for me.

I believe that is exactly the point. You simply can't *test* whether a file
system handles *every* power-off case correctly. You can prove that it
doesn't, or you can find that you didn't manage to trigger any problems. So,
while I agree with "reiserfs does *not* handle power-offs sufficiently well",
I don't see it as *proven* that xfs/jfs/ext4/ext3 are any better. They might
be better, they might be worse. They are *probably* better, but that is just
speculation. Granted, I'd prefer an FS where I didn't manage to trigger any
problems over one where I did, too. Or one, where the majority of the
community seems to agree that it performs better. However, both choices are
based on experience, not on scientific results.

 The problem seems to be in the structure of the trees and the rebuild tree
 routines, which just grabs every block that looks like they're reiserfs
 tree blocks.

If that is the case, it is certainly problematic. What I also dislike is that
'reiserfsck --rebuild-tree' leaves your FS in an unusable state until it has
completed - let's hope it does complete. All other 'fsck' programs I can
remember having used seem to operate in an incremental way - fixing problems
without causing new ones (except maybe trivial wrong count type
inconsistencies), so they can [mostly] be interrupted without making the
situation worse than it was.

  3) I've used reiserfs on both file servers and backuppc servers for
  quite a long time [...] One backuppc server I used it with [...] did
  daily backups of about 5 servers with a total of 700G data. [...]
 
 There are plenty of things that run perfectly well when unstressed.

What is your understanding of unstressed?

  Perhaps in your testing you either didn't enable the correct journalling
  options, or found that particular corner case. Perhaps next time it
  happens jfs/xfs might hit their corner cases.
 
 This doesn't ring true nor does it match the results of my testing.  I
 didn't tune any file systems.

Perhaps you should have. The default options are not always suitable for
obtaining what you need. In what way doesn't "next time jfs/xfs might hit
their corner cases" match the results of your testing? As I said, I don't
believe you've proven that jfs/xfs don't *have* corner cases. You just didn't
expose any.

 You can speculate that xfs and jfs may contain the same flaws but some kind
 of blind luck kept them working properly, but it seems *really* unlikely.

The speculation is, that you didn't test the situations that xfs or jfs might
have problems with (and reiserfs might handle perfectly).

 Further, simply running a filesystem is not the same as testing and
 recovering it.  It's certainly possible to have run a FAT filesystem under
 Windows 3.1 for 20 years.  This doesn't make it a robust choice.

Certainly true. But all I can see here are different data points from
different people's *experience*. You're unlikely to experience running
*dozens* of FAT/Win3.1 file systems for 20 years, and if you do, it might
well be a robust choice *for your usage pattern*. That doesn't mean it
will work equally well with different usage patterns, or that if you suddenly

Re: [BackupPC-users] How to delete specific files from backups? (with BackupPC_deleteFile.pl)

2011-05-23 Thread Holger Parplies
Hi,

Nick Bright wrote on 2011-05-22 23:27:58 -0500 [Re: [BackupPC-users] How to 
delete specific files from backups? (with BackupPC_deleteFile.pl)]:
 On 5/22/2011 7:14 PM, Nick Bright wrote:
  Sounds to me like the BackupPC_deleteFile script is the way to go:
 [...]
 No matter what options I give it, it just won't delete anything.
 
 There is a complete void of examples, and there is no indication of what 
 valid inputs are for the arguments are in the documention, so I'm not 
 even sure if I'm doing it correctly.

Jeffrey? ;-)

 I've tried:
 
 BackupPC_deleteFile.pl -h hostname -n - -d 4 /var/log/maillog
 [...]
 
 But they all just give the same output - nothing deleted.

From that example, the general syntax of the BackupPC_* commands provided with
BackupPC, and the expectation that Jeffrey will follow their conventions,
I'd expect you need to provide a '-s sharename' argument and probably
the path within this share as either relative or absolute path. So, if you
have a share '/var', that would be

BackupPC_deleteFile.pl -h hostname -n - -d 4 -s /var /log/maillog

or possibly

BackupPC_deleteFile.pl -h hostname -n - -d 4 -s /var log/maillog

(don't know what '-d 4' does, though; aside from that, where did you get
'-n -' from? Maybe try '-n -1'?).

You probably have a share '/', so it would be '-s / /var/log/maillog' or
'-s / var/log/maillog' instead. It really depends on how you set your
backups up, because that determines how BackupPC stores them.

But I'm just guessing. I'd look at the script, but I don't really have the
time right now. Wait for an authoritative answer if you don't feel like
experimenting (but you probably do - you've already done so ;-).

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Archive without tarball - directly to the file system

2011-05-23 Thread Holger Parplies
Hallo,

samuel_w...@t-online.de wrote on 2011-05-23 11:44:33 +0200 [[BackupPC-users] 
Archive without tarball - directly to the file system]:
 htmlheadtitle/titlemeta http-equiv=Content-Type [...]

that's about as far as I got before I ignored the rest of the mail. Please
avoid sending HTML mail to mailing lists at all costs.

Thank you.

Gruss,
Holger



Re: [BackupPC-users] Archive without tarball - directly to the file system

2011-05-23 Thread Holger Parplies
I erroneously wrote to the list
Holger Parplies wrote on 2011-05-23 16:30:57 +0200 [Re: [BackupPC-users] 
Archive without tarball - directly to the file system]:
 Hallo,
 [...]

sorry, that was meant to be off-list. *kick my MUA*

Regards,
Holger



Re: [BackupPC-users] BackupPC on XFS getting lots of error -4 when calling...

2011-05-23 Thread Holger Parplies
Hi,

Doug Lytle wrote on 2011-05-21 15:49:57 -0400 [Re: [BackupPC-users] BackupPC on 
XFS getting lots of error -4 when calling...]:
 Holger Parplies wrote:
  Where do you get the impression there's anything wrong with the file system?
 
   2011-05-20 14:39:37 BackupPC_link got error -4 when calling 
 MakeFileLink
 
 That's the first thing that came to mind would be to run either an 
 xfs_check on the file system and or xfs_repair.

my point exactly. You shouldn't recommend the first thing that comes to mind
without further consideration whether it actually makes any sense - especially
if it is an operation with the potential to destroy a perfectly functional
file system. I'm not saying xfs_repair will do that, I'm just wary about
repairing file systems. Sometimes you have no other option, sometimes it's
successful. But the very *nature* of repairing file systems is looking for
inconsistencies and *guessing* what the intended state might be. Sometimes
that involves discarding data which might otherwise crash your OS (or the FS
driver), because there is no way to guess the correct meaning of the data.
Other times (like 'reiserfsck --rebuild-tree') you start a process which needs
to complete before you can access *any* data on the FS again. Most of the
time, you are strongly encouraged to make a backup copy of the whole partition
you are about to 'repair', in case the result is not to your satisfaction
(xfs_repair(8), in fact, does not; however, xfs_check(8) describes how to
save important data before repairing the filesystem with xfs_repair).

So, had you suggested xfs_check, that might have been pointless but harmless,
whereas xfs_repair is pointless and potentially harmful. Well, in my opinion,
at least.


For the archives:
If you suspect your XFS file system is corrupt,

1.) read the man pages xfs_check(8) and xfs_repair(8),
2.) then run xfs_check, and, only if this indicates you should,
3.) correct problems with xfsdump(8) and/or xfs_repair after possibly making
a backup copy of your file system (i.e. partition/volume).
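
A minimal sketch of that sequence (the device name and image path are
examples; the file system must be unmounted, and xfsdump would do in place
of dd):

umount /var/lib/backuppc
xfs_check /dev/sdb1                           # read-only consistency check
dd if=/dev/sdb1 of=/mnt/spare/sdb1.img bs=1M  # optional safety copy - needs the space
xfs_repair /dev/sdb1                          # only if xfs_check reported problems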


In the context of BackupPC, you also need to think about

1.) what caused the problem? Can you trust the physical disk for future
backups?
2.) was the repair action successful? How do you test that for a BackupPC
pool? Can you be sure no backup is missing files or contains incorrect
content? Is that important for you? Will you need to know at a later
point in time, when you access the backups, that they may be inaccurate?

The safe answer to both points is, start fresh on a new disk, but that is
obviously not always possible.


In the context of this thread, forget about the above. The problem is *not* a
media error, though starting with a fresh pool may be a good idea if you want
to get pooling right. Alternatively, look for Jeffrey's scripts for fixing
pool linking issues.

Regards,
Holger



Re: [BackupPC-users] BackupPC and MooseFS?

2011-05-23 Thread Holger Parplies
Hi,

Mike wrote on 2011-05-20 09:05:11 -0300 [Re: [BackupPC-users] BackupPC and 
MooseFS?]:
 [...]
 being able to say "I want to have 2 copies of anything in this directory
 and 3 copies of anything in this directory" is very nice. [...]

Les Mikesell wrote on 2011-05-23 11:12:18 -0500 [Re: [BackupPC-users] BackupPC 
and MooseFS?]:
 On 5/21/2011 7:24 PM, Scott wrote:
  But does moosefs basically duplicate the data, so if you have 2tb of
  backuppc data, you need a moosefs with 2tb of storage to duplicate the
  whole thing?
 
 Yes, it gives the effect of raid1 mirrors -

from what Mike wrote, shouldn't you be able to say "I want to have only one
copy of anything in some directories and two copies of everything else"?
With BackupPC, that doesn't seem to make any sense (but: see below) - why
would you want to replicate only part of the pool, and why only files that
happen to have a partial file md5sum starting with certain letters? You could
limit log files to a single copy, but is that enough data to even worry about?
This does bring up questions, though: how does it handle hardlinks, if you
determine numbers of copies by directory, i.e. how many copies do you get,
if a file is in one directory where you want three copies and in another
directory where you chose two copies?
If the answer is five copies, it won't work with BackupPC ;-).

Mike, can you test what it does with hard links, e.g. by creating a large file
with several links?

I'm just asking, because with normal UNIX file system usage patterns, you
could probably get away with cheating (and creating five copies) without
anyone complaining (or even noticing). Then again, the mechanism might be
totally different, like putting the number of copies in the inode (and
inheriting from the parent directory on file creation; presuming it *has* an
own inode and doesn't just use a different FS for local storage). If that is
the case, you could even conceivably have some hosts' data replicated X
times and other hosts' data Y times (e.g. Y=1) by tagging the appropriate pc/
directories accordingly. Only problem: *shared* data (file contents appearing
on hosts in both sets) would 'randomly' have X or Y copies, depending on which
set of hosts happened to contain the file first (but you could probably adjust
that later and watch all the unmet goals get resolved ;-).

 but if I understand it correctly the contents can be distributed across
 several machines instead of needing space for a full copy of even a single
 instance of the whole filesystem on any single machine or drive.

The way BackupPC works (heavily relying on fast read performance), I would
expect it to be important for performance to have a full copy of the file
system locally on the BackupPC server. Is there a way to enforce that?

Another consideration would be, how well does it handle a large backlog of
unmet goals? If you're replicating over a comparatively slow connection, you
might need to spread out updates to the mirror(s) over more time than your
backup window contains. Does a large backlog of unmet goals deplete system
memory needed for caching?

Mike wrote:
 I haven't tried backuppc on it yet, but storing mail in maildir folder
 works well, and virtual machine images work well.

Unfortunately, both of these examples don't resemble BackupPC's disk usage.
Virtual machine images are possibly high-bandwidth single large file
operations, maildir folders use many small files, but the bandwidth is
probably severely limited by your internet connection and MTA processing
(DNSBL lookups, sender verification, Spamassassin, ...). Reading mail is
limited by your POP or IMAP server's processing speed (well, or NFS). And
all of that only happens if there is actually incoming mail or users
checking their mailbox, which you probably don't have at a sustained high
rate for longer periods of time.

While BackupPC's performance may also be limited by link bandwidth or
client speed, from what I read on this list, server disk performance seems
to be the most important limiting factor.

So, while your results are encouraging, we still simply need to try it out,
unless we can establish a reason why it won't work. For any meaningful
results, it would be best to have an alternate BackupPC server with
conventional storage (and comparable hardware) backing up the same clients
(but not at the same time) to compare backup performance with.

Regards,
Holger


Re: [BackupPC-users] No ping respons on localhost after ubuntu upgrade

2011-05-23 Thread Holger Parplies
Hi,

Brycie wrote on 2011-05-23 07:24:50 -0700 [[BackupPC-users] No ping respons on 
localhost after ubuntu upgrade]:
 I had exactly the same problem,

I suppose you're referring to something?

 +--
 |This was sent by br...@bdrm.org.uk via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--

Ah, yes. I see where the problem is.

 [...]
 It appears that BackupPC under Ubuntu 11.04 has started to use IPv6 when
 pinging localhost.

It's certainly interesting to know that the cause is a brainless bug in the
Ubuntu backuppc package. Aside from that, it's not really on-topic here.

 The solution is to simply add the following to config.pl: 
 
 $Conf{Ping6Path} = '/bin/ping6'; undefined

Well,
1.) Unquoted string undefined may clash with future reserved word
2.) Useless use of a constant in void context
3.) Unless that's the last non comment line in config.pl, you should even
get a syntax error.
4.) It's not a solution, it's a workaround. For localhost, testing whether
it is up before proceeding is somewhat optional. Experience indicates
that it usually is. For other hosts, you need to use whatever the
transport will (strictly, you should use TCP syn probes to the correct
port). There's not much point in requiring a working IPv6 path to the
client host if ssh is going to use (or would fall back to using) an
IPv4 connection.

Regards,
Holger



Re: [BackupPC-users] BackupPC on XFS getting lots of error -4 when calling...

2011-05-21 Thread Holger Parplies
Hi,

djsmiley2k wrote on 2011-05-20 07:13:51 -0700 [[BackupPC-users] BackupPC on XFS 
getting lots of error -4 when calling...]:
 
 Not sure exactly how to post logs correctly so I'm gonna go ahead with
 the straight paste.

generally, it's a good idea to mention what your problem is first. Nobody
likes trying to guess your issue from what your log file says. How about
simply putting

 [My] backups appear intact and correct. However I [am getting an error
 which, I believe,] means the pool isn't being used correctly?

before the log file quote?

 2011-05-20 14:39:37 BackupPC_link got error -4 when calling 
 MakeFileLink(/storage/backuppc/pc/testy/1/fdocs/fuser/fFavorites/fWindows 
 Live/attrib, d2f31b93ecf17a126ae44a4ea6cb750f, 1)
 2011-05-20 14:39:37 BackupPC_link got error -4 when calling 
 MakeFileLink(/storage/backuppc/pc/testy/1/fdocs/fuser/fLinks/attrib, 
 63875eaa9bf67c6925803d25ed7931ad, 1)

 [...]
 Mounting infomation:
 /dev/sdb1 on /storage type xfs (rw,nobarrier)
 
 And config options:
 $Conf{TopDir} = '/storage/backuppc';

Ah, you didn't read the documentation, right? There *are* recent versions
which actually support this, but likely you're not using one of those.
If you aren't, you should look at

http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory

 I've googled and read all sorts about there being issues with xfs,

Really? I've been following the list for quite some time, and I don't recall
anything of this sort, while there have been hundreds of threads dealing with
problems with changing TopDir. Actually, just googling for BackupPC_link got
error -4 will point you in the right direction if you read through the first
few hits.

Doug Lytle wrote on 2011-05-20 11:52:31 -0400 [Re: [BackupPC-users] BackupPC on 
XFS getting lots of error -4 when calling...]:
 [...]
 I'd do the following (Note, I have the backuppc script in my path)

 backuppc stop

Do you also have invoke-rc.d in your path ? ;-)

 [...]
 xfs_repair /dev/sdb1

Where do you get the impression there's anything wrong with the file system?
Reading and creating files works, linking consistently fails, so the FS is
corrupt? I don't know how good xfs_repair is, but I'd be *very* careful about
*repairing* any FS I'm not absolutely sure is actually corrupt. 'fsck' should
be a safe start, though it probably won't do anything on an XFS the FS driver
hasn't detected any corruption on.

Regards,
Holger



Re: [BackupPC-users] Best NAS device to run BackupPC ?

2011-05-18 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2011-05-18 02:41:15 -0400 [Re: [BackupPC-users] 
Best NAS device to run BackupPC ?]:
 [...]
 let's end this thread...

great idea, I second that!

Timothy's point, as I understand it, was that if you *do* have buggy NFS
implementations, you have the choice of either putting time into fixing that
or putting time into implementing a different option (iSCSI, ATAoE), and that
not everyone enjoys the challenge of hacking a proprietary NAS device. You
do, and that's fine. I wouldn't, but I spend time on other topics where I
*know* there's a simple solution or workaround readily available, but I
prefer to go the complicated way - just for fun or for good reasons.

Your experience seems to indicate that NAS devices occasionally *do* have
buggy NFS implementations and other pitfalls, right? ;-)

I believe many if not all options have been pointed out, so let's either find
something we *really* disagree on or move on ...

Regards,
Holger



Re: [BackupPC-users] How to know backups stored or not

2011-05-18 Thread Holger Parplies
Hi,

Audi Narayana Reddy wrote on 2011-05-18 18:49:54 +0530 [Re: [BackupPC-users] 
How to know backups stored or not]:
 Thanks for replay

is this a replay attack?

 but  I have some problem like while i am going to this path
 bash: cd: /var/lib/backuppc/: Permission denied

Please note: not all questions are on-topic on this list, just because you
happen to have BackupPC installed on your computer.

You need a basic understanding of using a Linux computer. Teaching you that is
beyond the scope (and aim) of this list. I'm sure there are lots of helpful
resources to be found. If not, start by reading manual pages (try 'man man').

 while i create a new host i have problem
 
 session setup failed: NT_STATUS_LOGON_FAILURE

You also need a basic understanding of using a Windoze computer (if that's
what you're backing up). Hint: you're not using a user name/password
combination the computer knows.

 Backup Summary
 [...]

Is that just a random quote, or was there a point to that?

Hope that helps [us].

Regards,
Holger



Re: [BackupPC-users] Smarter CGI

2011-05-17 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2011-05-17 12:05:15 -0400 [Re: [BackupPC-users] Smarter 
CGI]:
 On 5/17/2011 9:11 AM, Steven Johnson wrote:
  Greetings,
  I would like to add some automation to the CGI interface. Firstly, in
  the Xfer config screen for Rsyncd or SMB hosts, I would like to make the
  Rsync/SmbShareName fields smarter so that when you click add you get a
  selection box or dropdown menu of all available rsync/SMB shares. It will
  use the username, password (and port for rsync) info to contact the host
  for a list of shares (for rsync, those that are configured to list in
  the rsync.conf file) Has anyone done something like this? is this even
  possible?
 
 Should be possible for Samba, not sure about rsync.

while I agree that this would be a nice feature for users of the GUI, you need
to keep several things in mind.

1.) The web server is not necessarily the host running BackupPC. In most
cases it is (I presume), but it's not required. The web server may not
have access to the backup clients. Even the BackupPC server may not
have access to the backup clients without running the PreDump command
(which might set up an ssh tunnel, VPN, etc.), and running the PreDump
command is not generally a good idea when you're not about to do a
backup, because it might do anything (create database dumps ...). The
point is, BackupPC allows quite complex scenarios, and some people use
them.

So, it might work in some cases, but it won't work in all cases. That
means, if it doesn't work, you can't use the GUI for configuration.

On the other hand, administrators of complex setups are probably more
likely to use their favorite vi clone for configuring BackupPC anyway ;-).

2.) You're letting a web access initiate a command that needs to be run as
the backuppc user (access to the ssh identities). You need to be *very*
careful about how you do that, lest you open up security holes. You might
think of the simple cases (e.g. someone passing a host name like

host; rm -rf /var/lib/backuppc; echo 

as parameter), but which cases won't you think of?

3.) People might want (or need) to set up the backup configuration *before*
configuring the client hosts. If you have a dropdown menu, you can't
enter values the menu doesn't contain.
I'm not sure if that's possible with rsyncd/smb, but if it is, you might
also want to use subdirectories of module names as BackupPC share name.
That would also not be possible.

Of course, if you can code up some smart completion suggestion mechanism
(like you have, for example, on the google web page search input field),
that would allow inputting any arbitrary value while giving you sensible
suggestions (when available). But please test that on all browsers ;-).

4.) What if the client host is offline when you are changing the
configuration?

5.) What if the connection to the client host is extremely slow or expensive
to set up (dial on demand)? Do you need to wait - possibly encountering
a browser timeout - for your configuration page, even if you don't *want*
to change a relevant setting? (Sorry, I don't use the GUI, that last part
may not apply.)

6.) You need to make sure that all relevant parts of the configuration have
been filled out before (e.g. ClientNameAlias, SmbClientFullCmd - from
which you would need to figure out how to list the shares; there's a
reason this setting is configurable in BackupPC).

All in all, I'd prefer a new (per-host) configuration variable GuiListShares
which would point to a script which could do *anything* to figure out the list
of share names (e.g. print static strings to stdout). Leaving this setting
empty (or even the command not returning any share names) would give you a
text input field like now. Invocation of the command should be via the
BackupPC server (BackupPC_serverMesg), though I'm not sure serverMesgs can
be completed asynchronously (well, the child process running the command
could be set up with stdout pointing to the socket; the server process
could simply close its descriptor and forget about the matter; not much
error logging in this case, though, and it wouldn't really fit in with
the existing serverMesg commands).

Of course this would mean that before configuring the shares, you'd have to
configure how to get a list of shares, which might, to a certain extent,
defeat the purpose. But if you have a global script
list_shares.pl host xferMethod and a global config setting, then you
would achieve what you are suggesting: automation. A version of this script
for simple, normal cases could even be distributed with BackupPC or through
the wiki. And, you wouldn't be limited to certain xferMethods, presuming you
can figure out share names for the other types (maybe mount points on the
client host?).

  Lastly, for various fields throughout the CGI, I would like to make them
  drop down menu's with 

Re: [BackupPC-users] No ping respons on localhost after ubuntu upgrade

2011-05-17 Thread Holger Parplies
Hi,

Magnus Larsson wrote on 2011-05-17 23:08:58 +0200 [Re: [BackupPC-users] No ping 
respons on localhost after ubuntu upgrade]:
 [context dropped as I have not yet implemented un-toppost.pl]
 localhost   0   backuppc
 
 Then there is a localhost.pl.

regardless of why it doesn't work (which does sound weird), there is no
benefit to pinging localhost. You couldn't even *try* to ping it if it were
down. In localhost.pl, try

$Conf {PingCmd} = '{sub {0}}';

which is a simple way of saying

$Conf {PingCmd} = '/bin/true';

except that it doesn't need a child process to figure out the value of true
- Perl is perfectly capable of doing that itself without the result being in
any way inferior.

If your problem persists (or changes in nature), maybe at least we'll get a
better indication of the cause (error message). In that case, please give
more detail on your localhost.pl file and what your log files say.

Regards,
Holger



Re: [BackupPC-users] Exclude files from share and its subfolders

2011-05-14 Thread Holger Parplies
Hi,

Steven Johnson wrote on 2011-05-14 04:57:18 -0400 [[BackupPC-users] Exclude 
files from share and its subfolders]:
 Greetings,
 I've excluded a few files and folders from being backed up but i would
 like to exclude certain files (e.g. *.bak files and /.AppleDB folders
 from all subfolders under the main share. I'm using Rsyncd, btw. Thanks
 in advance.

well, why don't you do that?

Assuming your statement was meant as a question and I'm guessing the correct
question(s), you should either

1.) look at the rsync man page for the syntax of rsync excludes (BackupPC
uses the excludes verbatim and in the order you specify them, in case
you were wondering),
2.) look at the main config file for how to make excludes apply to a certain
share, or
3.) rest assured that that is possible.
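
Expanding on points 1.) and 2.), a sketch (the config path is as on Debian;
'myhost' and the rsyncd module name 'share' are placeholders, and the
patterns are handed to rsync verbatim, in this order):

cat >> /etc/backuppc/myhost.pl <<'EOF'
$Conf{BackupFilesExclude} = {
    'share' => [
        '*.bak',      # *.bak files in any directory below the share
        '.AppleDB',   # .AppleDB in any directory; a leading / would anchor it to the share root
    ],
};
EOF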

I *could* quote the man page and config file verbatim, but that's unlikely
to help you. If you don't understand part of the explanation in these
sources, you're going to have to be more specific about what you don't
understand. You're unlikely to find anyone here prepared to rephrase both
of these documents in the hope of maybe making one point more clear for
you.

Hope that helps.

Regards,
Holger

P.S.: I've got a spare ) and you seem to be missing one. Take mine.



Re: [BackupPC-users] Backup retention and deletion.

2011-05-13 Thread Holger Parplies
Hi,

Robin Lee Powell wrote on 2011-05-13 14:36:14 -0700 [Re: [BackupPC-users] 
Backup retention and deletion.]:
 On Fri, May 13, 2011 at 10:47:27PM +0200, Matthias Meyer wrote:
  Robin Lee Powell wrote:
   [...]
   My questions:
   
   1.  When and how do deletions of the excess occur?

read the comments in config.pl or the source of BackupPC_dump if you want the
details.

 [snip]
   $Conf{FullKeepCnt} = '25';
   $Conf{FullKeepCntMin} = '5';
   $Conf{FullAgeMax} = '60';
 [snip]
 
 I want it to delete all the fulls older than 60 days.
 FullKeepCntMin is only 5, so if I have more than 5 fulls (which I
 do), then everything older than FullAgeMax should be deleted, no?

No.

config.pl commented on 2010-07-31 19:52 [BackupPC 3.2.0]:
# Note that $Conf{FullAgeMax} will be increased to $Conf{FullKeepCnt}
# times $Conf{FullPeriod} if $Conf{FullKeepCnt} specifies enough
# full backups to exceed $Conf{FullAgeMax}.

Why don't you simply set FullKeepCnt to the number of backups you want to keep
(which appears to be about 16)?
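
In other words, something like this in the per-host config file (the path is
as on Debian, and the numbers are examples):

cat >> /etc/backuppc/thathost.pl <<'EOF'
$Conf{FullKeepCnt}    = 16;   # keep the newest 16 fulls
$Conf{FullKeepCntMin} = 5;
$Conf{FullAgeMax}     = 120;  # with FullPeriod ~7 days, 16 fulls span ~112 days anyway
EOF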

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] tree connect failed: NT_STATUS_ACCESS_DENIED

2011-05-12 Thread Holger Parplies
Hi,

first of all, please avoid HTML mail to mailing lists at all costs. Thank you.

The OP wrote:
   thanks for the reply, seems to be the latter as I can easily connect
   to the smb host with other windows and mac machines.

Yes, but you don't say as which user, which is the whole point.

   [...]
   I doubt its a credentials problem since I get a completely
   different error when I deliberately enter incorrect creds (LOGON 
   FAILURE).

Just to make what others have already written overclear:

NT_STATUS_LOGON_FAILURE = invalid user/pass combination (user does not exist
  or password incorrect)
NT_STATUS_ACCESS_DENIED = user/pass combination valid, but authenticated user
  has no access
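
You can reproduce the distinction with smbclient from any machine
('clienthost', the share and 'backup-user' are placeholders; the password is
prompted for):

smbclient -L clienthost -U backup-user      # wrong user/password => NT_STATUS_LOGON_FAILURE
smbclient //clienthost/C\$ -U backup-user   # valid credentials but no rights on the share
                                            # => tree connect failed: NT_STATUS_ACCESS_DENIED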

Though, according to what Timothy writes, the matter is more complicated.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] backuppc sauvegarde automatique comment on fait ?

2011-05-12 Thread Holger Parplies
Hi,

Tyler J. Wagner wrote on 2011-05-12 16:35:31 +0100 [Re: [BackupPC-users] 
backuppc sauvegarde automatique comment on fait ?]:
 On Thu, 2011-05-12 at 02:31 -0700, driss wrote:
  I have installed a Debian server with a BackupPC backup server to back up
  a Windows XP client, a Windows 7 client and the server itself.
  Manual backups, i.e. those started from the web interface, work perfectly,
  but the automatic backups do not ("les sauvegarde automatiques ne marche pas").
  I would like to know which parameters to change in the configuration file
  so that the automatic backups work.
  Note that I have set up a configuration file for each machine.
  Thanks for your help.
 
 What do you mean by "the automatic backup does not work"? Please give more
 details and error messages.

yes, and s'il vous plaît: in English if possible. While we're learning French,
shouldn't it be "les sauvegardes automatiques ne marchent pas"?

However, I guess you mean the backups simply aren't happening (i.e. no error
message to quote; but: check the log files - maybe there *is* an error
message). And you *could* quote your configuration. Then we might actually
be able to help you.

Regards,
Holger



Re: [BackupPC-users] FreeBSD port Broken?

2011-05-04 Thread Holger Parplies
Hi,

RYAN M. vAN GINNEKEN wrote on 2011-05-04 01:47:51 -0600 [Re: [BackupPC-users] 
FreeBSD port Broken?]:
 [...]
 Adjust the command line to work with FreeBSD not sure how to go about this?

just like any time you're looking at a command prompt and wondering what to
type.

 I presume editing the config.pl script any hints?

I presume you've done that before - or are you actually using the default
backup definition the package happened to provide?

 I checked that /usr/bin/tar exits

Is that 'exits' or 'exists'?

 and wounder why would the FreeBSD package maintainers not include a working
 configuration file, or include gnu tar as a dependency if it is required to
 work.

GNU tar is neither required, nor would it be a dependency if it were. You need
to understand what you're doing. You need to understand the difference between
your BackupPC server and the client(s) you want to back up. In *your* (not
necessarily typical) case, these two seem to be identical. Fine. You *can* use
BackupPC that way. In the general case, your clients are separate machines,
possibly with a different OS and any conceivable version of any toolkit
installed. In any case, they are outside the scope of dependencies of your
BackupPC package - and outside the scope of consideration of the packager(s).
They probably don't even know the *names* of these machines, so they clearly
can't provide a working configuration.
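
That said, if the immediate goal is just a working tar backup of this one
host, the knobs to check in config.pl are roughly these (the path is an
assumption - point it at whatever tar *you* decided to use, and make sure
$Conf{TarClientCmd}, $Conf{TarFullArgs} and $Conf{TarIncrArgs} match that
tar's dialect):

    $Conf{XferMethod}    = 'tar';
    $Conf{TarClientPath} = '/usr/local/bin/gtar';   # or /usr/bin/tar - your choice
    $Conf{TarShareName}  = ['/etc'];                # what *you* want backed up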


If you are really using the default configuration provided with the package,
you should learn two things from your experience:

1.) You need to think about what *you* want to do (i.e. back up), not blindly
use some *examples* someone included to get you started. I would consider
it a *feature* if such examples *do not* work out of the box.

2.) Apart from probably not doing what you require of them, examples may be
ill chosen. Doing a backup of the /etc directory on the local host with
'ssh -l root' is not recommendable - and the excludes certainly make no
sense at all.

 Don't get me wrong i love BSD, but why do they make everything so difficult
 for us.

For what, exactly, do you love BSD, if you find it so difficult? Or was that a
typo?

 I am sure it is just me as backuppc is in the ports so most must have it
 working right? 

Right.

 maybe i should forget tar and move on to rsync or something?

You are going to need a working rsync command line just the same (and a
supported version of rsync). That doesn't make anything easier - just
different.

 This is just the localhost wondering how difficult it will be to backup
 remote machines.  

Yes, they will need to be switched on, plugged into the network, etc. Lots of
things to think about.


Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Not able to use the backuppc account in freebsd

2011-05-03 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-05-03 15:14:08 -0500 [Re: [BackupPC-users] Not able 
to use the backuppc account in freebsd]:
 I'm surprised that sudo doesn't honor the user's shell.

really? It's not surprising at all. The semantics of sudo are "allow user Y
to execute /exact/path/to/some/command as user X, possibly without a
password", not "allow executing whatever /some/random/shell would do when you
type $some_random_string at its prompt". That is why sudo uses an execve()
system call rather than the user's login shell.

 su can override it with a command line option, but the issue that prompted
 this thread is that the linux/bsd flavors of su take different options.

As would be explained in the corresponding fine (tm) manual pages. I'm sure
the manual pages of 'sudo' would likewise explain whether there are
differences in its options on Linux vs. BSD (which, I'm guessing, there
probably aren't, but that's just a guess). So, the generic answer to the
original question is: RTFM, or, if that's too complicated, use a command
with identical syntax.
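
To illustrate the identical syntax point: run as root (or with a suitable
sudoers entry), the lines below work the same way on Linux and FreeBSD,
regardless of which login shell the backuppc user has (the commands are just
examples):

    sudo -u backuppc id
    sudo -u backuppc /bin/sh -c 'echo running as $(id -un)'

whereas the su incantations needed to work around a nologin shell differ
between the two - see the respective man pages, as above.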

Regards,
Holger

P.S.: I fully agree in advance with any responses.



Re: [BackupPC-users] RAID and offsite

2011-04-29 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-04-28 23:15:52 -0500 [Re: [BackupPC-users] RAID and 
offsite]:
 On 4/28/11 9:50 PM, Holger Parplies wrote:
  I'm sure that's a point where we'll all disagree with each other :-).
 
  Personally, I wouldn't use a common set of disks for normal backup operation
  and offsite backups. [...]
 
 I don't think there is anything predictable about disk failure. Handling
 them is probably bad.  Normal (even heavy) use doesn't seem to matter
 unless maybe they overheat.

well, age does matter at *some* point, as does heat. Unless you proactively
replace the disks before that point is reached, they will likely all be old
when the first one fails. Sure, if the first disk fails after a few months,
the others will likely be ok (though I've had a set of 15 identical disks
where about 10 failed within the first 2 years).

  [...] I think it brought up the *wrong* (i.e. faulty) disk of the mirror and
  failed on an fsck. [...]
 
 Grub doesn't know about raid and just happens to work with raid1 because it 
 treats the disk as a single drive.

What's more, grub doesn't know about fsck.

grub found and booted a kernel. The kernel then decided that its root FS on
/dev/md0 consisted of the wrong mirror (or maybe its LVM PV on /dev/md1;
probably both). grub and the BIOS have no part in that decision.

I can see that the remaining drive may fail to boot (which it didn't), but I
*can't* see why an array should be started in degraded mode on the *defective*
mirror when both are present.

 And back in IDE days, a drive failure usually locked the controller which
 might have had another drive on the same cable.

Totally unrelated, but yes. SATA in my case anyway.

  I *have* seen RAID members dropped from an array without understandable
  reasons, but, mostly, re-adding them simply worked [...]
 
 I've seen that too.  I think retries are much more aggressive on single
 disks or the last one left in a raid than on the mirror.

Yes, but a retry needs a read error first. Are retries on single disks always
logged or only on failure?

Or perhaps I should ask this: are retries uncommon enough to warrant failing
array members, yet common enough that a disk that has produced one can still
be trustworthy? How do you handle disks where you see that happen? Replace or
retry?

  [...] there are no guarantees your specific software/kernel/driver/hardware
  combination will not trigger some unknown (or unfixed ;-) bug.
 
 I had a machine with a couple of 4-year uptime runs (a red hat 7.3) where 
 several of the scsi drives failed and were hot-swapped and re-synced with no 
 surprises.  So unless something has broken in the software recently, I mostly 
 trust it.

You mean, your RH 7.3 machine had all software/kernel/driver/hardware
combinations that there are?

Like I said, I've seen (and heard of) strange occurrences, yet, like you, I
mostly trust the software, simply out of lack of choice. I *can't* verify its
correct operation; I could only try to reproduce incorrect operation, were I
to notice it. When something strange happens, I mostly attribute it to user
errors, bugs in file system code, hardware errors (memory or power supply).
RAID software errors are last on my mind. In any case, the benefits seem to
outweigh the doubts.

Yet there remain these few strange occurrences, which may or may not be
RAID-related. On average, every few thousand years, a CPU will randomly
compute an incorrect result for some operation for whatever reason. That is
unlikely enough that any single one of us is extremely unlikely to ever be
affected. But there are enough computers around that it does happen on a daily
basis. Most of the time, the effect is probably benign (random mouse movement,
one incorrect sample in an audio stream, another Windoze bluescreen, whatever).
It might as well be RAID weirdness in one case. Or the RAID weirdness may be
the result of an obscure bug. Complex software *does* contain bugs, you know.

  It *would* help to understand how RAID event counts and the Linux RAID
  implementation in general work. Has anyone got any pointers to good
  documentation?
 
 I've never seen it get this wrong when auto-assembling at reboot (and I move 
 disks around frequently and sometimes clone machines by splitting the mirrors 
 into different machines), but it shouldn't matter in the BPC scenario because 
 you are always manually telling it which partition to add to an already
 running array.

That doesn't exactly answer my question, but I'll take it as a "no, I don't".

Yes, I *did* mention that, I believe, but if your 2 TB resync doesn't complete
before a reboot/power failure, then you exactly *don't* have a rebuild
initiated by an 'mdadm --add'; after the reboot, you have an auto-assembly (I
also mentioned that). And, agreed, I've never *seen* it get this wrong when
auto-assembling at reboot either (well, except for once, but let's ignore
that).

My point is that auto-assembly normally takes two (or more) mirrors

Re: [BackupPC-users] RAID and offsite

2011-04-28 Thread Holger Parplies
Hi,

Michael Conner wrote on 2011-04-27 10:27:18 -0500 [Re: [BackupPC-users] RAID 
and offsite]:
 On Apr 26, 2011, at 12:08 PM, Les Mikesell wrote:
  On 4/26/2011 11:38 AM, Michael Conner wrote:
  [...]
  Someone used a RAID 1 setup but only put in the second disk periodically,
  then removed it for offsite storage. I have three 2T drives, so was
  considering something similar where I would keep a normal 2-disk RAID 1
  setup but periodically remove one disk and replace it with a prior
  offsite disk.

just to summarize what has been posted so far:

1.) Having an *additional* disk (i.e. 3-disk RAID 1 with 2 permanent and 1
offsite member) protects you against single disk failures during rebuild.
Other failures (software, hardware, controller, lightning, etc.) can still
do harm, so it is still not perfect, but I think there is no disagreement
on that the additional RAID member does add protection against one very
real failure scenario.

2.) You really need more than one offsite disk, if you are taking offsite
seriously. I.e. bringing the disk on-site, failing one RAID member, adding
the previous offsite disk, and then taking the new offsite disk off-site
will temporarily have all disks on-site. That may or may not be of concern
for you, but it is worth emphasizing.
On the other hand, first failing one RAID member, taking it off-site, then
bringing in the other disk and adding it, will leave you with a degraded
RAID for a considerable amount of time (and may not work for you, depending
on how often you want to resync).

With just 4 disks, you can have both a permanent 2-way RAID 1 (3 members, one
only connected for resync) and one copy always offsite. Normally, you keep
both offsite disks offsite, and bring them in alternately to resync.
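
The swap itself then boils down to something like this (device names are
examples only - check /proc/mdstat before typing anything):

    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # detach the member going offsite
    # physically swap in the disk coming back from offsite, then:
    mdadm /dev/md0 --add /dev/sdc1                        # resync onto the returning disk
    cat /proc/mdstat                                      # watch the rebuild progress

The --add only rebuilds the newly added member; the remaining member(s) keep
serving the array in the meantime.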

  [...]
  But, note that even though you don't technically have to stop/unmount 
  the raid while doing the sync, realistically it doesn't perform well 
  enough to do backups at the same time. I use a cron job to start the 
  sync very early in the morning so it will complete before backups would 
  start.

How do you schedule the sync? (Or are you just talking about hot-adding the
disk via cron?)

 All my sata drives are external internals. That is, they are connected to
 PCI sata controller but since there are no bays to install them in the
 computer chasis, I just run the cables outside through a PCI slot bar.
 Still have to figure out the a long-term housing solution. At least they
 are easy to access.

I don't think eSATA has any real disadvantages over SATA performance wise.
Sure, you have external cabling and one or more separate power supplies as
additional points of failure. But if you have that anyway, you might as well
use standard cables that somewhat facilitate handling. Or buy a computer
chassis that will accommodate your drives (and use eSATA for the offsite
drive(s)).

 So I would be ok doing something like this:
 Stop BPC process
 Unmount raid array (md0 made up of sda1 and sdb1)
 Use mdadm to remove sdb1 from the array

Assuming you want to remount your file system and restart BackupPC, you can do
so at this point (or later). As Les said, your performance may vary :).

 Take off the sdb drive, attach offsite one in its place

Assuming your kernel/SATA-driver/SATA-chipset can handle hotswapping ...
otherwise you'd need to reboot here.

 Use mdadm to add sdb1 to md0 and reconstruct
 
 Maybe cycle through whether I remove sda or sdb so all drives get used
 about the same amount over time.

I'm sure that's a point where we'll all disagree with each other :-).

Personally, I wouldn't use a common set of disks for normal backup operation
and offsite backups. BackupPC puts considerable wear on its pool disks. At
some point in time, you'll either have failing disks or proactively want to
replace disks before they start failing. Are you sure you want to think about
failing pool disks and failing offsite backup disks at the same time (i.e.
correlated)? I assume, failing pool disks are one of the things you want to
protect against with offsite backups. So why use backup media that are likely
to begin failing just when you'll need them?

 My main concerns were: can I remount and use md0 while it is rebuilding and
 that there is no danger of the array rebuilding to the state of the newly
 attached drive (I'm very paranoid).

I can understand that. I used RAID 1 in one of my computers (root FS, system,
data) for a time simply for the purpose of gaining experience with RAID 1. I
didn't notice much (except for the noise of the additional disk) until one
disk had some sort of problem. I don't remember the details, but I recall that
I had expected the computer to boot unattendedly (well, the 'reboot' was
manual ... or was it actually a crash that triggered the problem?), which it
didn't. I think it brought up the *wrong* (i.e. faulty) disk of the mirror and
failed on an fsck. Physically removing the faulty disk 

Re: [BackupPC-users] RAID and offsite

2011-04-28 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-04-27 20:36:13 -0500 [Re: [BackupPC-users] RAID and 
offsite]:
 On 4/27/11 7:10 PM, Chris Parsons wrote:
  On 28/04/2011 6:52 AM, Les Mikesell wrote:
  I've forgotten the original context, but if it is setting up a new
  system you don't have much to lose in the initial sync - and by the time
  you do, you should have another copy already stored offsite.
 
  In this case, why involve the complexities of RAID at all. Just use
  individual disks, each with their own pool and rotate them. If a disk
  fails, you only lose that pool. It avoids all the complexities of raid
  - and the danger of raid corruption. I don't see any point in involving
  raid until you need to span pools over more than one disk.
 
 Most backups do a double duty. One use is for complete system/disaster
 recovery, one is for for when you realize the next day that you deleted
 something you need.   Backuppc is particularly good for the latter, more
 frequent occurrence, but if you've just swapped an old disk back you won't
 have access to the copy you are most likely to need.  You'll also be
 copying more than necessary with older reference copies, but that is less
 likely to be a real problem.

Aside from this very important point, some minor reasons spring to mind ...

1.) n-way RAID 1 gives you a theoretical increase of *read* throughput by a
factor of n. Aside from that, it can save you some head seeks, further
speeding up operation. As it happens, BackupPC prefers reading from the
pool over writing to it, when it has a choice (actually, it prefers
decompression over compression, but the result is the same).
So, at least in theory, RAID 1 can speed up your backups.

2.) Although we usually forget it, RAID is about *uninterrupted* operation.
If your disk dies, your server doesn't go down. With RAID, you *might*
be able to go and buy a replacement disk, plug it into the computer,
and resync the data without any of your users ever noticing anything
(except for the slowdown due to the resync, but they'll probably just
complain, "my internet is broken" ;-).

3.) If a disk fails, and you only lose that pool, that pool may contain
backups you vitally need. Though the other pools probably contain backups
close by, that may not be good enough. While you can't avoid losing
young backups (from after the last resync), you *can* avoid losing older
backups.

But, if these points are not important in your case, using independent pools
may very well be an option. As a variation, you could even have offsite
BackupPC servers doing backups alternately (server 1 on day 1, server 2 on
day 2, server 3 on day 3, server 1 on day 4, etc.) if your backup clients
(or network bandwidth) can't take the impact of more than one backup per
day. That way, you would have all of the backup history online (though
spread over several servers), and a disastrous event at any one site
would leave you the remaining pools.
Bandwidth permitting, of course.

Regards,
Holger



Re: [BackupPC-users] Can't Fork Crash on Nexenta (Solaris)

2011-04-28 Thread Holger Parplies
Hi,

Stephen Gelman wrote on 2011-04-20 22:57:38 -0500 [[BackupPC-users] Can't 
Fork Crash on Nexenta (Solaris)]:
 On Nexenta (which is essentially an OpenSolaris derivative), I seem to have
 issues where BackupPC crashes every once and a while.  When it crashes, the
 log says:
 
 Can't fork at /usr/share/backuppc/lib/BackupPC/Lib.pm line 1340.
 
 Any ideas how to prevent this?
 
 Stephen Gelman
 Systems Administrator

errm, have fewer processes running on your machine? What errno is that? Line
1340 contains a Perl 'return' statement, so that's strange (since you didn't
mention it, you must be using BackupPC 3.2.0beta0, because that's the version
I happened to check). Which log file? How come BackupPC writes something to
the log if it crashes? BackupPC doesn't even *try* to fork via Lib.pm (only
BackupPC_{dump,restore,archive} appear to use that), and failures to fork in
the daemon are certainly not fatal (except for daemonizing on startup).
Very strange.

Oh, and we know what Nexenta is. That's the part you wouldn't have needed to
explain.

Regards,
Holger



Re: [BackupPC-users] incremental smbclient directory empty

2011-04-26 Thread Holger Parplies
Hi,

jguillem wrote on 2011-04-26 05:54:56 -0700 [[BackupPC-users]  incremental 
smbclient directory empty]:
 What commands you use to perform the incremental backup?

standard rsync over ssh. Why?

 To full backup I try it and works:
 
 smbclient //host/share  -U user -E -d 1 -c tarmode\ inc -Tc file.tar *
 
 and try it to incremental but dont work
 
 smbclient //host/share  -U user -E -d 1 -c tarmode\ inc -TcN file.tar *

Ah. Look in your configuration or log file. Read the documentation. It would
appear your incremental command is missing a timestamp file. And I doubt you
really want 'tarmode inc', though for testing it probably doesn't make much of
a difference. In reality, you're escaping the '*', right?
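
For the archives, the shape that works is to give -TcN a reference timestamp
file *before* the tar file, roughly like this (paths and the date are
placeholders):

    touch -d '2011-04-25 00:00' /tmp/lastbackup.stamp
    smbclient //host/share -U user -E -d 1 -c 'tarmode full' \
        -TcN /tmp/lastbackup.stamp file.tar '*'

Only files newer than the stamp file end up in file.tar. BackupPC normally
supplies such a file itself, via the $timeStampFile substitution in
$Conf{SmbClientIncrCmd}.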

Doubt that helps.

Regards,
Holger

 |This was sent by jordifrag...@gmail.com via Backup Central.

Why am I not surprised?



Re: [BackupPC-users] Renaming files causes retransfer?

2011-04-20 Thread Holger Parplies
Hi,

martin f krafft wrote on 2011-04-17 16:43:07 +0200 [Re: [BackupPC-users] 
Renaming files causes retransfer?]:
 also sprach John Rouillard rouilj-backu...@renesys.com [2011.04.17.1625 
 +0200]:
   In terms of backuppc, this means that the files will have to be
   transferred again, completely, right?
  
  Correct.
 
 Actually, I just did a test, using iptables to count bytes between
 the two hosts, and then renamed a 33M file. backuppc, using rsync,
 only transferred 370k. Hence I think that it actually does *not*
 transfer the whole file.

it always feels strange to contradict reality, but, in theory, there is no way
to get around transferring the file.

For the rsync algorithm to work, you need a local reference copy of the
file you want to transfer. While you and I know that there *is* a local copy,
BackupPC would need to know (a) that there is and (b) where to find it. The
only available information at the point in time where this decision needs to
be made is the (new) file name. For this, there is no candidate in the
reference backup (or any other backup, for that matter). So the file needs to
be transferred in full.

We'd all like to be able to choose an existing *pool file* as reference - this
would save us transfers of *any* file already existing in the pool (e.g. from
other hosts). Unfortunately, this is technically not possible without a
specialized BackupPC client.

 (btw, I also think that what I wrote in
 http://comments.gmane.org/gmane.comp.sysutils.backup.backuppc.general/24352
 is wrong, but I shall follow up on this when I have verified my
 findings).

Is that a backuppc-users thread I somehow missed? I see where your question
is going now, so I'll go into a bit more detail (not sure if any of this was
already mentioned in that thread).

1.) BackupPC uses already existing transfer methods for the sake of not
needing to install anything non-mainstream on the clients. In your case,
that is probably ssh + rsync.
Consequently, BackupPC is limited to what the rsync protocol will
allow, which does *not* include "hey, send me the 1st and 8th 128kB
chunk of the file before I'll tell you the checksum I have on my side".
Such a request just doesn't make any sense for standalone rsync. We need
to select a candidate before we can start transferring blocks that don't
match (and skip blocks that do). It's really quite obvious, if you think
about it, and it only gets more complicated (but doesn't change) if you go
into the details of which rsync end plays which role in the file delta
exchange.

The same is basically true for tar and smb, respectively. The remote end
decides what data to transfer (which is whole file or nothing), and you
can take it or ignore it, but you can't prevent it from being transferred.

2.) BackupPC reads the first 1MB into memory. It needs to do so to determine
the pool file name. That should not be a problem memory-wise.

3.) BackupPC cannot, obviously, read any arbitrary size file into memory. It
also wants to avoid unnecessary (possibly extremely large) writes to the
pool FS. So it does this:
- Determine pool file candidates (possibly several, in case of pool
  collisions).
- Read pool file candidates in parallel with the network transfer.
- As soon as something doesn't match, discard the respective candidate.
- If that was the last available candidate, copy everything so far (which
  *did* match) from that candidate to a new file.
  We need to get this content from somewhere, and the network stream is,
  obviously, not seekable, so we can't re-get it from there (but then, we
  don't need to and wouldn't want to, because, hopefully, our local disk
  is faster ;-).
- If the whole candidate file matched our complete network stream, we
  have a pool match and only need to link to that.

4.) There *was* an attempt to write a specialized BackupPC client (BackupPCd)
quite a while back. I believe this was given up for lack of human
resources. I always found this matter rather interesting, but I've never
gotten around to even taking a look at the code, let alone do anything
with it.

I hope that clears things up a bit.

Regards,
Holger



Re: [BackupPC-users] Downloading copy of backup from zip file over 4GB

2011-04-20 Thread Holger Parplies
Hi,

Lee A. Connell wrote on 2011-04-20 16:58:15 -0400 [[BackupPC-users] Downloading 
copy of backup from zip file over 4GB]:
 When I try to download tar or zip backup file over 4GB the download
 fails.  What can be causing this?

the same as in the thread a few days ago: quite a lot of things. Where is your
browser trying to store it? Note that I'm not asking where *you* are trying to
store it - I assume you have enough space there, and I assume you've verified
that storing files of that size works. I don't know of any limitations on
BackupPC's side (if that's what you're asking). It's almost certainly a
browser or file system issue. I'd guess tmp space, wherever your browser
puts that.

Regards,
Holger



Re: [BackupPC-users] How to force full backups on weekends?

2011-04-14 Thread Holger Parplies
Hi,

Jake Wilson wrote on 2011-04-14 10:04:42 -0600 [Re: [BackupPC-users] How to 
force full backups on weekends?]:
 I did search Google and researched the topic in the wiki and documentation
 and the only thing I could find remotely related was the cron scheduling
 page in the wiki.

in my opinion, that is exactly the right place, so if you found that, your
search was good - probably even very good, because the name of the page is not
good ;-).

The problem is, if you *know* the answer, you'll find it on that page. It's
just not obvious (and it should be, because it's intended for people who
*don't* know the answer yet).

 If a question is asked (and answered) a lot I guess I
 would expect to see the best solutions at least listed on the wiki.

I agree with you there. As for me, I have, in the past, spent my energy on
answering questions here rather than in the wiki. I would have loved to have a
*good* wiki page to point to instead, but I never got around to putting any
time into it. This time, I edited the wiki page instead. I would be greatly
interested if my changes clear anything up or confuse matters further (and
whether they answer your question). If they don't, keep asking. That's the
only way we'll ever get answers into the wiki (that people other than
ourselves understand).

And Jeffrey, if you could give me a pointer to the previous thread, I'll add
anything from there, or you could, of course, also do that yourself ;-).

Regards,
Holger



Re: [BackupPC-users] Re-read hostname config after DumpPreUserCmd

2011-04-14 Thread Holger Parplies
Hi,

Marcos Lorenzo de Santiago wrote on 2011-04-14 12:31:08 +0200 [[BackupPC-users] 
Re-read hostname config after DumpPreUserCmd]:
 [...]
 I have a bunch of virtual servers in our platform that I wish to backup. 
 The issue here is that sometimes they're turned on and sometimes they're 
 off. I came out with a solution: modifying the ssh command at login time I 
 can pass another argument with the virtual machine name so I can mount it, 
 chroot to it's disk and make a backup if the machine is turned off.

while I like the idea of being flexible about how to obtain the backup, I
don't see that you would need to modify the configuration at runtime for
that to work. Maybe I'm missing something obvious, but I would imagine either

1.) RsyncClientCmd could do all the work - simply point that to a script
that sets things up and execs the appropriate 'ssh $host rsync ...'
command.

2.) DumpPreUserCmd could prepare things (as you are doing now) and write into
a state file whether a native or chroot backup should be done.
RsyncClientCmd could be a script that reads that state file and then execs
the appropriate command like above (see the sketch below).

3.) DumpPreUserCmd *can* be Perl code, so you could probably even use it
to do exactly what you are proposing *without* modifying the BackupPC
code (meaning if the change is not of general interest, you don't need to
patch the source to get its benefits; you can put it in the
configuration) - if you really need it.

You'll have to be smart about PingCmd in any case (probably just ping the
server holding the virtual disk contents?).
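
A minimal sketch of variant 2 - every path, the state file and the host name
'vmhost' are made up for illustration:

    #!/bin/sh
    # RsyncClientCmd wrapper (sketch): back up the VM directly when it is
    # running, otherwise ssh to the VM host and run rsync inside a chroot
    # of the VM's disk, as prepared by DumpPreUserCmd.
    vm="$1"; shift
    statefile="/var/lib/backuppc/state/$vm.offline"   # written by DumpPreUserCmd
    if [ -e "$statefile" ]; then
        exec ssh -q -x -l root vmhost chroot "/mnt/vm-disks/$vm" rsync "$@"
    else
        exec ssh -q -x -l root "$vm" rsync "$@"
    fi

with something like $Conf{RsyncClientCmd} = '/usr/local/bin/bpc-rsync-wrapper
$host $argList+'; pointing at it (the wrapper hardcodes the rsync paths, so
$rsyncPath is not passed through).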

Aside from all that, can't you just run the backup in the chroot environment
regardless of whether the virtual machine is running? What kind of
virtualization are we talking about?


I'm not saying your approach doesn't work. It's just that, personally, I'd
find a solution which does not modify the configuration at runtime more
readable (maintainable) and more resilient against errors.


 I wanted also to include this modification under 
 BackupPC_dump as it increases BackupPC's functionality.

I disagree. As I said, I don't see that it makes anything possible that isn't
already. It *does* add a failure case (if a narrow one) and unnecessary work
(if not much) for the vast majority of BackupPC installations. And it could
conceivably break installations if anyone should currently use DumpPreUserCmd
to modify his configuration and rely on it *not* taking immediate effect (not
that I'd expect that, but who knows?).

Regards,
Holger



Re: [BackupPC-users] How to restore an 8GB archive file?

2011-04-14 Thread Holger Parplies
Hi,

Sorin Srbu wrote on 2011-04-14 08:33:10 +0200 [Re: [BackupPC-users] How to 
restore  an 8GB archive file?]:
 -Original Message-
 From: Jeffrey J. Kosowsky [mailto:@.org]

please don't do that. At least now I know why I'm getting spam to my
backuppc-list-only email address.

 To: General list for user discussion, questions and support

Much better, though this address is probably less sensitive ...

 Cc: sorin.s...@orgfarm.uu.se

Your problem :-).

 [...]
 Moving the 8GB archive to a machine with ext4, solved the problem. 

I agree with the other opinions. Amongst other things, you changed the file
system. I doubt this was the relevant change.

 OTOH, ext3 is said to have a max file size limit from about 16GB up to
 some 2TB, depending on block size.

Several years ago, I worried about file sizes, too. It turned out to just
work even back then. I haven't encountered such limits in years. Then again, 
on relevant file systems I don't tend to use ext3, because it *still* seems to
have occasional problems with online resizing (admittedly on a Debian etch
installation; might have gone away since). Huge files seem to go hand in hand
with online resizing requirements.

Sorin Srbu wrote on 2011-04-14 08:37:54 +0200 [Re: [BackupPC-users] How to 
restore  an 8GB archive file?]:
 [...]
 From: Les Mikesell
 Sent: Wednesday, April 13, 2011 5:10 PM
  Why don't you just restore it back to his machine, using the typical
  option 1? If BackupPC archived it in the first place, it can restore it
  the same way.
 
  I've never had that option to work. This time I got a weird unable to
 read 4 bytes-error when trying a direct restore.
 
 Usually that means the restore is configured to use ssh in some way, and
 the ssh keys aren't set up correctly.  Is there something different
 about the way your restore command works?
 
 I do use passwordless login for the backups to work. The backup works fine
 using ssh this way; I don't get prompted for a password.
 
 Not sure though, how you mean different for restoring. Could you elaborate a
 bit?

You've got it the wrong way around. *You* need to elaborate. What are your
RsyncClientCmd and RsyncClientRestoreCmd (it was rsync, wasn't it?)? If we
knew those, we could see what might be misconfigured or causing problems (or
what is even *involved* in backing up/restoring in your setup).
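
For reference, the stock definitions look roughly like this (quoted from
memory of the 3.x config.pl defaults - your own values are what actually
matters here):

    $Conf{RsyncClientCmd}        = '$sshPath -q -x -l root $host $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';

If the backup command was changed to log in as some unprivileged user, the
restore command usually needs the matching change too, or it will fail with
exactly the kind of cryptic error quoted above.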

 I haven't really looked into the first restore option, ie tweaked in any
 way, as #2 and #3 have worked fine so far, until now.

Well, then it may be set incorrectly. Or not. Depending on what you did to the
backup command.

Sorin Srbu wrote on 2011-04-14 08:47:12 +0200 [Re: [BackupPC-users] How to 
restore  an 8GB archive file?]:
 From: Holger Parplies
 Sent: Thursday, April 14, 2011 12:38 AM
 
 - Which user on the target host do you need to connect as? Perhaps root?
 
 When the backuppc user connects to a host to do a backup, it uses a
 passwordless login with ssh keys. The password entered the very first time I
 transferred the key, was root's. So does this mean it's user backuppc that
 does the actual restore or user root?

Well, you took away the context, so it's not obvious you misunderstood the
question (which wasn't one, actually).

If you use computers to do things, you need to think. There is no way around
that. Even a nice shiny GUI does not have a "do the right thing, now" button.
Downloading a tar file over the GUI requires you to think about where to do
that and how to get the tar file to the destination computer, as the right
user, and where to put it. There might be a simple solution (go to the
destination computer and download the tar file from a browser belonging to
the user, and he'll tell you where to put it), but there might as well be
many obstacles (not enough tmp space, broken browser version, no network
access to the BackupPC server, slow network link, transparent proxy, user
out for lunch, user needs to leave before the download is complete ...).
Some of these might even impose *arbitrary* file size limits when downloading
(browsers seem to have *strange* solutions for starting downloads before they
know where to put the file).
You might automatically select the right option, or you might not think
about it at all and just get away with it. Or hit something that looks like a
file system problem, but can't really be explained.

Concerning the selection of an ssh target user, if you want a generic answer,
use root, that will always work (but has the potential to do more harm if you
get something wrong). For your case, if you *can* log in as the file owner
(all files in the restore belong to him, right?), then do that. Maybe I should
have written "select the target user that makes the most sense in each
respective case".

All of this has *nothing* to do with BackupPC doing backups. It's only about
*you* getting the user's files back on his computer. And it's coincidentally
similar to how automatic restores would work, except that they need a generic
(and non

Re: [BackupPC-users] Block-level rsync-like hashing dd?

2011-04-13 Thread Holger Parplies
Hi,

Saturn2888 wrote on 2011-04-12 19:40:50 -0700 [[BackupPC-users]  Block-level 
rsync-like hashing dd?]:
 My goodness that's a lot of replies; although, almost all of them are from
 a post I made a while back which I've elaborated on at least 3 times since.

that is probably due to the fact that you are using inferior forum software to
post to this mailing list, which ends up creating a new unrelated thread each
time *you* get back into the discussion. I've considered doing the equivalent
for you for demonstration purposes - removing reference headers and perhaps
changing the subject for good measure. You would have found this message in
a different topic on your board - or, more probably, you wouldn't have found
it at all. I still like the idea, though. Please just *try* to imagine the
concept of following a discussion intermixed throughout three or four
topics - even if they happen to all have the same title. That is, basically,
what you are asking us to do.

We have it on good authority from the board maintainer that he can't fix this
issue, because he is doing everything he does for altruistic reasons (I
believe that was his net reasoning).

You, however, *could* fix the problem, as has been pointed out more than once,
as I believe. If you refuse to do so, you are implicitly asking for exactly
what you get: replies to posts you consider outdated. So don't complain. It
was *your* choice.

And I need to quote Timothy, because I enjoyed his usage of the word
God-forsaken so very much :-). Lesson learnt: it pays to read threads
you considered simply skipping, provided the right people are contributing.

 The main point here that I'm trying to get across, at least for my own
 setup, is that I cannot handle dd'ing 1TB/day.

Well, that's simple: then don't. Problem solved.

The main point you *should* probably be trying to get across (and you might
have - I have obviously missed at least one segment of the hyper-thread) is,
what *exactly* you are trying to *achieve*, *under which constraints*. It's
obviously some variation of the old story of pool replication. I'm sure you've
read all the hundreds of previous threads discussing this matter, and your
case is surely sufficiently different to warrant reiterating the
matter yet again.

If it turns out that what you want *can't be done*, then asking if we can
make an exception for you won't help.

 [...] The reason I am advocating DRBD [...]

It's really your decision. You don't have to persuade us. You *won't* persuade
us to agree with you if you are wrong (I don't know if you are).

If your question is actually "can DRBD do for me what I want it to?", then
*ask* *that* *question* (and that really belongs in capitals!), including what
you want it to do; don't confuse us with a general discussion that most of us
tend to be fed up with, because we've had it literally hundreds of times.

 is because I have it going to an iSCSI target on a machine with ZFS which
 snapshots the pool. The key there is I'm taking snapshots of it so even if
 it corrupts, I'm fine. All I'd have to do is mount the .img as EXT4 in an
 iSCSI target and point a client to that target to retrieve the files.

If you want an honest opinion, that sounds like a few random sentences to me,
fabricated to include certain keywords. To me, it doesn't make any sense
whatsoever. Maybe it's just my lack of understanding, but I don't see how you
can mount the .img as EXT4 in an iSCSI target. An iSCSI target is a block
device, an opaque array of bytes, exported to a client to use however he
wants. Your iSCSI device might allow you to export image files as iSCSI
devices, but then you wouldn't mount anything server-side (much less as
EXT4). Are you sure you *understand* the concepts you are talking about?

 [...] I don't see pool corruption a big deal because I've never experienced
 anything like that.

What exactly are you protecting against if not pool corruption? Theft?
It's perfectly fine to protect (or not protect) against any particular
threat. It's just unclear whether you are achieving what you think you
are, so it might help to explain what you are trying to achieve, presuming
you want any advice.

 [...] just need a method of getting only the changes over there without
 abusing the available bandwidth.

The only thing I can think of to add to what has already been said is that you
have bandwidth at several points here:

- bandwidth from source disk
- bandwidth to destination disk
- bandwidth of data your CPU can handle (i.e. checksum, md4/md5, compression,
  or whatever)
- bandwidth of source NIC
- bandwidth of destination NIC
- bandwidth of network link.

You sound like you are worried about the network link and nothing else. Is
that correct? Is that sensible? All of those resources are limited and may
or may not be precious to you. Depending on which are, the comments differ
greatly (up to the point of being contradictory). It is probably frustrating
for you to read answers worried 

Re: [BackupPC-users] How to restore an 8GB archive file?

2011-04-13 Thread Holger Parplies
Hi,

just to add an option for the archives ...

Sorin Srbu wrote on 2011-04-13 14:38:29 +0200 [[BackupPC-users] How to restore  
an 8GB archive file?]:
 [...]
 Not initially knowing how big it actually was, using BPC I downloaded it in a
 zipped and uncompressed format to my Windows machine, then transferred the zip
 to her linux machine running 32b CentOS v5.6. [...]
 
 1. How would I best deal with really big archives when restoring from BPC and
 32b linux is involved?

with really big archives you want to avoid unnecessary network transfers and
intermediate storage of the files. Try something along the lines of ...

backuppc-server$ sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h host -n dumpNum -s shareName /path/to/data/relative/to/share \
    | ssh -l user target-host tar xvpf - -C /share/path/on/target

(of course, great minds prefer netcat ;-). You'll have to play around a bit
with that (practise with small amounts of data and piping BackupPC_tarCreate
into 'tar tvf -' instead of ssh to get a feeling for what files you are
selecting and what the paths in the tar stream look like).
Some things to consider:
- Which user on the target host do you need to connect as? Perhaps root?
- Are you restoring in-place or do you need to change paths? Consider using
  the '-r' and '-p' options to BackupPC_tarCreate or restore to a temporary
  location - preferably on the correct partition - and move the target
  directory into the correct place manually. Check permissions before moving
  so 'mv' does not, in fact, start copying things.
- Does sudo -u backuppc work for you or do you need to become the backuppc
  user in a different way?
- Where is your BackupPC_tarCreate? I've used the Debian package path, but
  that's not the standard ...
- I just added a tar 'v' option, because you should probably see what you are
  doing until it has become routine, and perhaps even then ...

 2. Wouldn't a zip-split function be a nice thing to have in BPC when restoring
 data? This is a hint to the BPC-author. 8-)

Personally, I wouldn't use the web interface for downloading large amounts of
data anyway. On the command line, your imagination is the limit to what you
can do. If it's not available as a filter yet, the BPC-author would likely
need to implement the functionality. A generic tar2zipsplit filter would be
more useful to the world than a specific implementation inside BackupPC, don't
you think?

Regards,
Holger



Re: [BackupPC-users] Making errors in log stand out

2011-04-13 Thread Holger Parplies
Hi,

Sorin Srbu wrote on 2011-04-06 16:11:55 +0200 [Re: [BackupPC-users] Making 
errors in log stand out]:
 [...]
 Thanks Bowie. Seemed to have done the trick, but I don't see anything red
 in the logs. 8-/ Checked all logs, the machine specific as well as the
 general summary log.

the log files are conceptually ASCII, not HTML. You can't really get colour in
there. You *could* make the code that displays the log file contents on the
web page parse the log file and highlight anything you want (similar to the
ability to extract only errors). That's definitely more complicated than
adding an HTML tag somewhere, though.
You could probably put HTML tags in the text log files, but I'd expect the '<'
and '>' characters to be escaped by the displaying code, so aside from looking
ugly in the log files when viewed as text, it would probably look just as ugly
on the web page rather than work ;-).
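
If the goal is just to *find* the errors rather than colour them, the same
thing can be done by hand on the server (file name and pattern are examples):

    /usr/share/backuppc/bin/BackupPC_zcat XferLOG.0.z \
        | grep --color=auto -iE 'error|fail|denied'

which is essentially the "only errors" view mentioned above, minus the web
interface.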

Regards,
Holger



[BackupPC-users] [OT] FAQs (was: Re: How to force full backups on weekends?)

2011-04-13 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2011-04-13 17:33:05 -0400 [Re: [BackupPC-users] 
How to force full backups on weekends?]:
 [...]
 Does anybody bother to do a Google search or read the archives before
 WASTING our time asking the EXACT same question that was asked just a
 couple of weeks ago?

yes. And they continue to get more and more hits of less and less quality due
to redundant threads started by people who did not.

But I believe the OP did actually search, since he links to a relevant
BackupPC wiki page, and was asking for further clarification. I'll try to
update the wiki page to actually answer the question he had (or at least
part of it).

I must admit, though, that I did not read the thread you are referring to, so
you might be right that the necessary clarification was given there.

Regards,
Holger



Re: [BackupPC-users] BackupPC_dump hangs with: .: size doesn't match (12288 vs 17592185913344)

2011-04-04 Thread Holger Parplies
Hi,

John Rouillard wrote on 2011-03-31 15:20:23 + [[BackupPC-users] 
BackupPC_dump hangs with: .: size doesn't match (12288 vs 17592185913344)]:
 [...]
 I get a bunch of output (the share being backed up /etc on a centos
 5.5. box) which ends with:
 
   attribSet: dir=f%2fetc exists
   attribSet(dir=f%2fetc, file=zshrc, size=640, placeholder=1)
   Starting file 0 (.), blkCnt=134217728, blkSize=131072, remainder=0
   .: size doesn't match (12288 vs 17592185913344)

at first glance, this would appear to be an indication of something I have
been suspecting for a long time: corruption - caused by whatever - in an
attrib file leading to the SIGALRM abort. If I remember correctly, someone
(presumably File::RsyncP) would ordinarily try to allocate space for the file
(though that doesn't seem to make sense, so I probably remember incorrectly)
and either gives up when that fails or refrains from trying in the first
place, because the amount is obviously insane.

The weird thing in this case is that we're seeing a directory. There is
absolutely no reason (unless I am missing something) to worry about the
*size* of a directory. The value is absolutely file system dependent and
not even necessarily an indication of the *current* number of entries in
the directory. In any case, you restore the contents of a directory by
restoring the files in it, and you (incrementally) backup a directory by
determining if any files have changed or been added. The *size* of a
directory will not help with that decision.

Then again, the problematic file (or attrib file entry) may or may not be the
last one reported (maybe it's the first one not reported?).

 [...] I have had similar hanging issues before
 but usully scheduling a full backup or removing a prior backup or two
 in the chain will let things work again. However I would like to
 actually get this fixed this time around as it seems to be occurring
 more often recently (on different backuppc servers and against
 different hosts).

I agree with you there. This is probably one of the most frustrating problems
to be encountered with BackupPC, because there is no obvious cause and nothing
obvious to correct (throwing away part of your backup history for no better
reason than "after that it works again" is somewhat unsatisfactory).

The reason not to investigate this matter any further so far seems to have
been that it is usually solved by removing the reference backup (I believe
simply running a full backup will encounter the same problem again), because
people tend to want to get their backups back up and running. There are two
things to think about here:

1.) Why does attrib file corruption cause the backup to hang? Is there no
sane(r) way to deal with the situation?
2.) How does the attrib file get corrupted in the first place?

Presuming it *is* attrib file corruption. Could you please send me a copy of
the attrib file off-list?

 If I dump the root attrib file (where /etc starts) for either last
 successful or the current (partial) failing backup I see:
 
   '/etc' => {
     'uid'        => 0,
     'mtime'      => 1300766985,
     'mode'       => 16877,
     'size'       => 12288,
     'sizeDiv4GB' => 0,
     'type'       => 5,
     'gid'        => 0,
     'sizeMod4GB' => 12288
   },

I would expect the interesting part to be the '.' entry in the attrib file for
'/etc' (f%2fetc/attrib of the last successful backup, that is). And I would be
curious about how the attrib file was decoded, because I'd implement decoding
differently from how BackupPC does, though BackupPC's method does appear to be
well tested ;-).

 [...] the last few lines of strace show:
 
 [...]
   19368 15:00:38.199634 select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
 59.994597

I believe this is the result of File::RsyncP having given up on the transfer
because of either a failing malloc() or a suppressed malloc(). I'll have to
find some time to check in more detail. I vaguely remember it was a rather
complicated matter, and there was never really enough evidence to support that
corrupted attrib files were really the cause. But I sure would like to get to
the bottom of this :-).

Regards,
Holger



Re: [BackupPC-users] Viewing detail of a backup in progress?

2011-04-04 Thread Holger Parplies
Hi,

Tyler J. Wagner wrote on 2011-04-04 15:54:00 +0100 [Re: [BackupPC-users] 
Viewing detail of a backup in progress?]:
 On Mon, 2011-04-04 at 09:41 -0500, Carl Wilhelm Soderstrom wrote:
  /usr/share/backuppc/bin/BackupPC_zcat XferLOG.z |tail
 
 Note that the the log files, like XferLOG.z, are buffered. They may not
 show files currently copying, if the log write buffer hasn't filled.

in particular, they are compressed, so the end of the file is in my experience
usually a considerable amount behind the file currently copying. This is also
the reason you can't simply switch off buffering for the log files
(compression needs reasonably sized chunks to operate on for efficient
results). It might make sense to think about (optionally) writing log files
uncompressed and compressing them after the backup has finished. Wanting to
follow backup progress seems to be a frequent enough requirement. Putting the
log files on a disk separate from the pool FS should probably be encouraged in
this case ;-).

Regards,
Holger



Re: [BackupPC-users] excluding files

2011-04-04 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2011-04-04 11:36:18 -0400 [Re: [BackupPC-users] excluding 
files]:
 On 4/3/2011 5:59 PM, Scott wrote:
  [...]
  In the web interface I added an entry for:  *.MPG 
  [...]
 
 You probably just put the entry in the wrong place.
 [...]
 [...] followed by your *.MPG exclusion.

you *may* also need to think about your XferMethod, because excludes are,
in general, handled by the XferMethod and thus subject to its syntax
specifications. Your simple *.MPG should probably work for all methods -
presuming they see your file names in upper case - but it wouldn't hurt to
mention what XferMethod you are using if your problems persist after checking
your exclude definitions. You should then also attach a copy of your host
config file, preferably as attachment of the original file and not cutpaste
from the web interface ;-).

All of that said, I personally dislike exclusions like *.mpg, because there
is no way for the user to override them for single files that should be backed
up. Similarly, you only exclude things you expect, missing music.mph (a typo),
music.mpg.bak (a copy) or music.mp3 (a different common naming scheme - there
is no such thing as an "extension", except, maybe, under Windoze, and there
shouldn't be even there ;-).
If you can express your exclude as a directory ("My Music"?), you might need
to figure out space quoting issues, but you might be able to avoid the issues
above. Then again, all of that may not apply to your situation. Just something
to think about.
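
If it helps, a directory exclude in the host config might look something like
this (share name and paths are made up for illustration; with the rsync
XferMethods, excludes are interpreted relative to the share):

$Conf{BackupFilesExclude} = {
    '/home' => [
        '/scott/My Music',     # exclude a whole directory instead of *.MPG
        '/scott/Downloads',
    ],
};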

Regards,
Holger



Re: [BackupPC-users] Keeping 1 month of files and number of full backups

2011-04-04 Thread Holger Parplies
Hi,

Matthias Meyer wrote on 2011-04-03 20:06:31 +0200 [Re: [BackupPC-users] Keeping 
1 month of files and number of full backups]:
 Scott wrote:
 
  I want to be able to restore a file for users up to one month in the past.
  
  What is the difference / what is the best -
  
  To do a full backup every 2 weeks, keeping 2 full backups,
  and incrementals every day , or
  
  Do a full backup every 1 month and incrementals every day?
 
 Within the Web GUI of BackupPC are no differences between incremental or 
 full backups.

correct. Incremental backups appear filled. You see every file with the
content it had at the time of the backup, regardless of where this content is
stored within BackupPC.

Concerning space requirements, the differences between full and incremental
backups are negligible.

 [...]
 If you are using rsync than the only difference between full and incremental 
 are:
 - incremental only scan files which are created after the last backup.

Not true. *rsync* full *and* incremental backups will both transfer any files
they determine to have changed or been added, using a full file list for
comparison of the states on client and server. They also both remove deleted
files from the backup. The difference is that incremental backups only check
file attributes (particularly the timestamps) for determining changes (which
is sufficient except for extremely rare cases), whereas full backups check
file contents, i.e. they really read every single file on the client (and the
server, usually). This takes time and probably puts wear on the disks.
So, true:

   Therefore it is a lot faster than a full backup.

 - full backup scan all files, also files which would be extracted from an
   archive after the last backup but have an extracted timestamp older than
   the last backup.
   Therefore a full is much slower but get really all new files.

See above. The difference described is true for *non-rsync* backups. tar or
smb backups only have one timestamp as reference - that of the previous
backup, not the timestamp of every individual file - so incrementals can only
catch modifications (or creations) with timestamps later than the previous
backup. File deletions are not detectable by non-rsync incrementals, meaning
deleted files will continue to show up in your backups in the state they were
last in until you run a full backup.

Also note that full backups will, due to the full file comparison, correct
the probably extremely rare event of pool file corruption by making a new copy
with the correct content (past backups will, of course, not be corrected).
Someone please feel free to add a note on the effect of checksum caching
(turned off by default) ;-).


There is no one general answer to your question, what is best. It depends on
your requirements. For probably all backup solutions, full backups are more
exact than incremental backups. Incrementals are a compromise for the sake of
practically being able to do backups at all (imagine the amount of tapes you
would require for full tape backups each day - and what small amount of that
data really changes).
BackupPC is designed for saving space used by redundant data, so you *could*
do daily full backups almost without penalty. Saving time is the motivation
here for copying the semantics of incremental backups. With *rsync*, the
difference in backup exactness is small enough to be a theoretical matter
only. For tar/smb, that is not true.
So it's really your decision. How much exactness do you need, how much can you
afford? For what you describe, you're unlikely to make a wrong decision.
Anything from daily full backups to one monthly full and daily incrementals is
likely to work for you. Just don't be surprised about the extra full backup
(any full backup an incremental backup depends on must be kept, so you will
almost always have 1 more full backup than you requested).
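
To make that concrete, a schedule for the scenario described might look roughly
like this (illustrative values only, not a recommendation):

$Conf{FullPeriod}  = 13.97;   # a full roughly every two weeks
$Conf{FullKeepCnt} = 2;       # expect to see 3 fulls at times, as noted above
$Conf{IncrPeriod}  = 0.97;    # daily incrementals
$Conf{IncrKeepCnt} = 28;      # enough dailies to reach back about a month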


One thing to keep in mind, though: if network links with low bandwidth are
involved, you will want to do rsync-type backups (rsync or rsyncd), and
frequent full backups will actually use *less* bandwidth than long FullPeriods
and frequent incremental backups.

Regards,
Holger



Re: [BackupPC-users] Auth failed on module cDrive

2011-04-04 Thread Holger Parplies
Hi,

Tom Brown wrote on 2011-03-30 16:13:44 -0400 [Re: [BackupPC-users] Auth failed 
on module cDrive]:
 It appears I've solved the problem. 

well, then this reply might be redundant or wrong.

 I am surprised it took this long for someone to respond to my inquiry

Sorry, I used to read all mail on this mailing list, but nowadays I'm glad to
find the time to occasionally read some of it ...

 [...]
 If I’ve missed something about handling global and per PC passwords in
 /conf/config.pl and /pc/pc_name/config.pl, please let me know.

Err, what version of BackupPC are you using? The normal locations of
configuration files are $ConfDir/config.pl and $ConfDir/hostname.pl.
Depending on what you mean by /pc/pc_name/config.pl, that directory might
actually be used for backwards compatibility, but wouldn't it still be
hostname.pl (sorry, can't check right now)? In any case, try using
/conf/pc_name.pl instead of /pc/pc_name/config.pl and see if that changes
anything. I'm guessing that it's not only the password in your pc config files
that is not being used (though it might be the only thing you set in there).

 [...]
 BackupPC reports “auth failed on module cDrive”. The rsyncd.log on the W7
 client reports “connect from x226.rbhs.lan; password mismatch”.
 [...]
 When I rsync files from the command line on the backuppc server using
 rsync -av backuppc@daved-hp::cDrive ., I get a series of permission denied
 errors.

That means, on the command line, authentication works for the same
username/password pair?

 receiving file list ...
 rsync: opendir ygdrive/c/Windows/system32/c:/Documents and Settings (in
 cDrive) failed: Permission denied (13)

This is an indication of incorrect path names in rsyncd.conf, which might
also be responsible for your password problems. I don't use Windoze and
I don't use rsyncd on Windoze, but google and the BackupPC wiki [1] tell me
that you want something like

secrets file = /etc/rsyncd.secrets

in rsyncd.conf, not something starting with c:/ in any case ...
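
For illustration, a minimal module definition on the Windows client could look
something like this (path and user name are guesses based on your mails, not a
tested configuration - the wiki page [1] has the details):

[cDrive]
    path = /cygdrive/c
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
    read only = true
    strict modes = false      # cygwin file permissions often trip the strict check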

Hope that helps.

Regards,
Holger

[1] 
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=RsyncdWindowsClients



Re: [BackupPC-users] moved server, now stuck in chkdsk

2011-03-28 Thread Holger Parplies
Hi,

Chris Baker wrote on 2011-03-28 13:44:21 -0500 [[BackupPC-users] moved server, 
now stuck in chkdsk]:
 I should have figured that I couldn't do this without some problem popping
 up.

well, yes, they tend to do so when it suits you least.

 I shut the server down properly. I moved it. Now when it boots up, it
 wants to do chkdsk.

I presume you mean fsck? To put it differently: please elaborate on your setup
(what version of BackupPC, what version of OS, distribution, FS, hardware,
the usual stuff). Or are we really talking about a Windoze machine? ;-)

 I can't seem to get out of it. ALL I DID was move the thing.

Are you sure everything terminated properly (LVM, RAID, disk buffers flushed)?
I presume you moved it in a way that would be guaranteed not to damage the
hard disk(s), motherboard, etc.? Note that there seem to be a lot of
out-of-spec heatsinks around that weigh more than they should. This is usually
not a problem, but moving a computer with such a heatsink might be a more
delicate matter ...

Another thing to think about: are you operating the disk(s) in the same
orientation as before?

Also, check the hard disk cables. Possibly simply removing and replugging them
(with unplugged computer power cord, please!) might change something (corroded
contacts). Or replacing them. How old is the computer? How long was it in
operation as BackupPC server before you moved it?

 The chkdsk on the logical volume does not complete, and it just
 reboots.

How does it not complete? Are there error messages? Does it quietly die? Did
it (try to) correct some errors, i.e. has it changed anything yet?
What do you mean by logical volume? Are we talking about the BackupPC pool
FS? What FS type are you using? How large is the FS? Is it separate from the
OS installation? Do you remember details such as number of used inodes (or
files in pool or anything that might help)? How long does fsck run before the
computer reboots? How much memory does the computer have?

Were there any messages from the BIOS, is SMART enabled for the drive(s)?

 Does anyone have any suggestion on how to get this thing back up and
 running? It was working just fine before I moved it, and that's all I did.

Are you absolutely sure about that? Did you perhaps boot a different kernel
than before, e.g. due to a previous upgrade without reboot? Different versions
of some utilities?

It really depends on the nature of the problem. I can understand that you're
frustrated and probably under a lot of stress right now, but the information
you provided really doesn't point in any particular direction. It could be an
unrecoverable hardware failure or a loose cable, maybe even just an incorrect
BIOS setting somewhere. Or anything in between.


Sometimes it also helps to take a step back. What are you trying to achieve?
1.) Get backups up and running as soon (and reliably) as possible?
2.) Get access to your existing backup history?
3.) Continue backups, keeping your existing backup history?

What resources do you have? Empty hard disks to copy to? Spare machines to try
to access the hard disk from? External cases? Lots of time and man power? :)


I hope there are some ideas in there that help, and I hope you regain access
to your backups quickly.

Regards,
Holger



Re: [BackupPC-users] error -4 help

2011-03-18 Thread Holger Parplies
Hi,

Scott wrote on 2011-03-18 13:38:27 -0400 [[BackupPC-users] error -4 help]:
 What do these errors mean?   I have  LOTS of them -

yes, we know. Basically, one for each file.

 [...]
 2011-03-18 11:04:21 BackupPC_link got error -4 when calling 
 MakeFileLink(/mnt/backuppc/pc/2510c 
 http://backuppc/backuppc/index.cgi?host=2510c/4/fC$/fDocuments and 
 Settings/fAdministrator/fApplication Data/fAdobe/attrib, 
 6fd819211482442b99e16844e676ec12, 1)

Same issue as your previous question. See the answers there.

Regards,
Holger



Re: [BackupPC-users] Still trying to understand reason for extremely slow backup speed...

2011-03-13 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2011-03-09 17:43:41 -0500 [Re: [BackupPC-users] 
Still trying to understand reason for extremely slow backup speed...]:
 [...] I actually used my program that I posted to copy
 it over and I checked that all the links are ok.

I believe you have changed more than four variables ;-). BackupPC, as we all
know, depends heavily on seek times. When you are measuring NFS speed, what
exactly *are* you measuring? Probably not what BackupPC is doing, so that may
or may not explain the difference. You said you changed from ext2 to lvm2 -
I suppose you are still using a file system? ;-) And almost definitely a
different one, else you would have used dd ...

- LVM may or may not make the seeks slower. I wouldn't expect it. I suspect
  many people use BackupPC on LVM - for the flexibility of resizing the pool
  partition if nothing else. Mileage on ARM NAS devices may vary.
- The FS may or may not behave differently.
- The inode layout after copying may or may not be less efficient. Even
  significantly so. I can't tell you what a generically good order to create
  the copied files, directories, pool entries for a BackupPC pool (tm) would
  be, else I'd re-implement your program ;-). My understanding is that there
  is no good layout for a BackupPC pool, but there are bound to be varying
  degrees of bad layouts.
- In theory, RAID1 might have eliminated many of the seeks (on reading, that
  is), if the usage pattern of the pool and the driver implementation happen
  to fit. Might be interesting to figure out how many mirrors and what hard-
  or software raid would be optimal for BackupPC ;-). But that's a 3.x topic.

Does the backup sound seek-limited, or is the NAS disk idle some of the
time? You didn't also change NFS mount options, did you ;-?

Is at least as much memory available (for disk cache, if that's not obvious)
on the NAS as before the kernel upgrade? Does the NAS swap? Is it swapping?
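
If you want numbers rather than guesses, something like this on the NAS
(assuming sysstat and procps are available there) will show whether the disk is
seek-bound or the box is swapping:

iostat -x 5     # high await/%util at low throughput usually means seek-bound
vmstat 5        # non-zero si/so columns mean it is actively swapping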

I can't think of any more questions to ask right now. Good luck :).

Regards,
Holger



Re: [BackupPC-users] General questions about the background queue and link commands.

2011-02-03 Thread Holger Parplies
Hi,

Robin Lee Powell wrote on 2011-02-02 08:12:47 -0800 [[BackupPC-users] General 
questions about the background queue and link commands.]:
 Let me ask some more general questions:  What does the background queue
 actually *mean*?

from the source (3.2.0beta0, but probably unchanged):

#- @BgQueue is a queue of automatically scheduled backup requests.

If my memory serves me correctly, BackupPC queues each host it knows about at
each wakeup (presumably unless it's already on a queue ... see %BgQueueOn,
%CmdQueueOn, %UserQueueOn). It then proceeds to process these backup requests
in the order they're in the queue. Most of the time, that will simply mean
deciding that it's not yet time for the next backup of this host. Now, if ...

 All but one host (!), 226 out of 227, is in the background queue,
 which seems rather excessive since about half my hosts have recent
 backups.

... I would guess that the first host is currently doing its backup, you've
got $Conf{MaxBackups} = 1, and most of the queue will simply disappear
once the running backup is finished, because BackupPC will decide for each
host in turn whether its backup is recent enough.

Of course, if your situation is that one backup is always running (i.e. your
BackupPC server is constantly backing up something), you'll see this situation
most of the time - all of the time if backups take more time than the interval
between wakeups. Well, you obviously won't really have that situation - the
example is just for illustration. But you'll see this (for a possibly short
period of time) whenever a backup takes longer than the wakeup interval, and
whenever the first host to be scheduled is actually backed up (for the
duration of that backup). That is not a problem. It is not an indication of
high load.

To sum it up, that a host is on @BgQueue does *not* necessarily mean that a
backup will be done for this host, just that a check will be done whether a
backup is needed. If so, the backup will also be done, else it's just the
check.

Note that this also means that backups may start at any time, not just at the
times listed in the WakeupSchedule (but you've probably already noticed that).

Regards,
Holger



Re: [BackupPC-users] backuppc ignores value of BackupFilesOnly; archives entire /

2011-01-18 Thread Holger Parplies
Hi,

itismike wrote on 2011-01-18 09:51:05 -0500 [[BackupPC-users]  backuppc ignores 
value of BackupFilesOnly; archives entire /]:
 [...]
 I don't intend to sound terse toward those reaching out to help me; only
 point out that any newcomer to this program is going to hit this same wall
 every time.

actually, that is known, because a lot of newcomers *have* hit this same wall.
If you had searched, you could have found tons of threads describing the same
mistake you have made. I am not sure whether any of those are recent, though,
because I haven't been able to follow the list closely for many months now.

As far as I remember, Craig is aware of the web interface being misleading in
this respect, and I'm sure it's somewhere on his todo-list, if not already
fixed. I'm sure any patch would be welcome anyway (though I'm not sure what
the plans are concerning new releases of BackupPC 3.* ...).

 On 1/18/2011 11:39 AM, itismike wrote:
  [...]
  $Conf{BackupFilesOnly} = {
      '/home/michael/' => [
          ''
      ]
  };
 
  Now if the above code can't be interpreted correctly by BackupPC, then
  we may have stumbled upon the root cause. But I don't think this is the
  problem or others would be finding the same issue.

It *is* interpreted *correctly*, just not how you (and others) meant it to
be ;-).

Bowie Bailey wrote on 2011-01-18 11:58:19 -0500 [Re: [BackupPC-users] backuppc 
ignores value of BackupFilesOnly; archives entire /]:
 Devs:  Would it make sense for the GUI to generate a warning message of
 some sort when the keyname does not match a share name or when there is
 a keyname with nothing specified for it?

First, I agree with Les, that the most important step would be making the
GUI more obvious on what information should be entered where.

Aside from that, it could make sense to warn about or disallow apparently
nonsensical settings in the GUI. People who actually *want* such settings for
whatever reason probably edit the config files instead of using the GUI
anyway. The question is probably, how much checking (and where) should the GUI
perform? Just prevent adding nonsensical settings or try to repair existing
ones (e.g. remove keys which don't match an existing ShareName; what happens
when you rename a share or temporarily remove it?)? That could easily become a
nuisance ...

itismike wrote on 2011-01-18 12:13:59 -0500 [[BackupPC-users]  backuppc ignores 
value of BackupFilesOnly; archives entire /]:
 [...]
 That being said, if entering my directory in the CGI leads to it containing
 an illegal value (sharename=/home/michael) wouldn't I expect it to throw
 an error rather than ignore the entry and backup everything?

The value is not illegal. It is simply not used. From the top of my head, I
can think of at least three cases where this could occur:

1. You have a share /home/michael, but you've temporarily removed it from
   your share list, because you currently don't want it to be backed up. Next
   week, you'll re-add it, and you want to keep the excludes.
   Or maybe you are planning on adding the share next week and already want
   to note the excludes in advance.
   Or maybe you want to make a note that if you should ever add the share,
   you will want to exclude something you probably won't think of when the
   time comes. Maybe this note is not to yourself but to other administrators
   of the BackupPC installation (might not apply in your case, but surely
   could in others).

2. You are renaming /home/michael to /homes/michael and want the backup
   to back up the new directory. What do you change first, the list of
   share names or the in-/excludes? What is supposed to happen in between?
   Ok, you might say, the GUI should automatically change the exclude and
   include keys, so you just need to change the share name once. In an ideal
   world, you're probably right :).

3. You have a global config (config.pl) which specifies site defaults.
   Every machine with a share /home/michael should exclude the
   subdirectories .Trash and .thumbnails. Machines without such a share
   should not throw errors but just silently ignore the exclude, because it
   doesn't apply.
   While this is unlikely to apply to /home/michael, think about shares
   like /var/spool/fax, /srv/tftp, /usr/local ...

 Or does BackupPC use some type of fail-safe mode where an error in
 config.pl just defaults to: This guy is nuts! BACK IT ALL UP!!! :)

No, it's simply apply includes and excludes to the shares they apply to. If
they don't apply to anything, they're not used.
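
For the archives, the two usual ways of expressing "only back up /home/michael"
with the rsync XferMethod are roughly these (a sketch only, names taken from
this thread):

# either make the directory itself the share ...
$Conf{RsyncShareName}  = ['/home/michael'];

# ... or keep '/' as the share and key the restriction by that share name:
$Conf{RsyncShareName}  = ['/'];
$Conf{BackupFilesOnly} = { '/' => ['/home/michael'] };

As Les points out below, the first form has the advantage of not making the
client walk the entire tree starting at '/'.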

Les Mikesell wrote on 2011-01-18 11:32:08 -0600 [Re: [BackupPC-users] backuppc 
ignores value of BackupFilesOnly; archives entire /]:
 On 1/18/2011 11:13 AM, itismike wrote:
  [...]
  I don't need to mess with RsyncShareName, right?
  [...]
 
 [...]
 Still, it doesn't make any sense to start at '/' and 
 make the client walk the entire directory tree looking for matches if 
 you know you don't want anything above that directory included.  The 
 right 

Re: [BackupPC-users] specifying differerent user in RsyncClientCmd

2011-01-17 Thread Holger Parplies
Hi,

 On 1/17/2011 12:18 AM, itismike wrote:
  I'm running an Ubuntu client with ecryptFS enabled. Since my home
  directory is encrypted, I'd like to perform the backup as myself rather
  than root so the files are browsable by me and restore is possible.

while I don't really know what that entails, I would like to point out one
thing: the user backuppc on the BackupPC server machine has (and needs)
non-interactive and passwordless access to your files. There is no way around
that if you want non-interactive backups.

So, while you might be protected against root on the *client machine* (not
sure about that, but I suppose you know what you're doing), you are *not*
protected against root on the *BackupPC server* (unless there is some
mechanism preventing him from doing su - backuppc) or anyone else who can gain
access to the backuppc account there or access the private ssh key you use for
the connection (or the BackupPC pool files, obviously).

That might be fine in your case, but I think it is worth mentioning.

  So my intent is to put my username in the RsyncClientCmd and
  RsyncClientRestoreCmd commands like this:
  $sshPath -q -x -l michael $host $rsyncPath $argList+
 
  The problem is I haven't been able to get past the message below:
  2011-01-16 14:03:16 full backup started for directory /
  2011-01-16 14:03:17 Got fatal error during xfer (Unable to read 4 bytes)
  2011-01-16 14:03:22 Backup aborted (Unable to read 4 bytes)
 
  [...] I set up ssh-keygen and can establish passwordless ssh connections
  between the server and client (and vice-versa.)

Vice-versa is not needed. Actually, if we're talking about the same thing, it
is not a good idea.

Bowie Bailey wrote on 2011-01-17 10:56:31 -0500 [Re: [BackupPC-users] 
specifying differerent user in RsyncClientCmd]:
 Are you testing as the backuppc user?  Make sure you can establish a
 passwordless connection from the backuppc user on the server to your client.

In particular, there must be no extraneous output. Make sure you can

backuppc@backuppc-server% ssh -q -x -l michael ubuntu-client /bin/true
backuppc@backuppc-server%

and get exactly *no output* from that. Furthermore, make sure rsync is
actually installed (I've been surprised that it wasn't more than once ...),
e.g.

backuppc@backuppc-server% ssh -q -x -l michael ubuntu-client /usr/bin/rsync --foo
rsync: --foo: unknown option
rsync error: syntax or usage error (code 1) at main.c(1443) [client=3.0.7]

(that should give an rsync usage error similar to the above, not a shell
error message like "command not found").

Other things to note:
- You might want or need to use an alternate ssh identity, e.g.

backuppc@backuppc-server% ssh -i /var/lib/backuppc/.ssh/id_michael_rsa -q -x -l michael ubuntu-client ...

  If you do, your RsyncClientCmd/RsyncClientRestoreCmd needs to reflect that,
  or you need to set it up in ~backuppc/.ssh/config (there is a sketch after
  this list). This is probably only the case if you are backing up several
  different hosts.
- You do *not* need (and should not have) passwordless access to the
  BackupPC server from the client, i.e.

michael@ubuntu-client% ssh -l backuppc backuppc-server ...

  should prompt you for a password (or deny access). There is no point in
  setting up passwordless logins in that direction, and doing so would mean
  that anyone capable of becoming michael@ubuntu-client had full access to
  your BackupPC pool (possibly containing backups of other hosts).
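
The ~backuppc/.ssh/config stanza mentioned in the first point might look
something like this (host name and key path are examples):

Host ubuntu-client
    User         michael
    IdentityFile /var/lib/backuppc/.ssh/id_michael_rsa

With that in place, ssh picks up the user and key for that host by itself, so
the RsyncClientCmd only needs something like '$sshPath -q -x $host $rsyncPath
$argList+'.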

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Copy the log dir from /var/lib/backuppc/log

2011-01-07 Thread Holger Parplies
Hi,

Madcha wrote on 2011-01-07 08:56:45 -0500 [[BackupPC-users]  Copy the log dir 
from /var/lib/backuppc/log]:
 I have created a new data store for my backupps
 I have created symbolink links
 from var/lib/backuppc/cpool to /media/backuppc/cpool
 from var/lib/backuppc/pc to /media/backuppc/pc
 from var/lib/backuppc/trash to /media/backuppc/trash

let me guess ...

- you didn't read the instructions and
- pooling isn't working, but you haven't noticed yet.

You shouldn't link the subdirectories, you should make $TopDir a link to
the alternate location: 'ln -s /media/backuppc /var/lib/backuppc'.
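
In other words, roughly this (paths as in your mail, Debian-style layout and
init script name assumed, BackupPC stopped for the duration):

/etc/init.d/backuppc stop
rm /var/lib/backuppc/cpool /var/lib/backuppc/pc /var/lib/backuppc/trash   # just the symlinks
cp -a /var/lib/backuppc/. /media/backuppc/    # whatever else lives there (log, .ssh, ...);
                                              # the complaint about BackupPC.sock can be ignored
mv /var/lib/backuppc /var/lib/backuppc.orig
ln -s /media/backuppc /var/lib/backuppc
/etc/init.d/backuppc start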

 I receive : cp can't create special file...BackupPC.sock 

Wow, what program might produce that kind of output? It looks vaguely
like an error message from 'cp'. There's no point in copying the socket.
BackupPC will re-create it upon startup. Even if you could copy it, the
copy would have no effect.

 |This was sent by rageinh...@msn.com via Backup Central.

No comment.

Regards,
Holger



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2010-12-07 13:16:32 -0500 [Re: [BackupPC-users] 
Bizarre form of cpool corruption.]:
 Robin Lee Powell wrote at about 23:46:11 -0800 on Monday, December 6, 2010:
   [...]
   So, yeah, that's really it.  They're both really there, and that's
   the right md5sum, and both the pool file and the original file have
   more than 1 hardlink count, and there's no inode match.
 
 Robin, can you just clarify the context.
 Did this apparent pool corruption only occur after running
 BackupPC_tarPCCopy or did it occur in the course of normal backuppc
 running.
 
 Because if the second then I can think of only 2 ways that you would
 have pc files with more than one link but not in the pool:
 1. File system corruption
 2. Something buggy with BackupPC_nightly
 Because files in the pc directory only get multiple links after being
 linked to the pool and files only unlinked from the pool using
 BackupPC_nightly (Craig, please correct me if I am wrong here)

I'm not Craig ;-), but I can think of a third possibility (meaning files may
get multiple links *without* being linked to the pool, providing something has
previously gone wrong):

3. You have unlinked files in pc trees (as you described in a seperate
   posting - missing or incomplete BackupPC_link runs) and then run an rsync
   full backup. Identical files are linked *to the corresponding file in the
   reference backup*, not to a pool file.
   If I remember correctly, that is. I haven't found much time for looking
   at the code (or list mails) in the last year, so I might be mistaken, but
   I'd rather contribute the thought and be corrected than wait until I find
   the time to verify it myself :).

 If the first, then presumably something is going wrong with either
 BackupPC_tarPCCopy or how it's applied...

Just in case it's not obvious, BackupPC_tarPCCopy generates a tar file that
can *only be meaningfully extracted* against a similar pool to that it was
created with (files *not referenced* by the tar file may, of course, be missing
or have different content - presuming you can find a usage example for
that ;-).

The hard links in the tar file reference pool file names for which the actual
file is (somewhat illegally, but that's really the whole point ;-) not
contained in the tar file. There is thus no way for tar to know if it is
actually linking to the intended file or a file with the same name but
different content - it is up to you to make sure the contents are correct.
You usually do that by copying the pool and running BackupPC_tarPCCopy
immediately afterwards, *without BackupPC modifying the source pool in
between*; you have probably stopped BackupPC altogether before starting the
pool copy.
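
For reference, the usual sequence looks roughly like this (paths are
illustrative, BackupPC stopped for the whole operation; the tool for the pool
copy is up to you):

/etc/init.d/backuppc stop
rsync -aH /var/lib/backuppc/cpool/ /mnt/newdisk/cpool/   # or cp -a, tar, ...;
                                                         # likewise for pool/ if you have uncompressed backups
mkdir -p /mnt/newdisk/pc
cd /mnt/newdisk/pc
/usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -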

BackupPC_nightly may rename pool files. If that happens after copying the pool
and before running BackupPC_tarPCCopy, (some of) the links will point to the
wrong file (with respect to the pool copy).

That said, I can't see how that would cause the unlinked pc files Robin is
observing. However, *using* a pool copy (i.e. running BackupPC on it) for
which BackupPC_tarPCCopy has stored the file contents, because it could not
find the pool file, would cause that file to remain outside the pool forever,
as long as you are using rsync and don't modify the file contents, as I
described above.

You probably know that, but I thought I'd clarify what I expect Jeffrey means
by something going wrong with how BackupPC_tarPCCopy is applied.


Oh, and of course there's always

4. Tampering with the pool. Just for the sake of completeness. But we don't
   do that, do we? ;-)


Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2010-12-09 17:08:05 -0600 [Re: [BackupPC-users] Bizarre 
form of cpool corruption.]:
 On 12/9/2010 4:44 PM, Jeffrey J. Kosowsky wrote:
 
  If I recall correctly, the first time you would do a
  subsequent incremental then it should all get linked back to the pool
  since they are linked not copied to the pool *unless* the file is
  already in the pool in which case the new backup would be linked and
  the old ones would be left orphaned. Similarly, I imagine that new
  fulls would leave them stranded. Either case could explain.
 
 I thought that was a difference between rsync/others.  Rsync works 
 against a previous copy making direct links to anything that already 
 exists so the pool copies are only for new data.  Other methods copy the 
 whole file content over and don't bother looking at any earlier runs, 
 just doing the hash and pool link or copy.

just to clarify:

1. Non-rsync-XferMethods never link to previous backups, only to the pool.
   If new files aren't BackupPC_link-ed into the pool (which should not
   happen, see below), they'll have exactly one hard link and will never
   acquire more.

2. rsync *incrementals* only create entries for *changed* files. These are
   linked to the pool if a matching file exists or otherwise entered into the
   pool as new files (which may fail if BackupPC_link is not or incompletely
   run, which should never happen under normal circumstances, just to be
   clear).
   Thus, rsync *incrementals* will never create new links to orphaned files.

3. rsync *full backups* create entries for *all files*. Changed files are
   treated as with incrementals (i.e. linked to the pool). *Un*changed files
   are linked to the same file in the reference backup. This *should normally*
   be a link to a pool file, making the new entry also be linked to the pool.
   If, however, it is not (and this is the case we were originally talking
   about), the new entry will also not find its way into the pool. This is how
   a multi-link file without a pool entry can come into existence.

   I believe, BackupPC *could* in fact detect this case (if the file we're
   about to link to has only one link, we should try to link to the pool
   instead - and possibly also correct the reference file), but I haven't
   checked the source for reasons why this might not work, and I don't expect
   I'll be writing a patch anytime soon :(. Also, I can't estimate if this
   problem is common enough to be worth the effort (of coding and of slowing
   down rsync-Xfer, if only slightly). (*)

   I'm not sure what happens if the link count of the reference file reaches
   HardLinkMax - I would expect a new entry *in the pool* to be made.

4. rsync will *not* link to anything except the exact same file in the
   reference backup (because it does not notice that there may be an identical
   file elsewhere in the reference backup or anywhere in other backups).

Regards,
Holger

(*) Just to describe how this situation can also occur:
I knowingly introduced it into my pool when I had to start over due to
pool FS corruption and desperately *needed* a reference backup for a large
data set on the other end of a slow link. I copied the last backup from
the corrupted pool FS and ran a full backup to make sure I had intact data.
I was going to fix the problem later or live with the (in my case
harmless) duplication.
BTW, this is an example of tampering with the pool ;-).



Re: [BackupPC-users] incremental backup question

2009-12-21 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2009-12-21 14:58:03 -0600 [Re: [BackupPC-users] 
incremental backup question]:
 Mester wrote:
  My TarClientCmd is:
  '$sshPath -q -x -l backup -o PreferredAuthentications=publickey $host 
  /usr/bin/env LC_ALL=C LANG=en_EN /usr/bin/sudo $tarPath -c -v -f - -C 
  $shareName+ --totals'
 
 Your extra sudo might make an extra level of shell escaping necessary. 

no, it doesn't.

 What about $Conf{TarIncrArgs}?

That is more interesting (because this is where the '+' might be missing). To
put it more general: if you want to avoid this debugging ping-pong, provide
some relevant information (like your configuration settings and log file
extracts, for example). Even if we *can* sometimes give terse answers to
terse questions, you probably wouldn't be asking the questions if this type
of answer were of any help to you (the documentation would explain everything
sufficiently in this case (not meaning to imply that it's terse)).

In particular, describe your problem, not your analysis of it. We want to
concentrate on fixing problems, not analyses.

 Maybe the space in the date string is 
 getting parsed as a separator by the extra shell layer.

That is what I was thinking. It would, however, most probably lead to tar
complaining about a non-existent file and redundant backups of only a small
part of the files. I'm still in favour of clock skew. Or maybe a TarIncrArgs
without a --newer ;-).

Regards,
Holger



Re: [BackupPC-users] incremental backup question

2009-12-20 Thread Holger Parplies
Hi,

Mester wrote on 2009-12-19 21:49:58 +0100 [[BackupPC-users] incremental backup 
question]:
 I use backuppc on a Debian Linux 5.0 server for backing up another 
 Debian Linux 5.0 server over the internet with tar over ssh.
 The first full backup is created successfully but the incremental backups 
 always make a full backup. What could be the reason for this?

a missing '+', clock skew, or a number of different configuration errors.

Regards,
Holger



Re: [BackupPC-users] Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)

2009-12-18 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-12-18 15:36:48 -0500 [Re: [BackupPC-users] 
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)]:
 Jeffrey J. Kosowsky wrote at about 13:11:37 -0500 on Monday, November 2, 2009:
   Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(cygwin/usr/share/man/man3/addnwstr.3x.gz)
   
   [...]
   
   Note from the below quoted thread from 2005, Craig claims that the error is
   benign, but doesn't explain how/why.
 
 [...]
 
 I am curious about what could be causing this situation [...]

if you're curious about what is causing a benign warning message, you're
probably on your own for the most part. I can supply you with one casual
observation and one tip:

I think I saw that warning when I changed from tar to rsync XferMethod. As you
know, tar and rsync encode the file type plain file differently in attrib
files (rsync has a bit for it, tar (like stat()) doesn't and simply takes the
absence of a bit for a special file type to mean plain file). When rsync
compares remote and local file type (remote from remote rsync instance, local
from attrib file generated by tar XferMethod), it assumes a plain file changed
its type, so it removes the local copy (if that sounds strange, remember that
File::RsyncP mimics plain rsync, which *would* delete the local file; with
BackupPC's storage backend, that doesn't make sense, hence the warning) and
transfers the remote file without a local copy to compare to. Or something
like that.

If you want to know more, look at where the source code generates the warning
message (well, that's stated *in* the warning message) and where that code is
called from (presumably File::RsyncP) and in which circumstances.

Good luck. Since you asked, don't forget to report back ;-).

Regards,
Holger



Re: [BackupPC-users] Problems on Ubuntu 9.10-Client

2009-12-15 Thread Holger Parplies
Hi,

Georg Schilling wrote on 2009-12-15 11:29:26 +0100 [[BackupPC-users] Problems 
on Ubuntu 9.10-Client]:
 [...]
 after adding a Ubuntu-Client to our BackupPC-Server I'm struggling with
 some kind of rsync problem. Apart from that, all other Ubuntu-Clients
 (!=9.10) work like a charm. Our BackupPC-server is a Solaris10-X86-machine.
 
 Here comes the XferLOG.bad:
 - -8-
 Executing DumpPreUserCmd: /etc/BackupPC/Notify_start_backup full
 sarpc04.sar.de gep
 full backup started for directory /bin (baseline backup #53)
 Running: /usr/bin/ssh -q -x -l root sarpc04.sar.de /usr/bin/rsync
 - --server --sender --numeric-ids --perms --owner --group -D --links
 - --hard-links --times --block-size=2048 --recursive --ignore-times . /bin/
 Xfer PIDs are now 26244
 Rsync command pid is 26244
 Fetching remote protocol
 Got remote protocol 1752392034
 Fatal error (bad version): bash: warning: setlocale: LC_ALL: cannot
 change locale (de)

this is the problem: your remote end outputs 'bash: warning: setlocale:
LC_ALL: cannot change locale (de)', which it mustn't. Apart from localized
root error messages being a real pain for the administrator of the machine
(but that's just my opinion), BackupPC rsync backups need a data stream free
of extraneous output. You need to be able to run

ssh -q -x -l root sarpc04.sar.de /bin/true

(as backuppc user on the BackupPC server) without getting *any* output. For
tar backups, the locale settings would matter, for rsync backups I don't
believe they do. They just need to be correct so you don't get any warning
messages (maybe you need to use 'de_DE' instead of 'de' somewhere?). You
probably need to check the bash startup files for root and the global ones or
basically anything you might have changed to set the locale to 'de'.
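
On the client, something along these lines usually makes the warning go away
(the exact locale name is a guess - the point is to reference a locale that
actually exists on that machine, or none at all for root):

locale-gen de_DE.UTF-8        # Debian/Ubuntu: make sure the locale exists at all
grep LC_ALL /root/.bashrc /root/.profile /etc/profile /etc/bash.bashrc /etc/environment 2>/dev/null

and then change whatever sets LC_ALL=de to de_DE.UTF-8 (or simply drop the
setting for root).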

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Holger Parplies
Hi,

Tino Schwarze wrote on 2009-12-09 20:50:35 +0100 [Re: [BackupPC-users] a 
slightly different question about rsync for offsite backups]:
 On Wed, Dec 09, 2009 at 10:57:13AM -0800, Omid wrote:
 [...]
  if the usb drive does not mount for whatever reason (either because it
  hasn't been plugged in, or for another reason), the copy is going to go to
  the folder that's there, which is going to fill up the native drive very
  quickly.
  
  how can i avoid this?
 [...]
 
 Just create a file called THIS_IS_THE_USB_DRIVE on the drive itself,

... or a file THIS_IS_THE_HOST_DRIVE in the directory you are mounting to
(and invert the testing logic). Or, of course, read mountpoint(1) and do
something like

mountpoint -q /mnt/usb && rsync -aHPp /data/ /mnt/usb/data/

It all depends on what you want to make easy and what you want to guard
against.
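
Wrapped up as a script, the mountpoint variant might look like this (paths and
rsync options taken from the example above):

#!/bin/sh
# refuse to copy anything if the USB drive is not actually mounted
if mountpoint -q /mnt/usb; then
    rsync -aHPp /data/ /mnt/usb/data/
else
    echo "/mnt/usb is not a mountpoint - skipping copy" >&2
    exit 1
fi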

All of that said, remember that rsyncing a BackupPC pool doesn't scale well
and may fail at some point in the future. Also, syncing a live pool will
probably not lead to a consistent copy. Depending on what you might be using
the copy for, that may or may not be a problem (restoring from backups
completed before starting the copy will probably work - though parallel chain
renumbering might mess things up (don't really know), but I wouldn't recommend
using it (or a copy of it) to continue backing up to).

Regards,
Holger



Re: [BackupPC-users] convert from rsync to rsyncd

2009-12-09 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-12-09 02:33:35 -0500 [Re: [BackupPC-users] 
convert from rsync to rsyncd]:
 [...]
 That being said, I don't think / will pose a problem since BackupPC
 saves the rsync / share name as f%2f (mangled form) which is
 equivalent to an unmangled %2f share name which probably is allowed
 in rsyncd. So just call the equivalent rsyncd share name %2f. I
 haven't tested this, so I may be missing something, but try it...

sure you're missing something. A file or directory name %2f would luckily
be mangled to f%252f - otherwise there could be file name collisions. See
sub fileNameEltMangle in BackupPC::Lib. So, no, f%2f is *not* equivalent to
an unmangled %2f.

Regards,
Holger



Re: [BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Holger Parplies
Hi,

Pat Rice wrote on 2009-12-09 11:04:37 + [[BackupPC-users] making a mirror 
image of the backup pc disk]:
 [...]
 What I would like to know, or if any on had any experience of:
 Making a mirror or the backup disk:

well, yes, it is an FAQ, but in short:

 Should I do a dd?

Yes, in your situation definitely.

 or would a copy be sufficient ?

Maybe, but not certain. You don't want to take the chance. If your pool is
reasonably sized, it will take longer (possibly by orders of magnitude; in
fact, it might not complete before running out of resources) than a 'dd'.
Only a small pool on a large disk would be faster to cp/rsync/tar/... than
to dd.
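
A dd along these lines is the straightforward variant (device names are
examples; run it with the pool FS unmounted or at least with BackupPC stopped,
and with a target at least as large as the source):

dd if=/dev/sdb of=/dev/sdc bs=4M

or, to an image file on some other filesystem:

dd if=/dev/sdb bs=4M | gzip -c > /mnt/external/pool-image.gz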

 or will I have to worry about hard links that need to be kept ?

Yes.

Regards,
Holger



Re: [BackupPC-users] localhost translated as 127.0.0.1 vs. ::1 messing up permissions

2009-12-08 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-12-07 12:10:00 -0500 [[BackupPC-users] 
localhost translated as 127.0.0.1 vs. ::1 messing up permissions]:
 [post ignored due to sender's request]

I think the fourth option is the easiest and most straightforward:

 My /etc/hosts file has these (Fedora default) lines at the top:
127.0.0.1   localhost localhost.localdomain localhost4 
 localhost4.localdomain4 mycomputer.mydomain
::1 localhost localhost.localdomain localhost6 
 localhost6.localdomain6 mycomputer.mydomain
 [...]
 However when I access the web interface via an ssh tunnel using 
 -L 8145:localhost:80 and http://127.0.0.1:8145/BackupPC, somehow the
 address gets translated to ::1 (as shown by the httpd error_log
 records) which causes access to be denied.

just use 'ssh -L 8145:localhost4:80 ...'

Regards,
Holger



Re: [BackupPC-users] cronjob if needed

2009-11-27 Thread Holger Parplies
Hi,

Matthieu Bollot wrote on 2009-11-18 14:25:44 +0100 [[BackupPC-users] cronjob if 
needed]:
 I want to send an email after 7 days since the last backup, so I could
 set EMailNotifyOldBackupDays to 6.97.
 
 But what I actually want is to send an e-mail at 12am. So I could make a
 cron job :
 00 00 12 * * backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup
 host.domain.tld host.domain.tld backuppc 1

the status e-mails are sent by BackupPC_sendEmail, which is called by
BackupPC_nightly, not by BackupPC_dump, so triggering a backup won't result in
the e-mail being sent (even if the backup is unsuccessful). If you can arrange
for BackupPC_nightly to be run at the time you want the e-mails to be sent,
that should take care of what you want to achieve (except that the e-mails are
sent *after* pool cleanup, which will be an arbitrary - possibly large - amount
of time later). The question is whether that is a good time in your
circumstances to run the pool cleanup.
Hint: BackupPC_nightly is run at the first hour in $Conf{WakeupSchedule}.
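
So, to make the nightly run (and with it the status mails) happen around noon,
the first element is what matters - something like (illustrative):

$Conf{WakeupSchedule} = [12, 1..11, 13..23];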

If the time is inconvenient for pool maintenance, you can probably call
BackupPC_sendEmail directly, but that won't prevent it from also being called
from BackupPC_nightly, so *some* of the emails will probably be sent from
there (for hosts which have become overdue between last BackupPC_sendEmail
call and BackupPC_nightly run).

As for the time, are you talking about midnight (12am, '00 00' in crontab) or
noon (12pm, '00 12' in crontab - I suppose you weren't intending to do this
only on midnight of the twelfth of each month ('00 00 12' in crontab) as your
crontab entry suggests ;-)?

 Now, more difficult (that's why I'm sending an email) I want to do both
 of them.
 after 6days, the next time  it is 12am I send an email.
 ie : 
 I backup monday 1st at 5pm, 
 I receive an email monday 8th at 12am.

This suggests you probably mean noon (12pm).

 And backup when receiving the email

I don't quite understand what you're intending here. Are you intending to
script a reaction to overdue backups? Is that possible? If so, wouldn't it be
better to make backups succeed when scheduled in the first place
(DumpPreUserCmd or PingCmd)?

Or do you want to replace BackupPC's scheduling? Or are you talking about
manual intervention by the person reading the e-mail?

 [...] I will do this in shell but the results of backuppc_servermesg seems
 to be perl, isn't it ?

Strictly speaking it's a string which happens to parse as one or more Perl
assignments (once you remove the "Got reply: " prefix) - at least for the 'status'
command, that is. It is probably much easier to do anything meaningful with
the value in Perl than in a shell script.
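
A very rough sketch of that idea (the path and the exact shape of the reply are
assumptions - check what your installation actually returns before relying on
it, and run it as the backuppc user so the server socket accepts you):

#!/usr/bin/perl
use strict;
use warnings;
no strict 'vars';   # the reply assigns to package variables; which ones depends on the command

my $serverMesg = '/usr/share/backuppc/bin/BackupPC_serverMesg';   # path assumed
my $reply      = `$serverMesg status info`;
$reply =~ s/^Got reply:\s*//;
eval $reply;                      # e.g. 'status info' should populate %Info
die "could not parse reply: $@" if $@;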

Regards,
Holger



Re: [BackupPC-users] Backup fails after running 8-10 hours

2009-11-19 Thread Holger Parplies
Hi,

Nick Bright wrote on 2009-11-09 17:57:11 -0600 [Re: [BackupPC-users] Backup 
fails after running 8-10 hours]:
 Les Mikesell wrote:
  Nick Bright wrote:
  [...]
  
  full backup started for directory /
  Running: /usr/bin/ssh -q -x -l root cpanel /usr/bin/rsync --server 
  --sender --numeric-ids --perms --owner --group -D --links --hard-links 
  --times --block-size=2048 --recursive --ignore-times . /
  Xfer PIDs are now 27778
  Got remote protocol 1768191091
  Fatal error (bad version): stdin: is not a tty

note the error message here. stdin: is not a tty.

 [...]
 Thank you for your reply. I checked in to it, and determined that there 
 isn't anything being output by logging in to the backuppc system and su 
 backuppc then ssh r...@cpanel:

It's not output, it's probably an 'stty' or something else that expects its
stdin to be a tty. Note that if you try it like you did (ssh -l root cpanel
without a command argument), that's not the same as what BackupPC is doing. stdin
*will* be a tty in this case, so you won't get the error. You should try
something like ssh -l root cpanel /bin/true instead.

The reason it is working with tar is probably just that the tar XferMethod is
less strict about its input. rsync *needs* to agree on a protocol version
first and then speak that protocol (which doesn't accommodate arbitrary
garbage in the protocol stream). tar just parses messages and treats
everything it doesn't recognize as an error message. You could look at your
error counts and the XferLOG error messages to confirm that.

I would encourage you to find the problem and switch back to rsync (if that is
what you had first planned on using).

Regards,
Holger



Re: [BackupPC-users] Cant find how to set what is backed up!

2009-10-22 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2009-10-22 11:17:22 -0400 [Re: [BackupPC-users] Cant find 
how to set what is backed up!]:
 Holger Parplies wrote:
  P.S.: You don't need to worry about write permissions unless you want to
modify these files through the web interface.
 
 Which is highly recommended.  BackupPC has an excellent web
 configuration interface.

yes, I was not trying to discourage its usage. My point was simply, don't
expect that to solve your current issue. Aside from that, config.pl has
excellent comments that explain a lot of how BackupPC works (as well as,
obviously, how to use the settings). So editing the file with an editor is
in no way inferior to using the web configuration interface. It mainly depends
on what feels more intuitive to you. Some prefer the web interface, some prefer
editing the file. The two possibilities exist so that you can use both of
them. Both have their advantages and limitations.

But thank you for pointing this out. I see that my remark can be understood
that way.

Regards,
Holger



Re: [BackupPC-users] how to have 1 full backup + incrementals forever?

2009-10-22 Thread Holger Parplies
Hi,

Tyler J. Wagner wrote on 2009-10-21 15:35:20 +0100 [Re: [BackupPC-users] how to 
have 1 full backup + incrementals forever?]:
 But you'd also want to adjust the IncrKeepCnt to 8*6 = 42, 
 to keep those dailies.

for the archives, 8*6 is 48, not 42. Remember: "what do you get if you
multiply six by nine?"

Regards,
Holger



Re: [BackupPC-users] Cant find how to set what is backed up!

2009-10-21 Thread Holger Parplies
Hi,

Andrew Schulman wrote on 2009-10-21 05:49:59 -0400 [Re: [BackupPC-users] Cant 
find how to set what is backed up!]:
  On Tuesday, 20 October 2009 at 19:23 -0400, giorgio p wrote:
   I'm trying to get backuppc configured.
   I thought I had done the required setup... 
   
   In the /etc/backuppc/config.pl file I have:
   $Conf{XferMethod} = 'rsync';
   $Conf{RsyncShareName} = ['/home/storage','/home/george'];
   
   In the /etc/backuppc/hosts file I have:
   localhost   0   backuppc
   
   However when the backup runs it appears to just backup the /etc directory 
   which isn't even specified.
 
 I think this last point is the clue. If you've edited config.pl as root, it
 may have become owned by root and not readable by backuppc or www-data (or
 whatever your web server user is).

actually, I doubt that.

The Debian package provides an example localhost.pl which specifies backups
of /etc. It is overriding your settings from config.pl. The whole point of a
host specific configuration file is to be able to override global settings on
a per-host basis. You should either specify your settings for localhost in
localhost.pl rather than config.pl (preferred) or leave them in config.pl and
delete localhost.pl (or at least remove the settings you don't want to
override).
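
For illustration only (share names taken from the settings quoted above),
localhost.pl could contain just the bits that differ from config.pl:

   # /etc/backuppc/localhost.pl - per-host settings that override config.pl
   $Conf{XferMethod}     = 'rsync';
   $Conf{RsyncShareName} = ['/home/storage', '/home/george'];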

 In that case, backuppc will fall back to a default config, which just backs
 up /etc.

Actually, I can't see any defaults for share names in the code, and I wouldn't
think there is any point in defaulting them. How should BackupPC guess *what*
to back up if you've failed to configure it? There's a default config.pl which
you are supposed to edit and which is extensively commented. Removing that (or
making it inaccessible to BackupPC) is always a configuration error. There's
not much good (but a lot of harm) that could come from just backing up
something that seems to make sense when you encounter an identifiable
misconfiguration.

 This happens to me all the time.  If you edit a file as root, some editors
 will preserve the file ownership when you save, others (emacs) will change
 it back to root.

You run emacs as root? Small tip: sudoedit. You can use any editor you want,
and it will be run with your privileges on a tmp file. sudoedit should
preserve ownership, if I'm not completely mistaken.
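
For example, to edit the main config file from this thread:

   sudoedit /etc/backuppc/config.pl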

Regards,
Holger



Re: [BackupPC-users] Cant find how to set what is backed up!

2009-10-21 Thread Holger Parplies
Hi,

giorgio p wrote on 2009-10-21 16:25:54 -0400 [[BackupPC-users]  Cant find how 
to set what is backed up!]:
 
 Thanks for the replies.

did you read them?

 I'm wondering if I have missed something more basic...

Yes, you did. Let me rephrase my previous reply.

 Here are the permissions on the /etc/backuppc directory:
 -rw-r--r-- 1 root root   414 2007-02-07 07:46 apache.conf
 -rw-r--r-- 1 root root 6 2009-10-19 23:26 config.pl

That's your main config file.

 -rw-r--r-- 1 root root  2238 2007-02-07 07:46 hosts
 -rw-r--r-- 1 root root 0 2009-10-19 22:19 htgroup
 -rw-r--r--  1 root root    23 2009-10-19 23:01 htpasswd
 -rw-r--r-- 1 root root   427 2007-02-07 07:46 localhost.pl

And ***that*** is ***where your error is***. Look inside this file
(localhost.pl) and understand.

Regards,
Holger

P.S.: You don't need to worry about write permissions unless you want to
  modify these files through the web interface.



Re: [BackupPC-users] Discrepant file size reports of backups

2009-10-16 Thread Holger Parplies
Hi,

s...@pm7.ch wrote on 2009-10-16 02:05:07 -0700 [Re: [BackupPC-users] Discrepant 
file size reports of backups]:
 [...]
 Both directories are in the same filesystem and the path was most likely
 changed right after the installation.

/var/lib/backuppc and /usr/share/backuppc/data ? That would be the root file
system. It's not really a good idea to put your pool on the rootfs.

 I don't fully understand why the changed path is a problem in this case,

Neither do I, but I remember that it *is* a problem. You can have a link for
$TopDir (i.e. $TopDir = '/var/lib/backuppc' and /var/lib/backuppc is a
softlink to somewhere), but softlinks below $TopDir don't seem to work
(whatever the reason was), even if they remain within one file system.

 but I can try to move the directory back to the default path (thanks for the
 link).

That case is not really covered on the wiki page, because it really depends on
what you did and thus what you need to revert. I was hoping the wiki page would
explain some of the background. If you need more help, we'd need to know a bit more
about your filesystem layout (i.e. 'df', any softlinks involved, layout below
/usr/share/backuppc/data, ...). If it's really all just one FS, you probably
just need to 'mv' some things around, though that won't re-establish pooling
if it wasn't working.

Also note that backup expiration is *not* done by BackupPC_nightly but by
BackupPC_dump. Changing the number of backups to keep and running
BackupPC_nightly (via the daemon, hopefully? :) will do exactly nothing. You
need to run a backup to expire old backups, and if pooling isn't working (not
sure from the info so far), you won't even need BackupPC_nightly to get back
some space.

On the other hand, moving things back into place just might make
BackupPC_nightly clean out the complete pool (because nothing has more than
the one pool link) without the need to expire any backups. That would however
mean that your existing backups do not participate in pooling (I believe
Jeffrey has scripts to fix that, but I'm not sure if it's worthwhile for you).

If you don't need to keep your backup history (as your attempt of reducing the
number of kept backups suggests), you might be best off to simply start over
with a new pool (and keep the old one around as long as you need it). Consider
putting this on a separate partition (or better yet: set of disks) from your
OS installation. But that matter is explained on the wiki page :).

Regards,
Holger



Re: [BackupPC-users] Discrepant file size reports of backups

2009-10-15 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2009-10-15 21:02:24 +1100 [Re: [BackupPC-users] 
Discrepant file size reports of backups]:
 Stefan Dürrenberger wrote:
  $Conf{TopDir} = '/usr/share/backuppc/data';
  r...@server:/var/lib/backuppc# du -h --max-depth=1
  r...@server:/usr/share/backuppc# du -h --max-depth=1
 
 Are you using the backuppc deb from ubuntu? If so, you seem to have
 modified topdir incorrectly.

that seems to be the case, judging from the du output.

 the pool, cpool and pc directories *must* all reside on the same
 filesystem. Also, you can't simply change topdir in the config file.

See

https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory

for details.

Regards,
Holger



Re: [BackupPC-users] editing the wiki

2009-10-14 Thread Holger Parplies
Hi,

Andrew Schulman wrote on 2009-10-14 14:37:36 -0400 [[BackupPC-users] editing 
the wiki]:
 I'd like to add a section or page to the wiki, about usage of
 BackupPC_serverMesg.

https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=ServerMesg_commands

 How do I go about getting edit rights?

I've added you to the editors group.

Regards,
Holger



Re: [BackupPC-users] Pooling doesn't work

2009-10-08 Thread Holger Parplies
Hi,

Patric Hafner wrote on 2009-10-08 16:29:54 +0200 [[BackupPC-users] Pooling 
doesn't work]:
 i'm running BackupPC 3.1.0 with Debian Lenny. BackupPC is responsible
 for about 5 Clients which are backupped over rsync/ssh.
 
 My problem is, that during an incremental backup nearly every file is
 marked as create, so nearly every file will be downloaded again.
 About 20% are marked as pool.
 But those files marked as create haven't changed since the last run,
 timestamps are still the same. For example the whole /etc directory will
 be downloaded every day. And I can surely say that nothing changed there.
 [...]
 Does anyone has an idea? This would be great.

yes, you are probably incorrectly using incremental backups, but since you
don't say anything about your configuration, we can only guess.

Level 1 incremental backups download everything that has changed since the
last full backup. Presuming your last full was long ago, or you have modified
your configuration since then (e.g. changed from a test backup of, say, /lib,
to a full backup of all of your root file system), you will be downloading
everything changed or added since the last full backup with every incremental.

Run a full backup and see if the following incrementals behave better. If so,
send us some details about your configuration (esp. full and incremental
backup scheduling settings) to let us help you adjust your schedule. In short:
you *need* regular full backups.
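
As a point of reference, the stock scheduling values in config.pl look like
this (a sketch of the defaults, not a recommendation for your site):

   $Conf{FullPeriod}  = 6.97;   # a full backup roughly once a week
   $Conf{IncrPeriod}  = 0.97;   # incrementals on the days in between
   $Conf{FullKeepCnt} = 1;      # number of full backups to keep
   $Conf{IncrKeepCnt} = 6;      # number of incremental backups to keep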

Regards,
Holger



Re: [BackupPC-users] Pooling doesn't work

2009-10-08 Thread Holger Parplies
Hi again,

I was just going to add that your subject is incorrect, but I see that you
seem to be having a second issue. Sorry for replying a bit hastily, but your
wording does make it rather easy to draw incorrect conclusions (or rather miss
essential points).

Patric Hafner wrote on 2009-10-08 16:29:54 +0200 [[BackupPC-users] Pooling 
doesn't work]:
 [...]
 My problem is, that during an incremental backup nearly every file is
 marked as create, so nearly every file will be downloaded again.
 About 20% are marked as pool.

Note that "pool" also means "downloaded again". "Not downloaded due to rsync
savings" shows up as "same" (in a full backup) or as the file simply not
appearing in the log (in an incremental backup).

 But those files marked as create haven't changed since the last run,
 timestamps are still the same. For example the whole /etc directory will
 be downloaded every day. And I can surely say that nothing changed there.

Timestamps are not the only indication of change. It is *possible* to modify a
file without changing the timestamp (e.g. resetting it after the change). But
that is probably not what is happening here.

It would appear that pooling is only *partially* working (which is confusing
in itself). You couldn't have files marked "pool" if there was no pooling at
all. I would *guess* that you have probably incorrectly changed $TopDir after
having made some backups. You probably have tons of "link failed ..." errors
in your log files. New files are not added to the pool, so only files already
present from your first backups would be found there, though linking would not
work for them, either.

Again, for anything more than educated guesses about what might be going
wrong, we need details about your setup.

- What version, what installation method, what OS, which paths, which
  filesystem(s), how partitioned?
- What did you change recently? Move $TopDir? How? See
  
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory
  for details on what you should or should not have done.
- Is there anything suspicious in your log files ($LogDir/LOG and
  $TopDir/pc/hostname/XferLOG.NN.z)?
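
For the last point, something like the following would show whether linking
into the pool is failing (the path to BackupPC_zcat, the host name and the
backup number are placeholders; the XferLOG files are compressed, so plain
grep won't read them directly):

   /usr/share/backuppc/bin/BackupPC_zcat $TopDir/pc/hostname/XferLOG.NN.z | grep -i link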

 This results an an extensive amount of traffic, which is unacceptable.

Err, no. It results in an excessive amount of storage being used. Traffic is
independent of storage. If pooling was working correctly, you could still have
the same amount of traffic, but everything should be marked "pool" and stored
only once. Conversely, if rsync transfers were working correctly, you would
save traffic, but that does not imply that pooling would work. True, for this
one host unchanged files would be re-used, but they would not be matched up
against independent copies of identical content (from the same host or
different hosts).

You need to fix both issues, and they are independent of each other.

Regards,
Holger



Re: [BackupPC-users] What happened to the part of the Wiki with user-contributed utilities?

2009-10-07 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-10-07 09:33:51 -0400 [[BackupPC-users] What 
happened to the part of the Wiki with user-contributed utilities?]:
 I was trying to add updated versions of my routines (e.g.,
 BackupPC_deleteFile, BackupPC_fixLinks) to the user contribution
 portion of the Wiki and was not able to find it. Not sure if I'm
 looking in the wrong place
 (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Main_Page#Welcome_to_the_BackupPC_Wiki.21)
 or if that area of the Wiki has moved or been eliminated.

"that area of the Wiki"? Frankly, the Main_Page doesn't seem to link to *any*
wiki content. But the wiki pages seem to be there, and they're really easy to
find: just press "Random page" repeatedly until you've found what you're
looking for. I was lucky, I got to title=CustomUserScripts with three or
four clicks. How many does it take you (and don't cheat by using the page
title ;-)?

Assuming I ever manage to create a SF account (the pages seem to be optimized
toward not working with my browser version), I might even join the distributed
content recovery effort (sort of an fsck for wikis ... we could create a page
lost+found and put links to any pages we find in there).

Have fun.

Regards,
Holger



Re: [BackupPC-users] What happened to the part of the Wiki with user-contributed utilities?

2009-10-07 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-10-07 16:30:54 -0400 [Re: [BackupPC-users] 
What happened to the part of the Wiki with user-contributed utilities?]:
 Michael Stowe wrote at about 14:11:36 -0500 on Wednesday, October 7, 2009:
   [...]
   I'm not sure if the toolbox appears if you're logged in, but it's
   certainly not there when I look at it.
 
 I think the problem is that I don't have authoring or editing
 permissions (or at least so it said when I attempted to edit the
 source page).

that remains true when you log in (I actually managed to create an account -
my fault, not sourceforge's). In case anyone was wondering, to edit a page,
you click on "view source". Well, maybe that changes to "edit" once you have
permission ...

But I've found the navigation, from where you should be able to reach most
wiki pages (there are some listed as orphaned). Well, it *was* the navigation,
but I still think it needs major restructuring. Actually, I think it needs a
structure :-). In any case, here it is:

https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Space.menu

At least, we can again provide links to existing wiki pages ...

Regards,
Holger



Re: [BackupPC-users] Backup, Lighttp and my mess

2009-10-06 Thread Holger Parplies
Hi,

Laura_marpplet wrote on 2009-10-05 13:33:31 -0400 [[BackupPC-users]  Backup, 
Lighttp and my mess]:
 [...]
 But when I try to run the CGI script I got a 500 - Internal Server Error.
 Reading the lighttpd error file it just says Can't access() script. I
 checked the permissions and they look good to me (/srv/lighttpd/cgi-bin/):
 
 -r-sr-xr--  1 backuppc backuppc 3995 Oct  5 17:54 BackupPC_Admin.pl
 -rwsr-xr--  1 root root 3993 Oct  2 14:10 sample2.pl
 -rw-r--r--  1 root root  116 Oct  2 14:09 sample.pl

this really says nothing about the web server's ability to access the files.
sample.pl and sample2.pl are accessible to user root and group root.
BackupPC_Admin.pl is accessible to user backuppc and group backuppc. Presuming
lighttpd is not running as user root, but as a user in *group* root but not in
group backuppc, what you are observing makes perfect sense.

 I can also run sample.pl and sample2.pl with no problems. It doesn't
 work either if the owner is root.

If lighttpd were running as user root, it could access BackupPC_Admin.pl no
matter who it belongs to. I'd guess it's the group membership that makes the
difference.

All of that said, it could probably be a completely different problem, but the
error message points in this direction (presuming access() refers to the
system call). If it's not permissions (which you could fix by either
'chmod a+x BackupPC_Admin.pl' or 'chgrp root BackupPC_Admin.pl' or even adding
the user lighttpd is running as to the group backuppc in /etc/group -
depending on which of those fits your taste and security needs best), you
could try some other things:

- perl -c BackupPC_Admin.pl
  Is your script garbled in some way that makes it syntactically incorrect or
  unable to find its library files? You might have to temporarily remove the
  setuid bit from the permissions for this test.

- Look at the first line of the script ('head -1 BackupPC_Admin.pl'). Is the
  path to the Perl binary correct? (Don't know if that's relevant for
  lighttpd.)

- Give us more information on your setup (which version of BackupPC, what
  OS and version, what user is lighttpd running as, where did you install
  BackupPC to (which paths), is SElinux enabled, output of
  'md5sum BackupPC_Admin.pl' ...).

But try fixing the permissions first, that seems to be the most likely
problem.
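
To spell out the permission fixes mentioned above (the lighttpd user name
www-data is an assumption - substitute whatever user your lighttpd actually
runs as):

   chmod a+x /srv/lighttpd/cgi-bin/BackupPC_Admin.pl
   # or
   chgrp root /srv/lighttpd/cgi-bin/BackupPC_Admin.pl
   # or add the web server user to the backuppc group
   usermod -a -G backuppc www-data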

Regards,
Holger



Re: [BackupPC-users] Need help on scheduling, full backups and excluding directories

2009-10-06 Thread Holger Parplies
Hi,

Anand Gupta wrote on 2009-10-07 01:49:41 +0530 [[BackupPC-users] Need help on 
scheduling, full backups and excluding directories]:
 [...]
 2. I have excluded the following directories in backup
 
 $Conf{BackupFilesExclude} = {
  '*' => [
 'WUTemp',
 'UsrClass.dat',
 'NTUSER.DAT',
 'parent.lock',
 'Thumbs.db',
 'ntuser.dat.LOG',
 'IconCache.db',
 'pagefile.sys',
 'hiberfil.sys',
 'UsrClass.dat.LOG',
 '/WINDOWS',
 '/Program Files',
 '/Progra~1',
 '/Config.Msi',
 '/$AVG*',
 '/cygwin*',
 'autoexec.ch',
 'config.ch',
 '/*/Cache',
 '/RECYCLER',
 '/RECYCLER/',
 '/*/Temporary?Internet?Files',
 '/System?Volume?Information',
 '/System Volume Information/',
 '/Documents and Settings/*/Application Data/Microsoft/Search/Data',
 '/Documents and Settings/*/Local Settings/Application 
 Data/Google/Chrome/User Data/Default/Cache',
 '/Documents and Settings/*/Local Settings/Application 
 Data/Mozilla/Firefox/Profiles/*.default/Cache',
 '/Documents and Settings/*/Local Settings/Temp',
 '/Documents and Settings/*/Local Settings/Temporary Internet Files',
 '/System Volume Information',
 '/Temp',
 '/MSOCache/'
   ]
 
 However even though the above excludes have been put in the config file, 
 i see these directories/ files being backed up.

what XferMethod are you using? If it's smb, I believe I have read that you
need '\' as the path separator if you want globbing to work, e.g.

'/RECYCLER/',   # should be ok, no wildcards
'\RECYCLER\',   # the same as above
'/*/Cache', # won't work, but ...
'\*\Cache', # should work

As I can't test that (I don't back up Windoze systems), I'd be interested in
someone confirming it (maybe you, if it works ;-).

 How do i remove them from the pool ?

Since you're only keeping backups for a week, I'd suggest not worrying about
it (if space permits). Your list is quite long, so it seems quite a chore to
get rid of all those and still have backups you trust. If you really want to
do it, Jeffrey wrote a script which you can find somewhere in the archives.
If you want a *simple approximation* that is good enough though not 100%
correct, try something like

find pc/ -mindepth 4 -maxdepth 4 \( -name fpagefile.sys -o -name 
fhiberfil.sys \) -exec rm {} \;

(assuming those are always in the root of a share). On the other hand, these
excludes didn't include a wildcard, so I would expect them to have been
respected (were they?).

If you're going to remove many small files that way (which I wouldn't do),
you'd want to think of either replacing the -exec with a -print0 and piping
the output to 'xargs -0 rm', or replacing the 'find' with 'find2perl' and
piping the output into 'perl'.
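
With the first of those changes, the command above would become roughly

   find pc/ -mindepth 4 -maxdepth 4 \( -name fpagefile.sys -o -name fhiberfil.sys \) -print0 \
       | xargs -0 rm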

Also remember that no space will be actually freed until BackupPC_nightly
runs (and don't run it by hand, use 'BackupPC_serverMesg BackupPC_nightly run'
if you really want to force an unscheduled run).

Regards,
Holger



Re: [BackupPC-users] unable to restore via zip archives

2009-10-02 Thread Holger Parplies
Hi,

Boyko Yordanov - Exsisto Ltd. wrote on 2009-10-02 19:15:40 +0300 
[[BackupPC-users] unable to restore via zip archives]:
 [...] BackupPC version 3.2.0beta0 [...]
 I am unable to restore files via zip archives with compression (1-9).  
 The web interface prompts me to save the file, but then the file is  
 usually just a few kilobytes in size (no matter how many files I am  
 selecting for restoration) and I am unable to extract anything from  
 it. I am able to download a zero compression zip archive, but  
 nothing if I set the compression to higher values. The other restore  
 methods are working fine. I have all the necessary perl modules  
 installed including Archive::Zip and the ones it depends on. There are  
 no errors in the BackupPC log file, actually it just logs a normal  
 entry like I've successfully downloaded the zip archive - but I did  
 not.. at least the file I get seems somehow broken.

to rule out a browser problem, you should try it from the command line:

backuppc% /path/to/BackupPC_zipCreate -h hostname -n backupnum \
  -s /share/name -c 1 /paths/of/files/in/share > /tmp/test.zip

Aside from that, browsing the source code of zipCreate shows several comment
lines ending in 8(. They detail what file types can't be zipped. If you are
restoring files from a Unix system, you probably shouldn't be using zip files,
because you may be losing information. For data where you *know* that you have
no hardlinks, symlinks or special files, you are probably ok, but in most
cases you should be using tar. For Windoze restores, that might be a different
matter.

 Could it be a bug or a known issue? Anyone noticing the same behavior  
 w/ his BackupPC setup?

For whatever it's worth, I don't use zip restores, but a quick test of
BackupPC_zipCreate on BackupPC 2.1.2 (sorry :) shows that it seems to work
fine for me. I didn't test through a browser.

Regards,
Holger



Re: [BackupPC-users] Variable in Host Configuration File

2009-10-01 Thread Holger Parplies
Hi,

Valarie Moore wrote on 2009-10-01 16:11:16 + [[BackupPC-users] Variable in 
Host Configuration File]:
 I have a host configuration file like this:
 
 $Conf{ClientNameAlias} = 'localhost';
 $Conf{XferMethod} = 'rsync';
 $Conf{RsyncShareName} = '/home/jeixav';
 $Conf{RsyncClientCmd} = 'sudo -u jeixav $rsyncPath $argList+';
 $Conf{RsyncClientRestoreCmd} = 'sudo -u jeixav $rsyncPath $argList+';
 
 I would like to specify the user account jeixav once as a variable, rather
 than having it hardcoded three times, but I don't know how to do this.
 Going through the BackupPC's config.pl file, it seems as though the variable
 called $user should help, [...]

no, it doesn't. Above $Conf{RsyncClientCmd}, the comment starting with "Full
command to run rsync on the client machine" does not list a variable $user.
For RsyncShareName, there is no substitution at all.

You don't tell us what the name of the host is (if the configuration file is
named foo.pl, then your host name is foo). You are using ClientNameAlias
to override the host name for ... err ... the ping command (it is not used in
RsyncClient{,Restore}Cmd, so it has no other effect), so you've presumably
used something descriptive as host name. If your host is actually named
jeixav, you can use $host in RsyncClientCmd and RsyncClientRestoreCmd (but
not RsyncShareName).

Depending on what you are exactly trying to achieve, there are several
further possibilities.

1. Avoid three occurrences of the same constant value for aesthetic reasons
   or ease of change

   Define a Perl variable in the config file and use it (you won't be able to
   use the web based config editor to make changes to the config file,
   though):

   my $user = 'jeixav';
   $Conf{RsyncShareName} = "/home/$user";
   $Conf{RsyncClientCmd} = "sudo -u $user \$rsyncPath \$argList";
   $Conf{RsyncClientRestoreCmd} = "sudo -u $user \$rsyncPath \$argList";

   Note the double quotes and the quotation of $-signs for the variables that
   are to be interpolated by BackupPC rather than Perl, though.

2. Reusability of the same config file for several users through hard links
   (/etc/backuppc/jeixav.pl, /etc/backuppc/user2.pl, /etc/backuppc/user3.pl
   etc. all point to the same file - this is assuming you are using the
   relevant user names as host names in BackupPC):

   Use Jeffrey's trick of looking for the host name in $_[1] inside the config
   file:

   $Conf{RsyncShareName} = "/home/$_[1]";
   $Conf{RsyncClientCmd} = "sudo -u $_[1] \$rsyncPath \$argList";
   $Conf{RsyncClientRestoreCmd} = "sudo -u $_[1] \$rsyncPath \$argList";

   The same notes apply as above.


Two unrelated remarks:

1. In any case, it should be $argList, not $argList+, as the value is not
   passed through a shell (see remark above $Conf{TarClientCmd} in config.pl
   for an explanation on shell quoting). The default value of RsyncClientCmd
   contains an 'ssh', so in that case $argList needs to be quoted. For 'sudo'
   it should not be (both will work as long as there is, in fact, nothing to
   quote, but you don't want it to break if you change the configuration at
   some point in the future).

2. For backups of localhost (and even more so if RsyncClientCmd does not even
   contain an 'ssh' or equivalent) you don't really need a PingCmd. Then
   again, you might want to keep it to simplify future changes. If you want to
   disable it, you can do so by setting

   $Conf{PingCmd} = '{sub {0}}';

   I'm only mentioning this because *without* a PingCmd, ClientNameAlias
   doesn't do anything anymore, so you could probably drop it (though I'm not
   positive that BackupPC won't try to do a DNS lookup of the name, so you
   might need to keep it after all).

   But this is really more academic than in any way relevant. If in doubt,
   just leave PingCmd and ClientNameAlias as they are.

 I am using BackupPC as packaged with Debian 5.0.3 (lenny).

Which is 3.1.0, in case anyone was wondering.

Regards,
Holger



Re: [BackupPC-users] Switching backup methods

2009-09-29 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-09-29 02:49:11 -0400 [Re: [BackupPC-users] 
Switching backup methods]:
 Holger Parplies wrote at about 15:54:25 +0200 on Saturday, September 26, 2009:
   [...]
   3. I believe I recall reading that *restoring* a backup *made with rsync*
  with the tar XferMethod produces warnings (or errors?) - probably
  because tar doesn't understand the deleted file entries in the
  attrib files. [...]
 
 If I am understanding you correctly, then presumably this problem
 would only occur for incremental backups and not for full backups
 since the concept of deleted file shouldn't exist in a full backup.

I agree on that, but I have never experienced the problem myself, just read
about it one or two times over the last years. At the time, I wasn't familiar
with the attrib file contents. Now, it makes sense that this problem could
occur. But I might be misremembering. The important part is that if you run
into problems when restoring after changing the XferMethod, changing it back
for the restore might help.

 Second, how do tar incrementals signal deleted files if they don't use
 the deleted file type (10) attribute?

They don't. tar can't detect deleted files. Deleting a file doesn't change its
modification time, it deletes it ;-).

 I have never used tar as a XferMethod before so I don't understand how it
 works differently from rsync here...

Well, you don't have fileLists ... a full tar backup is just a tar stream of
all the files on the sender side, which is then interpreted by BackupPC and
integrated into the pool. An incremental tar backup is a tar stream of all
the files on the sender side that have changed since a timestamp. Files not
present are either unchanged or deleted (or moved to a new location). There's
no way to tell which.
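
For reference, the stock tar settings express exactly that (these should be
close to the BackupPC 3.x defaults - the trailing '+' asks BackupPC to
shell-escape the substituted value):

   $Conf{TarFullArgs} = '$fileList+';
   $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';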

Regards,
Holger



Re: [BackupPC-users] problem purging files with pool size

2009-09-28 Thread Holger Parplies
Hi,

backu...@omidia.com wrote on 2009-09-28 08:48:44 -0700 [[BackupPC-users] 
problem purging files  with pool size]:
 i'm having a problem purging files and with the size of my pool.
 
 i'm running version 2.1.2pl0.
 
 in the past, i've modified the $Conf{FullKeepCnt} so that it's more
 conservative, and then run BackupPC_nightly, and it's trimmed the pool. 

I don't believe that is actually true. Backup expiration is done by
BackupPC_dump, not by BackupPC_nightly. I believe your problem with 2.1.2 is
that no dumps (and no expiration) are done when the pool FS is more than
DfMaxUsagePct full (actually I don't have the 2.1.2 source here, but in 2.1.1
that is the case; in 3.0.0beta3 it's fixed; the changelog doesn't seem to say
in which version it was changed).

 [...]
 here's the disk space report:
 
 /dev/sda1 147G  135G  4.5G  97% /home/backuppc

You might try temporarily increasing $Conf{DfMaxUsagePct} to 97 or 98 (and
then run a backup or wait for one to run automatically). Depending on how
large your backups typically are, you might even keep it there (7.35GB (5% of
147GB) is a lot of space to keep reserved - unless your backups are typically
that large; how much space do you need so that $Conf{MaxBackups} backups can
be started and complete without the FS filling up?).
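
In config.pl that would simply be (value as suggested above; the stock default
is 95):

   $Conf{DfMaxUsagePct} = 97;   # temporarily allow the pool FS to fill further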

Also note that backups don't seem to be expired for hosts for which backups
are disabled.

 as a temporary measure, i think i can manually delete files by, for
 example, deleting all the files in /home/backuppc/pc/prettylady/112 (for
 the above host)?  and then running nightly?

Presuming you don't miss any dependencies (incremental backups that are based
on that full backup), you can do that (though you'll still have an entry for
that backup in the backups file) - move the directory to $TopDir/trash and
trashClean will even do it for you in the background - but it's safer to let
BackupPC handle expiration.
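
If you do go the manual route, the move would look roughly like this ($TopDir
assumed to be /home/backuppc as per the df output, host and backup number
taken from the quoted message; double-check dependent incrementals first):

   mv /home/backuppc/pc/prettylady/112 /home/backuppc/trash/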

Regards,
Holger



Re: [BackupPC-users] Exponential expiring incremental backups with IncrKeepCnt?

2009-09-28 Thread Holger Parplies
Hi,

Christian Neumann wrote on 2009-09-28 15:57:24 +0100 [[BackupPC-users] 
Exponential expiring incremental backups with IncrKeepCnt?]:
 [...]
 The documentation mentions exponential expiring incremental backups
 (http://backuppc.sourceforge.net/faq/BackupPC.html#backup_basics): BackupPC
 can also be configured to keep a certain number of incremental backups, and
 to keep a smaller number of very old incremental backups.

while I don't really understand what "keep a smaller number of very old
incremental backups" is supposed to mean, there is no mention of exponential
incremental backup expiry. If you read the preceding paragraph on full
backups, you'll notice that it's described very explicitly there. If there were
exponential expiry of incrementals, there would be at least a clear reference
to this description.
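
That exponential scheme applies to full backups, via $Conf{FullKeepCnt}. As a
hypothetical illustration (not from the original mail):

   $Conf{FullKeepCnt} = [4, 2, 3];   # keep 4 fulls at 1*FullPeriod,
                                     # 2 at 2*FullPeriod, 3 at 4*FullPeriod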

 [...]
 Are exponential expiring incremental backups supported? If not, is there a
 reason behind it?

Exponential expiry of incremental backups really makes no sense (and it's not
sanely implementable with multi-level incrementals anyway). With BackupPC, you
*need* regular full backups(*) (if the wiki were functional, there would
probably be a page explaining why), and storing full backups is only
insignificantly more expensive than storing incrementals anyway. For this
reason, incremental backups are always fairly young (mine are up to 60 days
old, and I doubt anyone keeps them much longer). To keep an incremental backup,
you also need to keep the full backup it was made against(!), so the age
difference between the incremental and its full will never exceed
$Conf{FullPeriod} (the time between two full backups). With exponential
incremental backup expiry, you would quickly exceed $Conf{FullPeriod}, meaning
you would be keeping full backups (if only to support the incrementals) that
are closer to the incrementals than they are to each other. Why would you want
that?

Incremental backups are there for gaining a speed advantage - an advantage
that will allow you to make daily (or hourly or whatever) backups. Full
backups are (amongst other purposes) for keeping exponentially - yearly
backups for the last 10 years, monthly for the last two years, weekly for the
last six months (just to give you an idea). As with any backup system,
incremental backups are only a (good enough) approximation. Only full backups
give you a true snapshot (and that only if they are, in fact, taken of a
snapshot, but that's a different topic). You want to keep true snapshots
around for a long time, not approximations.

Regards,
Holger

(*) Actually, you probably need regular full backups with any backup scheme.
It's just that on this list, we make a point of telling you ;-).


