Re: [BackupPC-users] RAID and offsite

2011-04-29 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2011-04-28 23:15:52 -0500 [Re: [BackupPC-users] RAID and 
offsite]:
 On 4/28/11 9:50 PM, Holger Parplies wrote:
  I'm sure that's a point where we'll all disagree with each other :-).
 
  Personally, I wouldn't use a common set of disks for normal backup operation
  and offsite backups. [...]
 
 I don't think there is anything predictable about disk failure. Handling
 them roughly is probably bad.  Normal (even heavy) use doesn't seem to matter
 unless maybe they overheat.

well, age does matter at *some* point, as does heat. Unless you proactively
replace the disks before that point is reached, they will likely all be old
when the first one fails. Sure, if the first disk fails after a few months,
the others will likely be ok (though I've had a set of 15 identical disks
where about 10 failed within the first 2 years).

  [...] I think it brought up the *wrong* (i.e. faulty) disk of the mirror and
  failed on an fsck. [...]
 
 Grub doesn't know about raid and just happens to work with raid1 because it 
 treats the disk as a single drive.

What's more, grub doesn't know about fsck.

grub found and booted a kernel. The kernel then decided that its root FS on
/dev/md0 consisted of the wrong mirror (or maybe its LVM PV on /dev/md1;
probably both). grub and the BIOS have no part in that decision.

I can see that the remaining drive may fail to boot (which it didn't), but I
*can't* see why an array should be started in degraded mode on the *defective*
mirror when both are present.

 And back in IDE days, a drive failure usually locked the controller which
 might have had another drive on the same cable.

Totally unrelated, but yes. SATA in my case anyway.

  I *have* seen RAID members dropped from an array without understandable
  reasons, but, mostly, re-adding them simply worked [...]
 
 I've seen that too.  I think retries are much more aggressive on single
 disks or the last one left in a raid than on the mirror.

Yes, but a retry needs a read error first. Are retries on single disks always
logged or only on failure?

Or perhaps I should ask this: are retries uncommon enough to warrant failing
array members, yet common enough that a disk that has produced one can still
be trustworthy? How do you handle disks where you see that happen? Replace or
retry?
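
For what it's worth, one way to inform that replace-or-retry call is to glance
at the disk's SMART counters before re-adding it. A minimal sketch, assuming
smartmontools is installed (the device name is an example, and attribute names
vary by vendor):

```shell
# Print the SMART attributes that usually decide replace-vs-retry.
# Guarded so it degrades to a no-op where smartctl or the device is absent.
smart_summary() {
    command -v smartctl >/dev/null 2>&1 || { echo "smartctl not installed"; return 0; }
    [ -b "$1" ] || { echo "$1: not a block device"; return 0; }
    smartctl -A "$1" | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable' || true
}
smart_summary /dev/sda
```

Nonzero and growing reallocated/pending counts would argue for replacing
rather than re-adding.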

  [...] there are no guarantees your specific software/kernel/driver/hardware
  combination will not trigger some unknown (or unfixed ;-) bug.
 
 I had a machine with a couple of 4-year uptime runs (a red hat 7.3) where 
 several of the scsi drives failed and were hot-swapped and re-synced with no 
 surprises.  So unless something has broken in the software recently, I mostly 
 trust it.

You mean, your RH 7.3 machine had all software/kernel/driver/hardware
combinations that there are?

Like I said, I've seen (and heard of) strange occurrences, yet, like you, I
mostly trust the software, simply out of lack of choice. I *can't* verify its
correct operation; I could only try to reproduce incorrect operation, were I
to notice it. When something strange happens, I mostly attribute it to user
errors, bugs in file system code, hardware errors (memory or power supply).
RAID software errors are last on my mind. In any case, the benefits seem to
outweigh the doubts.

Yet there remain these few strange occurrences, which may or may not be
RAID-related. On average, every few thousand years, a CPU will randomly
compute an incorrect result for some operation for whatever reason. That is
unlikely enough that any single one of us is extremely unlikely to ever be
affected. But there are enough computers around that it does happen on a daily
basis. Most of the time, the effect is probably benign (random mouse movement,
one incorrect sample in an audio stream, another Windoze bluescreen, whatever).
It might as well be RAID weirdness in one case. Or the RAID weirdness may be
the result of an obscure bug. Complex software *does* contain bugs, you know.

  It *would* help to understand how RAID event counts and the Linux RAID
  implementation in general work. Has anyone got any pointers to good
  documentation?
 
 I've never seen it get this wrong when auto-assembling at reboot (and I move 
 disks around frequently and sometimes clone machines by splitting the mirrors 
 into different machines), but it shouldn't matter in the BPC scenario because 
 you are always manually telling it which partition to add to an already
 running array.

That doesn't exactly answer my question, but I'll take it as a no, I don't.

Yes, I *did* mention that, I believe, but if your 2 TB resync doesn't complete
before a reboot or power failure, then you exactly *don't* have a rebuild
initiated by an 'mdadm --add'; after reboot, you have an auto-assembly (I also
mentioned that). And, agreed, I've also never *seen* it get this wrong when
auto-assembling at reboot (well, except for once, but let's ignore even that).
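
For reference, the event counts in question live in each member's md
superblock, and at assembly time md prefers the half with the higher count. A
sketch for eyeballing them (device names are examples; this needs root and
mdadm, and is guarded so it skips anything that isn't present):

```shell
# Show the md superblock event count and update time on each mirror half.
show_md_events() {
    command -v mdadm >/dev/null 2>&1 || { echo "mdadm not installed"; return 0; }
    for part in "$@"; do
        if [ -b "$part" ]; then
            echo "== $part =="
            mdadm --examine "$part" 2>/dev/null | grep -E 'Events|Update Time' || true
        else
            echo "$part: not present, skipping"
        fi
    done
}
show_md_events /dev/sda1 /dev/sdb1
```

If the counts differ after a crash, the half with the lower count is the stale
one md should resync, not assemble from.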

My point is that auto-assembly normally takes two (or more) mirrors 

[BackupPC-users] Fwd: Minor update in the Samba Module

2011-04-29 Thread Nick van IJzendoorn
Good evening,

First of all, great product! :) At work I had to configure a backup
server, so I decided to use BackupPC because of the samba support and
nice web interface. However, our backup policy is somewhat different
than normal... I think, because I couldn't find the functionality I
was looking for:
The setup:
//host/DiskD/Projects/  here all source files must be backed up,
but not the compiled files, the documentation, etc.
//host/DiskD/MyDocs/  here everything must be backed up except some
default Windows crap

The problem was that you can't use both BackupFilesOnly and
BackupFilesExcept at the same time, and you can't put the path in the
Share name because smbclient doesn't understand it. That's why I came
up with the idea of letting Smb.pm handle the //host/share/path problem
by splitting it into a Share and a Directory section, so you can add
multiple rules for the same share and use different Only and Except
rules without the need to spell out all combinations.

Excuse me for my bad perl skills but here is a proof of concept:
http://pastebin.com/s1GsijU9

Notable changes are at lines: 55, 120+ 129, 137+

I didn't have time to test it, though; I'll be at work in about 8
hours, so then I can see if all went well.

Cheers,
Nick van IJzendoorn

--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Minor update in the Samba Module

2011-04-29 Thread Nick van IJzendoorn
Good morning,

I've tested the source now, and the one I wrote this evening didn't
work =) I changed it, and now it suits my needs; maybe somebody else
would like to use it as well. The new version can be found at:
http://pastie.org/1842600 (notable changes are at lines 120-134 and
141-142). In the config file, the new Samba variable $directory needs
to be set.

Cheers,
Nick van IJzendoorn



[BackupPC-users] New: BackupPC_extract

2011-04-29 Thread Nick van IJzendoorn
Good midday,

Since we want to store all backups weekly on an external HD in 7zip
format, I created this script to extract the latest backup so I can
later add it to a 7zip archive. You might also enjoy it.
http://pastie.org/1847242

Cheers,
Nick van IJzendoorn



Re: [BackupPC-users] New: BackupPC_extract

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 8:50 AM, Nick van IJzendoorn
nick.de.ne...@gmail.com wrote:
 Good midday,

 Since we weekly want to store all backups on an external HD in 7zip
 format I created this script to exract the latest backup so I can
 later add it to a 7zip archive. You might also enjoy it.
 http://pastie.org/1847242

I may find something like this useful. I'm assuming, since you are
zipping it separately, that this just exports a copy of the backup with
all files accessible? (no tar?)

Thanks,
Richard



Re: [BackupPC-users] Can't Fork Crash on Nexenta (Solaris)

2011-04-29 Thread Stephen Gelman
I am running BackupPC 3.2.0.  The line where it fails is:

if ( !defined($pid = open(CHILD, "-|")) ) {

So it looks like it is attempting to fork...

Stephen Gelman
Systems Administrator

On Apr 28, 2011, at 11:55 PM, Holger Parplies wrote:

 Hi,
 
 Stephen Gelman wrote on 2011-04-20 22:57:38 -0500 [[BackupPC-users] Can't 
 Fork Crash on Nexenta (Solaris)]:
 On Nexenta (which is essentially an OpenSolaris derivative), I seem to have
 issues where BackupPC crashes every once and a while.  When it crashes, the
 log says:
 
 Can't fork at /usr/share/backuppc/lib/BackupPC/Lib.pm line 1340.
 
 Any ideas how to prevent this?
 
 Stephen Gelman
 Systems Administrator
 
 errm, have fewer processes running on your machine? What errno is that? Line
 1340 contains a Perl 'return' statement, so that's strange (since you didn't
 mention it, you must be using BackupPC 3.2.0beta0, because that's the version
 I happened to check). Which log file? How come BackupPC writes something to
 the log if it crashes? BackupPC doesn't even *try* to fork via Lib.pm (only
 BackupPC_{dump,restore,archive} appear to use that), and failures to fork in
 the daemon are certainly not fatal (except for daemonizing on startup).
 Very strange.
 
 Oh, and we know what Nexenta is. That's the part you wouldn't have needed to
 explain.
 
 Regards,
 Holger




[BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Richard Shaw
I was recently sub-contracted to set up BackupPC for a business and
everything's fine so far, but I was hoping to make some improvements.

1. On the server, which is a quad-core Xeon 2GHz machine, I've got an rsync
over ssh dump that's been running for over 12 hours and is about
300GB into an 800GB share. Perl seems to be the bottleneck on the
server, and ssh is only using about 20-30% of one core on the client, so
I'm assuming changing the ssh cipher will not help.

Are there any tips or tricks I can apply in this case?

2. Can more than one email address be added to the configuration for
the EMailAdminUserName?

3. I have more than one share that needs to be backed up from the
client, but they want a different backup schedule for one of the shares.
Am I going to have to set up fake/virtual hosts to accomplish this?

Host-IP resolution is being handled by a hosts file, so I could
add aliases there and BackupPC would treat them as separate clients,
right?



Re: [BackupPC-users] New: BackupPC_extract

2011-04-29 Thread Nick van IJzendoorn
2011/4/29 Richard Shaw hobbes1...@gmail.com:
 On Fri, Apr 29, 2011 at 8:50 AM, Nick van IJzendoorn
 nick.de.ne...@gmail.com wrote:
 Good midday,

 Since we weekly want to store all backups on an external HD in 7zip
 format I created this script to exract the latest backup so I can
 later add it to a 7zip archive. You might also enjoy it.
 http://pastie.org/1847242

 I may find something like this useful. I'm assuming since you are
 zipping it separtely that his just exports a copy of the backup with
 all files accessible? (no tar?)

 Thanks,
 Richard

Yes, it just rebuilds the whole backup you specified in the current
work directory.

Enjoy!



Re: [BackupPC-users] Can't Fork Crash on Nexenta (Solaris)

2011-04-29 Thread Les Mikesell
On 4/29/2011 9:33 AM, Stephen Gelman wrote:
 I am running BackupPC 3.2.0.  The line where it fails is:

 if ( !defined($pid = open(CHILD, "-|")) ) {

 So it looks like it is attempting to fork...

The usual (perhaps only?) reason for not being able to fork is that you 
have run out of resources or hit an OS-imposed limit 
(memory/processes/file descriptors, etc.).   Can you raise those limits 
for the backuppc user?
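
Those limits can be checked from a shell run as the backuppc user (e.g. after
`su - backuppc`). A sketch of the ones that commonly make fork fail; flag
support varies by shell and OS, hence the guards:

```shell
# Show the per-user limits that commonly make fork(2) fail with EAGAIN.
ulimit -n                                                        # open file descriptors
ulimit -u 2>/dev/null || echo "process-limit query unsupported"  # max user processes
ulimit -v 2>/dev/null || echo "vmem-limit query unsupported"     # virtual memory (KB)
```

If the process count or memory figure looks tight, raising it for that user
(or trimming concurrent backups) is the usual fix.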

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] New: BackupPC_extract

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 9:44 AM, Nick van IJzendoorn
nick.de.ne...@gmail.com wrote:
 Yes, it just rebuilds the whole backup you specified in the current
 work directory.

One minor nitpick: the header still makes 5 references to a zip
archive, which I presume are left over from the original script you
based yours on. It would probably be better to just remove "zip" from
those instances.

Thanks,
Richard



Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Les Mikesell
On 4/29/2011 10:18 AM, Richard Shaw wrote:
 I was recently sub-contracted to setup BackupPC for a business and
 everything's fine so far but I was hoping to make some improvements.

 1. On the server, which is a Xeon Quad 2GHz machine, I've got a rsync
 over ssh dump that's been running for over 12 hours which is about
 300GB into a 800GB share. Perl seems to be the bottleneck on the
 server and ssh is only using about 20-30% of one core on the client so
 I'm assuming changing the ssh cypher will not help.

 Is there any tips or tricks I can apply in this case?

Is this the 1st or 2nd full? It may improve by itself if you have the 
--checksum-seed=32761 option set so the server won't have to recompute 
the values.  You could also look at the content and how it changes. Big 
files with small changes are bad, since the system has to reconstruct 
the copy with a mix of decompressing the old version and the changes 
from the network.  Maybe there is something that you can exclude or 
handle some other way.  Also, remember that incrementals without levels 
copy everything changed since the previous full and with levels have to 
do extra server-side work to merge the comparison view.
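
In config.pl terms, the option Les mentions would be appended to the rsync
argument lists (a sketch; BackupPC's cached-checksum support only kicks in
with this exact seed value, and the flag must be present on both fulls and
restores):

```perl
# Enable rsync's cached-checksum support so full backups after the
# first don't recompute block checksums on the server.
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';
```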

 2. Can more than one email address be added to the configuration for
 the EMailAdminUserName?

I think it should work to use a comma separated list, but an alternative 
would be an alias in the mail system.
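
If the comma-separated form does work (untested, as noted), the fragment would
simply be (addresses illustrative):

```perl
# Hypothetical: multiple admin recipients; if BackupPC's mail delivery
# doesn't pass the list through, fall back to a mail-system alias.
$Conf{EMailAdminUserName} = 'backupadmin1@example.com,backupadmin2@example.com';
```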

 3. I have more than one share that needs to be backed up from the
 client but they want a different backup schedule for 1 of the shares.
 Am I going to have to setup fake/virtual hosts to accomplish this?

Yes.  But make sure they understand that the pooling in backuppc means
they wouldn't actually store more copies of files (unless the files
change) if they ran all the shares at the more frequent schedule.

 The host-ip resolution is being handled by a hosts file so I could
 add aliases there so BackupPC would treat them as separate clients,
 right?

Yes, but you could also use the ClientAlias setting in backuppc itself 
to make different host configurations point to the same real name or IP.

-- 
   Les Mikesell
lesmikes...@gmail.com






Re: [BackupPC-users] New: BackupPC_extract

2011-04-29 Thread Nick van IJzendoorn
2011/4/29 Richard Shaw hobbes1...@gmail.com:
 On Fri, Apr 29, 2011 at 9:44 AM, Nick van IJzendoorn
 nick.de.ne...@gmail.com wrote:
 Yes, it just rebuilds the whole backup you specified in the current
 work directory.

 One minor nit pick. The header still makes 5 references to zip
 archive which I presume is left over from the original script you
 based yours on. It would probably be better to just remove zip from
 those instances.

 Thanks,
 Richard

Updated, thanks for noticing. I also removed the time limit since we
don't have the limit on a regular dump.
Updated version: http://pastie.org/1847681

Have a nice weekend,
Nick



Re: [BackupPC-users] RAID and offsite

2011-04-29 Thread Les Mikesell
On 4/29/2011 1:48 AM, Holger Parplies wrote:

 well, age does matter at *some* point, as does heat. Unless you proactively
 replace the disks before that point is reached, they will likely all be old
 when the first one fails. Sure, if the first disk fails after a few months,
 the others will likely be ok (though I've had a set of 15 identical disks
 where about 10 failed within the first 2 years).

I think of it about like light bulbs.  All you know is that they don't 
last forever. Manufacturing batches are probably the most critical 
difference and it's not something you can control.  Anyway, the old rule 
about data is that if something is important you should have at least 3 
copies and don't let the person who destroyed the first 2 touch the last 
one.

 [...] I think it brought up the *wrong* (i.e. faulty) disk of the mirror and
 failed on an fsck. [...]

 Grub doesn't know about raid and just happens to work with raid1 because it
 treats the disk as a single drive.

 What's more, grub doesn't know about fsck.

 grub found and booted a kernel. The kernel then decided that its root FS on
 /dev/md0 consisted of the wrong mirror (or maybe its LVM PV on /dev/md1;
 probably both). grub and the BIOS have no part in that decision.

Sort of... Grub itself is loaded by the BIOS, which may (or may not) fail 
over automatically to the alternate disk.  Then it loads the kernel and 
initrd from the disk it was configured to use (but which might not be in 
the same position now).  These can potentially be out of date if one 
copy had been kicked out of the raid and you didn't notice.  But that 
probably wasn't the problem.  The kernel takes over at that point, 
re-detects the drives, assembles the raids, and then looks at the file 
systems.

 I can see that the remaining drive may fail to boot (which it didn't), but I
 *can't* see why an array should be started in degraded mode on the *defective*
 mirror when both are present.

That's going to depend on what broke in the first place. If it went down 
cleanly and both drives work at startup, they should have been assembled 
together.  If you crashed, the raid assembly will be looking at one 
place for the uuid and event counts, where the file system cleanness 
check happens later and looks in a different place.  So the raid 
assembly choice can't have anything to do with the correctness of the 
file system on it.  And just to make things more complicated, I've seen 
cases where bad RAM caused very intermittent problems that included 
differences between the mirror instances that lingered and re-appeared 
randomly after the RAM was fixed.

 I *have* seen RAID members dropped from an array without understandable
 reasons, but, mostly, re-adding them simply worked [...]

 I've seen that too.  I think retries are much more aggressive on single
 disks or the last one left in a raid than on the mirror.

 Yes, but a retry needs a read error first. Are retries on single disks always
 logged or only on failure?

I've seen this with single partitions out of several on the same disk, 
so I don't think it is actually seen as a hardware-level error.  Maybe 
it is just a timeout while the disk does a soft recovery.

 Or perhaps I should ask this: are retries uncommon enough to warrant failing
 array members, yet common enough that a disk that has produced one can still
 be trustworthy? How do you handle disks where you see that happen? Replace or
 retry?

Not sure there's a generic answer. I've replaced drives and not had it 
happen again in some cases.  In at least one case, it did keep happening 
on the swap partition and eventually I stopped adding it back. Much, 
much later the server failed in a way that looked like it was the 
on-board scsi controller.


 [...] there are no guarantees your specific software/kernel/driver/hardware
 combination will not trigger some unknown (or unfixed ;-) bug.

 I had a machine with a couple of 4-year uptime runs (a red hat 7.3) where
 several of the scsi drives failed and were hot-swapped and re-synced with no
 surprises.  So unless something has broken in the software recently, I mostly
 trust it.

 You mean, your RH 7.3 machine had all software/kernel/driver/hardware
 combinations that there are?

No, I mean that the bugs in the software raid1 layer have long been 
ironed out and I expect it to protect against other problems to a 
greater extent than contributing to them.  The physical hard drive 
itself remains as the most likely failure point anyway. And you can 
assume that most of the related software/drivers generally worked or you 
wouldn't have data on the drive to lose.

 Like I said, I've seen (and heard of) strange occurrences, yet, like you, I
 mostly trust the software, simply out of lack of choice. I *can't* verify its
 correct operation;

Yes you can - there is an option to mdadm to verify that the mirrors are 
identical (and fix if they aren't), and the underlying filesystem is 
close enough that you can mount either member partition 
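
The verify/repair knob Les refers to is md's sync_action interface (many
distros run it from a periodic raid-check job). A guarded sketch, with the
array name as an example; it needs root on a machine that actually has the
array:

```shell
# Ask md to compare the mirror halves and report mismatched sectors.
raid1_scrub() {
    md=$1
    if [ -w "/sys/block/$md/md/sync_action" ]; then
        echo check > "/sys/block/$md/md/sync_action"   # or 'repair' to fix
        cat "/sys/block/$md/md/mismatch_cnt"           # nonzero = differing blocks
    else
        echo "$md: no md sysfs node here, skipping"
    fi
}
raid1_scrub md0
```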

Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 10:54 AM, Les Mikesell lesmikes...@gmail.com wrote:
 On 4/29/2011 10:18 AM, Richard Shaw wrote:
 I was recently sub-contracted to setup BackupPC for a business and
 everything's fine so far but I was hoping to make some improvements.

 1. On the server, which is a Xeon Quad 2GHz machine, I've got a rsync
 over ssh dump that's been running for over 12 hours which is about
 300GB into a 800GB share. Perl seems to be the bottleneck on the
 server and ssh is only using about 20-30% of one core on the client so
 I'm assuming changing the ssh cypher will not help.

 Is there any tips or tricks I can apply in this case?

 Is this the 1st or 2nd full? It may improve by itself if you have the
 --checksum-seed=32761 option set so the server won't have to recompute
 the values.  You could also look at the content and how it changes. Big
 files with small changes are bad, since the system has to reconstruct
 the copy with a mix of decompressing the old version and the changes
 from the network.  Maybe there is something that you can exclude or
 handle some other way.  Also, remember that incrementals without levels
 copy everything changed since the previous full and with levels have to
 do extra server-side work to merge the comparison view.

It's the 1st full so I guess that wouldn't help... Is that a safe
option to keep using all the time?

The files are commercial animation files so I'm guessing they are
large. I was looking for a utility like 'du' except for file size
distribution but didn't really find anything. They can't really be
excluded since they are the point of the backup...
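
There isn't a standard tool for the size-distribution question, but a
find/awk one-liner gets close. A sketch assuming GNU find (for -printf),
bucketing files by power-of-two size:

```shell
# Print "bucket-upper-bound-in-bytes  file-count" for every file under $1.
size_histogram() {
    find "$1" -type f -printf '%s\n' |
    awk '{ b = 1; while (b < $1) b *= 2; count[b]++ }
         END { for (k in count) print k, count[k] }' |
    sort -n
}
size_histogram .
```

A long tail of multi-GB buckets would confirm the big-files-with-small-changes
theory for the slow full.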


 2. Can more than one email address be added to the configuration for
 the EMailAdminUserName?

 I think it should work to use a comma separated list, but an alternative
 would be an alias in the mail system.

I'll give it a try and see if it works on my test server (virtualbox)
and report back.


 3. I have more than one share that needs to be backed up from the
 client but they want a different backup schedule for 1 of the shares.
 Am I going to have to setup fake/virtual hosts to accomplish this?

 Yes.  But make sure they understand that the pooling in backuppc means
 they wouldn't actually store more copies (unless they change) of files
 if they run all the shares at the more frequent schedule.

That shouldn't be a problem, these are different directories with
dissimilar files, so not a pooling opportunity.


 The host-ip resolution is being handled by a hosts file so I could
 add aliases there so BackupPC would treat them as separate clients,
 right?

 Yes, but you could also use the ClientAlias setting in backuppc itself
 to make different host configurations point to the same real name or IP.

Can you point me to some documentation on how to use ClientAlias?
Surprisingly, googling "backuppc clientalias" didn't seem to get me
what I needed, and I found no instances of it in the basic
documentation.

Thanks,
Richard



Re: [BackupPC-users] Slow Rsync Transfer?

2011-04-29 Thread Dan Lavu
Resolved. 

After looking at the file list, we found a 102GB log file; rsync doesn't like 
large files, and there are a ton of threads about why. 

Troubleshooting steps that actually isolated the issue:

strace -p $PID (the output looked like it was catting the file) 
lsof -f | grep rsync (and the following to confirm)

I hope this helps anybody else who might have this issue. 

Cheers,

 
___
Dan Lavu
System Administrator  - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645


-Original Message-
From: Adam Goryachev [mailto:mailingli...@websitemanagers.com.au] 
Sent: Friday, April 29, 2011 12:46 AM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Slow Rsync Transfer?

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 29/04/11 04:08, Dan Lavu wrote:
 Gerald,
 
  
 
 Not the case with me: if you look at the host ras03, you see that the 
 average speed is .92MB/s while other hosts are significantly faster. It 
 is taking 40 hours to do 110GB, while other hosts are doing it in 
 about an hour. I’m about to patch this box and reboot it, it’s been up 
 for
 200+ days and I haven’t had a good backup for over a week now. So any
 input will be helpful, again thanks in advance.

One thing I've seen which can really slow down rsync backups is that a large 
file with changes will be much slower to back up than a number of small files 
(of the same total size) with the same amount of changes.

I back up disk images; the original method was to just back up the image, but 
this was too slow. The new method is: use split to divide the file into a 
series of 20M or 100M files, then back up these individual files.

I also do the same with database exports and other software backup files 
larger than around 100M ... they just back up quicker, and a failed backup 
will continue from the most recent chunk (in a full backup) instead of 
restarting the whole file. Also, the timeout is effectively shorter because 
it is reset after each chunk.
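
That chunking approach can be demonstrated end to end; a sketch with a small
dummy image (real deployments would use 20M/100M chunks and keep the pieces
stable between runs so rsync can match them up):

```shell
# Split a big file into fixed-size chunks, then prove the chunks
# reassemble to the original byte-for-byte.
workdir=$(mktemp -d)
cd "$workdir"
head -c 3145728 /dev/urandom > disk.img     # 3 MB stand-in for a disk image
split -b 1048576 disk.img disk.img.part.    # 1 MB chunks -> .aa, .ab, .ac
cat disk.img.part.* > disk.rebuilt
cmp -s disk.img disk.rebuilt && echo "chunks reassemble cleanly"
```

Because split names the pieces in order, a plain cat of the glob restores the
original for a bare-metal recovery.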

Regards,
Adam

- --
Adam Goryachev
Website Managers
www.websitemanagers.com.au
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk26QpkACgkQGyoxogrTyiXMlgCgghJ14sMasOdtJi28os6rBj4U
GeYAnRxasxrFgpSZ442w0+HKDNHJFsZZ
=d8vA
-END PGP SIGNATURE-



Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread John Rouillard
On Fri, Apr 29, 2011 at 01:09:24PM -0500, Richard Shaw wrote:
 On Fri, Apr 29, 2011 at 10:54 AM, Les Mikesell lesmikes...@gmail.com wrote:
  On 4/29/2011 10:18 AM, Richard Shaw wrote:
  The host-ip resolution is being handled by a hosts file so I could
  add aliases there so BackupPC would treat them as separate clients,
  right?
 
  Yes, but you could also use the ClientAlias setting in backuppc itself
  to make different host configurations point to the same real name or IP.
 
 Can you point me to some documentation on how to use ClientAlias?
 Surprisingly googling backuppc clientalias didn't seem to get me
 what I needed. And I found no instances of that in the basic
 documentation.

Les almost got it right; it's ClientNameAlias:

  http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_clientnamealias_

Enjoy.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 1:24 PM, John Rouillard
rouilj-backu...@renesys.com wrote:
 Les almost got it right it's ClientNameAlias

  http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_clientnamealias_

Thanks! I think I understand how that works!

Richard



Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Les Mikesell
On 4/29/2011 1:09 PM, Richard Shaw wrote:

 Is this the 1st or 2nd full? It may improve by itself if you have the
 --checksum-seed=32761 option set so the server won't have to recompute
 the values.

 It's the 1st full so I guess that wouldn't help... Is that a safe
 option to keep using all the time?

Yes - with it on, the block checksums needed to verify the file are 
saved on the 2nd run so the server doesn't have to uncompress and 
recompute them on subsequent full runs.

 The files are commercial animation files so I'm guessing they are
 large. I was looking for a utility like 'du' except for file size
 distribution but didn't really find anything. They can't really be
 excluded since they are the point of the backup...

If they don't change, things will go faster later.  Incrementals will 
skip files whose directory timestamp and length match.  Fulls will do a 
block-checksum verify.
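For a rough file-size distribution of the kind Richard is after, standard tools are enough; this is just one possible sketch (GNU find and awk assumed, path illustrative):

```shell
#!/bin/sh
# Bucket every regular file under a tree by the power-of-two size range
# it falls in, giving a quick size histogram without any special tool.
set -e
dir=${1:-.}

find "$dir" -type f -printf '%s\n' | awk '
  {
    # Smallest power of two at or above the file size is its bucket.
    b = 1
    while (b < $1) b *= 2
    count[b]++
  }
  END {
    for (b in count) printf "<= %12d bytes: %d files\n", b, count[b]
  }' | sort -k2 -n
```

Running it against the animation-file share would show quickly whether a few huge files dominate the backup.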

 3. I have more than one share that needs to be backed up from the
 client but they want a different backup schedule for 1 of the shares.
 Am I going to have to setup fake/virtual hosts to accomplish this?

 Yes.  But make sure they understand that the pooling in backuppc means
 they wouldn't actually store more copies (unless they change) of files
 if they run all the shares at the more frequent schedule.

 That shouldn't be a problem, these are different directories with
 dissimilar files, so not a pooling opportunity.

What I mean is that multiple runs of the same share are pooled, so while 
there may be other (load, network traffic) reasons to back up some parts 
less frequently, doing all the shares at the most frequent desired 
schedule probably won't take a lot more server space.

 The host-ip resolution is being handled by a hosts file so I could
 add aliases there so BackupPC would treat them as separate clients,
 right?

 Yes, but you could also use the ClientAlias setting in backuppc itself
 to make different host configurations point to the same real name or IP.

 Can you point me to some documentation on how to use ClientAlias?
 Surprisingly googling backuppc clientalias didn't seem to get me
 what I needed. And I found no instances of that in the basic
 documentation.

Sorry, it is actually $Conf{ClientNameAlias}.  You can use dummy 
hostnames so you can control the schedule separately but override the 
actual target with this setting.
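As a sketch of the dummy-host arrangement Les describes, two per-host config files might look like this. The host names, shares, and config directory are hypothetical (real files live under BackupPC's pc/ config directory, and both dummy names also need entries in BackupPC's hosts file):

```shell
#!/bin/sh
# Two dummy BackupPC hosts that both point at the same real machine via
# $Conf{ClientNameAlias}, so each share set gets its own schedule.
set -e
confdir=$(mktemp -d)   # stand-in for the real pc/ config directory

# "rendermaster-projects" backs up the important share frequently.
cat > "$confdir/rendermaster-projects.pl" <<'EOF'
$Conf{ClientNameAlias} = 'rendermaster.example.com';
$Conf{RsyncShareName}  = ['/srv/projects'];
$Conf{FullPeriod}      = 6.97;   # weekly fulls, daily incrementals
EOF

# "rendermaster-misc" backs up the rest on a relaxed schedule.
cat > "$confdir/rendermaster-misc.pl" <<'EOF'
$Conf{ClientNameAlias} = 'rendermaster.example.com';
$Conf{RsyncShareName}  = ['/srv/archive', '/srv/scratch'];
$Conf{FullPeriod}      = 29.97;  # roughly monthly fulls
EOF

grep -h ClientNameAlias "$confdir"/*.pl
```

BackupPC schedules each dummy host independently but resolves both to the same real client, which is exactly the behavior the thread describes.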


-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] The usual questions looking for better answers

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 1:42 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On 4/29/2011 1:09 PM, Richard Shaw wrote:
 3. I have more than one share that needs to be backed up from the
 client but they want a different backup schedule for 1 of the shares.
 Am I going to have to setup fake/virtual hosts to accomplish this?

 Yes.  But make sure they understand that the pooling in backuppc means
 they wouldn't actually store more copies (unless they change) of files
 if they run all the shares at the more frequent schedule.

 That shouldn't be a problem, these are different directories with
 dissimilar files, so not a pooling opportunity.

 What I mean is that multiple runs of the same share are pooled, so while
 there may be other (load, network traffic) reasons to back up some parts
 less frequently, doing all the shares at the most frequent desired
 schedule probably won't take a lot more server space.

Yeah, the main backup is their projects folders, the rest we were
going to backup less frequently mainly due to their lesser importance
more than any other reason, but also due to load and traffic as this
is the master for their rendering farm.

Thanks,
Richard



[BackupPC-users] hosts disappearing from host list

2011-04-29 Thread Mark Maciolek
Hi,

For the second time in a few months, one of my BackupPC clients has 
disappeared from the hosts file. When I went to add it back, I noticed 
another client missing. I added the first client back in and in the log 
it showed the second client as being removed.

Has anyone seen this happen to them or a reason why it happens?



BackupPC 3.1.0 on CentOS 5.6

Mark
-- 
Mark Maciolek
Network Administrator   
Morse Hall 339
862-3050
mark.macio...@unh.edu
https://www.sr.unh.edu



Re: [BackupPC-users] hosts disappearing from host list

2011-04-29 Thread John Rouillard
On Fri, Apr 29, 2011 at 04:07:39PM -0400, Mark Maciolek wrote:
 For the second time in a few months, one of my BackupPC clients has 
 disappeared from the hosts file. When I went to add it back, I noticed 
 another client missing. I added the first client back in and in the log 
 it showed the second client as being removed.

How are you adding/removing clients? Via the gui, direct file edit?
Is it possible somebody else is changing the file at the same time?
 
 Has anyone seen this happen to them or a reason why it happens?

I have not seen this happen, but then again our hosts file is
configuration managed and not modified via the gui so...

 BackupPC 3.1.0 on CentOS 5.6

The current release is 3.2.0, released last July, and I know there were some
gui improvements, but I am not sure if locking etc. was one of them.

Since CentOS 5.6 is a new release, what OS were you running when this first
happened a few months ago?

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] Slow Rsync Transfer?

2011-04-29 Thread Ryan Manikowski
On 4/29/2011 2:07 PM, Dan Lavu wrote:
 Resolved.

 After looking at the file list, we found a 102GB log file; rsync doesn't like 
 large files, and there are a ton of threads about why.

 Troubleshooting steps that actually isolated the issue:

 strace -p $PID (the output looked like it was catting the file)
 lsof -f | grep rsync (and the following to confirm)

 I hope this helps anybody else who might have this issue.
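Dan's strace/lsof steps identify the culprit on a live transfer; a pre-emptive scan for oversized files (candidates for exclusion or chunking) might look like this (GNU find assumed; the path and threshold are illustrative):

```shell
#!/bin/sh
# List files above a size threshold under a tree, largest first.
# Dan's culprit was a 102GB log file; adjust dir/threshold to taste.
set -e
dir=${1:-/var/log}
threshold=${2:-+1G}

# -size +1G matches files strictly larger than 1 GiB (GNU find);
# -xdev keeps the scan from crossing into other filesystems.
find "$dir" -xdev -type f -size "$threshold" -exec du -h {} + | sort -rh
```

Running this on the client before the first full makes it easy to decide which files to exclude or pre-split.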



Glad to hear you found the problem. Stalled transfers tend to be a 
fairly common issue.

Ryan





Re: [BackupPC-users] hosts disappearing from host list

2011-04-29 Thread Richard Shaw
On Fri, Apr 29, 2011 at 3:40 PM, John Rouillard
rouilj-backu...@renesys.com wrote:
 Current release is 3.2.0 released last July and I know there were some
 gui improvements, but I am not sure if locking etc was one of them.

I would like to get 3.2.0 as well, but there's a two-fold problem[1]:

- Fedora/Redhat doesn't like bundled libraries
- One of the libraries does not pass make test

Therefore, my understanding is that there will be no 3.2 for
Fedora/Redhat until someone fixes it.

I posted to the mailing list some time ago about this problem but got
no response.

Richard

[1] https://bugzilla.redhat.com/show_bug.cgi?id=627373#c7



Re: [BackupPC-users] hosts disappearing from host list

2011-04-29 Thread Jeffrey J. Kosowsky
I built my own (for FC12)...
Richard Shaw wrote at about 16:31:15 -0500 on Friday, April 29, 2011:
  On Fri, Apr 29, 2011 at 3:40 PM, John Rouillard
  rouilj-backu...@renesys.com wrote:
   Current release is 3.2.0 released last July and I know there were some
   gui improvements, but I am not sure if locking etc was one of them.
  
  I would like to get 3.2.0 as well but there's a 2 fold problem[1]:
  
  - Fedora/Redhat doesn't like bundled libraries
  - One of the libraries does not pass make test
  
  Therefore, my understanding is that there will be no 3.2 for
  Fedora/Redhat until someone fixes it.
  
  I posted to the mailing list some time ago about this problem but got
  no response.
  
  Richard
  
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=627373#c7
  
