Re: [BackupPC-users] Hardware choices for a BackupPC server

2007-03-15 Thread Harry Mangalam
This is but a single data point, but if others can provide some 
additional data, it might help you triangulate what you want.

I'd also recommend 3ware over Areca - I've had 2 bad experiences with 
Areca involving data loss (only one of which I could really blame on 
the controller), and 3ware service is very good (as such things go).

The numbers below are bonnie++ output from a backup server I built for 
a lab.  It was a very baseline box (single Athlon64, 0.5 GB RAM) 
outfitted with a single PCI-X 3ware 9550SX-12 controller driving 12 
SATA-2, NCQ-enabled 500 GB WD disks in RAID5 with an XFS filesystem, 
64K stripe, and readahead set to 16K (blockdev --setra 16385 /dev/sdb)
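
In shell terms, that tuning amounts to roughly the following (a 
sketch, not my exact command history; it assumes the array appears as 
/dev/sdb, and sw=11 reflects the 11 data spindles in a 12-disk RAID5):

  # make XFS with its allocation aligned to the RAID5 stripe:
  # 64K stripe unit, 11 data disks (12 minus one for parity)
  mkfs.xfs -d su=64k,sw=11 /dev/sdb

  # raise the device readahead (the value is in 512-byte sectors)
  blockdev --setra 16385 /dev/sdb

  # confirm the setting took
  blockdev --getra /dev/sdb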

Email me if you want the exact spec.

Below is the bonnie++ output (it may get mangled by line wrap; I can 
send you a file that preserves the formatting and provides more 
details as well).  The hardware will of course be cheaper now.  The 
disks were on the 3ware approved list and were also among the 
cheapest available.  The SATA-2 spec and NCQ make a huge difference: 
if you have to down-grade the disks to SATA-1 (via jumper, because of 
controller incompatibility), performance drops to ~1/3.

The salient numbers are 206298 KB/s for block writes and 543326 KB/s 
for block reads.  When I was doing some benchmarking on climate data 
reduction (netCDF files), I got real-world numbers close to what 
bonnie++ indicated.  Of course, per-character performance is 
relatively awful.  And this is an XFS filesystem - as others noted, 
you'd probably want reiserfs (or maybe jfs?).  If you keep the 
journal on another controller, you should also get better performance.
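
For the curious, the external-journal variant would look roughly like 
this (again a sketch; /dev/sdc1 stands in for a small partition on a 
second controller):

  # create the filesystem with its log on the other controller...
  mkfs.xfs -l logdev=/dev/sdc1 -d su=64k,sw=11 /dev/sdb

  # ...and name the log device again at every mount
  mount -o logdev=/dev/sdc1 /dev/sdb /backup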

Also note, this is with RAID5.  If I had really tried for maximum I/O 
with RAID 0 or 10, the numbers would have been higher still.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kstore       15000M 57150  96 206298  42 113941  29 63148  99 543326  55 231.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6081  31 +++++ +++  5702  24  7243  35 +++++ +++  1448   7
kstore,15000M,57150,96,206298,42,113941,29,63148,99,543326,55,231.6,0,16,6081,31,+++++,+++,5702,24,7243,35,+++++,+++,1448,7

-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824 0084(o), 949 285 4487(c) 
[EMAIL PROTECTED]



Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Ciarlotta, Aaron
You didn't say if it appeared as though data was coming across the wire
or not.
 
This is possibly the culprit:
 
$Conf{ClientTimeout} = 72000; 
Timeout in seconds when listening for the transport program's
(smbclient, tar etc) stdout. If no output is received during this time,
then it is assumed that something has wedged during a backup, and the
backup is terminated.

Note that stdout buffering combined with huge files being backed up
could cause longish delays in the output from smbclient that
BackupPC_dump sees, so in rare cases you might want to increase this
value.

Despite the name, this parameter sets the timeout for all transport
methods (tar, smb etc).
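
If you want to experiment with it, it's a single line in config.pl, 
or in the per-host override file (whose location varies by BackupPC 
version; the values below are only illustrative):

  # global default in config.pl
  $Conf{ClientTimeout} = 72000;     # seconds (20 hours)

  # or, in a per-host config file, give one slow client more headroom
  $Conf{ClientTimeout} = 144000;    # 40 hours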




From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter
Nearing
Sent: Thursday, March 15, 2007 3:50 PM
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Problems with backuppc


Hey,

   I'm Peter, and I have been using backuppc for about 8 months now, and
I am an addict... oops, wrong opening... anyway.

I am a network admin for a prof. at Queen's University, and I have 8
computers that I back up using backuppc, via rsync over ssh.  I have
recently run into a problem with backuppc failing while backing up one
of my desktops, giving the reason sig ALARM.  I was wondering if
anyone else here has had this problem, and if so, what causes it.
The xfer seems to start fine, but after about 5 hours it fails...
originally I thought the problem might be due to a partially corrupt
filesystem, since I am using xfs and it does crap out from time to
time, but it seems that isn't the case, as I have run fsck on all the
drives and repaired the one error I got on the client.  I tried the
backup again, and again it failed.  So now I'm not so sure about the
cause.  If anyone has any suggestions, or knows why backuppc would be
getting a SIGALRM, it would be greatly appreciated.

Oh, and a huge thanks to the guys who made this great piece of software.

Peter N.





Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Jason Hughes

Peter,

For testing purposes, you may reduce the alarm period, but under 
practical circumstances it must be large enough that it doesn't cut 
off backups that would have finished, had they been given the time to 
collect enough file information.  The behavior also depends on the 
transport mechanism you use: a 5-minute timeout may be fine with one 
transport, while with another the host never completes, and vice versa.


Chances are, if you're not getting data in the pc/hostname/new 
directory, there's something wrong.  When you run at the command line, 
are there files in the 'new' subdirectory?  Make sure you do this test 
as the backuppc user.  Also, try to ssh to the remote host and check 
that you have proper file permissions.  Something may have changed with 
the configuration since you set it up.
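
Something along these lines, as a sketch - the paths below are the 
stock source-install defaults, and 'problemhost' is a placeholder for 
whatever the machine is called in your hosts file:

  # force a verbose full dump as the backuppc user
  sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_dump -v -f problemhost

  # see whether files are actually landing in the 'new' tree
  ls -l /data/BackupPC/pc/problemhost/new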


One last thing: check the host for any really, really deep directory 
structures that are new; they would be dated just after your last 
valid backup.  A directory with millions of files, or huge numbers of 
subdirectories full of files, can seriously bog down rsync while it 
builds its file list, because it pulls all of that info into memory.  
With a small enough memory configuration, the host may be swapping, 
making the run take up to 100x longer than normal.  With low enough 
free disk space on the client, it may hang or thrash the disk looking 
for somewhere to put the swapped pages, making it even slower.  Of 
course, the simple solution would be to exclude such a directory tree 
until the issue can be remedied.
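
Excluding such a tree is a one-entry change in the host's config (the 
path here is purely hypothetical):

  # keys are share names; values are paths to skip within that share
  $Conf{BackupFilesExclude} = {
      '/home' => ['/someuser/giant-spool'],
  };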


Hope that helps,
JH

Peter Nearing wrote:

Aaron,

  When I ran the command line that it's using, the data isn't coming: 
rsync is running on the client, but it stops there.  The backuppc logs 
state that it's saving the data as a partial, tho.  The ClientTimeout 
may be it; I think I'll reduce it to something a lot more sane... 
like 5 min... I hope this works, but I'll let you know either way.  
Thanks for the quick reply.


Peter N.


Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Bernhard Ott
Peter Nearing wrote:
> Aaron,
>
>   When I ran the command line that it's using, the data isn't
> coming: rsync is running on the client, but it stops there.  The
> backuppc logs state that it's saving the data as a partial, tho.
                                                     ^^^^^^^
If backuppc saves partial backups, there must be some kind of data 
finding its way to the server.
Which value is shown in the Duration column of your host's backup 
summary?  If it's the same as your config.pl setting, then follow 
Aaron's advice.  Maybe try to split the shares?
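
Splitting the shares for an rsync host could look roughly like this 
(a sketch; adjust the excludes to your layout):

  # back up / and /home as separate rsync shares, so each transfer
  # gets its own ClientTimeout window
  $Conf{RsyncShareName} = ['/', '/home'];
  $Conf{BackupFilesExclude} = {
      '/' => ['/proc', '/home'],    # /home is covered by its own share
  };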

BTW, what amount of data are we talking about?

Bernhard




Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Peter Nearing

Unfortunately, right now I am rechecking backuppc's data partition, so 
I can't check the setup, and the client is in Windows (it's in 
Kingston, ON, and I am in Prince George, BC, so I can't just walk over 
and kick them off, grr...).  But the backups are split up: one fs is 
'/', excluding /proc, which is about 3-4 gig all told, and '/home', 
which is about 11 gig, and that partition is almost full... which adds 
some merit to the theory of VM thrashing while building the dir 
list... Hopefully I'll be able to respond with some positive results 
when this check is done.

Peter N.


BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/