Re: [BackupPC-users] Feature request for BackupPC: Search backups

2007-03-26 Thread John Buttery
* On Sunday 25 March 2007 15:18, Krsnendu dasa [EMAIL PROTECTED] 
wrote:
It is hard to find files in the backups by browsing. If there were a
search feature that allowed you to search one or more computers'
backups, that would be great.

  I know this isn't a real solution per se, but if you're in a
situation right now where you need a quick fix...there is the
possibility of doing something like this (at the shell prompt on the
BackupPC server):

cd /var/lib/backuppc/pc    # or wherever you keep your backups
find . -name '*somename*' -print

  You may also want to play with '-mindepth', '-maxdepth', '-prune', 
and '-iname'...but in general I'm just saying you can use the search 
utilities of the underlying OS as a workaround.
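  For instance, a case-insensitive search limited to one host's backups
might look like this (the host name, depth, and pattern are
placeholders, not anything BackupPC-specific):

cd /var/lib/backuppc/pc/somehost
# -iname matches case-insensitively; -maxdepth limits how deep to look
find . -maxdepth 4 -iname '*somename*' -print

Note that BackupPC may store mangled file names in the pc/ tree, so the
on-disk names can differ slightly from the originals.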

-- 
John Buttery [EMAIL PROTECTED]
System Administrator



[BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
I am using backuppc but it is extremely slow. I narrowed it down to a
disk bottleneck (ad2 being the backup disk). I also checked the mailing
list archives, and it is mentioned that this is happening because of
too many hard links.

Disks        ad0     ad2
KB/t        4.00   25.50
tps            1      75
MB/s        0.00    1.87
% busy         1      96

But I couldn't find any solution to this. Is there a way to make this
faster without changing to faster disks? I guess I could put 2 disks in
a mirror or something, but it seems stupid to waste the space I gain
from the backuppc pooling algorithm by using multiple disks just to get
decent performance :)

I feel that this performance problem is extreme. It goes at a snail's
pace even when I am backing up only 1 machine. Should I be adding disks
for each machine I back up? :)

Thanks,
Evren



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread John Pettitt
Evren Yurtesen wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

   
[snip]

The basic problem is that backuppc uses the file system as a database -
specifically, using the hard link capability to store multiple
references to an object and the link count to manage garbage
collection.  Many (all?) filesystems seem to get slow when you get into
the range of millions of files with thousands of links.  Changing the
way it works (say, to use a real database) looks like a very
non-trivial task.  Adding disk spindles will help (particularly if you
have multiple backups going at once) but in the end it's still not
going to be blazingly fast.

John







Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
John Pettitt wrote:
 Evren Yurtesen wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

   
 [snip]
 
 The basic problem is that backuppc uses the file system as a database -
 specifically, using the hard link capability to store multiple
 references to an object and the link count to manage garbage
 collection.  Many (all?) filesystems seem to get slow when you get into
 the range of millions of files with thousands of links.  Changing the
 way it works (say, to use a real database) looks like a very
 non-trivial task.  Adding disk spindles will help (particularly if you
 have multiple backups going at once) but in the end it's still not
 going to be blazingly fast.
 
 John


Well, so there are no plans to fix this problem? I found forum threads
saying that in certain cases backups take over 24 hours! Goodbye to
daily incremental backups :)

Do you know of any alternatives to backuppc with a web GUI, which might
work faster? :P

I wonder what mechanical stress this puts on the hard drive when it has
to work 24/7 moving its head like crazy.

Thanks,
Evren



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Les Mikesell
Evren Yurtesen wrote:
 John Pettitt wrote:
 Evren Yurtesen wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

   
 [snip]

 The basic problem is that backuppc uses the file system as a database -
 specifically, using the hard link capability to store multiple
 references to an object and the link count to manage garbage
 collection.  Many (all?) filesystems seem to get slow when you get into
 the range of millions of files with thousands of links.  Changing the
 way it works (say, to use a real database) looks like a very
 non-trivial task.  Adding disk spindles will help (particularly if you
 have multiple backups going at once) but in the end it's still not
 going to be blazingly fast.

 John

 
 Well, so there are no plans to fix this problem? I found forum threads
 saying that in certain cases backups take over 24 hours! Goodbye to
 daily incremental backups :)

If your filesystem isn't a good place to store files, there is not much 
an application can do about it.  Perhaps it would help if you mentioned 
what kind of scale you are attempting with what server hardware.  I know 
there are some people on the list handling what I would consider large 
backups with backuppc.  If yours is substantially smaller perhaps they 
can help diagnose the problem.  Maybe you are short on RAM and swapping 
memory to disk with large rsync targets.
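A quick way to check for that (a sketch, nothing BackupPC-specific;
vmstat exists on both Linux and the BSDs, free is Linux-only):

# watch free memory and swap activity (si/so or pi/po columns) while a
# backup is running; sustained swapping means more RAM would help
vmstat 5
free -m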

 I wonder what mechanical stress this puts on the hard drive when it
 has to work 24/7 moving its head like crazy.

They'll die at some random time averaging around 4-5 years - just like 
any other hard drive.  Disk heads are made to move...

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
 John Pettitt wrote:
  The basic problem is that backuppc uses the file system as a database -
  specifically, using the hard link capability to store multiple
  references to an object and the link count to manage garbage
  collection.  Many (all?) filesystems seem to get slow when you get into
  the range of millions of files with thousands of links.  Changing the
  way it works (say, to use a real database) looks like a very
  non-trivial task.  Adding disk spindles will help (particularly if you
  have multiple backups going at once) but in the end it's still not
  going to be blazingly fast.

 Well, so there are no plans to fix this problem? I found forum threads
 saying that in certain cases backups take over 24 hours! Goodbye to
 daily incremental backups :)

Well, I just saw a proposal on linux-kernel which addresses inode
allocation performance issues on ext3/4 by preallocating contiguous
blocks of inodes for directories. I suspect this would help reduce the
number of seeks required when performing backups.

If there is another filesystem which does this I imagine it would
perform better than ext3.

 Do you know any alternatives to backuppc with web gui? which probably
 works faster? :P

BackupPC is the best. Most backups complete in a reasonable time; those
that don't are backups which are either very large (lots of bandwidth)
or have lots of files. My backup server is a simple Athlon XP 2000+
with a RAID1 consisting of 2 Seagate 250GB 7200rpm ATA drives.

More spindles and/or disks with faster seek times is the way to go.

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
Les Mikesell wrote:
 Evren Yurtesen wrote:
 John Pettitt wrote:
 Evren Yurtesen wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

   
 [snip]

  The basic problem is that backuppc uses the file system as a database -
  specifically, using the hard link capability to store multiple
  references to an object and the link count to manage garbage
  collection.  Many (all?) filesystems seem to get slow when you get into
  the range of millions of files with thousands of links.  Changing the
  way it works (say, to use a real database) looks like a very
  non-trivial task.  Adding disk spindles will help (particularly if you
  have multiple backups going at once) but in the end it's still not
  going to be blazingly fast.

 John


 Well, so there are no plans to fix this problem? I found forum threads
 saying that in certain cases backups take over 24 hours! Goodbye to
 daily incremental backups :)
 
 If your filesystem isn't a good place to store files, there is not much 
 an application can do about it.  Perhaps it would help if you mentioned 
 what kind of scale you are attempting with what server hardware.  I know 
 there are some people on the list handling what I would consider large 
 backups with backuppc.  If yours is substantially smaller perhaps they 
 can help diagnose the problem.  Maybe you are short on RAM and swapping 
 memory to disk with large rsync targets.

I know that the bottleneck is the disk. I am using a single IDE disk to
take the backups, with only 4 machines and 2 backups running at a time
(if I am remembering right).

I see that it is possible to use RAID to solve this problem to some
extent, but the real solution is to change backuppc in such a way that
it won't use so many disk operations.

 I wonder what mechanical stress this puts on the hard drive when it
 has to work 24/7 moving its head like crazy.
 
 They'll die at some random time averaging around 4-5 years - just like 
 any other hard drive.  Disk heads are made to move...

Perhaps, but there is a difference if they are moving 10 times or many
thousands of times, and the difference is that the possibility of
failure due to mechanical problems increases accordingly.

Thanks,
Evren



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Jason Hughes
Evren Yurtesen wrote:
 I know that the bottleneck is the disk. I am using a single IDE disk to
 take the backups, with only 4 machines and 2 backups running at a time
 (if I am remembering right).

 I see that it is possible to use RAID to solve this problem to some
 extent, but the real solution is to change backuppc in such a way that
 it won't use so many disk operations.
   


The whole purpose of live backup media is to use the media.  What you 
may be noticing is that perhaps your drive is mounted with access time 
being tracked.  You should check that your fstab has noatime as a 
parameter for your mounted data volume.  This probably cuts the seeks 
down by nearly half or more.
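As a concrete illustration (the device, filesystem type, and mount
point below are made-up examples, not taken from anyone's setup), the
fstab entry for the data volume would carry noatime like this:

# /etc/fstab -- mount the BackupPC data volume without atime updates
/dev/hdb1  /var/lib/backuppc  ext3  defaults,noatime  0  2

After editing fstab, a remount (or reboot) is needed for the option to
take effect.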

And, you could consider buying a faster drive, or one with a larger 
buffer.  Some IDE drives have pathetically small buffers and slow 
rotation rates.  That makes for a greater need for seeking, and worse 
seek performance.

Also, if your server is a single-proc, you'll probably want to reduce it 
to 1 simultaneous backup, not 2.  Heck, if you are seeing bad thrashing 
on the disk, it would have better coherence if you stick to 1 anyway.  
Increase your memory and you may see less virtual memory swapping as well. 

It seems that your setup is very similar to mine, and I'm not seeing
the kind of performance problems you're reporting. A full backup using
rsyncd over a slow wifi link of about 65GB takes only about 100
minutes. Incrementals are about 35 minutes. Using SMB on a different
machine with about 30GB, it takes 300 minutes for a full, even over
gigabit, but only a couple of minutes for an incremental (because it
doesn't detect as many changes as rsync). So it varies dramatically
with the protocol and hardware.

JH



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread John Pettitt
Evren Yurtesen wrote:


 I know that the bottleneck is the disk. I am using a single IDE disk to
 take the backups, with only 4 machines and 2 backups running at a time
 (if I am remembering right).

 I see that it is possible to use RAID to solve this problem to some
 extent, but the real solution is to change backuppc in such a way that
 it won't use so many disk operations.

   


From what I can tell, the issue is that each file requires a hard
link - depending on your file system, metadata like directory entries,
hard links etc. get treated differently than regular data - on a BSD
ufs2 system, metadata updates are typically synchronous, that is, the
system doesn't return until the write has made it to the disk.  This is
good for reliability but really bad for performance, since it prevents
the out-of-order writes that can save a lot of disk activity.

Changing backuppc would be decidedly non-trivial - hacking in a real
database to store the relationship between pool and individual files
would touch just about every part of the system.

What filesystem are you using, and have you turned off atime? I found
that makes a big difference.
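For anyone who wants to verify (a sketch; the mount point is just an
example), the current mount options can be checked like this:

# look for "noatime" among the flags printed for the backup volume
mount | grep /backup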

John



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Bernhard Ott
-------- Original Message --------
Subject: Re: [BackupPC-users] very slow backup speed
From: Evren Yurtesen [EMAIL PROTECTED]
To: David Rees [EMAIL PROTECTED]
Date: 26.03.2007 23:37

 David Rees wrote:
 
 
 It is true that BackupPC is great; however, backuppc is slow because
 it is trying to keep a single instance of each file to save space. Now
 we are wasting (perhaps even more?) space to make it fast when we do
 raid1.
You can't be serious about that: let's say you have a handful of
workstations with full backups of 200GB each and perform backups for a
couple of weeks - in my case, after a month: 1.4 TB for the fulls and
179GB for the incrementals. After pooling and compression: 203 (!) GB
TOTAL.
Xfer time for a 130GB full: 50min. How fast are your tapes?
But if you prefer changing tapes (and spending a lot more money on the
drives) - go ahead ... so much for wasting space ;-)

Regards, Bernhard




Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
John Pettitt wrote:
 Evren Yurtesen wrote:


 I know that the bottleneck is the disk. I am using a single IDE disk to
 take the backups, with only 4 machines and 2 backups running at a time
 (if I am remembering right).

 I see that it is possible to use RAID to solve this problem to some
 extent, but the real solution is to change backuppc in such a way that
 it won't use so many disk operations.

   
 
 
 From what I can tell, the issue is that each file requires a hard
 link - depending on your file system, metadata like directory entries,
 hard links etc. get treated differently than regular data - on a BSD
 ufs2 system, metadata updates are typically synchronous, that is, the
 system doesn't return until the write has made it to the disk.  This is
 good for reliability but really bad for performance, since it prevents
 the out-of-order writes that can save a lot of disk activity.
 Changing backuppc would be decidedly non-trivial - hacking in a real
 database to store the relationship between pool and individual files
 would touch just about every part of the system.
 
 What filesystem are you using and have you turned off atime - I found 
 that makes a big difference.
 
 John

I have noatime set already. I will try bumping up the memory and hope
that the caching will help. I will let you know if it helps.

Thanks,
Evren



[BackupPC-users] Client Push ?

2007-03-26 Thread John Hannfield
Hello

I've just installed BackupPC and love it. It's really great, and
great to see an open source application which competes with
similar enterprise level products.

I only need to back up Linux servers with rsync over SSH, and have set
up a test deployment of BackupPC as described in the docs. But the
current model is a server pull, which means the backup server has
potential root on all my client machines. I would prefer a client push
model. Has anyone devised a method of using BackupPC with rsync in
a push model?
If so, I would love to hear how you have done it.

-- 

John



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Les Mikesell
Evren Yurtesen wrote:

 If your filesystem isn't a good place to store files, there is not much 
 an application can do about it.  Perhaps it would help if you mentioned 
 what kind of scale you are attempting with what server hardware.  I know 
 there are some people on the list handling what I would consider large 
 backups with backuppc.  If yours is substantially smaller perhaps they 
 can help diagnose the problem.  Maybe you are short on RAM and swapping 
 memory to disk with large rsync targets.
 
 I know that the bottleneck is the disk. I am using a single IDE disk to
 take the backups, with only 4 machines and 2 backups running at a time
 (if I am remembering right).

That's still not very informative.  Approximately how much data do
those targets hold (number of files and total space used)?  Are you
using tar or rsync?  If you are running Linux, what does 'hdparm -t -T'
say about your disk speed (the smaller number)?  And what filesystem
are you using?
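For reference, the invocation looks like this (the device name is just
an example):

# -T times cached reads, -t times buffered reads straight off the device
hdparm -T -t /dev/hda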

 I see that it is possible to use RAID to solve this problem to some
 extent, but the real solution is to change backuppc in such a way that
 it won't use so many disk operations.

First we should find out if your system is performing badly compared to 
others or if you are just expecting too much.  As an example, one of my 
systems is backing up 20 machines and the summary says:
  Pool is 152.54GB comprising 2552606 files and 4369 directories
This is a RAID1 (mirrored, so no faster than a single drive) on IDE 
drives and the backups always complete overnight.

 I wonder what mechanical stress this puts on the hard drive when it
 has to work 24/7 moving its head like crazy.
 They'll die at some random time averaging around 4-5 years - just like
 any other hard drive.  Disk heads are made to move...

 Perhaps, but there is a difference if they are moving 10 times or many
 thousands of times, and the difference is that the possibility of
 failure due to mechanical problems increases accordingly.

No, it doesn't make a lot of difference as long as the drive doesn't 
overheat.  The head only moves so fast and it doesn't matter if it does 
it continuously.  However, if your system has sufficient RAM, it will 
cache and optimize many of the things that might otherwise need an 
additional seek and access.

-- 
  Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Client Push ?

2007-03-26 Thread Les Mikesell
John Hannfield wrote:
 Hello
 
 I've just installed BackupPC and love it. It's really great, and
 great to see an open source application which competes with
 similar enterprise level products.
 
 I only need to back up Linux servers with rsync over SSH, and have set
 up a test deployment of BackupPC as described in the docs. But the
 current model is a server pull, which means the backup server has
 potential root on all my client machines. I would prefer a client push
 model. Has anyone devised a method of using BackupPC with rsync in
 a push model?
 
 If so, I would love to hear how you have done it.

I think the only way you could avoid giving the backuppc server root 
access would be to have an intermediate system where the client can 
rsync a copy which the backuppc server subsequently picks up.  It will 
waste the disk space for the intermediate copy but it might solve some 
logistical problems.
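A rough sketch of that arrangement (the host names, paths, and share
settings here are assumptions for illustration, not a tested recipe):

# on each client, run from cron: push a copy to a staging area using an
# unprivileged account on the staging host
rsync -a --delete /etc /home staginghost:/staging/myclient/

# then let BackupPC back up /staging/myclient on the staging host with
# its normal pull methods (tar or rsync), so the backuppc server only
# ever needs credentials on the staging box

The clients need no root access from the backup side, and BackupPC's
pooling still deduplicates the staged copies.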

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Les Mikesell
John Pettitt wrote:

 Changing backuppc would be decidedly non-trivial - hacking in a real
 database to store the relationship between pool and individual files
 would touch just about every part of the system.

And there's not much reason to think that a database could do this with 
atomic updates any better than the filesystem it sits on.

-- 
   Les Mikesell
[EMAIL PROTECTED]


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Bernhard Ott [EMAIL PROTECTED] wrote:
  It is true that BackupPC is great; however, backuppc is slow because
  it is trying to keep a single instance of each file to save space. Now
  we are wasting (perhaps even more?) space to make it fast when we do
  raid1.

 You can't be serious about that: let's say you have a handful of
 workstations with full backups of 200GB each and perform backups for a
 couple of weeks - in my case, after a month: 1.4 TB for the fulls and
 179GB for the incrementals. After pooling and compression: 203 (!) GB
 TOTAL.
 Xfer time for a 130GB full: 50min. How fast are your tapes?
 But if you prefer changing tapes (and spending a lot more money on the
 drives) - go ahead ... so much for wasting space ;-)

No kidding! My backuppc stats are like this:

18 hosts
76 full backups of total size 748.09GB (prior to pooling and compression)
113 incr backups of total size 134.11GB (prior to pooling and compression)
Pool is 135.07GB comprising 2477803 files and 4369 directories

6.5:1 compression ratio is pretty good, I think.

Athlon XP 2000+ 1GB RAM, software RAID 1 w/ 2 ST3250824A (7200rpm,
ATA, 8MB cache). The machine was just built from leftover parts.
Running on Fedora Core 6.

I love BackupPC. :-)

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
  And, you could consider buying a faster drive, or one with a larger
  buffer.  Some IDE drives have pathetically small buffers and slow
  rotation rates.  That makes for a greater need for seeking, and worse
  seek performance.

 Well, this is a Seagate Barracuda 7200rpm drive with an 8MB cache, ST3250824A
 http://www.seagate.com/support/disc/manuals/ata/100389997c.pdf

Same drive as I'm using, except mine are in RAID1 which doubles random
read performance.

 I read your posts about wifi etc. on the forum. The processor is not
 the problem, but adding memory probably might help buffer-wise. I think
 this idea can actually work :) thanks! I am seeing swapping problems,
 but the disk the swap is on is almost idle. The backup drive is working
 all the time.

Please show us some more real data showing CPU utilization while a
backup is running. Please also give us the real specs of the machine
and what other jobs the machine performs.

 I have to say that slow performance with BackupPC is a known problem.
 I have heard it from several other people who are using BackupPC, and
 it is the #1 reason for changing to another backup program, from what
 I hear.

 Things must improve in this area.

There are plenty of ways to speed up BackupPC. It really isn't slow in
my experience.

But you must tell us what you are actually doing and what is going on
with your server for us to help, instead of repeatedly saying "it's
slow, speed it up".

-Dave



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread David Rees
Let's start at the beginning:

On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

 Disks        ad0     ad2
 KB/t        4.00   25.50
 tps            1      75
 MB/s        0.00    1.87
 % busy         1      96

What OS are you running? What filesystem? What backup method
(ssh+rsync, rsyncd, smb, tar, etc)?

75 tps seems to be a bit slow for a single disk. Do you have vmstat,
iostat and/or top output while a backup is running?
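For example, something like this on the server while a backup runs
(FreeBSD flavors of the commands; the 5-second interval is arbitrary):

# transfers/sec and MB/s per disk, repeated every 5 seconds
iostat -w 5
# paging activity and CPU breakdown every 5 seconds
vmstat -w 5
# per-process CPU, including system processes
top -S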

 But I couldn't find any solution to this. Is there a way to make this
 faster without changing to faster disks? I guess I could put 2 disks in
 a mirror or something, but it seems stupid to waste the space I gain
 from the backuppc pooling algorithm by using multiple disks just to get
 decent performance :)

A mirror will only help speed up random reads at best. This usually
isn't a problem for actual backups, but will help speed up the nightly
maintenance runs.

-Dave



Re: [BackupPC-users] upgrade to 3.0

2007-03-26 Thread David Rees
On 3/20/07, Henrik Genssen [EMAIL PROTECTED] wrote:
 are there any issues upgrading from 2.1.2.pl1?

None that I know of. The upgrade process is pretty smooth (though I
opted to convert to the new configuration file layout at the same time,
which does take a bit of tweaking).

 is 3.0 yet apt-getable?

Don't know, I always install from source.

-Dave



Re: [BackupPC-users] upgrade to 3.0

2007-03-26 Thread Jim McNamara


 is 3.0 yet apt-getable?

Don't know, I always install from source.

-Dave



Yes, it is. It is only in unstable though, so you'll need to specify that
apt-get use the unstable repositories to get version 3.0.
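For example, with an unstable line already present in
/etc/apt/sources.list, something like this (package name as in Debian):

# install backuppc 3.0 from unstable while the rest of the system stays put
apt-get -t unstable install backuppc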

Peace,
Jim



[BackupPC-users] Unable to connect to BackupPC server error

2007-03-26 Thread Winston Chan
I had been running BackupPC on an Ubuntu computer for several months to
back the computer up to a spare hard drive without problem. About the
time I added a new host (a Windows XP computer using Samba), I started
getting the following behavior:

BackupPC backs both hosts up properly onto the spare hard drive once or
twice after I reboot the Ubuntu server. Then I get an "Error: Unable to
connect to BackupPC server" error when I attempt to go to the web
interface. When I restart BackupPC with /etc/init.d/backuppc restart,
I get the message "Can't create LOG file /var/lib/backuppc/log/LOG
at /usr/share/backuppc/bin/BackupPC line 1735".

I have made sure that backuppc is the owner of
/var/lib/backuppc/log/LOG. It seems the only way to get BackupPC to
work again is to reboot the Ubuntu server. I can then see on the web
interface that the last successful backup (of both hosts) occurred 1 or
2 days after the previous reboot. Then BackupPC works for 1 or 2
backups and the cycle starts again.

What is the cause of this and how can I fix it?

Winston





Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
Jason Hughes wrote:
 Evren Yurtesen wrote:
 And, you could consider buying a faster drive, or one with a larger 
 buffer.  Some IDE drives have pathetically small buffers and slow 
 rotation rates.  That makes for a greater need for seeking, and worse 
 seek performance.

 Well, this is a Seagate Barracuda 7200rpm drive with an 8MB cache, ST3250824A
 http://www.seagate.com/support/disc/manuals/ata/100389997c.pdf

 Perhaps it is not the maximum amount of cache one can have on a drive 
 but it is not that bad really.
 
 That drive should be more than adequate.  Mine is a 5400rpm 2mb buffer 
 clunker.  Works fine.
 Are you running anything else on the backup server, besides BackupPC?  
 What OS?  What filesystem?  How many files total?

FreeBSD, UFS2+softupdates, noatime.

There are 4 hosts that have been backed up, for a total of:

 * 16 full backups of total size 72.16GB (prior to pooling and compression),
 * 24 incr backups of total size 13.45GB (prior to pooling and compression).

# Pool is 17.08GB comprising 760528 files and 4369 directories (as of 3/27 05:54),
# Pool hashing gives 38 repeated files with longest chain 6,
# Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
# Pool file system was recently at 10% (3/27 07:16), today's max is 10% (3/27 01:00) and yesterday's max was 10%.

Host   #Full  Full Age (days)  Full Size (GB)  Speed (MB/s)  #Incr  Incr Age (days)  Last Backup (days)  State  Last attempt
host1      4              5.4            3.88          0.22      6             0.4                 0.4   idle   idle
host2      4              5.4            2.10          0.06      6             0.4                 0.4   idle   idle
host3      4              5.4            7.57          0.14      6             0.4                 0.4   idle   idle
host4      4              5.4            5.56          0.10      6             0.4                 0.4   idle   idle


 I read your posts about wifi etc. on the forum. The processor is not
 the problem, but adding memory probably might help buffer-wise. I think
 this idea can actually work :) thanks! I am seeing swapping problems,
 but the disk the swap is on is almost idle. The backup drive is working
 all the time.
 
 Hmm.  That's a separate disk, not a separate partition of the same
 disk, right?  If it's just a separate partition, I'm not sure how well
 the OS will be able to allocate wait states to logical devices sharing
 the same physical media... in other words, what looks like waiting on
 ad2 may be waiting on ad0.  Someone more familiar with device drivers
 and Linux internals would have to chime in here.  I'm not an expert.

It is a separate disk, and this is FreeBSD, not Linux. The two disks
are not waiting for each other; they can be used simultaneously.


 I have to say that slow performance with BackupPC is a known problem.
 I have heard it from several other people who are using BackupPC, and
 it is the #1 reason for changing to another backup program, from what
 I hear.

 Things must improve in this area.

 
 I did quite a lot of research and found only one other program that was 
 near my needs, and it was substantially slower due to encryption 
 overhead, and didn't have a central pool to combine backup data.  I may 
 have missed an app out there, though.  What are these people switching 
 to, if you don't mind?
 
 Re: what must improve is more people helping Craig.  He's doing it all 
 for free.  I think if it's important enough to have fixed, it's 
 important enough to pay for.  Or dive into the code and start making 
 those changes.  It is open source, after all.

I think we are already helping by discussing the issue. Even if we
wanted to pay, there is nothing to pay for yet, as there is no agreed
solution to this slowness.

Thanks,
Evren



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
Les Mikesell wrote:
 Evren Yurtesen wrote:
 
 If your filesystem isn't a good place to store files, there is not 
 much an application can do about it.  Perhaps it would help if you 
 mentioned what kind of scale you are attempting with what server 
 hardware.  I know there are some people on the list handling what I 
 would consider large backups with backuppc.  If yours is 
 substantially smaller perhaps they can help diagnose the problem.  
 Maybe you are short on RAM and swapping memory to disk with large 
 rsync targets.

 I know that the bottleneck is the disk. I am using a single IDE disk
 to take the backups, with only 4 machines and 2 backups running at a
 time (if I am remembering right).
 
 That's still not very informative.  Approximately how much data do those 
  targets hold (number of files and total space used)?  Are you using tar 
 or rsync?  If you are running Linux, what does 'hdparm -t -T' say about 
 your disk speed (the smaller number)? And what filesystem are you using?

There are 4 hosts that have been backed up, for a total of:

 * 16 full backups of total size 72.16GB (prior to pooling and compression),
 * 24 incr backups of total size 13.45GB (prior to pooling and compression).

Host   #Full  Full Age (days)  Full Size (GB)  Speed (MB/s)  #Incr  Incr Age (days)  Last Backup (days)  State  Last attempt
host1      4              5.4            3.88          0.22      6             0.4                 0.4   idle   idle
host2      4              5.4            2.10          0.06      6             0.4                 0.4   idle   idle
host3      4              5.4            7.57          0.14      6             0.4                 0.4   idle   idle
host4      4              5.4            5.56          0.10      6             0.4                 0.4   idle   idle

# Pool is 17.08GB comprising 760528 files and 4369 directories (as of 3/27 05:54),
# Pool hashing gives 38 repeated files with longest chain 6,
# Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
# Pool file system was recently at 10% (3/27 07:16), today's max is 10% (3/27 01:00) and yesterday's max was 10%.


 I see that it is possible to use RAID to solve this problem to some
 extent, but the real solution is to change backuppc in such a way that
 it won't use so many disk operations.
 
 First we should find out if your system is performing badly compared to 
 others or if you are just expecting too much.  As an example, one of my 
 systems is backing up 20 machines and the summary says:
  Pool is 152.54GB comprising 2552606 files and 4369 directories
 This is a RAID1 (mirrored, so no faster than a single drive) on IDE 
 drives and the backups always complete overnight.
 
 I wonder what mechanical stress this puts on the hard drive when it
 has to work 24/7 moving its head like crazy.
 They'll die at some random time averaging around 4-5 years - just
 like any other hard drive.  Disk heads are made to move...

 Perhaps, but there is a difference if they are moving 10 times or many
 thousands of times, and the difference is that the possibility of
 failure due to mechanical problems increases accordingly.
 
 No, it doesn't make a lot of difference as long as the drive doesn't 
 overheat.  The head only moves so fast and it doesn't matter if it does 
 it continuously.  However, if your system has sufficient RAM, it will 
 cache and optimize many of the things that might otherwise need an 
 additional seek and access.
 

I can't see how you reach this conclusion. So you say that a car which
was driven only a few miles has the same possibility of breaking down
as the same model car driven for many thousands of miles? There is
friction involved when the head moves in a hard drive.



[BackupPC-users] RSync v. Tar

2007-03-26 Thread Jesse Proudman
I've got one customer whose server has taken 3600 minutes to back up:
77 gigs of data, 1,972,859 small files. Would tar be better or make
this faster? It's directly connected via 100 Mbit to the backup box.

--

Jesse Proudman,  Blue Box Group, LLC







Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Evren Yurtesen
David Rees wrote:
 Let's start at the beginning:
 
 On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to a
 disk bottleneck (ad2 being the backup disk). I also checked the mailing
 list archives, and it is mentioned that this is happening because of
 too many hard links.

 Disks        ad0     ad2
 KB/t        4.00   25.50
 tps            1      75
 MB/s        0.00    1.87
 % busy         1      96
 
 What OS are you running? What filesystem? What backup method
 (ssh+rsync, rsyncd, smb, tar, etc)?
 
 75 tps seems to be a bit slow for a single disk. Do you have vmstat,
 iostat and/or top output while a backup is running?

Well, 1.7MB/s of random reads is not that bad really.

Here is vmstat and iostat output:

 vmstat
  procs     memory    page                    disks      faults      cpu
  r b w   avm   fre  flt re pi po  fr  sr ad0 ad2   in   sy  cs  us sy id
  1 10 0   18 14124  112  0  1  1 245 280   0   0  438  361 192   6  3 91

 iostat
       tty          ad0            ad2           cpu
  tin tout   KB/t tps  MB/s   KB/t tps  MB/s   us ni sy in id
    0    2  11.39   3  0.04   9.52  59  0.54    6  0  3  0 91



 But I couldn't find any solution to this. Is there a way to make this
 faster without changing to faster disks? I guess I could put 2 disks in
 a mirror or something, but it seems stupid to waste the space I gain
 from the backuppc pooling algorithm by using multiple disks just to get
 decent performance :)
 
 A mirror will only help speed up random reads at best. This usually
 isn't a problem for actual backups, but will help speed up the nightly
 maintenance runs.
 
 -Dave
 


Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread brien dieterle
Jason Hughes wrote:
 Evren Yurtesen wrote:
   
 Jason Hughes wrote:
 
 That drive should be more than adequate.  Mine is a 5400rpm 2mb 
 buffer clunker.  Works fine.
 Are you running anything else on the backup server, besides 
 BackupPC?  What OS?  What filesystem?  How many files total?
   
 FreeBSD, UFS2+softupdates, noatime.

 There are 4 hosts that have been backed up, for a total of:

 * 16 full backups of total size 72.16GB (prior to pooling and 
 compression),
 * 24 incr backups of total size 13.45GB (prior to pooling and 
 compression).

 # Pool is 17.08GB comprising 760528 files and 4369 directories (as of 
 3/27 05:54),
 # Pool hashing gives 38 repeated files with longest chain 6,
 # Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
 # Pool file system was recently at 10% (3/27 07:16), today's max is 
 10% (3/27 01:00) and yesterday's max was 10%.

 Host   #Full  Full Age (days)  Full Size (GB)  Speed (MB/s)  #Incr  Incr Age (days)  Last Backup (days)  State  Last attempt
 host1      4              5.4            3.88          0.22      6             0.4                 0.4   idle   idle
 host2      4              5.4            2.10          0.06      6             0.4                 0.4   idle   idle
 host3      4              5.4            7.57          0.14      6             0.4                 0.4   idle   idle
 host4      4              5.4            5.56          0.10      6             0.4                 0.4   idle   idle

 

 Hmm.  This is a tiny backup setup, even smaller than mine.  However, it 
 appears that the average size of your file is only 22KB, which is quite 
 small.  For comparison sake, this is from my own server:
 Pool is 172.91GB comprising 217311 files and 4369 directories (as of 
 3/26 01:08),

 The fact that you have tons of little files will probably give
 significantly higher overhead when doing file-oriented work, simply
 because the inode must be fetched for each file before seeking to the
 file itself.  If we assume no files are shared between hosts (very
 conservative) and you have an 8ms access time, then 190132 files per
 host at two seeks per file, neglecting actual I/O time, gives you about
 50 minutes just to seek them all.  If you have a high degree of
 sharing, it can be up to 4x worse.  Realize that the same number of
 seeks must be made on the server as well as the client.
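As a quick sanity check of that arithmetic (the numbers come straight
from the paragraph above):

# 190132 files x 2 seeks x 8 ms per seek, converted to minutes
echo '190132 * 2 * 0.008 / 60' | bc -l    # ~50.7 minutes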

 Are you sure you need to be backing up everything that you're putting 
 across the network?  Maybe excluding some useless directories, maybe 
 temp files or logs that haven't been cleaned up?  Perhaps you can 
 archive big chunks of it with a cron job?

 I'd start looking for ways to cut down the number of files, because
 the overhead of per-file accesses is probably eating you alive.  I'm
 also no expert on UFS2 or FreeBSD, so it may be worthwhile to research
 its behavior with hard links and small files.

 JH

   
For what it's worth, I have a server that backs up 8.6 million files
averaging 10k in size from one host.  It takes a full 10 hours for a
full backup via tar over NFS (2.40MB/s for 87GB). CPU usage is low,
around 10-20%, but iowait is a pretty steady 25%.

Server info:
HP DL380 G4
debian sarge
dual processor 3.2ghz xeon
2GB Ram
5x10k rpm scsi disks, raid5
128MB battery backed cache (50/50 r/w)
ext3 filesystems

brien



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Les Mikesell
Evren Yurtesen wrote:
 
 There are 4 hosts that have been backed up, for a total of:
 
 * 16 full backups of total size 72.16GB (prior to pooling and 
 compression),
 * 24 incr backups of total size 13.45GB (prior to pooling and 
 compression).
 
 
 # Pool is 17.08GB comprising 760528 files and 4369 directories (as of 
 3/27 05:54),

That doesn't sound difficult at all.  I suspect your real problem is
that you are running a *BSD UFS filesystem with its default sync
metadata handling, which is going to wait for the physical disk action
to complete on every directory operation.  I think there are other
options, but I haven't kept up with them.  I gave up on UFS long ago
when I needed to make an application that frequently truncated and
rewrote a data file work on a machine that crashed frequently.  The
sync-metadata 'feature' statistically ensured that there was never any
data in the file after recovering: the truncation was always forced to
disk immediately, but the data write was buffered, so with a fast cycle
the on-disk copy was nearly always empty.

Is anyone else running a *bsd?
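For anyone who wants to check their own *BSD box (the device name is an
example, and the enable step should be run on an unmounted or read-only
filesystem):

# print the current UFS tuning values, including the soft updates flag
tunefs -p /dev/ad2s1d
# enable soft updates so metadata writes can be delayed and reordered
tunefs -n enable /dev/ad2s1d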

 Perhaps, but there is a difference if they are moving 10 times or many
 thousands of times, and the difference is that the possibility of
 failure due to mechanical problems increases accordingly.

 No, it doesn't make a lot of difference as long as the drive doesn't
 overheat.  The head only moves so fast and it doesn't matter if it
 does it continuously.  However, if your system has sufficient RAM, it
 will cache and optimize many of the things that might otherwise need
 an additional seek and access.
 
 I can't see how you reach this conclusion.

Observation... I run hundreds of servers, many of which are 5 or more 
years old.  The disk failures have had no correlation to the server 
activity.

 So you say that a car which was driven only a few miles has the same
 possibility of breaking down as the same model car driven for many
 thousands of miles? There is friction involved when the head moves in
 a hard drive.

Cars run under less predictable conditions and need some periodic
maintenance, but yes, I expect my cars to run that kind of mileage
under their design conditions without breaking down.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Unable to connect to BackupPC server error

2007-03-26 Thread Craig Barratt
Winston writes:

 I had been running BackupPC on an Ubuntu computer for several months to
 back the computer up to a spare hard drive without problem. About the
 time I added a new host (a Windows XP computer using Samba), I started
 getting the following behavior:

 BackupPC backs both hosts up properly onto the spare hard drive once or
 twice after I reboot the Ubuntu server. Then I get an "Error: Unable to
 connect to BackupPC server" error when I attempt to go to the web
 interface. When I restart BackupPC with /etc/init.d/backuppc restart,
 I get the message "Can't create LOG file /var/lib/backuppc/log/LOG
 at /usr/share/backuppc/bin/BackupPC line 1735".

Perhaps the /var/lib file system is full?

If not, does the backuppc user have permissions to write in
/var/lib/backuppc/log?
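A quick way to check both from a shell (paths taken from the error
message above; the user name assumes the daemon runs as backuppc):

# is the filesystem holding the logs full?
df -h /var/lib
# does the backuppc user own and have write access to the log directory?
ls -ld /var/lib/backuppc/log
su -s /bin/sh backuppc -c 'touch /var/lib/backuppc/log/LOG.test'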

Craig
