John Pettitt wrote:
> Evren Yurtesen wrote:
>>
>>
>> I know that the bottleneck is the disk. I am using a single ide disk 
>> to take the backups, only 4 machines and 2 backups running at a 
>> time(if I am not remembering wrong).
>>
>> I see that it is possible to use raid to solve this problem to some 
>> extent but the real solution is to change backuppc in such way that it 
>> wont use so much disk operations.
>>
>>   
> 
> 
>  From what I can tell the issue is that each file requires a hard link - 
> depending on your file system, metadata like directory entries, hard links 
> etc. gets treated differently than regular data - on a BSD ufs2 system 
> metadata updates are typically synchronous, that is, the system doesn't 
> return until the write has made it to the disk.   This is good for 
> reliability but really bad for performance, since it prevents the 
> out-of-order writes that can save a lot of disk activity.
> Changing backuppc would be decidedly non-trivial - eyeballing it, hacking 
> in a real database to store the relationship between the pool and 
> individual files would touch just about every part of the system.
> 
> What filesystem are you using, and have you turned off atime? I found 
> that makes a big difference.
> 
> John
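
(For context, here is a minimal sketch of the hard-link pooling pattern
described above, written in Python rather than BackupPC's own Perl. The pool
paths and hashing scheme are illustrative assumptions, not BackupPC's actual
on-disk layout; the point is only that every stored file costs at least one
link() call, i.e. a directory-entry metadata write that ufs2 performs
synchronously by default.)

    import hashlib
    import os
    import shutil

    POOL_DIR = "/var/backups/pool"      # illustrative paths, not BackupPC's layout
    BACKUP_DIR = "/var/backups/host1"

    def store_file(src_path, rel_name):
        """Deduplicate src_path into the pool, then hard-link it into the backup tree."""
        with open(src_path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()

        # First occurrence of this content goes into the pool.
        pool_path = os.path.join(POOL_DIR, digest[:2], digest)
        os.makedirs(os.path.dirname(pool_path), exist_ok=True)
        if not os.path.exists(pool_path):
            shutil.copy2(src_path, pool_path)

        # Every backed-up file then becomes a hard link to the pooled copy:
        # identical files share one inode, but each link is a metadata update.
        dest_path = os.path.join(BACKUP_DIR, rel_name)
        os.makedirs(os.path.dirname(dest_path), exist_ok=True)
        os.link(pool_path, dest_path)
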

I have noatime set. I will try bumping up the memory and hope that the 
caching will help. I will let you know if it helps.
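
(If it helps anyone reading later: one quick, generic way to confirm that
noatime is really in effect on the backup filesystem is to read a file and
check whether its atime moves. This is a hypothetical helper, not anything
BackupPC provides.)

    import os
    import time

    def atime_updates(path):
        """Return True if reading `path` bumps its atime, i.e. noatime is NOT in effect."""
        before = os.stat(path).st_atime
        time.sleep(1)                 # ensure any new atime would differ
        with open(path, "rb") as f:
            f.read(1)
        return os.stat(path).st_atime > before
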

Thanks,
Evren

