Jesse Proudman wrote:
> I've got one customer whose server has taken 3600 minutes to
> back up: 77 GB of data, 1,972,859 small files.  Would tar be
> better, or make this faster?  It's directly connected via
> 100 Mbit to the backup box.

First, determine your bottleneck: is the server disk I/O bound or CPU bound?
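If you want a quick way to check, something like the rough sketch below (my own, not part of BackupPC; the device name "sda" and the 5-second interval are just guesses to adjust) samples CPU busy time and disk utilization on the server while a backup is running:

    #!/usr/bin/env python
    # Rough bottleneck check (a sketch, not part of BackupPC): sample
    # CPU busy % and disk utilization % over a short interval while
    # the backup is running.  Run it on the machine whose disk is busy.
    import time

    DISK = "sda"      # assumed device name -- change to match your disk
    INTERVAL = 5.0    # seconds between the two samples

    def cpu_times():
        # First line of /proc/stat: aggregate CPU jiffies
        # (user, nice, system, idle, iowait, ...).
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    def disk_io_ms(dev):
        # parts[12] of a /proc/diskstats line is the
        # "milliseconds spent doing I/O" counter.
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == dev:
                    return int(parts[12])
        raise ValueError("device %s not found in /proc/diskstats" % dev)

    c1, d1 = cpu_times(), disk_io_ms(DISK)
    time.sleep(INTERVAL)
    c2, d2 = cpu_times(), disk_io_ms(DISK)

    total = sum(c2) - sum(c1)
    idle  = (c2[3] - c1[3]) + ((c2[4] - c1[4]) if len(c1) > 4 else 0)
    print("CPU busy:  %5.1f%%" % (100.0 * (total - idle) / total))
    print("Disk util: %5.1f%%" % (100.0 * (d2 - d1) / (INTERVAL * 1000.0)))

If the disk sits near 100% utilization while the CPU is mostly idle, you're seek-bound and switching transfer methods won't buy much.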

There is another thread going on about small files and seek times.  A 
quick calculation, assuming 8 ms per seek and two seeks per file, gives 
me about 540 minutes' worth of seeks for 2 million files 
(2,000,000 × 2 × 8 ms ≈ 32,000 s), plus ~183 minutes of transfer time 
for the 77 GB (assuming 70% link efficiency, best case).  I don't think 
the protocol is necessarily the limiting factor.  :-)
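For the curious, here's the arithmetic as a small script; the 8 ms seek, two seeks per file, and 70% of 100 Mbit/s are the same assumptions as above, and the exact minutes shift a little depending on rounding and GB vs. GiB:

    # Back-of-envelope estimate (a sketch; same assumptions as the text above).
    files          = 1972859   # number of small files
    data_gb        = 77        # total data, in GB
    seek_ms        = 8         # assumed average seek time
    seeks_per_file = 2         # assumed: metadata + data
    link_mbit      = 100       # link speed
    efficiency     = 0.70      # assumed usable fraction of the link

    seek_s     = files * seeks_per_file * seek_ms / 1000.0
    transfer_s = data_gb * 1e9 * 8 / (link_mbit * 1e6 * efficiency)

    print("Seeks:    %4.0f minutes" % (seek_s / 60))      # ~530 minutes
    print("Transfer: %4.0f minutes" % (transfer_s / 60))  # ~150 minutes
    print("Total:    %4.1f hours"   % ((seek_s + transfer_s) / 3600))

Serialized, that comes to roughly 11 hours, which is where the half-a-day figure below comes from.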

Tar uses less CPU and more bandwidth, so if CPU is where you're having 
trouble, switching might help.  It also has lower per-file transfer 
latency (rsync calculates checksums and sends extra packets per file to 
determine what to transfer), which might also help.  But in any case, I 
think these backups will take half a day even under theoretically best 
conditions.

Thanks,
JH
