On 26/03/2013, at 6:44, Aryan Ameri <[email protected]> wrote:

> What's the best way to copy a large directory tree (around 3TB in
> total) with a combination of large and small files? The files
> currently reside on my NAS which is on my LAN (connected via gigabit
> ethernet) and are mounted on my system as a NFS share. I would like to
> copy all files/directories to an external hard disk connected via USB.
> 
> I care about speed, but I also care about reliability, making sure
> that every file is copied, that all metadata is preserved and that
> errors are handled gracefully. I've done some research and currently,
> I am thinking of using tar or rsync, or a combination of the two.
> Something like:
> 
> tar --ignore-failed-read -C $SRC -cpf - . | tar --ignore-failed-read
> -C $DEST -xpvf -
> 
> to copy everything initially, and then
> 
> rsync -ahSD --ignore-errors --force --delete --stats $SRC/ $DEST/
> 
> To check everything with rsync.
> 
> What do you guys think about this? Am I missing something? Are there
> better tools for this? Or other useful options for tar and rsync that
> I am missing?
> 
> Cheers
> 
> --
> Aryan

Home NAS devices are very CPU-limited, so compressing files (e.g. adding -z
to tar) is the last thing you want to do.
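
For what it's worth, that just means leaving the z out of your tar flags:

    tar -cpf - .     # fine: plain archive stream, no compression
    tar -czpf - .    # avoid: gzips the stream, burning CPU for no gain on a local copy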

rsync still has to build file lists and compare each file (by size and mtime
by default; it only computes full checksums with -c), but that pass is much
faster than the initial copy.  I suggest an initial rsync to move the bulk of
the data and then a final pass when you're ready to stop using the NAS.
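
A minimal sketch of that two-pass approach, reusing your flags and assuming
the NFS share and USB disk are mounted at /mnt/nas and /mnt/usb (placeholder
paths):

    # first pass: move the bulk of the data while the NAS stays in service
    rsync -ahSD --stats /mnt/nas/ /mnt/usb/

    # final pass: pick up anything that changed since, and mirror deletions
    rsync -ahSD --delete --stats /mnt/nas/ /mnt/usb/

The trailing slashes matter: they make rsync copy the contents of /mnt/nas
rather than the directory itself.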

If you want absolute speed, a block-level copy with dd will be much faster
than both.
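
That only helps if you can reach the data below the filesystem, e.g. a block
device the NAS exports over iSCSI; over a plain NFS mount there is nothing
for dd to read block-wise.  A hypothetical sketch, with made-up device names:

    # raw block copy; both filesystems must be unmounted and the
    # destination device at least as large as the source
    dd if=/dev/sdX of=/dev/sdY bs=64M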

Edward 