Timothy J Massey wrote at about 11:48:05 -0400 on Tuesday, May 17, 2011:
> The point was basically "any block-level attachment (except maybe USB)".
> The problem comes from NFS' poor handling of the zillions of hard links
> that BackupPC wants to use.
I haven't noticed any NFS problems due to hard links. I get approximately the same speed of transfer operations whether I am reading/writing regular files or massively hard-linked ones. In my experience, the issue with hard links (e.g., rsync copying of the pool) has nothing to do with NFS (or with any particular filesystem, for that matter). Do you have any data supporting your claim that NFS suffers more than other filesystems with massive hard links?

Again, the only issue I have with NFS is that it is relatively slow when accessing large numbers of small files, due to the per-file protocol overhead. But even so, it is quite workable even over 100Mb Ethernet.

_______________________________________________
BackupPC-users mailing list
[email protected]
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
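The per-file-overhead point above is easy to check yourself. The sketch below (my own illustration, not anything from BackupPC; the file counts and sizes are arbitrary) times reading many small files versus one large file of the same total size. Run it once with the scratch directory on local disk and once with it on an NFS mount, and the gap between the two timings shows how much of the small-file cost is per-file (open/lookup) overhead rather than raw throughput.

```python
import os
import tempfile
import time

def time_reads(paths):
    """Wall-clock seconds to open and fully read each file in turn."""
    start = time.monotonic()
    for p in paths:
        with open(p, "rb") as f:
            f.read()
    return time.monotonic() - start

# Scratch tree: 1000 files of 4 KiB vs. one 4 MiB file (same total bytes).
# Point tmp at an NFS mount to measure the protocol's per-file cost.
tmp = tempfile.mkdtemp()
small = []
for i in range(1000):
    p = os.path.join(tmp, f"small-{i}")
    with open(p, "wb") as f:
        f.write(b"x" * 4096)
    small.append(p)

big = os.path.join(tmp, "big")
with open(big, "wb") as f:
    f.write(b"x" * 4096 * 1000)

t_small = time_reads(small)
t_big = time_reads([big])
print(f"1000 x 4 KiB: {t_small:.3f}s    1 x 4 MiB: {t_big:.3f}s")
```

On a local filesystem the two numbers are usually close; over NFS the many-small-files case falls behind because each file costs at least one round trip, which is exactly the overhead described above.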
