Holger Parplies wrote at about 06:04:23 +0100 on Monday, March 14, 2011:
 > Hi,
 >
 > Jeffrey J. Kosowsky wrote on 2011-03-09 17:43:41 -0500 [Re: [BackupPC-users]
 > Still trying to understand reason for extremely slow backup speed...]:
 > > [...] I actually used my program that I posted to copy
 > > it over and I checked that all the links are ok.
 >
 > I believe you have changed more than four variables ;-). BackupPC, as we all
 > know, depends heavily on seek times. When you are measuring NFS speed, what
 > exactly *are* you measuring? Probably not what BackupPC is doing, so that may
 > or may not explain the difference.

I agree, though I tested the extremes of both small and large file transfers,
and both single files and multiple files... so I would have thought that any
major change due to file access patterns would be detected. I will probably
try some rsync trials to try to isolate the problem further.
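For concreteness, the kind of thing I have in mind is roughly the following -
just a sketch, not what I actually ran, with /mnt/nas and the paths under it
standing in for my NFS mount point - aimed at the many-small-files, stat-heavy
pattern BackupPC generates rather than raw streaming throughput:

    # Sketch only; /mnt/nas is a placeholder for the NFS mount point.

    # 1) Raw streaming throughput (what my earlier tests mostly measured):
    dd if=/dev/zero of=/mnt/nas/bigfile bs=1M count=512 conv=fsync

    # 2) Many small files: create, stat and unlink a few thousand 4k files,
    #    which is much closer to what BackupPC does against the pool:
    mkdir -p /mnt/nas/seektest
    time sh -c 'i=0; while [ $i -lt 5000 ]; do
        dd if=/dev/zero of=/mnt/nas/seektest/f$i bs=4k count=1 2>/dev/null
        i=$((i+1)); done'
    time ls -lR /mnt/nas/seektest > /dev/null    # stat-heavy pass
    time rm -rf /mnt/nas/seektest                # unlink-heavy pass

    # 3) An rsync dry run over an existing tree is almost all stats and
    #    directory reads, so it is dominated by latency/seeks, not bandwidth:
    mkdir -p /tmp/scratch
    time rsync -an --stats /mnt/nas/pc/somehost/ /tmp/scratch/

If the small-file and dry-run timings come out the same before and after the
changes on the NAS, that would pretty much rule out the server side.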
 > You said you "changed from ext2 to lvm2" - I suppose you are still using a
 > file system? ;-) And almost definitely a different one, else you would have
 > used dd ...

Good point. Ext4 over lvm2. Of course this is another "change" (bad me!), but
again, by benchmarking the NFS behavior over the right test set I am trying to
convince myself that nothing I did on the NAS fileserver side (e.g., new
kernel, lvm2, ext4) should make any difference: if I can show that the NFS
access and transfer speeds haven't changed for representative actions, then
the problem must be elsewhere; and if they have changed, then I can presumably
focus on the NAS side.

 > - LVM may or may not make the seeks slower. I wouldn't expect it. I suspect
 >   many people use BackupPC on LVM - for the flexibility of resizing the pool
 >   partition if nothing else. Mileage on ARM NAS devices may vary.

I did find one very interesting thing (which is perhaps not often discussed):
the presence of LVM snapshots can significantly slow even simple filesystem
operations like file deletion - in my case I think it was almost an order of
magnitude. I noticed this, by the way, when it took forever to delete an old
backup from the BackupPC trash.

 > - The FS may or may not behave differently.
 > - The inode layout after copying may or may not be less efficient. Even
 >   significantly so. I can't tell you what a generically good order to create
 >   the copied files, directories, pool entries for a BackupPC pool (tm) would
 >   be, else I'd re-implement your program ;-). My understanding is that there
 >   is no "good" layout for a BackupPC pool, but there are bound to be varying
 >   degrees of bad layouts.

True, and a very interesting point. Though I don't think I mentioned it, I
also had the 'slow' problem when backing up a new machine into a new, virgin
TopDir.

 > - In theory, RAID1 might have eliminated many of the seeks (on reading, that
 >   is), if the usage pattern of the pool and the driver implementation happen
 >   to fit. Might be interesting to figure out how many mirrors and what hard-
 >   or software raid would be optimal for BackupPC ;-). But that's a 3.x topic.

I do plan to return to RAID1... and I will definitely see what improvement, if
any, I get.

 > Does the backup "sound" seek-limited, or is the NAS disk idle some of the
 > time? You didn't also change NFS mount options, did you ;-?

I don't think it is seek-limited... I have not changed the NFS mount options.
I am using 'async', since, as I discovered way back, async makes a huge
difference in my setup (I think a factor of maybe 5).

 > Is at least as much memory available (for disk cache, if that's not obvious)
 > on the NAS as before the kernel upgrade? Does the NAS swap? Is it swapping?

Your thinking did trigger the thought that perhaps the low-memory situation on
the NAS (64MB total) may not leave enough memory for efficient caching,
particularly of directory listings, either at the native filesystem level or
at the NFS level. Perhaps this leads to repeated disk seeks and reads to
re-read directory listings or re-stat file information.
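If that theory is right, it should show up while a backup is actually running.
Something along these lines on the NAS - again just a sketch (vmstat and free
should be there; iostat only if sysstat happens to be installed) - would show
whether the box dips into swap or keeps evicting the cache:

    # Run on the NAS while a backup is in progress (sketch only):
    vmstat 5          # watch si/so (swap in/out), bi/bo and the wa column
    iostat -x 5       # if sysstat is installed: %util/await show a seek-bound disk
    free              # before/after snapshot of buffers and cache
    egrep 'MemFree|Buffers|Cached|Swap' /proc/meminfo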
Now while the total memory has not changed, I know that my new kernel is
larger than the original stripped-down stock kernel, and lvm2 and ext4 very
likely require more memory than straight ext2. So it could very well be that
memory is the issue and that my straight NFS tests don't create the types of
caching issues that BackupPC runs into. In particular, 'free' gives the
following:

             total       used       free     shared    buffers     cached
Mem:         61056      59660       1396          0      38972       3648
-/+ buffers/cache:      17040      44016
Swap:       530104       4360     525744

 > I can't think of any more questions to ask right now. Good luck :).

Thanks, and it's great as always to hear your insight!