Hi Alex,

OK, thanks for that suggestion. I'd thought of it, but wasn't sure whether rsync would complain if the arg appeared twice; apparently it doesn't.
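For concreteness, this is the sort of duplication I was worried about (hypothetical paths, host, and values; I'm also assuming the later --bwlimit is the one rsync actually honors, since it doesn't complain about seeing it twice):

    rsync -av --bwlimit=1024 --bwlimit=2048 /nfs/lab/data/ backup-host:/backups/lab/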
I am not sure bandwidth limiting is really what I want. I'm actually trying to throttle not only the network bandwidth used but also the I/O load. This is a shared file system with hundreds of users accessing it. I'm only backing up our lab's small portion of the data, and only files less than 1 MB in size; the full backups are done separately by someone else in a different manner. For my <1 MB files, I do a full backup once a year and an incremental backup once an hour. I want to have essentially zero impact on the network bandwidth and on the I/O load between the server that talks to BackupPC and the network storage device.

Since I'm just starting out, I'm running the first full backups, and they are taking forever. I have a bandwidth limit of 1 MB/s, which is very low. I need to explore how high I can go without impacting others' access, and how high I need to go to finish the full and incremental backups in a timely fashion. I'm thinking a higher bandwidth limit for the full backups would get them done quicker while still having little impact. I haven't run an incremental yet, so I don't know how long one will take, but I may find I have to increase that bandwidth too, and/or decrease the frequency of the incrementals.

Based on that, do you think I should be using ionice too? (See the P.S. below for the sort of invocation I have in mind.) And by the way, I do not have root access to the server.

Ted
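P.S. Roughly what I'm picturing, in case it matters for your answer (hypothetical paths and host; as far as I know, nice and ionice's best-effort class at low priority don't need root when applied to my own processes):

    nice -n 19 ionice -c2 -n7 rsync -av --bwlimit=1024 /nfs/lab/data/ backup-host:/backups/lab/

Though I'm unsure how much ionice would help in my case, since the data actually lives on the network storage device and the disk scheduling presumably happens there, not on the server where rsync runs.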