On Tue, May 25, 2010 at 03:07:13PM -0500, Les Mikesell wrote:
> On 5/25/2010 2:49 PM, Robin Lee Powell wrote:
> >
> >> Without looking at the code, I'd guess that it would go through
> >> the rest of the list first before retrying failed jobs - but
> >> that's just a guess.  Maybe it would help to lower
> >> $Conf{MaxBackups} if you haven't already.
> >
> > No can do; I have many other hosts that need to be backed up
> > besides this one.
>
> Have you timed lower settings?  With most hardware it is probably
> counterproductive to run more than 2 concurrently on a LAN.  I've
> always thought it would be nice if you could specify groups with
> their own concurrency limit to accommodate multiple low-bandwidth
> links or alias that point to the same host/disk.
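(Side note for anyone finding this in the archives: $Conf{MaxBackups} is
set in the server's main config.pl, and as far as I know it is a single
global cap with no per-host or per-group version -- which is exactly the
feature being wished for above.  A minimal sketch of what lowering it
looks like, assuming a stock layout where config.pl lives under
/etc/BackupPC/ (Debian-style installs use /etc/backuppc/):

  # In BackupPC's main config.pl -- Perl syntax, read by the server at
  # startup or on a config reload.
  # Global ceiling on simultaneous scheduled backups:
  $Conf{MaxBackups} = 2;

  # User-requested backups from the CGI may run on top of that,
  # up to this many more:
  $Conf{MaxUserBackups} = 4;

The server needs a config reload or restart before the change takes
effect.)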
This isn't exactly over a LAN.  We have over one hundred hosts.  Many
of them routinely take 4+ hours to back up.  Several of them take 12+
hours.  One of them takes more than a day.

Despite all that, and ignoring backups on the semaphored host, only 3
backups are more than 1 day old right now.  So whatever we're doing,
it's working just fine except for this case.  Dropping it to 2
concurrent backups would be ... bad.

Our bottleneck is usually I/O on the remote server, FWIW.  Our
customers have some very unfortunate directory structures, and
GFS/Red Hat Clustering really doesn't like them.

-Robin

-- 
http://singinst.org/ : Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".  My personal page: http://www.digitalkingdom.org/rlp/