Thanks for the input so far. The problem I keep running into is that while I do have about 50 GB of holding space, it doesn't really help, because the DLEs that are the problem children are much larger than that. If a DLE's estimate is larger than the holding space, Amanda won't use the holding disk for it at all. I guess this is because it would be nearly impossible to "track" what has been spooled to holding and, at the same time, what has actually been written to tape. To make a DLE small enough to fit, I'd have to drill down about 10 levels into the directory structure just to make that an option, which creates its own set of problems when managing the schedule.
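To be concrete about the kind of split I mean, here is a sketch of doing it with include/exclude lists in the disklist instead of deep paths. The hostname, paths, patterns, and the user-tar dumptype are placeholders, not my actual config:

    # disklist -- split one big mount into two DLEs by top-level name
    # (everything here is hypothetical)
    client1.example.com /data-a-m /data {
        user-tar
        include "./[a-m]*"    # tar only top-level entries starting a-m
    }
    client1.example.com /data-rest /data {
        user-tar
        exclude "./[a-m]*"    # everything else, so nothing is missed
    }

Each entry still points at the same /data mount, so the split could be rebalanced by editing patterns rather than restructuring directories.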
I will try the idea of making a holding "disk" within the structure of the VTL for each client (roughly the amanda.conf sketch at the end of this message). Maybe this will help, but I'm a bit skeptical, given the constant metadata writing on the cluster. Give me a couple of days in the office to implement it, and I'll report back my findings. Anyway, keep the ideas coming; I appreciate it so far.

On Thu, 2021-04-08 at 18:24 -0400, Jon LaBadie wrote:
> On Wed, Apr 07, 2021 at 10:52:54AM -0500, Justin Sanderson wrote:
> > I have 4 client systems that have anywhere from 10T to 30T each and
> > have set up a VTL. Everything seems to be working as expected after
> > a couple of DLE tweaks.
> >
> > My problem is that I don't have any significant amount of holding
> > space to help things from a performance perspective.
> >
> > So, my question is: is it possible to take advantage of multiple
> > virtual drives to speed things up with no holding space?
> >
> > My fulls are taking forever (3-4 days) and I am looking for ways to
> > reduce the time they take.
> >
> > I wrote a custom bash script to break up the DLEs into smaller
> > chunks, which helped. I'm running glusterfs for both the source and
> > destination data endpoints. The glusters are very weak at small IO
> > operations but have excellent "stream" data throughput.
> >
> > I'm not an Amanda expert by any means and am looking for some
> > pointers to tune the backup performance and identify potential
> > bottlenecks.
> >
> > Any thoughts would be appreciated.
>
> As Debra suggests, get a holding disk. The vtapes are single-stream
> only, and to change that would require a major revision plus become
> incompatible with physical tapes.
>
> My installation is simple (home office) but the speedup is clear.
> Here is a piece of one of my amreports:
>
>                            Total      Full      Incr.   Level:#
>                          --------  --------  --------  --------
> Estimate Time (hrs:min)     0:03
> Run Time (hrs:min)          1:44
> Dump Time (hrs:min)         4:09      3:38      0:30
>
> Note the total run time (estimate, dumping, and taping) took 104
> minutes. Yet during those 104 minutes, there were 249 minutes of
> actual dumping.
>
> Debra had a good idea for testing: use one of your vtapes. The
> holding "disk" is not a disk, it is a directory. In fact it can be
> multiple directories; you can define multiple "holding disks".
>
> It is best if the holding disk is not on the same physical drive as
> the data being backed up by Amanda.
>
> I was going to suggest an external USB drive as a holding disk, but I
> hesitate to do so. Of my 6 disks dedicated to vtapes, 4 are in an
> external USB enclosure and generally work fine. But a few times I
> wanted to move a vtape from one physical drive to another, and my
> system crashed when I attempted to do so with the USB drives. No
> problem with the pair inside the Amanda server. BUT, that may just
> be my hardware!! Does anyone else use a USB drive as a holding disk?
>
> Jon
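Here is roughly what I'm planning for the per-client holding directories mentioned above. Only a sketch, untested; the paths and sizes are placeholders for my gluster layout:

    # amanda.conf -- one holding directory per client, carved out of
    # the same gluster volume that backs the VTL (paths/sizes made up)
    holdingdisk hd-client1 {
        directory "/gluster/vtl/holding/client1"
        use 40 Gb        # cap usage, leave headroom on the volume
        chunksize 1 Gb   # split spool files into 1 GB chunks
    }
    holdingdisk hd-client2 {
        directory "/gluster/vtl/holding/client2"
        use 40 Gb
        chunksize 1 Gb
    }

Since Amanda skips holding entirely when a DLE's estimate exceeds the available holding space, the biggest DLEs would still need to be split (as in the disklist sketch earlier in this message) before any of this helps.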
