Others will have to chime in on tuning the server for vos move. I have one thought that might or might not help - do something like this from a host at the source site. I am far from sure of the syntax but you get the idea:

  vos dump srv part vol | ssh host-at-target-site vos restore srv part vol
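Spelling that out a bit more concretely - the server, partition, and volume names below are placeholders, and I haven't tested this end to end, so treat it as a sketch:

  # run on a source-site host that holds the cell's server key (for -localauth);
  # the target host needs the key too, since the restore also runs with -localauth
  vos dump -id vol.users.foo -server src-fs.example.edu -partition /vicepa -localauth \
    | ssh tgt-fs.example.edu \
        "vos restore -server tgt-fs.example.edu -partition /vicepa -name vol.users.foo -localauth"

With -file omitted, vos dump writes to stdout and vos restore reads from stdin, which is what makes the pipe work. Note this isn't a real vos move - the source volume is left alone, so you'd have to clean up afterwards, and you may need -overwrite on the restore if the volume name already exists in the VLDB.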
hmm, probably simplest to have the target-site host be your fileserver or otherwise have a copy of the server key, and use -localauth. If the volume is actively being written to this will probably not do what you want, and you'll run into issues with ssh limiting your throughput, too, but it's a possible option.

re ssh transfer slowness, see PSC's patches to openssh.

also, "never underestimate the bandwidth of a station wagon full of magnetic tape." Which I wouldn't want to do, either, but I like saying it :)

On Oct 22, 2010, at 1:44 PM, Eric Chris Garrison wrote:

> On 10/22/10 1:29 PM, Dan Pritts wrote:
>> I'm guessing you are going between Bloomington and Indianapolis so
>> latency shouldn't be too high, but even 10ms surely will add up if the
>> conversation goes back and forth a million times.
>
> That's correct.
>
>> I'm pretty sure you can run multiple vos moves in parallel, which would
>> help dramatically.
>
> We have some half-TB volumes though, those are still going to take days
> at this rate.
>
>> as far as your iperf results, my experience is that tuning UDP buffers
>> generally is not necessary; the defaults are usually sufficient to get
>> hundreds of megabits.
>
> Is there a way to make vos moves do this? I'm using the mvto script,
> modified to use -localauth since the moves far exceed ticket lifetime.
> I could modify it further if vos has any options for these big volumes.
>
>> in UDP mode, iperf does not attempt to scale the bandwidth; it tries to
>> send at whatever bandwidth you specify on the command line. the
>> default is 1Mbit/sec...is it possible that's where your 1mbit result
>> came from?
>
> Yes, I used the default for the initial 1Mbit/sec result. In my email
> to this list, I reported incorrectly that I changed the buffer size for
> the second test; it was actually bandwidth. I used the -b 1024M option
> on iperf for the second try like I mentioned, and it was nearly line
> rate. Which is why I asked here what I could do to speed up these AFS
> transfers, since faster should be possible.
>
> Thanks,
>
> Chris
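On the parallel-moves point in the quoted thread, something like this is roughly what I had in mind - run it as root on a machine with the server key so -localauth works; the file name and server/partition names here are made up:

  # volumes-to-move.txt: one volume name per line (placeholder file name)
  # -P4 keeps four vos moves in flight at once; tune to what the fileservers can handle
  xargs -P4 -I{} \
      vos move -id {} \
          -fromserver src-fs.example.edu -frompartition /vicepa \
          -toserver tgt-fs.example.edu -topartition /vicepa \
          -localauth \
      < volumes-to-move.txt

Each individual move is still limited by the same per-volume throughput, but with several of those half-TB volumes going at once the total elapsed time should drop considerably.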
