On Fri, 23 Mar 2012 01:02:10 +0200 (EET) [email protected] wrote:
> > 109M single file:
> >
> > SSH to AFS ~525KB/s
> > AFS to SSH ~800KB/s
[...]
> I tried turning off encryption, but it didn't make a notable
> difference with either file transfer test

Okay, well that's obviously better, but still much lower than the
theoretical maximum. There are a few more switches you can fiddle with:

 - Increase the chunk size by passing -chunksize to afsd (you have to
   restart the client to change this). Try '-chunksize 20' for 1M, or
   '-chunksize 23' for 8M, or something around there.

 - Try memcache instead of disk cache (I assume you're using disk
   cache). Do this by passing -memcache to afsd. This will use RAM
   instead of local disk for caching, so you may need to lower the
   cache size in 'cacheinfo'.

 - Try 'cache bypass' (with disk cache), which you can turn on with
   'fs bypassthreshold'. Keep in mind that cache bypass is still a new
   feature and may not be entirely stable.

If it still seems slow no matter what you try, it would help if you
could provide the output of:

rxdebug <client> 7001 -rxstats -noconns
rxdebug <server> 7000 -rxstats -noconns

so we can see the rate of resends and such for Rx. Capture that output
both immediately before and immediately after you try a transfer.

You can also run a benchmark with 'rxperf' to see how fast Rx itself
goes, which could help rule out where the bottleneck is during the
transfer. If you want to try that but are not sure how to use it, let
us know.

-- 
Andrew Deason
[email protected]

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
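[The -chunksize values quoted above are exponents: the numbers given (20 for 1M, 23 for 8M) imply the argument is log2 of the chunk size in bytes. A minimal shell sketch of that arithmetic, under that assumption:]

```shell
# -chunksize N gives a chunk size of 2^N bytes
# (assumption inferred from the values above: 20 -> 1M, 23 -> 8M).
chunk_bytes() {
  echo $((1 << $1))
}

chunk_bytes 20   # 1048576  (1M)
chunk_bytes 23   # 8388608  (8M)
```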

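[The before/after rxdebug capture described above can be sketched as a short script. The hostnames and the copied path are placeholders, not values from the thread; the ports and flags are as given in the post.]

```shell
# Sketch: snapshot Rx statistics on the cache manager (port 7001) and
# the fileserver (port 7000) immediately before and after a transfer.
# CLIENT, SERVER, and the /afs path are placeholders for your own setup.
CLIENT=afs-client.example.com
SERVER=afs-server.example.com

rxdebug "$CLIENT" 7001 -rxstats -noconns > client-before.txt
rxdebug "$SERVER" 7000 -rxstats -noconns > server-before.txt

# The transfer under test (e.g. the ~109M single file).
cp /afs/yourcell/path/bigfile /tmp/

rxdebug "$CLIENT" 7001 -rxstats -noconns > client-after.txt
rxdebug "$SERVER" 7000 -rxstats -noconns > server-after.txt

# The deltas (resends in particular) between the snapshots are what
# indicate Rx-level trouble during the transfer.
diff client-before.txt client-after.txt
```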