I have a tiny head-scratcher...

Tested this case:

rsync 100 or so files every 5-10 minutes to a remote AFS volume over a
dedicated 100 Mbit WAN (33 ms ping times)

with cache size = 500,000
with chunksize = 19
rsync -qau --timeout=60 --contimeout=30 <blah blah>
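
For concreteness, the cache settings above map onto afsd options along
these lines (a sketch, not my actual startup script; -blocks and
-chunksize are the standard afsd flags, where cache size is counted in
1K blocks and chunksize is a power-of-two exponent):

    # OpenAFS client cache parameters used in the test
    afsd -blocks 500000 -chunksize 19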

The first rsync after an AFS client restart completes in 0.15-0.25 seconds.

The second and every subsequent rsync takes in excess of 30 seconds, and it
never recovers to below one second.
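
For reference, the timings come from repeating the transfer in a loop
along these lines (the source and destination paths here are placeholders
for the real ones):

    # time each pass; paths are hypothetical stand-ins
    while true; do
        time rsync -qau --timeout=60 --contimeout=30 /src/dir/ /afs/example.com/vol/dir/
        sleep 300    # roughly matches the 5-10 minute cadence
    done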

I tested some parameter combinations... Decreasing the cache size to 50,000
AND decreasing the chunksize to 13 gives consistent performance below 0.3
seconds.

Increasing the cache size back to 500M with chunksize 13 brings back the
same behavior as before.
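
As a sanity check on what those numbers mean (again assuming standard afsd
semantics: chunksize is a power-of-two exponent, cache size is in 1K
blocks):

    echo "chunksize 19 -> $((1 << 19)) bytes per chunk"   # 524288 (512 KB)
    echo "chunksize 13 -> $((1 << 13)) bytes per chunk"   # 8192 (8 KB)
    echo "cache 500000 blocks -> ~$((500000 / 1024)) MB"  # ~488 MB
    echo "cache 50000 blocks  -> ~$((50000 / 1024)) MB"   # ~48 MB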

This seems counterintuitive... the 100 or so files come nowhere near the
500,000-block cache size, and they are fairly small (tens to hundreds of
kilobytes). Why would increasing the cache size impact performance negatively
in such a case?


-- 
Timothy Balcer / IT Services
Telmate / San Francisco, CA
Direct / (415) 300-4313
Customer Service / (800) 205-5510
