Nate Gordon wrote:
> So I loaded up ntop and isolated http, dns, ssh, afs, and mysql. I
> ran the previous test (ab -k -c <users> -t 120 <url>) again for
> various numbers of users, and here is what I came up with:
>
>  1 user:  H:29.5M D:1.8M A:4.6K M:19.9M O:1.2M
>  2 users: H:50.3M D:3.0M A:2.0K M:33.7M O:241.9K
>  3 users: H:56.1M D:3.3M A:0K   M:37.8M O:234.6K
>  4 users: H:56.2M D:3.3M A:0K   M:37.9M O:127.9K
>  5 users: H:48.0M D:2.9M A:2.0K M:32.4M O:246.3K
>  8 users: H:23.4M D:1.4M A:2.3K M:15.8M O:135.6K
> 10 users: H:16.6M D:1.0M A:3.8K M:11.3M O:385.6K
> 20 users: H:16.7M D:1.0M A:0K   M:11.7M O:235.5K
> 30 users: H:16.8M D:1.0M A:1.9K M:11.9M O:346.2K
>
> H - http, D - dns, A - afs, M - mysql, O - other
>
> This shows an interesting trend: there is a peak performance level
> as well as a worst-case level. It also shows that AFS traffic is
> essentially zero, which would indicate that my performance
> bottleneck is in accessing the AFS cache.
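(Side note: for anyone wanting to reproduce that sweep, a loop like the
following should do it. This is only a sketch in Python; the URL, the
concurrency list, and the requests-per-second parsing are my guesses at
what the test looked like, not Nate's actual script.)

#!/usr/bin/env python3
# Sweep ab over several concurrency levels and pull out the
# requests-per-second figure from each run.
import re
import subprocess

URL = "http://webserver.example.com/"   # placeholder target
DURATION = "120"                        # seconds, matching -t 120 above

for users in (1, 2, 3, 4, 5, 8, 10, 20, 30):
    out = subprocess.run(
        ["ab", "-k", "-c", str(users), "-t", DURATION, URL],
        capture_output=True, text=True, check=True,
    ).stdout
    # ab prints a line like "Requests per second:    123.45 [#/sec] (mean)"
    m = re.search(r"Requests per second:\s+([\d.]+)", out)
    print(f"{users:3d} users: {m.group(1) if m else '??'} req/s")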
How were you measuring the AFS traffic? That is so little data moving back and forth that your clients can't really be touching AFS at all if that's the total traffic. Even assuming all of the data the users need is already in the cache, there would at the very least still be queries for access control, status fetches, server probes, and so on.
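As a cross-check on ntop's numbers, you could count the AFS bytes
directly off the wire. AFS's Rx protocol runs over UDP ports 7000-7009
(fileserver, client callbacks, ptserver, vlserver, etc.), so if the
clients were really talking to the fileserver you'd expect this counter
to be well above a few kilobytes. A sketch, assuming scapy is installed
and the script runs with capture privileges; the 60-second window is
arbitrary:

#!/usr/bin/env python3
# Count bytes of AFS-range UDP traffic seen on the wire for 60 seconds.
from scapy.all import sniff

total = 0

def tally(pkt):
    global total
    total += len(pkt)

# BPF filter keeps only UDP traffic on the standard AFS port range.
sniff(filter="udp portrange 7000-7009", prn=tally, timeout=60)
print(f"AFS bytes seen in 60s: {total}")

If that counter also sits near zero while the benchmark runs, the
clients aren't exchanging even the status and callback traffic a warm
cache would generate, which is the point above.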
