On Wednesday, February 5, 2003, at 05:44  PM, Lyle Seaman wrote:

I think the point is: why would the server be hit harder in the
memcache case? Shouldn't the diskcache and memcache cases access
the server in the same way when the file size is larger than the cache
size?

Dunno; one thing at a time. It's best to make sure you know what the problem
is you're chasing before you spend time on a wild goose chase.

If Steve's hypothesis is correct, you will not be seeing more load on the
server.
So I ran the full iozone suite (all 13 tests), but at a single file size (128 MB) and a single record size (64 KB). I set the AFS cache size to 80000 KB for both memcache and diskcache.

Using the built-in tcsh time:

memcache 2.050u 180.430s 11:05.17 27.4% 0+0k 0+0io 0pf+0w
diskcache 2.830u 194.020s 6:29.66 50.5% 0+0k 6+6io 0pf+0w

Memcache took nearly twice as long in elapsed time, though diskcache used only slightly more (client) CPU time.
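The "nearly twice as long" claim can be checked directly from the tcsh time output above (the third field is elapsed wall-clock time, mm:ss.ss):

```python
# Elapsed wall-clock times taken from the tcsh "time" lines above.
memcache_s = 11 * 60 + 5.17   # 11:05.17 -> 665.17 s
diskcache_s = 6 * 60 + 29.66  # 6:29.66  -> 389.66 s

ratio = memcache_s / diskcache_s
print(round(ratio, 2))  # -> 1.71, i.e. memcache is ~1.7x slower in elapsed time
```

The reported %CPU figures are consistent with this: diskcache finishes in far less wall time while doing similar total CPU work, so its CPU percentage (50.5%) is roughly double memcache's (27.4%).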

On the server side, memcache generated about 1.4 million input and 0.92 million output packets, while diskcache generated 0.80 million input and 0.90 million output. The fileserver process used 1:46 of CPU time for memcache versus 1:03 for diskcache. The 1-minute load average never exceeded 0.45 in my samples; the 5-minute load average peaked at 0.28 for memcache and 0.21 for diskcache.

As a reference point, NFS (between the same server and client) took much less client time:

nfs 1.440u 50.650s 4:31.08 19.2% 0+0k 0+196626io 0pf+0w

and generated 0.87 million input and 0.62 million output packets. Peak 1- and 5-minute load averages were 0.70 and 0.28.
--------------------------------------------------------------------------
Edward Moy
Apple Computer, Inc.
[EMAIL PROTECTED]

(This message is from me as a reader of this list, and not a statement
from Apple.)

_______________________________________________
OpenAFS-devel mailing list
[EMAIL PROTECTED]
https://lists.openafs.org/mailman/listinfo/openafs-devel