On Monday, May 16, 2005 10:56:30 PM -0500 Troy Benjegerdes <[EMAIL PROTECTED]> wrote:

On Fri, May 13, 2005 at 12:43:42PM -0400, Jeffrey Hutzelman wrote:
On Friday, May 13, 2005 09:43:05 AM +0200 Niklas Edmundsson
<[EMAIL PROTECTED]> wrote:

> I have a faint memory of the memcache only being able to store
> cachesize/chunksize number of files even if the files are much smaller
> than chunksize, ie that the memcache is an array of chunksize blocks.
> If this is correct, then you should probably reduce the chunksize when
> doing small-sized-memcache to be able to fit enough files.

You bring up an important point which I failed to mention.  The
proposals I made regarding chunkSize, cacheFiles, and dCacheSize are
all relevant only when using disk cache.  The memory cache architecture
works somewhat differently; memcache chunks are fixed-size and take up
the same amount of storage regardless of how "full" they are.  So, the
cache can store only cachesize/chunksize chunks.

As you suggested, this argues for much smaller chunks than would be used
for a disk cache, and in fact the default chunkSize for memcache is only
13 (8K).  I'm not suggesting changing this default, though of course
folks who think they can spare the memory could choose to raise it (and
cacheBlocks) an order of magnitude or two.
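
To put rough numbers on that (these figures are illustrative only,
assuming a 64MB memcache and that cacheBlocks is in 1K units; chunkSize
is a log2 value, so 13 is 8K, 18 is 256K, and 20 is 1M):

#include <stdio.h>

/* Rough sketch of the memcache chunk arithmetic (not OpenAFS code).
 * chunkSize is a log2 value, so 13 -> 8K chunks; cacheBlocks is
 * assumed to be in 1K units, and 65536 (64MB) is just an example. */
int main(void)
{
    long cache_blocks = 65536;      /* assumed cache size, in 1K blocks */
    int  chunk_size   = 13;         /* default memcache chunkSize (8K)  */

    long chunk_bytes = 1L << chunk_size;
    long max_chunks  = cache_blocks * 1024L / chunk_bytes;

    printf("chunk = %ld bytes, cache holds at most %ld chunks\n",
           chunk_bytes, max_chunks);
    return 0;
}

With 8K chunks that works out to 8192 cacheable chunks; raising
chunkSize to 20 (1M) in the same cache would leave room for only 64 of
them, which is why small chunks matter so much more for memcache.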

When I was messing around trying to get anything close to 10% of gigabit wire speed, I found the best performance with a 100MB memcache and chunkSize set to either 18 or 20.

Is there a way to set the network transfer size independent from the
chunk size, or are these somewhat inextricably linked?

I assume you're interested in setting the transfer size larger than the chunk size. Doing this would require writing some code, and I'm not sure it would be easy. It also doesn't make much sense for a disk cache, which suggests it might instead be worth exploring a change to the way memcache manages space.


The problem is that FetchData RPCs are done to retrieve the contents of exactly one chunk. If you wanted the transfer size to be larger than the chunk size, you'd need to allocate multiple chunks in advance of making the FetchData call, and then fill them in, in sequence, as the data arrives. That's not impossible, but it would be different from what the current code does.
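
A toy illustration of that shape (standalone C that simulates the
arriving data with a memory buffer; none of these names are the real
cache-manager interfaces):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy model only, not OpenAFS code: pre-allocate several fixed-size
 * chunks up front, then fill them in sequence from a single incoming
 * byte stream, the way one FetchData spanning several chunks would
 * have to.  The "stream" array stands in for the RPC data. */
#define CHUNK_BYTES 8192
#define NCHUNKS     4
#define PIECE       1024                /* bytes "arriving" per read */

int main(void)
{
    static char stream[CHUNK_BYTES * NCHUNKS];
    char  *chunks[NCHUNKS];
    size_t off = 0;
    int    i;

    memset(stream, 'x', sizeof(stream));

    /* allocate every chunk before "issuing" the fetch */
    for (i = 0; i < NCHUNKS; i++)
        chunks[i] = malloc(CHUNK_BYTES);

    /* fill each chunk, in order, as data "arrives" in small pieces */
    for (i = 0; i < NCHUNKS; i++) {
        size_t filled = 0;
        while (filled < CHUNK_BYTES) {
            memcpy(chunks[i] + filled, stream + off, PIECE);
            filled += PIECE;
            off    += PIECE;
        }
        printf("chunk %d complete (%zu bytes fetched so far)\n", i, off);
    }

    for (i = 0; i < NCHUNKS; i++)
        free(chunks[i]);
    return 0;
}

The hard part in the real code would presumably be the bookkeeping for
partially-filled chunks if the call fails or is aborted mid-stream.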


There's another issue, which is that setting the transfer size too large may hurt fileserver performance on large files for which there is heavy demand, because currently only one client can be fetching from a given file at a time. While it's probably a good idea to change that in the long term, doing so would probably mean non-trivial changes to the way the volume package works, so it's not going to be a quick fix.


In addition, a large transfer size means that when you access a large file, you have to wait longer before you get to access any of it. So while the average performance goes up if you are transferring entire large files, you lose if you are only interested in a couple of pages out of a large database.
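
To put rough numbers on that (my figures, assuming a 10MB/s effective
transfer rate; the sizes correspond to chunkSize 13, 18, and 20):

#include <stdio.h>

/* Back-of-the-envelope only: how long a client waits for the first
 * complete transfer unit at an assumed effective rate of 10MB/s. */
int main(void)
{
    double rate = 10.0 * 1024 * 1024;             /* bytes per second */
    long   sizes[] = { 8L << 10, 256L << 10, 1L << 20 };
    int    i;

    for (i = 0; i < 3; i++)
        printf("%8ld bytes -> %6.1f ms before any data is usable\n",
               sizes[i], sizes[i] / rate * 1000.0);
    return 0;
}

At that rate a process that only wants one page out of a big file waits
roughly 100 ms per miss with 1M transfers versus under 1 ms with 8K
ones, even though bulk copies of whole files clearly favor the larger
size.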

-- Jeffrey T. Hutzelman (N3NHS) <[EMAIL PROTECTED]>
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA

_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel
