Hartmut Reuter wrote:
> Jeffrey Hutzelman wrote:
>> --On Wednesday, October 08, 2008 12:15:59 PM -0400 Jeffrey Altman
>> <[EMAIL PROTECTED]> wrote:
>>
>>> Hartmut Reuter wrote:
>>>
>>>>
>>>> After integration of my object storage stuff into 1.4.8pre1 I have made
>>>> some tests between two of our cell's fileserver machines,
>>>> both Quadcore Intel(R) Xeon(R) CPU E5405 @ 2.00GHz
>>>> with 4 GB main memory. They are both on the same Gbit-Ethernet switch.
>>>>
>>>> On the client I use 64 MB memory cache with chunk size 64 KB.
>>>
>>>
>>> Why such a small chunk size?
>>
>>
>> A good question.  This will adversely affect the transfer rates
>> through the cache manager, because a separate RXAFS_FetchData is done
>> for each chunk. What size reads is afsio doing?  If they're not the
>> same, then the measurements are really not comparable.
>>
>> -- Jeff
> 
> Normally I use 256 KB, but I think these 64 KB are kind of a default
> value. With a 64 MB cache it's not reasonable to use a 1 MB chunk size,
> because then you would have only 64 chunks in total.
> 
> It's true: afsio uses 1 MB chunks and is therefore better off here. But
> the big advantage of Matt's bypass cache seems to be the parallelism you
> can get on multi-processor machines (on my laptop it's slower than the
> normal path). The effective transfer size of Matt's code seems to be
> 128 KB, so obviously the number of RPCs isn't the problem, but rather
> some kind of blocking (locking?). Otherwise a simple doubling of
> throughput is hard to explain...
> 
> -Hartmut

If you were not limited to four active calls per connection, could issue
multiple StoreData / FetchData calls in parallel, and did not pay the
slow-start penalty on each call, then I think using a small chunk size
would make sense.  However, since none of that is true, using a larger
chunk size (in theory) allows Rx to maximize the bandwidth better than
the other options.
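
To make the trade-off concrete, here is a rough back-of-the-envelope
sketch (plain C, not OpenAFS code). The transfer size, wire rate, and
per-call overhead below are illustrative assumptions, not measurements;
the point is only how per-RPC overhead amortizes as chunks grow while
at most four calls overlap per connection:

/* Sketch: estimated transfer time vs. chunk size, assuming each RPC
 * pays a fixed setup + slow-start cost and up to four calls overlap. */
#include <stdio.h>

int main(void)
{
    const double file_mb       = 256.0;  /* hypothetical transfer size */
    const double wire_mb_per_s = 100.0;  /* ~Gbit Ethernet payload rate */
    const double call_overhead = 0.005;  /* assumed per-RPC cost, seconds */
    const int    max_calls     = 4;      /* Rx active calls per connection */

    int chunk_kb;
    for (chunk_kb = 64; chunk_kb <= 1024; chunk_kb *= 2) {
        double chunk_mb = chunk_kb / 1024.0;
        long   rpcs     = (long)(file_mb / chunk_mb);
        /* per-call overhead is paid once per RPC, overlapped four wide */
        double time_s   = file_mb / wire_mb_per_s
                        + (rpcs * call_overhead) / max_calls;
        printf("%5d KB chunks: %6ld RPCs, ~%.2f s, ~%.1f MB/s\n",
               chunk_kb, rpcs, time_s, file_mb / time_s);
    }
    return 0;
}

With these made-up numbers the per-call overhead dominates at 64 KB
chunks and nearly vanishes at 1 MB, which is exactly the effect a larger
chunk size is meant to buy.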

Jeffrey Altman
