Jody,

> One interesting result is that smaller chunk sizes than we currently
> recommend can increase read performance by allowing larger request
> sizes.  I'm not sure why it is possible to do a 2 or 4MB request
> through the MD layer with 128K chunks but not with 256K chunks.  No
> error is shown on the console in the failing case.

What is "chunk size"?

> Considering the write results, we actually see higher performance with
> small chunk sizes (128K) no matter what request size is used, and again
> the smaller chunk size allows larger requests with correspondingly
> higher performance.
> 
> So far the best performance for both reads and writes I've observed is
> with a 128K chunk size and 4MB requests.  This request size corresponds
> to a stripe size of 4MB, assuming all layers from obdfilter to the MD
> device preserve the 4MB IO intact.

What is the page size of this machine?  There are issues with scatter/gather
descriptors larger than one page, which you might be running into.
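To make the page-size concern concrete, here is a back-of-envelope sketch.
The 4K page size and 16-byte sg-entry size are assumptions for illustration,
not values taken from your setup:

```python
# Hedged sketch: how many scatter/gather entries a large request needs,
# and how many pages the sg list itself occupies.
# PAGE_SIZE and SG_ENTRY_BYTES are assumed values, not measured ones.

PAGE_SIZE = 4096        # assumed 4K pages
SG_ENTRY_BYTES = 16     # assumed size of one sg descriptor entry

def sg_entries(request_bytes, page_size=PAGE_SIZE):
    """One sg entry per page of the request."""
    return request_bytes // page_size

def descriptor_pages(request_bytes):
    """Pages needed to hold the sg list itself (ceiling division)."""
    entries = sg_entries(request_bytes)
    return -(-entries * SG_ENTRY_BYTES // PAGE_SIZE)
```

Under these assumptions a 1MB request's sg list fits in a single page
(256 entries x 16 bytes = 4096 bytes), while a 2MB or 4MB request needs a
multi-page descriptor, which is where the one-page limit would bite.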

> My next plan is to test obdfilter-survey with various chunk and request
> sizes.  Assuming 4MB IOs are preserved intact all the way to the MD
> layer, I expect similar results.  I will then test even smaller chunk
> sizes with both sgpdd-survey and obdfilter-survey.

By default, the maximum request size issued by clients is 1MByte, which
is the LNET MTU.  It's possible to build Lustre with a larger maximum
I/O size that can be exploited in special cases (e.g. no routing, page
size on client and server > 4K, and an LND that supports it).  Supporting
larger I/O in the general case requires changes in Lustre (e.g. multiple
bulk transfers in a single RPC).
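Just to spell out the arithmetic (nothing Lustre-specific here, only
ceiling division at an assumed one bulk transfer per RPC):

```python
# Sketch: number of RPCs needed for a client I/O of a given size,
# assuming one bulk transfer per RPC and the default 1MByte LNET MTU.

LNET_MTU = 1 * 1024 * 1024  # default maximum client request size

def rpcs_needed(io_bytes, max_rpc_bytes=LNET_MTU):
    """Ceiling division: each RPC carries at most max_rpc_bytes."""
    return -(-io_bytes // max_rpc_bytes)
```

So a 4MB stripe-aligned I/O arrives at the server as four 1MB RPCs at
the default MTU; multiple bulk transfers per RPC (or a larger MTU) would
be needed for it to arrive as a single request.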

                Cheers,
                        Eric


_______________________________________________
Lustre-devel mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-devel