Hello!
I guess I am a little bit late to the party, but I was just reading comments in
bug 16900 and have this question I really need to ask.
On Aug 23, 2010, at 10:58 PM, Jeremy Filizetti wrote:
The larger RPCs from bug 16900 offered a significant performance improvement
when working over the WAN.
In the attachment I created, which Andreas posted at
https://bugzilla.lustre.org/attachment.cgi?id=31423, if you look at graphs 1
and 2, both are using a larger than default max_rpcs_in_flight. I believe
the data without the patch from bug 16900 had max_rpcs_in_flight=42. For
the data with the
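The WAN benefit of keeping more (or larger) RPCs in flight comes down to covering the link's bandwidth-delay product. A back-of-envelope sketch of that reasoning, with all figures hypothetical (not measurements from bug 16900 or the attachment):

```python
# Back-of-envelope: how many 1 MB RPCs must be in flight to keep a WAN
# link busy. All numbers below are hypothetical examples.

def rpcs_needed(bandwidth_bps, rtt_s, rpc_bytes):
    """RPCs in flight needed to cover the bandwidth-delay product."""
    bdp = int(bandwidth_bps * rtt_s)  # bytes "on the wire" at any instant
    return -(-bdp // rpc_bytes)       # ceiling division

# Example: 10 Gbit/s link, 40 ms round-trip time, 1 MB RPCs.
n = rpcs_needed(10e9 / 8, 0.040, 1 << 20)
print(n)  # far more than a small default max_rpcs_in_flight would allow
```

This is why a WAN client can need a much larger max_rpcs_in_flight than a low-latency LAN client: the round-trip time multiplies directly into the number of outstanding requests required.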
Hello!
On Dec 22, 2010, at 12:43 AM, Jeremy Filizetti wrote:
In the attachment I created, which Andreas posted at
https://bugzilla.lustre.org/attachment.cgi?id=31423, if you look at graphs 1
and 2, both are using a larger than default max_rpcs_in_flight. I believe
the data without the
Hello,
The Lustre Operations Manual only covers configuration and tuning for XT3
running Catamount. However, the tunables you are concerned about relate more
to the kind and amount of storage you have than to XT-specific tunings.
Thanks,
-Cory
On 09/27/2010 05:59 PM, burlen wrote:
On 09/24/2010 06:36 PM, Andreas Dilger wrote:
On 2010-09-24, at 19:10, Andreas Dilger wrote:
On 2010-09-24, at 18:20, burlen wrote:
To be sure I understand this, is it correct that each OST has its own pool
of service threads? So the system-wide number of service threads is bounded by
Hi, thanks for all the help.
Andreas Dilger wrote:
When one of the server threads is ready to process a read/write request it
will get or put the data from/to the buffers that the client already
prepared. The number of currently active IO requests is exactly the number
of active service
On 2010-09-24, at 18:20, burlen wrote:
Andreas Dilger wrote:
When one of the server threads is ready to process a read/write request it
will get or put the data from/to the buffers that the client already
prepared. The number of currently active IO requests is exactly the number
of active
On 2010-09-24, at 19:10, Andreas Dilger wrote:
On 2010-09-24, at 18:20, burlen wrote:
To be sure I understand this, is it correct that each OST has its own pool
of service threads? So the system-wide number of service threads is bounded by
oss_max_threads*num_osts?
Actually, the current
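The bound being asked about can be written out as simple arithmetic. A minimal sketch with hypothetical numbers (whether Lustre actually partitions a thread pool per OST depends on the version, as the reply above begins to explain):

```python
# Hypothetical illustration of the bound asked about above: if each OST
# had its own pool of service threads, the system-wide count would be
# oss_max_threads * num_osts. Both values below are made-up examples,
# not defaults from any Lustre release.

oss_max_threads = 512   # hypothetical per-pool thread maximum
num_osts = 8            # hypothetical number of OSTs on one OSS

system_wide_bound = oss_max_threads * num_osts
print(system_wide_bound)
```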
On 2010-08-22, at 11:58, burlen wrote:
Andreas Dilger wrote:
Currently, 1MB is the largest bulk IO size, and is the typical size used by
clients for all IO.
Is my understanding correct?
A single RPC request will initiate an RDMA transfer of at most
max_pages_per_rpc, where the page unit
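The 1 MB figure and max_pages_per_rpc are related through the page size. A small sketch of the arithmetic, assuming 4 KB pages (the common x86 case):

```python
# Maximum bulk transfer per RPC = max_pages_per_rpc * page size.
# Assumes 4 KB pages (typical on x86); with that assumption,
# max_pages_per_rpc = 256 matches the 1 MB largest bulk IO size
# mentioned above.

page_size = 4096          # bytes, assumed
max_pages_per_rpc = 256   # the value that yields 1 MB transfers

rpc_bytes = max_pages_per_rpc * page_size
print(rpc_bytes // (1 << 20), "MB")
```

On platforms with larger pages the same max_pages_per_rpc would correspond to a larger bulk transfer, which is why the parameter is expressed in pages rather than bytes.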
Andreas Dilger wrote:
On 2010-08-17, at 14:15, burlen wrote:
I have some questions about Lustre RPC and the sequence of events that
occur during large concurrent write() involving many processes and large
data size per process. I understand there is a mechanism of flow
control by
On 2010-08-17, at 14:15, burlen wrote:
I have some questions about Lustre RPC and the sequence of events that
occur during large concurrent write() involving many processes and large
data size per process. I understand there is a mechanism of flow
control by credits, but I'm a little
Hi, thanks for previous help.
I have some questions about Lustre RPC and the sequence of events that
occur during large concurrent write() involving many processes and large
data size per process. I understand there is a mechanism of flow
control by credits, but I'm a little unclear on how it
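The general shape of credit-based flow control can be sketched generically: a sender may only issue a new request while it holds a credit, which caps the number of requests in flight. This is purely illustrative of the concept, not Lustre's actual implementation:

```python
# Generic illustration (NOT Lustre's actual credit mechanism) of
# credit-based flow control: a counting semaphore limits how many
# RPCs a client may have outstanding at once.

import threading

class CreditPool:
    def __init__(self, credits):
        self._sem = threading.BoundedSemaphore(credits)

    def send_rpc(self, do_send):
        self._sem.acquire()      # block until a credit is available
        try:
            return do_send()     # issue the request while holding it
        finally:
            self._sem.release()  # reply handled: credit returned

pool = CreditPool(8)             # e.g. a cap of 8 requests in flight
print(pool.send_rpc(lambda: "sent"))
```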