On Apr 02, 2009 15:17 -0700, Jordan Mendler wrote:
> Hi all,
>
> I deployed Lustre on some legacy hardware, and as a result my (4) OSSes
> each have 32GB of RAM. Our workflow is such that we are frequently
> rereading the same 15GB indexes over and over again from Lustre (they
> are striped across all OSSes) by all nodes on our cluster. As such, is
> there any [...]
Andreas,

In theory, this should work well. Each index is about 15GB, and usually
the same index is used sequentially at a given time. These are genome
alignments, so we will, for instance, align all of a human experiment
before we switch indexes to align all of a rat genome. As such, 32GB
should be [...]
Current Lustre does not cache on the OSTs at all; all I/O is direct.
Future Lustre releases will provide an OST cache. For now, you can
increase the amount of data cached on clients, which might help a
little. [...]
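For reference, on a 1.6/1.8-era client you would query those client-side
limits with lctl. This is only a sketch: the per-OSC write-cache name
osc.*.max_dirty_mb is my assumption based on that era's tunables, so
check the exact names on your release:

    # run on a client: per-OSC dirty (write) cache cap, in MB
    lctl get_param osc.*.max_dirty_mb
    # per-client cap on cached file data, in MB
    lctl get_param llite.*.max_cached_mb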
On Apr 3, 2009, at 2:52 PM, Lundgren, Andrew wrote:
> The parameter is called dirty. Is that write cache, or is it read-write?
Yes, it is for dirty-cache limiting on a per-OSC basis. There is also
/proc/fs/lustre/llite/*/max_cached_mb, which regulates how much cached
data you can have per client (the default is 3/4 of RAM).
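Since the index is about 15GB, it fits in a client's cache whenever
max_cached_mb is above roughly 15360. A sketch of checking and, if
needed, raising the cap; the 16384 value is only an illustration, and on
releases without lctl set_param you would write the /proc file directly:

    # current per-client cap on cached file data, in MB
    cat /proc/fs/lustre/llite/*/max_cached_mb
    # raise it so a ~15GB index can stay resident (illustrative value)
    lctl set_param llite.*.max_cached_mb=16384
    # equivalent via /proc (loop in case of multiple mounts)
    for f in /proc/fs/lustre/llite/*/max_cached_mb; do echo 16384 > $f; done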