Re: [Lustre-discuss] OSS Cache Size for read optimization

2009-04-06 Thread Andreas Dilger
On Apr 02, 2009 15:17 -0700, Jordan Mendler wrote: I deployed Lustre on some legacy hardware, and as a result my (4) OSSes each have 32 GB of RAM. Our workflow is such that we are frequently rereading the same 15 GB indexes over and over again from Lustre (they are striped across all OSSes) by

Re: [Lustre-discuss] OSS Cache Size for read optimization (Andreas Dilger)

2009-04-06 Thread Jordan Mendler
Andreas, In theory, this should work well. Each index is about 15 GB, and usually the same index is used sequentially at a given time. These are genome alignments, so we will, for instance, align all of a human experiment before we switch indexes to align all of a rat genome. As such, 32 GB should be

Re: [Lustre-discuss] OSS Cache Size for read optimization

2009-04-03 Thread Cliff White
Jordan Mendler wrote: Hi all, I deployed Lustre on some legacy hardware, and as a result my (4) OSSes each have 32 GB of RAM. Our workflow is such that we are frequently rereading the same 15 GB indexes over and over again from Lustre (they are striped across all OSSes) by all nodes on our

Re: [Lustre-discuss] OSS Cache Size for read optimization

2009-04-03 Thread Lundgren, Andrew
The parameter is called dirty; is that write cache, or is it read/write? Current Lustre does not cache on the OSTs at all; all I/O is direct. Future Lustre releases will provide an OST cache. For now, you can increase the amount of data cached on the clients, which might help a little. Client
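A minimal sketch of what "increase the amount of data cached on clients" can look like on a 1.6/1.8-era client, assuming the stock tunables and that lctl set_param is available (the 24576 value is purely illustrative):

    # show the current client-side read-cache cap, in MB
    lctl get_param llite.*.max_cached_mb
    # raise the cap, e.g. to 24 GB on a 32 GB client node
    lctl set_param llite.*.max_cached_mb=24576

The setting is per client and does not persist across a remount unless reapplied.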

Re: [Lustre-discuss] OSS Cache Size for read optimization

2009-04-03 Thread Oleg Drokin
Yes, it is for dirty-cache limiting on a per-OSC basis. There is also /proc/fs/lustre/llite/*/max_cached_mb, which regulates how much cached data you can have per client (the default is 3/4 of RAM). On Apr 3, 2009, at 2:52 PM, Lundgren, Andrew wrote: The parameter is called dirty, is that write
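To illustrate the distinction drawn above, a hedged sketch, assuming the standard /proc layout of that era, of the write-side versus read-side knobs on a client:

    # per-OSC cap on dirty (not-yet-written) data, in MB -- write cache only
    cat /proc/fs/lustre/osc/*/max_dirty_mb
    # client-wide cap on cached file data, in MB -- this governs read caching
    cat /proc/fs/lustre/llite/*/max_cached_mb

For a reread-heavy workload like the one described, max_cached_mb on the clients is the relevant knob, not max_dirty_mb.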

[Lustre-discuss] OSS Cache Size for read optimization

2009-04-02 Thread Jordan Mendler
Hi all, I deployed Lustre on some legacy hardware, and as a result my (4) OSSes each have 32 GB of RAM. Our workflow is such that we are frequently rereading the same 15 GB indexes over and over again from Lustre (they are striped across all OSSes) by all nodes on our cluster. As such, is there any