[ ... ]
Lustre was originally designed to target HPC clusters,
i.e., systems in a single-LAN environment.
It is not so much the single LAN that matters as the streaming workloads and low latency.
On the other hand, the cloud we are building is physically
distributed across different cities in the province of Alberta.
[ ... ]
Thanks for your help, Brian. We've resolved the problem by upgrading
the firmware on the HCAs from 1.0.7 to 1.2.0; mounts have stabilized.
We also upgraded to OFED 1.4 (minus the kernel-ib patches).
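For the record, one quick way to confirm the installed firmware level
before and after flashing is ibv_devinfo from OFED, e.g.:

# print each HCA and its firmware revision (fw_ver field)
ibv_devinfo | grep -E 'hca_id|fw_ver'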
On Tue, Mar 31, 2009 at 4:29 PM, Brian J. Murrell brian.murr...@sun.com wrote:
We have been using e2scan for a few days, and we have noticed that the
date specification is not being processed correctly by e2scan.
date
Fri Apr 3 15:56:49 BST 2009
/usr/sbin/e2scan -C /ROOT -l -N 2009-03-29 19:44:00 /dev/dm-0
file_list
generating list of files with mtime newer than Sun
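One thing worth checking: passed unquoted, the shell splits the
timestamp into two arguments, so -N only sees 2009-03-29 (and
2009-03-29 was indeed a Sunday, which matches the "newer than Sun"
output above). Quoting the timestamp keeps it as a single argument:

# quote the timestamp so -N receives the date and time together
/usr/sbin/e2scan -C /ROOT -l -N "2009-03-29 19:44:00" /dev/dm-0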
Jordan Mendler wrote:
Hi all,
I deployed Lustre on some legacy hardware, and as a result my (4) OSSs
each have 32GB of RAM. Our workflow is such that the same 15GB indexes
are frequently reread over and over again from Lustre (they are striped
across all OSSs) by all nodes on our
The parameter is called dirty. Is that write cache, or is it read-write?
Current Lustre does not cache on OSTs at all. All IO is direct.
Future Lustre releases will provide an OST cache.
For now, you can increase the amount of data cached on clients, which
might help a little. Client
Yes, it is for dirty-cache limiting on a per-OSC basis.
There is also /proc/fs/lustre/llite/*/max_cached_mb, which regulates how
much cached data per client you can have (the default is 3/4 of RAM).
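For concreteness, both knobs live under /proc on each client; the
values below are illustrative only, not tuned recommendations:

# per-OSC dirty (write) cache limit, in MB
cat /proc/fs/lustre/osc/*/max_dirty_mb
# raise the per-client read-cache cap; 24000 MB is just an example
for f in /proc/fs/lustre/llite/*/max_cached_mb; do
    echo 24000 > "$f"
done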
On Apr 3, 2009, at 2:52 PM, Lundgren, Andrew wrote: