Andreas Dilger wrote:
On Aug 19, 2009 13:55 +0200, Arne Wiebalck wrote:
Brian J. Murrell wrote:
On Wed, 2009-08-19 at 11:45 +0200, Arne Wiebalck wrote:
Unless you are making lots and lots of small OSTs -- which is not
usually beneficial anyway -- typically, you will run into resource
Andreas,
Is there a reason to do this instead of, say, two 5TB OSTs using MD RAID-0?
Or for that matter one 8+2 8TB OST with MD RAID-6? That will give
better space utilization if you have large files, otherwise you will
What is a 'large' file for you?
TIA,
Arne
Some additional details,
I mounted the MDS as ldiskfs and deleted the files in OBJECTS/* and
CATALOGS, then remounted it as Lustre: same issue.
I also did a writeconf and restarted all the servers. I saw messages on
the MGS that new config logs were being created, but still the same error
on the MDS.
Hi,
I'm trying to check out 1.8.1 on one of our spare servers, which is
running RHEL 5.2 (kickstarted from scratch, so it's fresh). Part of the
kickstart includes loading all the necessary Lustre RPMs and installing
them. The kernel-lustre RPM installation failed during kickstart. After
RHEL 5.2
On Aug 20, 2009 09:27 +0200, Arne Wiebalck wrote:
Is there a reason to do this instead of, say, two 5TB OSTs using MD RAID-0?
Or for that matter one 8+2 8TB OST with MD RAID-6? That will give
better space utilization if you have large files, otherwise you will
What is a 'large' file for you?
On Aug 20, 2009 09:09 -0400, Brock Palen wrote:
Some additional details,
I mounted the MDS as ldiskfs and deleted the files in OBJECTS/* and
CATALOGS, then remounted it as Lustre: same issue.
I also did a writeconf and restarted all the servers. I saw messages on
the MGS that new config logs were being created
Hello,
as a project for college I'm doing a behavioral comparison between Lustre
and CXFS when dealing with simple strided files using POSIX semantics. In
one of the tests, each participating process reads 16 chunks of data,
32 MB each, from a common strided file using the following
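(The actual test code is not quoted in this excerpt. As a rough illustration only, not Alvaro's program, a minimal C sketch of the pattern described, each process doing 16 plain POSIX reads of 32 MB from a common strided file, might look like the one below. It assumes a round-robin layout in which chunk c of process "rank" starts at byte offset (c * nprocs + rank) * 32 MB; the names rank and nprocs and the offset formula are illustrative assumptions.)

/* Minimal sketch of the access pattern described above, not the original
 * test code. Assumption: round-robin layout, so chunk c of process "rank"
 * starts at byte offset (c * nprocs + rank) * CHUNK_SIZE. In the real test
 * the rank and process count would typically come from MPI; here they are
 * taken from the command line so the sketch runs standalone. */
#define _XOPEN_SOURCE 700
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NCHUNKS    16
#define CHUNK_SIZE (32L * 1024 * 1024)            /* 32 MB per chunk */

static void read_strided(const char *path, int rank, int nprocs)
{
        char *buf = malloc(CHUNK_SIZE);
        int fd = open(path, O_RDONLY);

        if (buf == NULL || fd < 0) {
                perror("malloc/open");
                exit(1);
        }
        for (int c = 0; c < NCHUNKS; c++) {
                /* offset of this process's c-th chunk in the shared file */
                off_t off = ((off_t)c * nprocs + rank) * CHUNK_SIZE;
                ssize_t n = pread(fd, buf, CHUNK_SIZE, off);

                if (n != CHUNK_SIZE)
                        fprintf(stderr, "rank %d: short read at offset %lld\n",
                                rank, (long long)off);
        }
        close(fd);
        free(buf);
}

int main(int argc, char **argv)
{
        if (argc != 4) {
                fprintf(stderr, "usage: %s <rank> <nprocs> <file>\n", argv[0]);
                return 1;
        }
        read_strided(argv[3], atoi(argv[1]), atoi(argv[2]));
        return 0;
}

With two processes, a layout like this leaves a gap of exactly one 32 MB chunk between the regions each process reads, which is one of the patterns where client readahead behaviour can become visible.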
On Thu, 2009-08-20 at 23:52 +0200, Alvaro Aguilera wrote:
I'm facing the following problem: when this code is run in parallel,
the read operations on certain processes take longer and longer to
complete. I attached a graphical trace of this when using only
2 processes.
Just a
On Fri, 2009-08-14 at 08:58 -0400, CHU, STEPHEN H, ATTSI wrote:
Hi,
Hi,
I'm trying to check out 1.8.1 on one of our spare servers, which is
running RHEL 5.2 (kickstarted from scratch, so it's fresh). Part of the
kickstart includes loading all the necessary Lustre RPMs and
installing them. The
Hello!
Any chance you can use a more modern release like 1.8.1? A number of
bugs have been fixed, including some readahead-logic fixes for issues
that could impede read performance.
Bye,
Oleg
On Aug 20, 2009, at 10:38 PM, Alvaro Aguilera wrote:
Thanks for pointing that out. I was using the
Hello,
You may want to look at bug 17197 and try applying this patch,
https://bugzilla.lustre.org/attachment.cgi?id=25062, to your Lustre source.
Or you can wait for 1.8.2.
Thanks
Wangdi
Alvaro Aguilera wrote:
Hello,
as a project for college I'm doing a behavioral comparison between
Lustre and CXFS when