On 09/20/2016 01:39 PM, Lewis Hyatt wrote:
Thanks very much for the suggestions. dmesg output is here:
http://pastebin.com/jCafCZiZ
We don't see any disk-related stuff there, and also our GUI shows all
the RAID arrays as being fine.
Hmmm I rarely trust GUIs for RAID. Do you have
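For md software RAID, a couple of command-line checks are more reliable
than any GUI; device names below are only examples:

  cat /proc/mdstat                      # overall array state, any rebuilds in progress
  mdadm --detail /dev/md0               # per-array detail: failed, spare, degraded members
  smartctl -a /dev/sdb | grep -i error  # per-disk SMART health (needs smartmontools)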
On 09/20/2016 12:21 PM, Lewis Hyatt wrote:
We do not know if it's related, but this same OSS is in a very bad
state, with very high load average (200), very high I/O wait time, and
taking many seconds to respond to each read request, making the array
more or less unusable. That's the problem we
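A quick way to see whether that load is really I/O-bound, and which
devices are implicated (standard sysstat/procps tools; intervals and
counts are arbitrary):

  iostat -x 2 5                          # per-device utilization, await, queue sizes
  vmstat 2 5                             # the 'wa' column shows CPU time lost to I/O wait
  ps -eo state,pid,cmd | awk '$1=="D"'   # threads stuck in uninterruptible I/O sleep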
On 03/30/2012 02:30 AM, Jeff Johnson wrote:
Greetings,
Does anyone know the most recent kernel (RHEL/CentOS) that can be
successfully patched and compiled against the current Lustre 1.8 git
source tree? I attempted 2.6.18-308.1.1 but there are several patches
that fail. Quilt would not make
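For reference, the usual quilt sequence against the 1.8 series files is
roughly the following (paths are from memory and may not match the
current git layout exactly):

  cd /path/to/kernel/source/tree        # the unpacked kernel source from the src.rpm
  ln -s /path/to/lustre/lustre/kernel_patches/series/2.6-rhel5.series series
  ln -s /path/to/lustre/lustre/kernel_patches/patches patches
  quilt push -av                        # stops at the first patch that fails to apply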
On 03/20/2012 12:18 PM, Samuel Aparicio wrote:
Hello, thanks for this - it's a 16-disk RAID10 (with one spare), so 24TB.
I previously tried 1.0 and 1.2 metadata, to no effect.
We are using a 256k chunk size; I haven't tried reverting to 64k but will
do so.
this looks to me like something
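If it helps, the rebuild to a 64k chunk would be something along these
lines (device list, array name, and metadata version are illustrative
only):

  mdadm --create /dev/md0 --level=10 --metadata=1.2 --chunk=64 \
        --raid-devices=16 --spare-devices=1 /dev/sd[b-r]
  mdadm --detail /dev/md0 | grep -i chunk     # confirm the chunk size in effect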
On 03/14/2012 08:31 PM, Andreas Dilger wrote:
Whamcloud and EMC are jointly investigating how to be able to
contribute the Lustre client code into the upstream Linux kernel.
Would be good to have an active discussion of this at LUG as well.
As a prerequisite to this, EMC is working to clean
On 08/17/2011 10:43 PM, John Hanks wrote:
Hi,
I've been trying to get swap on Lustre to work without much success,
using blockdev_attach and the resulting lloop0 device, and using
losetup and the resulting loop device. This thread
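Concretely, the losetup variant was along these lines (mount point and
size are illustrative):

  dd if=/dev/zero of=/mnt/lustre/swapfile bs=1M count=4096   # 4GB backing file on Lustre
  losetup /dev/loop0 /mnt/lustre/swapfile
  mkswap /dev/loop0
  swapon /dev/loop0
  swapon -s                                                  # verify the swap device is active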
On 08/17/2011 11:42 PM, David Dillow wrote:
On Wed, 2011-08-17 at 22:57 -0400, Joe Landman wrote:
The Lustre client (like most NFS clients and even network block devices)
does memory allocation of buffers ... which is anathema to migrating
pages out to disk. You can easily wind up in a death spiral
On 08/10/2011 01:40 AM, Jeff Johnson wrote:
Greetings,
The below console output is from a 1.8.4 OST (RHEL5.5,
2.6.18-194.3.1.el5_lustre.1.8.4, x86_64). Not saying it is a Lustre bug
for sure. Just wondering if anyone has seen this or something very
similar. Updating to 1.8.6 WC variant isn't
On 06/13/2011 04:07 PM, Michael Di Domenico wrote:
Joe,
I'm trying to compile 1.8.5 i686/x86_64 on RHEL5.6 with the 2.6.18-238
kernel. Did you happen to hit this error with your compile? If so, how
did you fix it? Has anyone else seen this?
CC [M] fs/jbd2/journal.o
fs/jbd2/journal.c:94:
On 06/01/2011 04:13 AM, Götz Waschk wrote:
On Tue, May 31, 2011 at 10:56 PM, Joe Landman
land...@scalableinformatics.com wrote:
Are there any gotchas? Or is it worth staying with the older CentOS
5.4/5.5-based kernels from the download site?
Hi Joseph,
are you talking about the client
Are there any gotchas? Or is it worth staying with the older CentOS
5.4/5.5-based kernels from the download site?
Thanks!
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web : http://scalableinformatics.com
On 11/06/2010 10:24 AM, Bob Ball wrote:
I am emptying a set of OSTs so that I can reformat the underlying RAID-6
more efficiently. Two questions:
1. Is there a quick way to tell if the OST is really empty? lfs_find
takes many hours to run.
Yes ...
df -H /path/to/OST/mount_point
if
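A couple of cross-checks, assuming the OST can be mounted as ldiskfs and
a client mount is handy (the OST UUID and paths below are illustrative):

  find /path/to/OST/mount_point/O -type f | wc -l           # object count on the ldiskfs target
  lfs find --obd lustre-OST0003_UUID /lustre/mount | head   # files still holding objects on that OST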
On 10/21/2010 09:37 AM, Brock Palen wrote:
We recently added a new oss, it has 1 1Gb interface and 1 10Gb
interface,
The 10Gb interface is eth4 10.164.0.166 The 1Gb interface is eth0
10.164.0.10
They look like they are on the same subnet if you are using /24 ...
In modprobe.conf I have:
Joe Landman wrote:
On 10/21/2010 09:37 AM, Brock Palen wrote:
We recently added a new OSS; it has one 1Gb interface and one 10Gb
interface.
The 10Gb interface is eth4 (10.164.0.166); the 1Gb interface is eth0
(10.164.0.10).
They look like they are on the same subnet if you are using /24 ...
You
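If the goal is for Lustre to use only the 10GbE port, the usual approach
is to pin LNET to that interface in modprobe.conf; a minimal sketch,
assuming the LNET network is plain tcp0 and eth4 is the interface to use:

  options lnet networks=tcp0(eth4)

  # after reloading the lnet/lustre modules, confirm the NID with:
  lctl list_nids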
Michael Robbert wrote:
We have been struggling with our Lustre performance for some time now,
especially with large directories. I recently did some informal
benchmarking (on a live system, so I know the results are not
scientifically valid) and noticed a huge drop in performance of
reads (stat
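For anyone wanting to reproduce this kind of informal test, the
stat-heavy read pattern can be approximated with something like the
following (file count and directory are arbitrary):

  mkdir /lustre/bigdir
  for i in $(seq 1 100000); do touch /lustre/bigdir/f$i; done
  time ls -l /lustre/bigdir > /dev/null   # cold pass: every entry stats against the MDS/OSTs
  time ls -l /lustre/bigdir > /dev/null   # warm pass: mostly served from client caches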
Joan J. Piles wrote:
And then 2 MDS nodes like these:
- 2 x Intel 5520 (quad-core) processors (or equivalent).
- 36GB RAM.
- 2 x 64GB SSD disks.
- 2 x 10Gb Ethernet ports.
Hmmm
After having read the documentation, it seems to be a sensible
configuration, especially regarding the OSS.
Hate to reply to myself ... not an advertisement
On 07/23/2010 10:50 PM, Joe Landman wrote:
On 07/23/2010 10:25 PM, henry...@dell.com wrote:
[...]
It is possible to achieve 20GB/s, and quite a bit more, using Lustre.
As to whether or not that 20GB/s is meaningful to their code(s), that's
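As a rough back-of-the-envelope only (illustrative numbers, not a quote
for any particular hardware): if each OSS sustains on the order of
2 GB/s to its backing storage and over its network link, then 20 GB/s
needs at least 20 / 2 = 10 OSSes, and in practice you would provision
several more to leave headroom for degraded RAID, uneven striping, and
client hot spots.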
On 07/23/2010 10:25 PM, henry...@dell.com wrote:
Hello,
One of my customers wants to set up an HPC cluster with thousands of
compute nodes. The parallel file system should provide 20GB/s of
throughput. I am not sure whether Lustre can achieve this. How many I/O
nodes are needed to reach this target?
I hate to say it
Hi folks
I followed the directions
(http://wiki.lustre.org/index.php/Building_and_Installing_Lustre_from_Source_Code)
for building Lustre against the updated SLES11 2.6.27.45-0.1-default
kernel, and ran into this error during the
make rpms
step.
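For anyone hitting the same thing, the sequence in question is roughly
the following (paths are illustrative; the --with-linux-obj value is the
one that eventually worked later in this thread):

  ./configure --with-linux=/usr/src/linux-2.6.27.45-0.1 \
              --with-linux-obj=/lib/modules/2.6.27.45-0.1-default/build
  make rpms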
Brian J. Murrell wrote:
On Wed, 2010-03-24 at 13:31 -0400, Joe Landman wrote:
[...]
which, as you can see, is older than the kernel you are trying to build
for. Maybe you know that. I thought I would just point it out for
completeness in any case.
I was and am aware
or .45. I didn't see instructions on that.
Any pointers welcomed.
regards,
--
Joe Landman
land...@scalableinformatics.com
\
--with-linux-obj=/lib/modules/2.6.27.45-0.1-default/build
This seems to have worked on an unpatched kernel. Thanks. I presume we
need the OFED 1.4.2 stack installed beforehand for the o2ib kernel build?
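(If so, I assume configure would then be pointed at the OFED kernel tree
with something like the following; the ofa_kernel path is a guess at the
standard OFED install location:)

  ./configure --with-linux-obj=/lib/modules/2.6.27.45-0.1-default/build \
              --with-o2ib=/usr/src/ofa_kernel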
--
Joe Landman
land...@scalableinformatics.com