On Tue, 2009-09-01 at 17:46 -0600, Andreas Dilger wrote:
Note that in 1.6 changing the oss thread count is not dynamic, it
needs a server restart.
Yeah. :-(
In 1.8.1 (IIRC) it is possible to increase
the thread count at runtime, though it can't yet be reduced.
Oh, not so bad as 1.6 then.
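For the archives, a sketch of how the OSS thread count is usually set. The module option and the lctl/proc paths below are the conventional ones for 1.6/1.8, but verify the exact names against your release (e.g. with `lctl list_param ost.*` or by browsing /proc/fs/lustre):

```shell
# Persistent setting, read at module load time. On 1.6.x a server
# restart is required for this to take effect. On each OSS:
cat >> /etc/modprobe.conf <<'EOF'
options ost oss_num_threads=64
EOF

# On 1.8.1+ the running count can reportedly be raised (but not
# lowered) at runtime, e.g. for the ost_io service:
lctl set_param ost.OSS.ost_io.threads_max=64
```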
Hello
I would like to know whether the SNMP module is built into the patched
Linux kernel (2.6.18-128) RPM packages?
Thanks
___
Lustre-discuss mailing list
Has anyone seen these kinds of errors while running IOR or some other
benchmarks?
I'm running Lustre 1.8.1 on CentOS 5.3.
I have the following configuration:
4 JBODs (J4400) connected to 4 OSSs.
Each OSS has 3 OSTs (RAID5, 8 disks) connected using multipathd, mdadm on
/dev/dm* and
On Wed, 2009-09-09 at 14:31 -0300, Rafael David Tinoco wrote:
Has anyone seen these kinds of errors while running IOR or some other
benchmarks:
A note on e-mail formatting: this much vertical whitespace is not
really needed, and it makes reading a bit more difficult.
Also, personally, I don't
Just for the record, we've been running 1.8.1 for several weeks now
with no problems. Well, truthfully, "no problems" is an exaggeration,
but it is mostly working. We see lots of log messages we are not used
to, regarding client and server csum differences.
Anyway, your email concerned us so
I'm not really sure why writethrough_cache_enable is being disabled, but the
method we have used to disable read_cache_enable is
echo 0 > /proc/fs/lustre/obdfilter/<ost name>/read_cache_enable, without any issues.
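For reference, the same knobs expressed with lctl rather than a raw echo. This assumes 1.8.x; `read_cache_enable` and `writethrough_cache_enable` under `obdfilter` are the standard parameter names, but confirm them with `lctl list_param obdfilter.*` on your OSSs:

```shell
# Run on each OSS; the wildcard covers every OST that OSS serves.
lctl set_param obdfilter.*.read_cache_enable=0

# The writethrough cache is a separate parameter:
lctl set_param obdfilter.*.writethrough_cache_enable=0

# Verify the current values:
lctl get_param obdfilter.*.read_cache_enable \
               obdfilter.*.writethrough_cache_enable
```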
-Original Message-
From: lustre-discuss-boun...@lists.lustre.org
Does this need to be run on EACH OSS? Is there a central way to do it on the
MDS?
You recommend disabling the read and the write as the settings indicate or just
the read as the text indicates?
-Original Message-
A patch is under testing and will be included in 1.8.1.1.
Until 1.8.1.1
On Wed, 2009-09-09 at 13:23 -0600, Lundgren, Andrew wrote:
Does this need to be run on EACH OSS? Is there a central way to do it on the
MDS?
You recommend disabling the read and the write as the settings indicate or
just the read as the text indicates?
A clarification would be good
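On the "central way" question: these /proc tunables are per-OSS, so the echo (or `lctl set_param`) has to be run on each OSS. Lustre does have a mechanism for persistent, cluster-wide settings, `lctl conf_param` run on the MGS, but whether this particular cache knob can be set that way depends on the release, so treat the following as an unverified sketch (the fsname `lustre` and the OST index are placeholders):

```shell
# Hypothetical: run once on the MGS node; the parameter path is
# per-OST, so repeat for each OST (or script over the indices).
lctl conf_param lustre-OST0000.obdfilter.read_cache_enable=0
```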
Hello!
On Sep 9, 2009, at 1:31 PM, Rafael David Tinoco wrote:
One of my OSSs crashes (sometimes one, sometimes another) with the
following error:
That's not a crash.
That's a watchdog timeout, indicative of Lustre spending too much time
waiting on I/O.
As such you need to somehow decrease the
Hello!
On Sep 9, 2009, at 2:07 PM, Charles A. Taylor wrote:
Anyway, your email concerned us, so we issued the recommended commands
on our OSSs to disable the caching. That promptly crashed two of our
OSSs. We got the servers back up, and after fsck'ing (fsck.ext4) all
the OSTs and
I'm attaching the messages file (only the error part) so we don't have these
mail-formatting problems.
--
Can you provide a bit more of the log before the above so we can see what the
stack trace is in reference to? Also, try to
eliminate the white-space between lines. Are you getting any
Forget the file.. sorry
-Original Message-
From: lustre-discuss-boun...@lists.lustre.org
[mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Rafael David
Tinoco
Sent: Wednesday, September 09, 2009 7:30 PM
To: 'Brian J. Murrell'; lustre-discuss@lists.lustre.org
Subject: Re:
It seems that using 64 threads per OST solved the problem.
:D
But it's too early to celebrate; I'm still running all block-size and
stripe-width combinations.
Regards
Tinoco
-Original Message-
From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
Sent: Wednesday, September 09, 2009 7:26