On Thu, 10 Apr 2008, Kadlecsik Jozsef wrote:
But this is a good clue to what might bite us most! Our GFS cluster is an
almost mail-only cluster for users with Maildir. When the users experience
temporary hangups for several seconds (even when writing a new mail), it
might be due to the
christopher barry wrote:
On Tue, 2008-04-08 at 09:37 -0500, Wendy Cheng wrote:
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm
On Fri, 2008-04-11 at 10:28 -0500, Wendy Cheng wrote:
christopher barry wrote:
On Tue, 2008-04-08 at 09:37 -0500, Wendy Cheng wrote:
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate
Kadlecsik Jozsef wrote:
On Thu, 10 Apr 2008, Kadlecsik Jozsef wrote:
But this is a good clue to what might bite us most! Our GFS cluster is an
almost mail-only cluster for users with Maildir. When the users experience
temporary hangups for several seconds (even when writing a new mail), it
On Wed, 9 Apr 2008, Wendy Cheng wrote:
What led me to suspect clashing in the hash (or some other lock-creating
issue) was the simple test I made on our five node cluster: on one node I
ran
find /gfs -type f -exec cat {} > /dev/null \;
and on another one just started an editor,
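A rough way to see how many locks a scan like that leaves behind is to compare
the gfs lock counters before and after it (a sketch only, assuming the gfs_tool
userland that ships with GFS 6.1 and reusing the /gfs mount point from the test
above):

# lock counters before the scan
gfs_tool counters /gfs
# walk every file once, as in the test above
find /gfs -type f -exec cat {} > /dev/null \;
# counters again afterwards; the lock-related lines should have grown
gfs_tool counters /gfs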
On Tue, 2008-04-08 at 09:37 -0500, Wendy Cheng wrote:
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm noticing huge differences in compile times - or
Kadlecsik Jozsef wrote:
What is glock_inode? Does it, or something equivalent, exist in
cluster-2.01.00?
Sorry, typo. What I meant was inoded_secs (the gfs inode daemon wake-up
time). This is the daemon that reclaims deleted inodes. Don't set it too
small, though.
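(For anyone following along, tunables like this are read and set per mount
point with gfs_tool; the mount point and the value below are placeholders, not
recommendations:)

# list the current tunables for the mount, then raise the inode daemon interval
gfs_tool gettune /gfs
gfs_tool settune /gfs inoded_secs 30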
Isn't GFS_GL_HASH_SIZE too
On Wed, 9 Apr 2008, Wendy Cheng wrote:
Have been responding to this email off the top of my head, based on folks'
descriptions. Please be aware that these are just rough thoughts and the
responses may not apply in general. The above is mostly for the original
problem description, where:
1.
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm noticing huge differences in compile times - or any home file access
really - when doing stuff in the same home directory on the gfs on
On Mon, Apr 7, 2008 at 9:36 PM, christopher barry
[EMAIL PROTECTED] wrote:
Hi everyone,
I have a couple of questions about tuning the dlm and gfs that
hopefully someone can help me with.
There is a lot to say about this configuration. It is not a simple tuning
issue.
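To give one concrete place to look on the dlm side, the RHEL 4 era dlm sizes
its hash tables from entries under /proc/cluster/config/dlm/ (a sketch only;
the exact entry names and defaults should be checked on your own nodes, and
the values below are purely illustrative):

# larger resource / lock / directory hash tables can help with very large
# lock counts; they must be written before the lockspace is created,
# i.e. before the gfs filesystems are mounted
echo 2048 > /proc/cluster/config/dlm/rsbtbl_size
echo 2048 > /proc/cluster/config/dlm/lkbtbl_size
echo 2048 > /proc/cluster/config/dlm/dirtbl_size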
my setup:
6
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm noticing huge differences in compile times - or any home file access
really - when doing stuff in the same home
On Tue, 8 Apr 2008, Wendy Cheng wrote:
The more memory you have, the more gfs locks (and their associated gfs file
structures) will be cached in the node. This, in turn, will make both dlm and
gfs lock queries take longer. The glock_purge (on RHEL 4.6, not on RHEL 4.5)
should be able to help
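For anyone on 4.6 who wants to try it, glock_purge takes a percentage of the
unused cached glocks to trim on each scan and is set like the other tunables
(the 50 below is only an illustrative starting point):

# ask gfs to purge roughly half of the unused cached glocks
gfs_tool settune /gfs glock_purge 50
# setting it back to 0 disables the purging again
gfs_tool settune /gfs glock_purge 0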
Hi everyone,
I have a couple of questions about tuning the dlm and gfs that
hopefully someone can help me with.
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
The cluster is a developer build