Hi
Steven: Thanks again.
If you have access to the Red Hat kbase system, then this is all
described in the docs on that site.
I do, as we have Red Hat support for other platforms, just not this one.
The docs I found that are worth a slow read are probably:
https://access.redhat.com/kb
Hi,
On Fri, 2010-12-17 at 20:06 +0100, Kevin Maguire wrote:
> Hi
>
> > You can get a glock dump via debugfs, which may show up contention; look
> > for type 2 glocks which have lots of lock requests queued but not
> > granted. The lock requests (holders) are tagged with the relevant
> > process.
Hi
You can get a glock dump via debugfs, which may show up contention; look
for type 2 glocks which have lots of lock requests queued but not
granted. The lock requests (holders) are tagged with the relevant
process.
Note I am currently using GFS, not GFS2. And before going further I ran
th
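A rough sketch of pulling that glock dump, assuming GFS2 on a RHEL 5-era
kernel ("mycluster:myfs" is a placeholder for your own cluster:fsname pair);
on GFS1, which the poster above is using, the rough equivalent is
"gfs_tool lockdump <mountpoint>":

  # mount debugfs if it is not already mounted
  mount -t debugfs none /sys/kernel/debug
  # dump all glocks; type 2 (inode) glocks appear as "n:2/<inode number>"
  cat /sys/kernel/debug/gfs2/mycluster:myfs/glocks > /tmp/glocks.txt
  # holders still waiting carry the W flag; the p: field is the PID
  grep 'H:.*f:.*W' /tmp/glocks.txt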
Hi,
On Fri, 2010-12-17 at 17:35 +0100, Kevin Maguire wrote:
> Hi
>
> Bob/Steven/Ben - many thanks for responding.
>
> > There is some helpful stuff here on the tuning side:
> >
> > http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_tuning
>
> Indeed, we have implemented many of these suggestions,
Hi
Bob/Steven/Ben - many thanks for responding.
There is some helpful stuff here on the tuning side:
http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_tuning
Indeed, we have implemented many of these suggestions: "fast statfs" is on,
-r 2048 was used, quotas off, the cluster interconnect is a
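As a sketch of how those suggestions are typically applied on GFS1 (mount
point and device are placeholders; note that -r is a mkfs-time option, so it
only affects newly created filesystems):

  gfs_tool settune /gfs statfs_fast 1      # "fast statfs"
  gfs_tool settune /gfs quota_enforce 0    # quotas off
  gfs_tool settune /gfs quota_account 0
  gfs_mkfs -p lock_dlm -t mycluster:myfs -j 20 -r 2048 /dev/vg/lv   # 2048MB resource groups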
There is some helpful stuff here on the tuning side:
http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_tuning
-b
- "Bob Peterson" wrote:
> - "Kevin Maguire" wrote:
> | Hi
> |
> | We are running a 20 node cluster, using Scientific Linux 5.3, with a GFS
> | shared filesystem ho
- "Kevin Maguire" wrote:
| Hi
|
| We are running a 20 node cluster, using Scientific Linux 5.3, with a GFS
| shared filesystem hosted on our SAN. Cluster nodes are dual core units
| with 4 GB of RAM, and a standard Qlogic FC HBA.
|
| Most of the 20 nodes form a batch-processing cluster
Hi,
On Thu, 2010-12-16 at 00:47 +0100, Kevin Maguire wrote:
> Hi
>
> We are running a 20 node cluster, using Scientific Linux 5.3, with a GFS
> shared filesystem hosted on our SAN. Cluster nodes are dual core units
> with 4 GB of RAM, and a standard Qlogic FC HBA.
>
> Most of the 20 nodes form
Hi
We are running a 20 node cluster, using Scientific Linux 5.3, with a GFS
shared filesystem hosted on our SAN. Cluster nodes are dual core units
with 4 GB of RAM, and a standard Qlogic FC HBA.
Most of the 20 nodes form a batch-processing cluster, and our users are
happy enough with the per
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Alan A
Sent: Tuesday, March 16, 2010 5:16 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS Tuning - it's just slow, too slow for
production
> statfs_slow = 0
> I do not see statfs_fast
>> Sent: Tuesday, March 16, 2010 4:01 PM
>> To: linux clustering
>> Subject: Re: [Linux-cluster] GFS Tuning - it's just slow, too slow for
>> production
>>
>> > /dev/mapper/vg_acct10-lv_acct10 /acct10 gfs2 -o notime 1 2
>>
>> Should be "noatime". We use this in /etc/fstab on GFS1 without a problem.
> Subject: Re: [Linux-cluster] GFS Tuning - it's just slow, too slow for
> production
>
> > /dev/mapper/vg_acct10-lv_acct10 /acct10 gfs2 -o notime 1 2
>
> Should be "noatime". We use this in /etc/fstab on GFS1 without a problem.
>
> > It complained about /etc/fstab
That should be "-o relatime" and "-o noatime", respectively =)
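For reference, a corrected /etc/fstab line: the fourth field is a plain
comma-separated option list, so the literal "-o" above does not belong there.
Many sites also use "0 0" for the dump/fsck fields on cluster filesystems so
a booting node never tries to fsck a volume mounted elsewhere:

  /dev/mapper/vg_acct10-lv_acct10  /acct10  gfs2  noatime  0 0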
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Alan A
Sent: Tuesday, 16 March 2010 21:01
To: linux clustering
Subject: Re: [Linux-cluster] GFS Tuning - it's just slow, too slow for production
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Alan A
Sent: Tuesday, March 16, 2010 4:01 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS Tuning - it's just slow, too slow for production
> /dev/mapper/vg_acct10-lv_acct10 /acct10
I rebuilt the share as GFS2, and mounted it as GFS2 instead of GFS.
In /etc/fstab I am mounting with:
/dev/mapper/vg_acct10-lv_acct10 /acct10 gfs2 defaults 1 2
I tried mounting:
/dev/mapper/vg_acct10-lv_acct10 /acct10 gfs2 -o reltime 1 2
and
/dev/mapper/vg_acct10-lv_acct10 /acct10 gfs2 -o notime 1 2
Are you concurrently accessing the same directories from multiple nodes? If so,
re-arrange the access pattern so each node is accessing independent subtrees.
Otherwise you'll have massive lock churn and very poor latencies.
Gordan
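A minimal sketch of that rearrangement, assuming each node can be given its
own subtree (paths are placeholders):

  # write into a per-node directory so directory/inode glocks are not
  # bounced between nodes on every operation
  mkdir -p /gfs/data/$(hostname -s)
  OUTDIR=/gfs/data/$(hostname -s)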
"Alan A" wrote:
> We are trying to deploy GFS in production, and ar
Hi,
On Thu, 2010-03-04 at 09:13 -0600, Doug Tucker wrote:
> Steven,
>
> We discovered the same issue the day we went into production with ours.
> The tuning parameter that made it production-ready for us was:
>
> /sbin/gfs_tool settune /mnt/users statfs_fast 1
>
> Why statfs_fast is not set to
Hi,
On Thu, 2010-03-04 at 09:17 -0600, Alan A wrote:
> The application is a single-threaded application that handles cgi-bin
> calls from Apache, opens a file for writing, and writes data. We can have
> up to 200 concurrent sessions on a single application instance hitting
> the GFS mount. We noticed a major
Hi,
On Thu, 2010-03-04 at 09:12 -0600, Alan A wrote:
> Hello all - GFS2 is what we have deployed. It is a Fibre Channel
> connection/HBA to an HP XP SAN.
>
> What workload are you tuning for? The chances are that you'll do a lot
> better by adjusting the way in which the application(s) use the
> file
The application is a single-threaded application that handles cgi-bin calls
from Apache, opens a file for writing, and writes data. We can have up to 200
concurrent sessions on a single application instance hitting the GFS mount. We
noticed a major slowdown once we pass 30 concurrent users.
We can run 10 in
Steven,
We discovered the same issue the day we went into production with ours.
The tuning parameter that made it production-ready for us was:
/sbin/gfs_tool settune /mnt/users statfs_fast 1
Why statfs_fast is not set to on by default is beyond my comprehension;
I don't think anyone could run pr
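Worth noting alongside that command: gfs_tool settune values do not persist
across a remount, so sites relying on statfs_fast generally reapply it at
boot, e.g. (assuming the mount point from the post above):

  # /etc/rc.local, or an init script ordered after the GFS mount:
  /sbin/gfs_tool settune /mnt/users statfs_fast 1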
Hello all - GFS2 is what we have deployed. It is a Fibre Channel
connection/HBA to an HP XP SAN.
What workload are you tuning for? The chances are that you'll do a lot
better by adjusting the way in which the application(s) use the
filesystem rather than tweaking any specific tuning parameters. What
m
On Thu, Mar 04, 2010 at 08:33:28AM -0600, Alan A wrote:
> We are trying to deploy GFS in production, and are experiencing major
> performance issues. What parameters in GFS settune can be changed to
> increase I/O and better tune performance? The application we run utilizes
> a lot of I/O; please advise.
Alan,
What are you using for backend storage? I did some minor tuning for my
EVA8100 and went from 200MB/s to 550MB/s. Also, by default, plocks are set
to some ridiculously small number, 100 if I recall. You can raise or
eliminate that limit fairly easily. That seems to help as well.
If people know
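The plock limit referred to here is gfs_controld's plock_rate_limit (default
100 plocks/second on RHEL 5); a sketch of eliminating it in
/etc/cluster/cluster.conf (cluster name and config_version are placeholders,
and the rest of the file is omitted):

  <cluster name="mycluster" config_version="3">
    <!-- 0 disables the plock rate limit entirely -->
    <gfs_controld plock_rate_limit="0"/>
    ...
  </cluster>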
Hi,
On Thu, 2010-03-04 at 08:33 -0600, Alan A wrote:
> We are trying to deploy GFS in production, and are experiencing major
> performance issues. What parameters in GFS settune can be changed to
> increase I/O and better tune performance? The application we run utilizes
> a lot of I/O; please advise.
We are trying to deploy GFS in production, and are experiencing major
performance issues. What parameters in GFS settune can be changed to
increase I/O and better tune performance? The application we run utilizes
a lot of I/O; please advise.
We experience OK performance when starting, but as things ra
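A quick way to answer the "what parameters" question on a live GFS1 mount is
gettune, which lists every tunable and its current value (the mount point is
a placeholder):

  gfs_tool gettune /mnt/gfs                  # list all tunables
  gfs_tool settune /mnt/gfs statfs_fast 1    # then adjust individually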
On Thu, Jun 19, 2008 at 10:42 AM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
> Wendy Cheng wrote:
>>
>> Terry wrote:
>>>
>>> On Tue, Jun 17, 2008 at 5:22 PM, Terry <[EMAIL PROTECTED]> wrote:
>>>
On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng <[EMAIL PROTECTED]>
wrote:
>
> Hi,
Wendy Cheng wrote:
Terry wrote:
On Tue, Jun 17, 2008 at 5:22 PM, Terry <[EMAIL PROTECTED]> wrote:
On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng
<[EMAIL PROTECTED]> wrote:
Hi, Terry,
I am still seeing some high load averages. Here is an example of a
gfs configuration. I left statfs_fast
On Thu, Jun 19, 2008 at 9:49 AM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
> Terry wrote:
>>
>> On Tue, Jun 17, 2008 at 5:22 PM, Terry <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng <[EMAIL PROTECTED]>
>>> wrote:
>>>
Hi, Terry,
>
> I am still
Terry wrote:
On Tue, Jun 17, 2008 at 5:22 PM, Terry <[EMAIL PROTECTED]> wrote:
On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
Hi, Terry,
I am still seeing some high load averages. Here is an example of a
gfs configuration. I left statfs_fast off as it
On Tue, Jun 17, 2008 at 5:22 PM, Terry <[EMAIL PROTECTED]> wrote:
> On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
>> Hi, Terry,
>>>
>>> I am still seeing some high load averages. Here is an example of a
>>> gfs configuration. I left statfs_fast off as it would not apply
On Tue, Jun 17, 2008 at 3:09 PM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
> Hi, Terry,
>>
>> I am still seeing some high load averages. Here is an example of a
>> gfs configuration. I left statfs_fast off as it would not apply to
>> one of my volumes for an unknown reason. Not sure that would have
Hi, Terry,
I am still seeing some high load averages. Here is an example of a
gfs configuration. I left statfs_fast off as it would not apply to
one of my volumes for an unknown reason. Not sure that would have
helped anyway. I do, however, feel that reducing scand_secs helped a
little:
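The configuration the poster attached was truncated in the archive; purely as
an illustration of the scand_secs change described (the value shown is a
placeholder, not the poster's):

  gfs_tool gettune /mnt/gfs | grep scand_secs   # gfs_scand wakeup interval, default 5s
  gfs_tool settune /mnt/gfs scand_secs 2        # lower it; the poster found reducing it helped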
On Mon, Jun 16, 2008 at 2:48 PM, Wendy Cheng <[EMAIL PROTECTED]> wrote:
> Ross Vandegrift wrote:
>>
>> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>>
>>>
>>> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
>>> averages on the host that is serving these volumes out via NFS.
Ross Vandegrift wrote:
On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
averages on the host that is serving these volumes out via NFS. I
notice that gfs_scand, dlm_recv, and dlm_scand are running with high
CPU%. I truly b
On Mon, Jun 16, 2008 at 2:16 PM, Ross Vandegrift <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
>> averages on the host that is serving these volumes out via NFS. I
>> notice that gfs_scand, dlm_
On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
> averages on the host that is serving these volumes out via NFS. I
> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> CPU%. I truly believe the box is I/O
Hello,
I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
averages on the host that is serving these volumes out via NFS. I
notice that gfs_scand, dlm_recv, and dlm_scand are running with high
CPU%. I truly believe the box is I/O bound due to high awaits but
trying to dig into root c
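To confirm the box really is I/O bound, the usual check is iostat from the
sysstat package; high await with modest throughput points at the storage
rather than at GFS itself:

  # extended per-device stats every 5 seconds;
  # watch the await and %util columns
  iostat -x 5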
On Tue, 2008-01-08 at 09:49 -0600, [EMAIL PROTECTED] wrote:
> > Web server disk I/O is likely to be mostly read-only, so I doubt disk
> > performance will ever be your bottleneck. It's bouncing write-locks
> > around that slows clustered file systems down.
>
> True, and other than media, all writes
> Web server disk I/O is likely to be mostly read-only, so I doubt disk
> performance will ever be your bottleneck. It's bouncing write-locks
> around that slows clustered file systems down.
True, and other than media, all writes are to the MySQL servers. Still, I
wondered since the web servers ar
[EMAIL PROTECTED] wrote:
Is there any GFS tuning I can do which might help speed up access to
these mailboxes?
You probably need GFS2 in this case. To fix mail server issues in GFS1
would be too intrusive with the current state of the development cycle.
I noticed you mention that GFS2 might be best f
[EMAIL PROTECTED] wrote:
Is there any GFS tuning I can do which might help speed up access to
these mailboxes?
You probably need GFS2 in this case. To fix mail server issues in GFS1
would be too intrusive with the current state of the development cycle.
Wendy,
I noticed you mention that
James Fidell wrote:
I have a 3-node cluster built on CentOS 5.1, fully updated, providing
Maildir mail spool filesystems to dovecot-based IMAP servers. As it
stands GFS is in its default configuration -- no tuning has been done
so far.
Mostly, it's working fine. Unfortunately we do have a few
>> Is there any GFS tuning I can do which might help speed up access to
>> these mailboxes?
>>
> You probably need GFS2 in this case. To fix mail server issues in GFS1
> would be too intrusive with the current state of the development cycle.
Wendy,
I noticed you mention that GFS2 might be best for this.
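Independent of the GFS1-vs-GFS2 question, a common first tweak for Maildir
spools is cutting atime traffic, since otherwise every IMAP read turns into a
write plus an exclusive glock; a sketch with placeholder paths (atime_quantum
is a GFS1 tunable, default 3600 seconds):

  mount -o remount,noatime /var/spool/maildir
  gfs_tool settune /var/spool/maildir atime_quantum 86400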
James Fidell wrote:
I have a 3-node cluster built on CentOS 5.1, fully updated, providing
Maildir mail spool filesystems to dovecot-based IMAP servers. As it
stands GFS is in its default configuration -- no tuning has been done
so far.
Mostly, it's working fine. Unfortunately we do have a few
I have a 3-node cluster built on CentOS 5.1, fully updated, providing
Maildir mail spool filesystems to dovecot-based IMAP servers. As it
stands GFS is in its default configuration -- no tuning has been done
so far.
Mostly, it's working fine. Unfortunately we do have a few people with
tens of th