Looks like this bug:
GFS2 - probably lost glock call back
https://bugzilla.redhat.com/show_bug.cgi?id=498976
This is fixed in the kernel included in RHEL 5.5.
Do a yum update to fix it.
Ricardo Arguello
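To check whether a node is already running the fixed kernel, here is a minimal sketch, assuming the fix shipped with the RHEL 5.5 kernel (2.6.18-194.el5 — the build-number threshold is an assumption taken from the 5.5 release, so confirm it against the bugzilla entry):

```shell
# Sketch: report whether a RHEL 5 kernel release string is at least the
# RHEL 5.5 kernel (2.6.18-194.el5). The "194" threshold is an assumption
# based on the 5.5 release, not taken from the bugzilla entry itself.
kernel_has_fix() {
    # extract the build number that follows "2.6.18-"
    build=$(printf '%s\n' "$1" | sed -n 's/^2\.6\.18-\([0-9][0-9]*\).*/\1/p')
    if [ -n "$build" ] && [ "$build" -ge 194 ]; then
        echo yes
    else
        echo no
    fi
}

kernel_has_fix "$(uname -r)"
```

If it prints "no", a `yum update kernel` followed by a reboot should bring in the fixed kernel.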
On Tue, Mar 2, 2010 at 6:10 AM, Emilio Arjona emilio...@gmail.com wrote:
Thanks for your response, Steve.
2010/3/2 Steven Whitehouse swhit...@redhat.com:
Hi,
On Fri, 2010-02-26 at 16:52 +0100, Emilio Arjona wrote:
Hi,
we are experiencing some problems commented in an old thread:
http://www.mail-archive.com/linux-cluster@redhat.com/msg07091.html
We have 3 clustered servers under Red Hat 5.4 accessing a GFS2 resource.
fstab options:
/dev/vg_cluster/lv_cluster /opt/datacluster gfs2
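The mount options are cut off in the archive. For illustration only, a typical GFS2 fstab entry might look like the line below; the option set is an example, not the poster's actual configuration (noatime/nodiratime are commonly recommended for GFS2 to cut down on lock traffic from access-time updates):

```
/dev/vg_cluster/lv_cluster /opt/datacluster gfs2 defaults,noatime,nodiratime 0 0
```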
Hi,
On Thu, 2009-09-24 at 11:30 +0100, Gavin Conway wrote:
Hi All,
We have 6 nodes running GFS2 under CentOS 5.3 all connecting via
Cisco 2960G switches to an MD3000i with 8 x 146GB SAS 15K drives.
These nodes run a PHP website pulling their PHP and image files from
a GFS2 volume being exported by iSCSI from the MD3000i.
Hi Steve,
I’ve tuned the demote_secs down from 300 to 20 seconds on the
assumption that file locking is causing an issue.
That is unlikely to make any meaningful change and in fact it could well
hurt performance, depending on the workload.
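For reference, the tunable discussed here can be inspected and set per mount point with gfs2_tool on RHEL 5. This is a sketch only: /mnt/gfs2 is a placeholder mount point, and it assumes gfs2_tool from gfs2-utils is installed and the filesystem is mounted:

```
# inspect current tunables, including demote_secs (default 300)
gfs2_tool gettune /mnt/gfs2
# put the default back; /mnt/gfs2 stands in for the real mount point
gfs2_tool settune /mnt/gfs2 demote_secs 300
```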
gfs_controld plock_ownership=1
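On RHEL 5 this option is normally set in /etc/cluster/cluster.conf rather than passed to gfs_controld on the command line. A sketch with illustrative values follows — check gfs_controld(8) before changing a live cluster, and remember to bump config_version when editing:

```xml
<cluster name="example" config_version="2">
  <!-- plock_ownership="1" enables lock-ownership caching so repeated
       POSIX lock operations on one node avoid cluster round trips;
       the values here are examples, not a recommendation -->
  <gfs_controld plock_ownership="1" plock_rate_limit="0"/>
  <!-- clusternodes, fencedevices, etc. omitted -->
</cluster>
```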
- Gavin Conway gavin.con...@uksolutions.co.uk wrote:
| We'll give this a go and see what it does. We did manage to track down
| the latest issue to a bad script that the customer had written which
| caused one of the nodes to exhaust all of its available memory. That
| then caused a knock-on
Hi All,
We have 6 nodes running GFS2 under CentOS 5.3 all connecting via Cisco 2960G
switches to an MD3000i with 8 x 146GB SAS 15K drives. These nodes run a PHP
website pulling their PHP and image files from a GFS2 volume being exported by
iSCSI from the MD3000i.
Problem we have is that