Hi, I am running an OCFS2 cluster on 6 nodes. The OS is RHEL 5.7 (kernel 2.6.18-308.1.1.el5), and the installed ocfs2 RPM versions are as follows:
[root@prod37 ~]# uname -r
2.6.18-308.1.1.el5

[root@prod37 ~]# rpm -qa | grep ocfs
ocfs2-tools-devel-1.4.4-1.el5
ocfs2-2.6.18-308.1.1.el5debug-1.4.7-1.el5
ocfs2-2.6.18-308.1.1.el5-1.4.7-1.el5
ocfs2console-1.4.4-1.el5
ocfs2-tools-1.4.4-1.el5
ocfs2-2.6.18-308.1.1.el5xen-1.4.7-1.el5
ocfs2-2.6.18-308.1.1.el5-debuginfo-1.4.7-1.el5
ocfs2-tools-debuginfo-1.4.4-1.el5

Frequently, one of the nodes in the cluster panics and requires a hard reboot to bring it back. The system log on the panicked node contains the following entries around that time:

Jun 4 12:42:54 prod37 kernel: lockres: O000000000000000023c87800000000, owner=2, state=0
Jun 4 12:42:54 prod37 kernel: last used: 4299790521, refcnt: 5, on purge list: yes
Jun 4 12:42:54 prod37 kernel: on dirty list: no, on reco list: no, migrating pending: no
Jun 4 12:42:54 prod37 kernel: inflight locks: 1, asts reserved: 0
Jun 4 12:42:54 prod37 kernel: refmap nodes: [ 2 ], inflight=1
Jun 4 12:42:54 prod37 kernel: granted queue:
Jun 4 12:42:54 prod37 kernel: converting queue:
Jun 4 12:42:54 prod37 kernel: blocked queue:
Jun 4 14:22:25 prod37 syslogd 1.4.1: restart.
Jun 4 14:22:25 prod37 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jun 4 14:22:25 prod37 kernel: Linux version 2.6.18-308.1.1.el5 (mockbu...@hs20-bc2-3.build.redhat.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Fri Feb 17 16:51:01 EST 2012

Is this a known issue? Is there a fix?

Regards,
Ravi
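In case it helps with diagnosis, here is a sketch of how the DLM and filesystem lock state behind those "lockres:" lines can be dumped with debugfs.ocfs2 from a node that is still responsive; the device path /dev/sdb1 is a placeholder for the actual OCFS2 volume:

```shell
# Placeholder: replace /dev/sdb1 with the actual OCFS2 volume on your system.
# List mounted OCFS2 volumes and their cluster (domain) membership:
mounted.ocfs2 -f

# Dump the filesystem-level lock state (run on a node that is still up):
debugfs.ocfs2 -R "fs_locks" /dev/sdb1

# Dump the DLM-level lock resources; the output format corresponds to the
# "lockres: ... owner=2, state=0" lines seen in the panic log above:
debugfs.ocfs2 -R "dlm_locks" /dev/sdb1
```

These commands only read debug state, so they are safe to run on a live cluster; capturing this output shortly before a recurrence, along with a kdump or serial/netconsole capture of the full panic trace (the excerpt above jumps straight from the lockres dump to the post-reboot syslog restart, so the actual panic message appears to have been lost), would give the list much more to go on.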
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users