File a bugzilla for the first issue; the second issue is not related to
ocfs2.
Attach the messages file from this node and from node 0, plus whatever
else you deem relevant to the issue: activity on the server/cluster, etc.
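Gathering the messages files for the bugzilla can be sketched as a small
script. This is only a sketch: the peer hostname "megasrv1", key-based
ssh access to it, and the archive name are assumptions, not anything
stated in the thread.

```shell
# Collect logs for the bug report into a scratch directory.
outdir=$(mktemp -d)

# This node's messages file, if readable (path is the RHEL/CentOS default):
if [ -r /var/log/messages ]; then
    cp /var/log/messages "$outdir/messages.$(hostname -s)"
fi

# Pull node 0's copy over ssh (assumed hostname and key-based access):
# scp megasrv1:/var/log/messages "$outdir/messages.megasrv1"

# Bundle everything to attach to the bugzilla:
tar czf ocfs2-report.tar.gz -C "$outdir" .
echo "wrote ocfs2-report.tar.gz"
```

Run it on the node that panicked, then attach the resulting tarball.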
Daniel wrote:
Hello
System: Two brand new Dell 1950 servers, each with dual quad-core Intel
Xeon CPUs, connected to an EMC CX3-20 SAN. Running CentOS 5 x86_64 - both
with kernel 2.6.18-8.1.6.el5 x86_64.
I just noticed a panic on one of the servers:
Jul 2 04:08:52 megasrv2 kernel: (3568,2):dlm_drop_lockres_ref:2289 ERROR: while dropping ref on 87B24E40651A4C7C858EF03ED6F3595F:M00000000000000021af916b7dfbde4 (master=0) got -22.
Jul 2 04:08:52 megasrv2 kernel: (3568,2):dlm_print_one_lock_resource:294 lockres: M00000000000000021af916b7dfbde4, owner=0, state=64
Jul 2 04:08:52 megasrv2 kernel: (3568,2):__dlm_print_one_lock_resource:309 lockres: M00000000000000021af916b7dfbde4, owner=0, state=64
Jul 2 04:08:52 megasrv2 kernel: (3568,2):__dlm_print_one_lock_resource:311 last used: 4747810336, on purge list: yes
Jul 2 04:08:52 megasrv2 kernel: (3568,2):dlm_print_lockres_refmap:277 refmap nodes: [ ], inflight=0
Jul 2 04:08:52 megasrv2 kernel: (3568,2):__dlm_print_one_lock_resource:313 granted queue:
Jul 2 04:08:52 megasrv2 kernel: (3568,2):__dlm_print_one_lock_resource:328 converting queue:
Jul 2 04:08:52 megasrv2 kernel: (3568,2):__dlm_print_one_lock_resource:343 blocked queue:
Jul 2 04:08:52 megasrv2 kernel: ----------- [cut here ] --------- [please bite here ] ---------
After booting the server I'm getting a lot of the following messages:
Jul 5 11:09:54 megasrv2 kernel: Additional sense: Logical unit not ready, manual intervention required
Jul 5 11:09:54 megasrv2 kernel: end_request: I/O error, dev sdd, sector 0
Jul 5 11:09:54 megasrv2 kernel: Buffer I/O error on device sdd, logical block 0
Jul 5 11:09:54 megasrv2 kernel: sd 1:0:0:2: Device not ready: <6>: Current: sense key: Not Ready
But I guess this one has something to do with EMC PowerPath, since sdd
is not a valid device and there is no PowerPath release for RHEL5 yet...
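One quick way to confirm that sdd is a dead path is to read its state
out of sysfs. A minimal sketch; the /sys/block layout is standard on
2.6 kernels, and nothing here is specific to PowerPath:

```shell
# Check whether sdd is a live SCSI device on this host.
dev=sdd
if [ -d "/sys/block/$dev" ]; then
    # "running" means the path is up; "offline" or an empty read means it is not.
    echo "state:  $(cat /sys/block/$dev/device/state 2>/dev/null)"
    echo "vendor: $(cat /sys/block/$dev/device/vendor 2>/dev/null)"
    echo "model:  $(cat /sys/block/$dev/device/model 2>/dev/null)"
else
    echo "$dev: no such block device on this host"
fi
```

If the device is absent or not "running", the I/O errors above are just
the kernel probing a path it cannot use.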
I'm sorry I haven't had the time to investigate this much, but right
now I have no clue what caused the panic, or whether it will happen again...
------------------------------------------------------------------------
_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users