I am new to this list, so I hope it is the right place to post this question.
We are using OCFS2 for our shared Oracle homes: CRS, ASM, two database homes, and the agent home. On Friday, this shared filesystem locked up on us: the Linux ls command would hang intermittently, and CRS began evicting nodes. We had to stop everything, reboot the nodes with the OCFS2 mount point commented out, and run fsck.ocfs2 in repair mode, which found and repaired a number of problems.

When we ran fsck.ocfs2 while the filesystem was mounted, it reported 3 Inode messages. Running it with the filesystem unmounted produced a number of Inode zero-length directory messages, many, many global cluster bitmap messages, and then a few Inode orphan directory messages. A second run produced 2 Inode link count messages, and the third run came up clean.

My questions are:

1) Is OCFS2 stable enough to support shared homes for all the Oracle homes?

2) Is there any way to find out whether an OCFS2 mount point is having a problem before it reaches a critical point?

3) Would moving all the db dump directories to a local mount point alleviate any of the issues with OCFS2? I ask because of all the trace files and cdmp junk that Oracle creates, in addition to everything CRS is logging.

Thanks in advance for any suggestions or comments.

Dale Roeschley
Lead Database Administrator
Chicago Stock Exchange, Inc.
312-663-2328
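For reference on question 2, the kind of early-warning check we would like to automate is just a read-only fsck pass. A minimal sketch, assuming fsck.ocfs2 from ocfs2-tools is on the PATH; the function name and device path are ours, not anything official:

```shell
# Hypothetical helper: run a read-only OCFS2 consistency check so that
# damage shows up before ls starts hanging and CRS evicts nodes.
ocfs2_health_check() {
    dev="$1"
    if [ -z "$dev" ]; then
        echo "usage: ocfs2_health_check <device>   e.g. ocfs2_health_check /dev/sdb1"
        return 1
    fi
    # -n answers "no" to every repair prompt and opens the device
    # read-only, so nothing is modified; a non-zero exit status is the
    # signal that the filesystem wants a real (unmounted) repair run.
    fsck.ocfs2 -n "$dev"
}
```

We would cron this against the shared volume and alert on a non-zero exit, but we do not know whether that is considered safe practice on a mounted OCFS2 volume, hence the question.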
_______________________________________________ Ocfs2-users mailing list [email protected] http://oss.oracle.com/mailman/listinfo/ocfs2-users
