Benjamin Smith wrote:
I'm stumped. I'm doing some research on clustered file systems to be deployed
over winter break, and am testing on spare machines first.
I have two identically configured computers, each with a 10 GB
partition, /dev/hda2. I intend to combine these two over the LAN, RAID1 style, to
represent 10 GB of redundant cluster storage, so that if either machine
fails, computing can resume with reasonable efficiency.
These machines are called "cluster1" and "cluster2" and are currently on a
local Gb LAN, running CentOS 4.4 (a recompile of RHEL 4.4). I've set up
SSH RSA keys so that I can ssh directly from either machine to the other without
passwords, though I use a non-standard port, defined in ssh_config and
sshd_config.
I've installed the RPMs without incident. I've set up a cluster called "ocfs2"
with nodes "cluster1" and "cluster2" and their corresponding LAN IP
addresses. I've confirmed that configuration changes propagate to cluster2
when I push the appropriate button in the X11 ocfs2console on cluster1. I've
checked the firewall(s) to allow inbound TCP connections to port 7777 on both
machines, and verified this with nmap. I've also tried turning off iptables
completely. On cluster1, I've formatted and mounted the partition labeled "oracle"
at /media/cluster using the ocfs2console, and I can r/w to this partition with
other applications. There's about a 5-second delay when mounting/unmounting,
and the FAQ indicates that this is normal. SELinux is completely off.
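For reference, /etc/ocfs2/cluster.conf on both nodes looks roughly like this (the
LAN IP addresses below are placeholders, not my real ones):

    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.1
            number = 0
            name = cluster1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.2
            number = 1
            name = cluster2
            cluster = ocfs2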
Questions:
1) How do I get this "oracle" partition to show up and mount on host cluster2, and
on subsequent systems added to the cluster? Should I be expecting a /dev/* block
device to mount, or is there some other program I should be using, similar to
smbmount?
As the previous post states, you need shared storage. A quick and easy
way to get it is to install iscsi-target on another system (target1)
and then use open-iscsi to log into the target you just created. So
use a third system to host the shared target. Then, on cluster1, log
into the target and create the ocfs2 cluster FS on it. At this point you can
mount this target on cluster1. On cluster2, log into the target and
mount it as you would normally. Of course, you will need the correct cluster
setup on both nodes. Now you have two systems mounting the shared storage, both r/w.
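A rough sketch of the steps, assuming the iSCSI Enterprise Target (ietd) on target1
and open-iscsi on the cluster nodes; the IQN, IP address and /dev/sdb device name
below are only examples, the device name the initiator actually creates may differ:

    # on target1: export a block device as a LUN (/etc/ietd.conf)
    Target iqn.2007-01.com.example:storage.ocfs2
            Lun 0 Path=/dev/sdb1,Type=fileio

    # on cluster1: discover and log into the target, then format and mount
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2007-01.com.example:storage.ocfs2 -p 192.168.0.10 --login
    /etc/init.d/o2cb online ocfs2           # bring the o2cb cluster online
    mkfs.ocfs2 -L oracle /dev/sdb           # format ONCE, from one node only
    mount -t ocfs2 /dev/sdb /media/cluster

    # on cluster2: same login and mount, but no mkfs
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2007-01.com.example:storage.ocfs2 -p 192.168.0.10 --login
    /etc/init.d/o2cb online ocfs2
    mount -t ocfs2 /dev/sdb /media/cluster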
Note: you may be able to do this with just two systems, using cluster1 as
both the iscsi target system and an ocfs2 node. On cluster1, install the
iscsi-target software and log into the exported volume from cluster1 itself. Cluster2
would just log in to the target as normal.
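In that two-box layout the only change is that cluster1's initiator logs into
cluster1's own address; again just a sketch with placeholder names:

    # on cluster1: export the local partition (/etc/ietd.conf), then log into it locally
    Target iqn.2007-01.com.example:cluster1.hda2
            Lun 0 Path=/dev/hda2,Type=fileio

    iscsiadm -m discovery -t sendtargets -p 127.0.0.1
    iscsiadm -m node -T iqn.2007-01.com.example:cluster1.hda2 -p 127.0.0.1 --login

cluster2 runs the same discovery/login against cluster1's LAN IP.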
2) How do I get this /dev/hda2 (aka "oracle") on cluster1 to combine (RAID1
style) with /dev/hda2 on cluster2, so that if either host goes down I still
have a complete FS to work from? Am I misunderstanding the abilities and
intentions of OCFS2? Do I need to do something with NBD, GNBD, ENBD, or
similar? If so, what's the "recommended" approach?
Yes, you are misunderstanding how ocfs2 works. To use RAID for the
purpose you describe, you must set it up on the target1 system mentioned
above. On target1, mirror two drives or two partitions (RAID1) and then use this
array as the target volume you export to cluster1 and cluster2. That way
you have a RAID array for data protection and ocfs2 for service integrity.
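For example, on target1 something like this (mdadm, with placeholder device names)
builds the mirror that then becomes the exported LUN:

    # on target1: mirror two partitions into one md device
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # then point the exported LUN at the array instead of a raw partition (/etc/ietd.conf)
    Target iqn.2007-01.com.example:storage.ocfs2
            Lun 0 Path=/dev/md0,Type=fileio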
Thanks,
-Ben