Actually, GFS(2) is also a shared-disk clustered filesystem.

Lustre is a good example of a distributed filesystem.

Paul Fretter (TOC) wrote:
I had similar confusion myself when first looking for a suitable cluster
FS.  I'm not an expert at this, so forgive me if my language appears
simplistic.

There seemed to be two basic species:
- There are those which aggregate local storage LUNs from each host into
a single contiguous 'virtual' device, e.g. Red Hat GFS etc.  Bear in mind
these 'local' LUNs could be a local disk, or a dedicated LUN on a SAN.

- Then there are those which expect all hosts to have direct (shared)
access to the same LUN.
OCFS2 falls into the latter category.

For disk redundancy you could use a shared disk shelf (e.g. IBM DS4xxx),
create a RAID (1, 4 or 5) set in the hardware and present it as a single
shared-access LUN.  All your nodes then have a connection to the LUN
(e.g. by Fibre Channel).  In effect it is a small SAN!  OCFS2 is then the
glue which manages the file locking and metadata so that nodes don't try
to write to the same blocks at the same time.
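
For what it's worth, that "glue" is driven by /etc/ocfs2/cluster.conf,
which must be identical on every node.  A two-node example (the node
names, port and IP addresses here are just placeholders):

  node:
          ip_port = 7777
          ip_address = 192.168.1.1
          number = 0
          name = cluster1
          cluster = ocfs2

  node:
          ip_port = 7777
          ip_address = 192.168.1.2
          number = 1
          name = cluster2
          cluster = ocfs2

  cluster:
          node_count = 2
          name = ocfs2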

By creating a "RAID" across multiple OCFS2 volumes you would be bringing
device-level redundancy work into the OS, which is effectively software
RAID and is not conducive to high performance or reliability.

Then, for high availability of the storage, a good approach might be to
create a duplicate shared device and let the hardware perform the
mirroring for you (e.g. over Fibre Channel, InfiniBand or iSCSI), and
also let the hardware do the failover for you.

So, by using a SAN with hardware RAID and hardware mirroring to a second
SAN device (with hardware RAID), you can achieve resilience and high
availability, leaving OCFS2 blissfully unaware.

Hope this helps.

Kind regards
Paul Fretter

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:ocfs2-users-
[EMAIL PROTECTED] On Behalf Of Randy Ramsdell
Sent: 18 October 2007 14:00
Cc: [email protected]
Subject: Re: [Ocfs2-users] Missing something basic...

Benjamin Smith wrote:
I'm stumped. I'm doing some research on clustered file systems to be
deployed over winter break, and am testing on spare machines first.

I have two identically configured computers, each with a 10 GB
partition, /dev/hda2. I intend to combine these two LAN/RAID1 style to
represent 10 GB of redundant cluster storage, so that if either machine
fails, computing can resume with reasonable efficiency.

These machines are called "cluster1" and "cluster2", and are currently
on a local Gb LAN. They are running CentOS 4.4 (recompile of RHEL 4.4).
I've set up SSH RSA keys so that I can ssh directly from either to the
other without passwords, though I use a non-standard port, defined in
ssh_config and sshd_config.

I've installed the RPMs without incident. I've set up a cluster called
"ocfs2" with nodes "cluster1" and "cluster2", with the corresponding LAN
IP addresses. I've confirmed that configuration changes propagate to
cluster2 when I push the appropriate button in the X11 ocfs2console on
cluster1. I've checked the firewall(s) to allow inbound TCP connections
to port 7777 on both machines, and verified this with nmap. I've also
tried turning off iptables completely. On cluster1, I've formatted and
mounted partition "oracle" to /meda/cluster using the ocfs2console and I
can r/w to this partition with other applications. There's about a
5-second delay when mounting/unmounting, and the FAQ reflects that this
is normal. SELinux is completely off.

Questions:

1) How do I get this "oracle" partition to show/mount on host cluster2,
and subsequent systems added to the cluster? Should I be expecting a
/dev/* block device to mount, or is there some other program I should be
using, similar to smbmount?

As the previous post states, you need shared storage. A quick and easy
way to get it is to install iscsi-target on another system (target1) and
then use open-iscsi to log into the target you just created. So you
would have a third system that provides the shared target. Then, on
cluster1, log into the target and create the OCFS2 filesystem on it. At
this point you can mount the target on cluster1. On cluster2, log into
the same target and mount as you would normally. Of course you will need
the correct cluster setup. Now you have two systems mounting the shared
storage, and both can read and write. A rough sketch of the steps is
below.
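
Roughly, assuming the iSCSI Enterprise Target (ietd) on target1 and
open-iscsi on the cluster nodes -- the IQN, IP addresses, device names
and mount point below are made up for illustration:

  # On target1: export a block device in /etc/ietd.conf, e.g.
  #   Target iqn.2007-10.local.target1:ocfs2vol
  #       Lun 0 Path=/dev/sdb1,Type=fileio
  # then start the target service.

  # On cluster1 and cluster2: discover and log in to the target
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2007-10.local.target1:ocfs2vol -p 192.168.1.10 --login

  # On cluster1 only: format the new disk (say it appears as /dev/sdc)
  mkfs.ocfs2 -L oracle -N 4 /dev/sdc

  # On both nodes: bring the cluster online and mount
  service o2cb online ocfs2
  mount -t ocfs2 /dev/sdc /media/cluster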

Note: You may be able to do this with just two systems. Use cluster1 as
both the iSCSI target system and an OCFS2 node: on cluster1, install the
iscsi-target software and log into the volume share from cluster1
itself. cluster2 would just log in to the target as normal.
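
In that two-box layout cluster1 simply points open-iscsi at its own
address, something like (again, hypothetical IQN and IP):

  # On cluster1, which also runs the iSCSI target
  iscsiadm -m discovery -t sendtargets -p 192.168.1.1
  iscsiadm -m node -T iqn.2007-10.local.cluster1:ocfs2vol -p 192.168.1.1 --login

Bear in mind that cluster2 then depends on cluster1 being up, so this
gives you no protection against cluster1 itself failing.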

2) How do I get this /dev/hda2 (aka "oracle") on cluster1 to combine
(RAID1 style) with /dev/hda2 on cluster2, so that if either host goes
down I still have a complete FS to work from? Am I misunderstanding the
abilities and intentions of OCFS2? Do I need to do something with NBD,
GNBD, ENBD, or similar? If so, what's the "recommended" approach?


Yes, you are misunderstanding how OCFS2 works. To use RAID for the
described purpose, you must use it on the target1 system mentioned
above. On target1, RAID two drives or two partitions together and then
use that array as the target volume you export to cluster1 and cluster2.
That way you have a RAID array for data protection and OCFS2 for service
integrity. A sketch of that is below.
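
A minimal sketch, assuming Linux software RAID (mdadm) on target1 and
the same hypothetical ietd target as above:

  # On target1: mirror two spare partitions into one md device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

  # Export the mirror as the iSCSI LUN in /etc/ietd.conf:
  #   Target iqn.2007-10.local.target1:ocfs2vol
  #       Lun 0 Path=/dev/md0,Type=fileio
  # Then format it as OCFS2 from cluster1 as shown earlier.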
Thanks,

-Ben


_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users
