[Ocfs2-users] One node, two clusters?
Is it possible to have one machine be part of two different OCFS2 clusters with two different SANs? Kind of to serve as a bridge for moving data between two clusters, but without actually fully combining the two clusters?

Thanks,
Michael

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
Re: [Ocfs2-users] One node, two clusters?
You don't need two clusters for this. It can be accomplished with one cluster using the default local heartbeat. Create one cluster.conf listing all the nodes. All nodes, except the one bridge machine, will mount from just one SAN; the common node will mount from both SANs. If you look at the cluster membership, other than the common node, all nodes will be interacting (network connection, etc.) only with the nodes they can see on their SAN.

On 12/22/2011 09:40 AM, Werner Flamme wrote:
> Kushnir, Michael (NIH/NLM/LHC) [C] [22.12.2011 18:20]:
>> Is it possible to have one machine be part of two different ocfs2
>> clusters with two different SANs? Kind of to serve as a bridge for
>> moving data between two clusters but without actually fully combining
>> the two clusters? Thanks, Michael
>
> Michael,
>
> I asked this two years ago and the answer was no. When I look at
> /etc/ocfs2/cluster.conf, I do not see a possibility to configure a
> second cluster. Though each node must be assigned to a cluster (and
> exactly one cluster, at that), there is only one cluster: entry in the
> file, so there is no way to define a second one. We synced via
> rsync :-(
>
> HTH
> Werner
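A single cluster.conf covering all the nodes, as described above, might look like the sketch below. The node names, IP addresses, and cluster name are hypothetical, and the node list is abbreviated; the point is that every node, including the bridge, belongs to the one cluster:

```
node:
	ip_port = 7777
	ip_address = 10.0.1.11
	number = 1
	name = hadoop1
	cluster = mycluster

node:
	ip_port = 7777
	ip_address = 10.0.2.21
	number = 2
	name = web1
	cluster = mycluster

node:
	ip_port = 7777
	ip_address = 10.0.1.99
	number = 3
	name = bridge1
	cluster = mycluster

cluster:
	node_count = 3
	name = mycluster
```

With local heartbeat, each node heartbeats only on the devices it actually mounts, so the web nodes never need to see the Hadoop SAN and vice versa; only the bridge node sees both.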
Re: [Ocfs2-users] One node, two clusters?
Is there a separate DLM instance for each ocfs2 volume?

I have two sub-clusters in the same cluster: a 10-node Hadoop cluster sharing a SATA RAID10, and a two-node web server cluster sharing an SSD RAID0. One server mounts both volumes to move data between them as necessary.

This morning I got the following error (see end of message), and all nodes lost access to all storage. I'm trying to mitigate the risk of this happening again. My Hadoop nodes are used to generate search engine indexes, so they can go down. But my web servers provide the search engine service, so I need them not to be tied to my Hadoop nodes. I just feel safer that way. At the same time, I need a bridge node to move data between the two. I can do it via NFS or SCP, but I figured it would be worthwhile to ask whether one node can be in two different clusters.

Dec 22 09:15:42 lhce-imed-web1 kernel: (updatedb,1832,1):dlm_get_lock_resource:898 042F68B6AF134E5C9A9EDF4D7BD7BE99:O0013d2ef94: at least one node (11) to recover before lock mastery can begin

Thanks,
Mike

-----Original Message-----
From: Sunil Mushran [mailto:sunil.mush...@oracle.com]
Sent: Thursday, December 22, 2011 1:21 PM
To: Werner Flamme
Cc: ocfs2-users ML
Subject: Re: [Ocfs2-users] One node, two clusters?

[...]
Re: [Ocfs2-users] One node, two clusters?
On 12/22/2011 10:39 AM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
> Is there a separate DLM instance for each ocfs2 volume?
> [...]
> Dec 22 09:15:42 lhce-imed-web1 kernel: (updatedb,1832,1):dlm_get_lock_resource:898 042F68B6AF134E5C9A9EDF4D7BD7BE99:O0013d2ef94: at least one node (11) to recover before lock mastery can begin

You should add ocfs2 to PRUNEFS in /etc/updatedb.conf. updatedb generates a lot of I/O and network traffic, and it will happen around the same time on all nodes.

Yes, each volume has a different dlm domain (instance).
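A sketch of that PRUNEFS change is below. The helper name is made up, and the exact PRUNEFS line varies by distribution, so the quoted value it edits is illustrative; the sed pattern assumes the common `PRUNEFS="..."` form:

```shell
#!/bin/sh
# add_ocfs2_to_prunefs [CONF] -- append ocfs2 to the PRUNEFS list so
# updatedb skips OCFS2 mounts. CONF defaults to /etc/updatedb.conf.
add_ocfs2_to_prunefs() {
    conf=${1:-/etc/updatedb.conf}
    # Only append if ocfs2 is not already listed (keeps the edit idempotent).
    if ! grep -q 'ocfs2' "$conf"; then
        sed -i 's/^PRUNEFS="\(.*\)"/PRUNEFS="\1 ocfs2"/' "$conf"
    fi
}

# Example: add_ocfs2_to_prunefs /etc/updatedb.conf
```

Run this (or make the edit by hand) on every node, since updatedb fires from cron at roughly the same time cluster-wide.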