No problem. :-) The main points of having a RAC cluster, as I understand it, are availability and scalability on low-cost systems. Shouldn't ocfs2 have the ability to perform this kind of expansion online? I know that Red Hat's GFS can add journals to accommodate new nodes while the filesystem is online (gfs_jadd -j Number MountPoint).
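To make the contrast concrete, here is roughly what the two workflows look like side by side (the journal count, mount points, and device below are placeholders; the tunefs.ocfs2 invocation mirrors the one quoted later in this thread, and per its man page the volume must be unmounted on every node first):

    # gfs_jadd -j 2 /gfs_data                    (GFS: filesystem stays mounted)

    # umount /u02                                (OCFS2: run on every node)
    # tunefs.ocfs2 -N 8 /dev/mapper/mpath1p1     (offline slot-count change)
    # mount /u02                                 (remount on each node)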
What is the downside of installing a starter two-node cluster with ocfs2 filesystems configured for, say, 8 node slots? The documentation seems to suggest that you not do that.

> You are absolutely right. Sorry about that. I think I didn't have enough
> caffeine this morning ;-).
>
> The procedure is for when you have available slots and just want to add
> a node to the cluster.
>
> Any change to the superblock needs to have the partition offline.
>
> You don't need to umount to upgrade the database/clusterware.
>
> In some situations, you need to upgrade ocfs2 in sync, meaning all
> nodes at once. Other times, you can run different versions in the same
> cluster until you get all nodes upgraded. That is documented in the FAQ.
>
> If you plan to add more nodes later, it would be a good idea to have
> additional slots defined to reduce cluster outage.
>
> About having it done online (with partitions mounted), I don't think we
> will have it.
>
> Regards,
>
> Marcos Eduardo Matsunaga
>
> Oracle USA
> Linux Engineering
>
> Tim Lank wrote:
>> So since tunefs.ocfs2 is the tool that actually does all of the work to
>> add more slots, and the tunefs.ocfs2 man page states:
>>
>> DESCRIPTION
>>        tunefs.ocfs2 is used to adjust OCFS2 file system parameters on
>>        disk. In order to prevent data loss, tunefs.ocfs2 will not
>>        perform any action on the specified device if it is mounted on
>>        any node in the cluster. This tool requires the O2CB cluster to
>>        be online.
>>
>> I return to my original question:
>> "Is it true that we will need to umount these filesystems for the
>> upgrade (i.e. Database and Clusterware also)?"
>>
>> Since our cluster is running entirely on ocfs2 filesystems, this will
>> cause a cluster outage.
>>
>> Is there a way to do this while the filesystems are mounted? If not,
>> are there plans for allowing a node count increase while the
>> filesystems are mounted, and if so, when?
>>
>>> The console does that (you can use the console to add a new node).
>>> tunefs.ocfs2 is actually the tool that will change the superblock to
>>> add more slots (see the man pages), and it is called by the console
>>> (a more user-friendly interface) to perform the action.
>>>
>>> o2cb_ctl only defines the new node(s) on the existing nodes that are
>>> running.
>>>
>>> Regards,
>>>
>>> Marcos Eduardo Matsunaga
>>>
>>> Oracle USA
>>> Linux Engineering
>>>
>>> Tim Lank wrote:
>>>> So does the o2cb_ctl command touch the ocfs2 filesystem superblock
>>>> and increase the node slot value in this example?
>>>>
>>>>>> From
>>>>>> http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html
>>>>>
>>>>> 19 - How do I add a new node to an online cluster?
>>>>>
>>>>> You can use the console to add a new node. However, you will need to
>>>>> explicitly add the new node on all the online nodes. That is, adding
>>>>> on one node and propagating to the other nodes is not sufficient. If
>>>>> the operation fails, it will most likely be due to bug#741
>>>>> <http://oss.oracle.com/bugzilla/show_bug.cgi?id=741>. In that case,
>>>>> you can use the o2cb_ctl utility on all online nodes as follows:
>>>>>
>>>>> # o2cb_ctl -C -i -n NODENAME -t node -a number=NODENUM -a
>>>>>   ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME
>>>>>
>>>>> Ensure the node is added both in /etc/ocfs2/cluster.conf and in
>>>>> /config/cluster/CLUSTERNAME/node on all online nodes. You can then
>>>>> simply copy the cluster.conf to the new (still offline) node as well
>>>>> as to the other offline nodes. At the end, ensure that cluster.conf
>>>>> is consistent on all the nodes.
>>>>>
>>>>> 20 - How do I add a new node to an offline cluster?
>>>>>
>>>>> You can either use the console, use o2cb_ctl, or simply hand-edit
>>>>> cluster.conf. Then either use the console to propagate it to all
>>>>> nodes or hand-copy it using scp or any other tool. The o2cb_ctl
>>>>> command to do the same is:
>>>>>
>>>>> # o2cb_ctl -C -n NODENAME -t node -a number=NODENUM -a
>>>>>   ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME
>>>>>
>>>>> Notice the "-i" argument is not required, as the cluster is not
>>>>> online.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Marcos Eduardo Matsunaga
>>>>>
>>>>> Oracle USA
>>>>> Linux Engineering
>>>>>
>>>>> Tim Lank wrote:
>>>>>> We have a test 10gR2 RAC cluster using ocfs2 filesystems for the
>>>>>> Clusterware files and the Database files.
>>>>>>
>>>>>> We need to increase the node slots to accommodate new RAC nodes. Is
>>>>>> it true that we will need to umount these filesystems for the
>>>>>> upgrade (i.e. Database and Clusterware also)?
>>>>>>
>>>>>> We are planning to use the following command format to perform the
>>>>>> node slot increase:
>>>>>>
>>>>>> # tunefs.ocfs2 -N 3 /dev/mapper/mpath1p1
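Following up on the suggestion above about defining additional slots ahead of time: the slot count can also be set above the initial node count when a volume is first formatted, which sidesteps the offline tunefs.ocfs2 step entirely. A minimal sketch, assuming a hypothetical label and the same multipath device used in the thread:

    # mkfs.ocfs2 -L racdata -N 8 /dev/mapper/mpath1p1

The configured slot count can be checked afterwards with debugfs.ocfs2; in the builds I have seen, the stats output includes a "Max Node Slots" line:

    # debugfs.ocfs2 -R "stats" /dev/mapper/mpath1p1 | grep -i slots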

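For reference, the /etc/ocfs2/cluster.conf stanza that the o2cb_ctl commands in the FAQ excerpt create for a new node looks roughly like this (the name, number, and address are hypothetical; 7777 is the customary o2cb port):

    node:
            ip_port = 7777
            ip_address = 192.168.1.103
            number = 2
            name = rac3
            cluster = ocfs2

The node_count value in the cluster: stanza also has to be raised to match, and, as the FAQ notes, the file must end up identical on every node.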