Paul, I will have to clear the release of my config files through our security folks. It will take a couple of days.
Thanks for taking an interest.

Dan

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of [email protected]
Sent: Friday, May 07, 2010 12:01 PM
To: [email protected]
Subject: Linux-cluster Digest, Vol 73, Issue 7

Send Linux-cluster mailing list submissions to
	[email protected]

To subscribe or unsubscribe via the World Wide Web, visit
	https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
	[email protected]

You can reach the person managing the list at
	[email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."

Today's Topics:

   1. cman not looking at all valid interfaces (Simmons, Dan A)
   2. Re: cman not looking at all valid interfaces (Paul Morgan)
   3. is there a master rgmanager ? (Martin Waite)
   4. Re: gfs rg and journal size (Carlos Maiolino)
   5. Re: Get qdisk to use /dev/mapper/mpathx devices instead of
      /dev/dm-x (Charlie Brady)
   6. Re: gfs rg and journal size (Bob Peterson)

----------------------------------------------------------------------

Message: 1
Date: Fri, 7 May 2010 00:07:03 +0000
From: "Simmons, Dan A" <[email protected]>
To: "'[email protected]'" <[email protected]>
Subject: [Linux-cluster] cman not looking at all valid interfaces
Message-ID: <[email protected]>
Content-Type: text/plain; charset="us-ascii"

Hi,

cman appears not to scan for interfaces correctly on my RHEL 5.3 64-bit cluster. When I change /etc/hosts and /etc/cluster/cluster.conf to use the names "mynode1" and "mynode2", cman boots up correctly and I get a good cluster.
If I change my configuration to use the interfaces named "mynode1-clu" and "mynode2-clu", which are defined in /etc/hosts, I get an error:

	Cman not started: Can't find local node name in cluster.conf
	/usr/sbin/cman_tool: aisexec daemon didn't start

However, if I do `hostname mynode1-clu` on the first node and `hostname mynode2-clu` on the second node and then restart cman, I get a good cluster. I think this proves that my -clu interfaces are valid and properly defined in /etc/cluster/cluster.conf.

This node was created from a disk clone of another cluster. Are there any cluster files that would retain system name or interface information? I have triple-checked my /etc/hosts, /etc/sysconfig/network, /etc/sysconfig/network-scripts/*eth*, /etc/nsswitch.conf, /etc/cluster/cluster.conf and DNS files. Any suggestions would be appreciated.

J.Dan Simmons

------------------------------

Message: 2
Date: Thu, 6 May 2010 20:23:01 -0400
From: Paul Morgan <[email protected]>
To: linux clustering <[email protected]>
Subject: Re: [Linux-cluster] cman not looking at all valid interfaces
Message-ID: <[email protected]>
Content-Type: text/plain; charset="iso-8859-1"

Can you post your configs?

On May 6, 2010 8:16 PM, "Simmons, Dan A" <[email protected]> wrote:

Hi,

cman appears not to scan for interfaces correctly on my RHEL 5.3 64-bit cluster. When I change /etc/hosts and /etc/cluster/cluster.conf to use the names "mynode1" and "mynode2", cman boots up correctly and I get a good cluster. If I change my configuration to use the interfaces named "mynode1-clu" and "mynode2-clu", which are defined in /etc/hosts, I get an error:

	Cman not started: Can't find local node name in cluster.conf
	/usr/sbin/cman_tool: aisexec daemon didn't start

However, if I do `hostname mynode1-clu` on the first node and `hostname mynode2-clu` on the second node and then restart cman, I get a good cluster. I think this proves that my -clu interfaces are valid and properly defined in /etc/cluster/cluster.conf.
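A minimal sketch of the kind of two-node configuration under discussion may help readers follow along. Everything below is hypothetical (cluster name, node IDs, addresses; fencing omitted); the relevant point is that cman looks for a `<clusternode>` whose `name` attribute matches a name bound to a local interface:

```
<!-- /etc/cluster/cluster.conf (sketch only, not Dan's actual config) -->
<cluster name="mycluster" config_version="1">
  <clusternodes>
    <!-- name= must resolve to the interface cman should use -->
    <clusternode name="mynode1-clu" nodeid="1"/>
    <clusternode name="mynode2-clu" nodeid="2"/>
  </clusternodes>
  <cman two_node="1" expected_votes="1"/>
</cluster>
```

with matching entries in /etc/hosts on both nodes, e.g. `192.168.10.1 mynode1-clu` and `192.168.10.2 mynode2-clu` (addresses hypothetical).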
This node was created from a disk clone of another cluster. Are there any cluster files that would retain system name or interface information? I have triple-checked my /etc/hosts, /etc/sysconfig/network, /etc/sysconfig/network-scripts/*eth*, /etc/nsswitch.conf, /etc/cluster/cluster.conf and DNS files. Any suggestions would be appreciated.

J.Dan Simmons

--
Linux-cluster mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cluster

------------------------------

Message: 3
Date: Fri, 7 May 2010 10:38:47 +0100
From: "Martin Waite" <[email protected]>
To: "linux clustering" <[email protected]>
Subject: [Linux-cluster] is there a master rgmanager ?
Message-ID: <a78db34d00374344a0ab65b6523c05dc05739...@marsden.win.datacash.com>
Content-Type: text/plain; charset="us-ascii"

Hi,

Is there a master rgmanager instance that makes decisions for the whole cluster, or does each rgmanager arrive at exactly the same decision as all the other instances, based on the totally-ordered sequence of cluster events that update their state machines?

If there is a master rgmanager instance, is it possible to identify which node it is running on?

regards,
Martin

------------------------------

Message: 4
Date: Fri, 7 May 2010 10:13:15 -0300
From: Carlos Maiolino <[email protected]>
To: linux clustering <[email protected]>
Subject: Re: [Linux-cluster] gfs rg and journal size
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

On Thu, May 06, 2010 at 09:21:05AM -0600, Andrew A. Neuschwander wrote:
> Is there a way to determine the rg size and the journal size and count of a
> mounted gfs filesystem?
>
> Thanks,
> -Andrew
> --
> Andrew A. Neuschwander, RHCE
> Manager, Systems Engineer
> Science Compute Services
> College of Forestry and Conservation
> The University of Montana
> http://www.ntsg.umt.edu
> [email protected] - 406.243.6310

I don't remember if there is a tool for it, but unless you made the gfs filesystem with non-default parameters, the default is 128 MB.

--
Best Regards

Carlos Eduardo Maiolino
Red Hat - Global Support Services

------------------------------

Message: 5
Date: Fri, 7 May 2010 09:26:38 -0400 (EDT)
From: Charlie Brady <[email protected]>
To: linux clustering <[email protected]>
Subject: Re: [Linux-cluster] Get qdisk to use /dev/mapper/mpathx devices
	instead of /dev/dm-x
Message-ID: <[email protected]>
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Wed, 5 May 2010, Celso K. Webber wrote:

> In my experience, I tend not to use the label in the qdisk specification,
> because it leads qdisk to use the first device it encounters containing
> that label.

Your qdisk label should be unique, and you should be controlling visibility of qdisks.

> Another case is when the device is not found at all; this
> happened to me before.

If that happens, you have a bigger problem than just using the qdisk label to identify the qdisk, don't you?

------------------------------

Message: 6
Date: Fri, 7 May 2010 09:57:51 -0400 (EDT)
From: Bob Peterson <[email protected]>
To: linux clustering <[email protected]>
Subject: Re: [Linux-cluster] gfs rg and journal size
Message-ID: <569169375.245311273240671353.javamail.r...@zmail06.collab.prod.int.phx2.redhat.com>
Content-Type: text/plain; charset=utf-8

----- "Andrew A. Neuschwander" <[email protected]> wrote:
| Is there a way to determine the rg size and the journal size and count
| of a mounted gfs filesystem?
|
| Thanks,
| -Andrew

In gfs2, there's an easy way:

	[r...@roth-01 ~]# gfs2_tool journals /mnt/gfs2
	journal2 - 128MB
	journal1 - 128MB
	journal0 - 128MB
	3 journal(s) found.

With gfs1, gfs_tool df tells you how many journals, but it doesn't tell you their size, and there's no "journals" option in gfs_tool. You could use gfs_tool jindex, but the output is cryptic: it prints the number of 64K segments, so 2048 corresponds to 128MB, assuming the default 4K block size.

Regards,

Bob Peterson
Red Hat File Systems

------------------------------

End of Linux-cluster Digest, Vol 73, Issue 7
********************************************
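Bob's jindex arithmetic can be checked directly. A quick sketch (the 2048-segment count is taken from his example, with the 64K segment size he states):

```shell
# gfs_tool jindex reports journal size as a count of 64K segments
# (with the default 4K block size). 2048 segments is therefore:
segments=2048
size_mb=$((segments * 64 / 1024))   # 64 KiB per segment, 1024 KiB per MB
echo "${size_mb} MB"                # prints "128 MB"
```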
