In your cluster.conf you have:

fstype="ext3"

It should be fstype="gfs" or fstype="gfs2": ext3 is not a cluster-aware file system, so it cannot safely be mounted on more than one node at the same time.
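A minimal sketch of what the corrected resource could look like, reusing the device, mountpoint, and fsid from the quoted config below. Note that rgmanager normally handles GFS/GFS2 volumes with the <clusterfs> resource agent rather than <fs>; treat the exact attribute set here as an assumption to check against your installed resource agents:

```xml
<!-- Sketch only: swaps the ext3 <fs> resource for a GFS2 <clusterfs>
     resource. force_fsck and self_fence are dropped because they do
     not apply to a shared cluster file system. -->
<clusterfs device="/dev/sda2" force_unmount="0" fsid="6443"
           fstype="gfs2" mountpoint="/var/www/html"
           name="httpd-content" options=""/>
```

The service block would then reference it as <clusterfs ref="httpd-content"/> instead of <fs ref="httpd-content"/>.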

BR

On Mon, Jun 28, 2010 at 8:42 PM, Rajkumar, Anoop
<[email protected]> wrote:

>  Hi
>
> I am not running into the problem now of the cluster getting stalled,
> after I created a gfs file system instead of gfs2. Here is my cluster.conf file.
>
> [r...@system1 cluster]# more cluster.conf
> <?xml version="1.0"?>
> <cluster alias="cluster1" config_version="33" name="cluster1">
>         <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="100"/>
>         <clusternodes>
>                 <clusternode name="system1.merck.com" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="system1r"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="system2.merck.com" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="system2r"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="1"/>
>         <fencedevices>
>                 <fencedevice agent="fence_ilo" hostname="system1r.merck.com" login="admin" name="system1r" passwd="Anwyccdfy57"/>
>                 <fencedevice agent="fence_ilo" hostname="system2r.merck.com" login="admin" name="system1r" passwd="Anwyccdfy57"/>
>         </fencedevices>
>         <rm>
>                 <failoverdomains>
>                         <failoverdomain name="webdomain" nofailback="0" ordered="1" restricted="1">
>                                 <failoverdomainnode name="system1.merck.com" priority="1"/>
>                                 <failoverdomainnode name="system2.merck.com" priority="2"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources>
>                         <ip address="54.3.xyz.abc" monitor_link="1"/>
>                         <script file="/etc/init.d/orig.httpd" name="http startup script"/>
>                         <fs device="/dev/sda2" force_fsck="0" force_unmount="0" fsid="6443" fstype="ext3" mountpoint="/var/www/html" name="httpd-content" options="" self_fence="0"/>
>                         <fs device="/dev/sda1" force_fsck="0" force_unmount="0" fsid="30579" fstype="ext3" mountpoint="/var/lib/mysql" name="mysql-content" options="" self_fence="0"/>
>                         <script file="/etc/init.d/mysqld" name="mysql startup script"/>
>                         <ip address="192.168.0.3" monitor_link="1"/>
>                 </resources>
>                 <service autostart="1" domain="webdomain" name="http-service" recovery="restart">
>                         <script ref="http startup script"/>
>                         <fs ref="httpd-content"/>
>                         <ip ref="54.3.xyz.abc"/>
>                 </service>
>                 <service autostart="1" domain="webdomain" exclusive="0" name="mysql" recovery="disable">
>                         <fs ref="mysql-content"/>
>                         <script ref="mysql startup script"/>
>                         <ip ref="192.168.0.3"/>
>                 </service>
>         </rm>
> </cluster>
>
> Thanks
> Anoop
>
>
>
> --
> Linux-cluster mailing list
> [email protected]
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
