Thanks for the direction.

After running

/usr/lib64/heartbeat/hb2openais.sh -T /ha-cluster/ -U

I ran the following command and got the following error:
crm_verify -V -x cib-out.xml
cib-out.xml:2: element configuration: Relax-NG validity error : Element
cib failed to validate content
crm_verify[3679]: 2012/08/29_09:54:47 ERROR: main: CIB did not pass
DTD/schema validation
Errors found during check: config not valid
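For what it's worth, the Relax-NG check that crm_verify performs can be reproduced standalone with xmllint, which makes it easier to see which element or attribute trips the schema. The schema and file names below are made up purely for the demo (this is NOT the real pacemaker schema); the point is only that an attribute the schema does not declare makes the whole element fail, which is the same class of error reported for cib-out.xml:

```shell
# Toy schema: allows <cib> with only an "epoch" attribute (hypothetical):
cat > /tmp/demo.rng <<'EOF'
<grammar xmlns="http://relaxng.org/ns/structure/1.0">
  <start>
    <element name="cib">
      <attribute name="epoch"/>
    </element>
  </start>
</grammar>
EOF

# A document carrying an attribute the schema does not declare fails
# validation; one that matches the schema passes:
echo '<cib epoch="119" ccm_transition="2"/>' > /tmp/demo-bad.xml
echo '<cib epoch="119"/>'                   > /tmp/demo-ok.xml

xmllint --noout --relaxng /tmp/demo.rng /tmp/demo-bad.xml || echo "demo-bad: INVALID"
xmllint --noout --relaxng /tmp/demo.rng /tmp/demo-ok.xml  && echo "demo-ok: VALID"
```

Pointing xmllint at cib-out.xml with the pacemaker .rng actually installed on the node (the path varies by version and distribution) might pinpoint the offending element more precisely than the crm_verify summary does.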

Here is the output of cib-out.xml:

node1:/ha-cluster # cat cib-out.xml 
<cib admin_epoch="0" ccm_transition="2" cib-last-written="Tue Aug 28
17:39:04 2012" cib_feature_revision="2.0" crm_feature_set="2.0"
dc_uuid="3b5769af-3e49-4679-8ee1-f64697128600" epoch="119"
generated="true" have_quorum="true" ignore_dtd="false" num_peers="2"
num_updates="1">
        <configuration>
                <crm_config>
                        <cluster_property_set id="cib-bootstrap-options">
                                <attributes>
                                        <nvpair 
id="cib-bootstrap-options-dc-version" name="dc-version"
value="2.1.4-node: cb5def86a240911e45acb62caa713f044f0e3cf2"/>
                                        <nvpair 
id="cib-bootstrap-options-last-lrm-refresh"
name="last-lrm-refresh" value="1346139825"/>
                                        <nvpair 
id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="true"/>
                                        <nvpair 
id="cib-bootstrap-options-no-quorum-policy"
name="no-quorum-policy" value="ignore"/>
                                        <nvpair 
id="cib-bootstrap-options-expected-nodes"
name="expected-nodes" value="2"/>
                                </attributes>
                        </cluster_property_set>
                </crm_config>
                <nodes>
                        <node id="node2" type="normal" uname="node2"/>
                        <node id="node1" type="normal" uname="node1"/>
                </nodes>
                <resources>
                        <primitive class="ocf" id="IP_Addr" provider="heartbeat"
type="IPaddr2">
                                <meta_attributes id="IP_Addr-meta-options">
                                        <attributes>
                                                <nvpair 
id="IP_Addr_metaattr_target_role" name="target_role"
value="stopped"/>
                                        </attributes>
                                </meta_attributes>
                                <instance_attributes 
id="IP_Addr_instance_attrs">
                                        <attributes>
                                                <nvpair 
id="c25028fd-23ac-48a5-a826-729cd3dca74c" name="ip"
value="192.168.37.155"/>
                                                <nvpair 
id="a75d0de3-bdab-49f8-be0f-dc1ef2cdc178" name="nic"
value="eth0"/>
                                                <nvpair 
id="155cd5e5-2428-4823-94d5-73a8cfa35fec"
name="cidr_netmask" value="24"/>
                                        </attributes>
                                </instance_attributes>
                                <operations>
                                        <op id="op_start" name="start" 
timeout="90"/>
                                        <op id="op_stop" name="stop" 
timeout="100"/>
                                        <op id="op_mon" interval="10s" 
name="monitor" start_delay="5s"
timeout="20s"/>
                                </operations>
                        </primitive>
                        <primitive class="ocf" id="FS" provider="heartbeat"
type="Filesystem">
                                <meta_attributes id="FS-meta-options">
                                        <attributes>
                                                <nvpair 
id="FS_metaattr_target_role" name="target_role"
value="stopped"/>
                                        </attributes>
                                </meta_attributes>
                                <instance_attributes id="FS_instance_attrs">
                                        <attributes>
                                                <nvpair 
id="42c8f7eb-ae25-433d-9d78-eda725df4500" name="device"
value="/dev/disk/by-id/scsi-14945540000000000302172db039b6eab207662fc73c0911c-part1"/>
                                                <nvpair 
id="c80bdbb4-ffa1-4ce7-9330-a58a2e7e96a0" name="directory"
value="/oracle"/>
                                                <nvpair 
id="bf81af08-c535-43ef-8f7e-2b101173764f" name="fstype"
value="ext3"/>
                                        </attributes>
                                </instance_attributes>
                                <operations>
                                        <op disabled="false" id="op_fs_start" 
interval="0" name="start"
role="Started" start_delay="0" timeout="60"/>
                                        <op disabled="false" id="op_fs_stop" 
interval="0" name="stop"
role="Started" start_delay="0" timeout="60"/>
                                        <op disabled="false" id="op_fs_mon" 
interval="20" name="monitor"
role="Started" start_delay="0" timeout="40"/>
                                </operations>
                        </primitive>
                        <clone id="pingd">
                                <meta_attributes id="pingd-meta-options">
                                        <attributes>
                                                <nvpair 
id="pingd_metaattr_clone_max" name="clone_max" value="2"/>
                                                <nvpair 
id="pingd_metaattr_clone_node_max" name="clone_node_max"
value="1"/>
                                                <nvpair 
id="pingd_metaattr_target_role" name="target_role"
value="stopped"/>
                                        </attributes>
                                </meta_attributes>
                                <primitive class="ocf" id="pingd_clone" 
provider="heartbeat"
type="pingd">
                                        <operations>
                                                <op id="op_pingdclone_monitor" 
interval="20" name="monitor"
start_delay="1m" timeout="40"/>
                                                <op id="op_pingdclone_start" 
name="start" timeout="90"/>
                                        </operations>
                                </primitive>
                                <instance_attributes id="pingd_instance_attrs">
                                        <attributes>
                                                <nvpair 
id="7432913c-f4ed-43d6-a4b4-50df702a026e"
name="pingd-dampen" value="5"/>
                                                <nvpair 
id="b1337206-2a5e-45d4-bdf8-013bcdd3d42e"
name="pingd-multiplier" value="100"/>
                                        </attributes>
                                </instance_attributes>
                        </clone>
                        <clone id="stonith_cloneset">
                                <meta_attributes 
id="stonith_cloneset-meta-options">
                                        <attributes>
                                                <nvpair 
id="stonith_cloneset_metaattr_clone_max" name="clone_max"
value="2"/>
                                                <nvpair 
id="stonith_cloneset_metaattr_clone_node_max"
name="clone_node_max" value="1"/>
                                                <nvpair 
id="stonith_cloneset_metaattr_target_role"
name="target_role" value="stopped"/>
                                                <nvpair 
id="stonith_cloneset_metaattr_globally_unique"
name="globally_unique" value="false"/>
                                        </attributes>
                                </meta_attributes>
                                <primitive class="stonith" id="stonith_clone" 
provider="heartbeat"
type="external/ssh">
                                        <instance_attributes 
id="stonith_clone_instance_attrs">
                                                <attributes>
                                                        <nvpair 
id="d1182804-5932-4c47-9efb-af91e42ff53d"
name="hostlist"/>
                                                </attributes>
                                        </instance_attributes>
                                        <instance_attributes 
id="stonith_clone:0_instance_attrs">
                                                <attributes>
                                                        <nvpair 
id="8db05d04-df53-4e64-a4b2-652af204692e" name="hostlist"
value="node1,node2"/>
                                                </attributes>
                                        </instance_attributes>
                                        <operations>
                                                <op id="stonith_mon" 
interval="5" name="monitor" prereq="nothing"
start_delay="35" timeout="35"/>
                                                <op id="stonith_start" 
name="start" prereq="nothing"
timeout="20"/>
                                        </operations>
                                </primitive>
                        </clone>
                </resources>
                <constraints/>
        </configuration>
</cib>


--
Regards,

Muhammad Sharfuddin

On Wed, 2012-08-08 at 14:32 +0200, Dejan Muhamedagic wrote:

> On Wed, Aug 08, 2012 at 11:44:11AM +0200, Ulrich Windl wrote:
> > >>> Muhammad Sharfuddin <[email protected]> schrieb am 08.08.2012 um 
> > >>> 05:45 in
> > Nachricht <[email protected]>:
> > > Actually, I want to know whether a
> > >    Heartbeat V2-based cluster can be upgraded to a Pacemaker+Corosync-based
> > > cluster,
> > 
> > See /usr/share/doc/packages/pacemaker/README.hb2openais ;-)
> 
> hb2openais.sh converts heartbeat configuration to openais.
> More needs to be done to finish the conversion to corosync, but
> the latter steps shouldn't be difficult.
> 
> Thanks,
> 
> Dejan
> 
> > 
> > > 
> > > or do we have to configure everything from scratch?
> > > 
> > > --
> > > Regards,
> > > 
> > > Muhammad Sharfuddin
> > > Technical Manager
> > > Cell: +92-3332144823 | UAN: +92(21) 111-111-142 ext: 113 | NDS.COM.PK 
> > > 
> > > On Sat, 2012-08-04 at 00:59 +0500, Muhammad Sharfuddin wrote:
> > > 
> > > > Hello List
> > > > 
> > > > I have a customer running a SAP cluster atop SLES 10 SP3 via the SUSE HAE.
> > > > We have to upgrade the OS to SLES 11 SP2.
> > > > 
> > > > Is the upgrade possible? What are the possible issues?
> > > > 
> > > > Please help/suggest/recommend
> > > > 
> > > > --
> > > > Regards,
> > > > 
> > > > Muhammad Sharfuddin
> > > > _______________________________________________
> > > > Linux-HA mailing list
> > > > [email protected] 
> > > > http://lists.linux-ha.org/mailman/listinfo/linux-ha 
> > > > See also: http://linux-ha.org/ReportingProblems 