Thank you for answering.

I tried a lot, but I didn't succeed.

I hope this mail contains fewer spelling errors.

Here are the different config files:

vmstore1:/etc # crm_mon -1f
============
Last updated: Mon Mar 15 21:22:57 2010
Stack: openais
Current DC: vmstore2 - partition with quorum
Version: 1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ vmstore1 vmstore2 ]

Migration summary:
* Node vmstore1:
* Node vmstore2:


----------------------------------------------------------------

crm(live)# configure show
ERROR: /root/.crm_help_index open: [Errno 2] No such file or directory: '/root/.crm_help_index'
ERROR: extensive help system is not available
node vmstore1 \
        attributes standby="false"
node vmstore2 \
        attributes standby="false"
property $id="cib-bootstrap-options" \
        dc-version="1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782" \
        stonith-action="poweroff" \
        expected-quorum-votes="2" \
        cluster-infrastructure="openais" \
        node-health-red="0" \
        stonith-enabled="false"
rsc_defaults $id="rsc_defaults-options" \
        is-managed="true"

------------------------------------------------------

In the first dump below you can see the resource in the CIB, but when I then run cibadmin -Q the resource is gone again.
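
As a cross-check, querying only the resources section (if I understand cibadmin correctly, -o restricts the query to that section) shows it empty as well:

vmstore1:/etc # cibadmin -Q -o resources
<resources/>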

-------------------------------------------------


vmstore1:/etc # cibadmin -C -o resources -x resourceIP.xml
Call cib_create failed (-47): Update does not conform to the configured schema/DTD
<cib validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="vmstore2" admin_epoch="0" epoch="101" num_updates="1">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782"/>
        <nvpair id="nvpair-638aff86-19c8-4d62-8f30-ef419416fa5a" name="stonith-action" value="poweroff"/>
        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>
        <nvpair id="nvpair-16ff93f5-e0e7-48a5-a7d8-a0778d791af2" name="node-health-red" value="0"/>
        <nvpair id="nvpair-c729123b-3fa4-4f78-88e1-2e2644e47310" name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="vmstore1" type="normal" uname="vmstore1">
        <instance_attributes id="nodes-vmstore1">
          <nvpair id="standby-vmstore1" name="standby" value="false"/>
        </instance_attributes>
      </node>
      <node id="vmstore2" type="normal" uname="vmstore2">
        <instance_attributes id="nodes-vmstore2">
          <nvpair id="standby-vmstore2" name="standby" value="false"/>
        </instance_attributes>
      </node>
    </nodes>
    <resources>
      <primitive id="resIP" class="ocf" type="IPaddr2" provider="heartbeat">
        <instances_attributes id="resIP_instance_attrs">
          <nvpair id="resIP_IP" name="ip" value="192.168.1.23"/>
          <nvpair id="resIP_NIC" name="nic" value="eth1"/>
        </instances_attributes>
      </primitive>
    </resources>
    <constraints/>
    <op_defaults>
      <meta_attributes id="op_defaults-options"/>
    </op_defaults>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="nvpair-68b16f0e-1e5c-4dd5-85a4-28a14ea385fc" name="is-managed" value="true"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state uname="vmstore1" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="vmstore1" crm-debug-origin="do_state_transition">
      <lrm id="vmstore1">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="vmstore1">
        <instance_attributes id="status-vmstore1">
          <nvpair id="status-vmstore1-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state uname="vmstore2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="vmstore2" crm-debug-origin="do_state_transition">
      <transient_attributes id="vmstore2">
        <instance_attributes id="status-vmstore2">
          <nvpair id="status-vmstore2-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="vmstore2">
        <lrm_resources/>
      </lrm>
    </node_state>
  </status>
</cib>

vmstore1:/etc # cibadmin -Q
<cib validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" dc-uuid="vmstore2" admin_epoch="0" epoch="100" num_updates="5">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782"/>
        <nvpair id="nvpair-638aff86-19c8-4d62-8f30-ef419416fa5a" name="stonith-action" value="poweroff"/>
        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>
        <nvpair id="nvpair-16ff93f5-e0e7-48a5-a7d8-a0778d791af2" name="node-health-red" value="0"/>
        <nvpair id="nvpair-c729123b-3fa4-4f78-88e1-2e2644e47310" name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="vmstore1" type="normal" uname="vmstore1">
        <instance_attributes id="nodes-vmstore1">
          <nvpair id="standby-vmstore1" name="standby" value="false"/>
        </instance_attributes>
      </node>
      <node id="vmstore2" type="normal" uname="vmstore2">
        <instance_attributes id="nodes-vmstore2">
          <nvpair id="standby-vmstore2" name="standby" value="false"/>
        </instance_attributes>
      </node>
    </nodes>
    <resources/>
    <constraints/>
    <op_defaults>
      <meta_attributes id="op_defaults-options"/>
    </op_defaults>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="nvpair-68b16f0e-1e5c-4dd5-85a4-28a14ea385fc" name="is-managed" value="true"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state uname="vmstore1" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="vmstore1" crm-debug-origin="do_state_transition">
      <lrm id="vmstore1">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="vmstore1">
        <instance_attributes id="status-vmstore1">
          <nvpair id="status-vmstore1-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state uname="vmstore2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="vmstore2" crm-debug-origin="do_state_transition">
      <transient_attributes id="vmstore2">
        <instance_attributes id="status-vmstore2">
          <nvpair id="status-vmstore2-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="vmstore2">
        <lrm_resources/>
      </lrm>
    </node_state>
  </status>
</cib>
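
(As in my first mail below: crm_resource -L also tells me that no resource is configured, which matches the empty <resources/> here.)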

------------------------------------------------------------------

I don't know what to do. Maybe I should switch to openSUSE 11.1, because that is the version used in the book? That would probably be a lot of work, but we are a monastery (Karmeliten, Linz, Austria) and we don't have enough money for DataCore, so we hoped we could manage this with Pacemaker, DRBD and open-iscsi. But I am a one-man team, and sometimes this is really hard.

So if possible, please have another look at my config files.

Thank you in advance.



Schwartzkopff wrote:
On Monday, 15 March 2010 14:11:58, Norbert Winkler wrote:
Hello, my name is Norbert,
and I want to build a Pacemaker cluster running openiscsitarget with DRBD
for VMware storage (I have 3 LUNs of 2 TB each).
I have synced the DRBD LUNs and they seem to work well.
My problem is with the Pacemaker CRM.

*I am using openSUSE 11.2 with the default kernel*
on both nodes, and I am logged in as root.
I also have the user hacluster and the group haclient, which I use to
log in to the GUI.
After I failed to build a Pacemaker cluster from the openSUSE 11.2
repositories, I tried the repository from clusterlabs:
I am using:
cluster-glue 1.0.3-1 (x86_64) from clusterlab-rep
corosync 1.2.0-1 (x86_64) from clusterlab-rep
heartbeat 3.0.2-2 (x86_64) from clusterlab-rep
libcorosync 1.2.1-1 (x86_64) from clusterlab-rep
libglue2 1.0.3-1 (x86_64) from clusterlab-rep
libopenais2 1.1.0-1 (x86_64) from clusterlab-rep
libpacemaker3 1.0.7-4 (x86_64) from clusterlab-rep
openais 1.1.0-1 (x86_64) from clusterlab-rep
pacemaker 1.0.7-4 (x86_64) from clusterlab-rep
resource-agents 1.0.1-1 (x86_64) from clusterlab-rep
pacemaker-pygui 1.4.15-1 (x86_64) from the openSUSE 11.2 repo (no GUI is
available from the clusterlab rep)

At first everything seems to work well.
After logging in to the console:
the cluster has quorum (green point),
vmstore1 (DC) online-standby (with green point),
vmstore2 online-standby (with green point),
resources with green point.

I can also change the status to active for both nodes.

My problems occur when I try to install a resource.
At first with the GUI:
*The GUI fails when I try to use Resources to add a primitive of class
ocf; only a kill -9 of corosync and then a new login into hb_gui is
possible after I try this.*

Please first try the crm subshell if the GUI does not work.

The configuration is standard. Resource defaults are empty, and I tried
with stonith both enabled and disabled.

Please set stonith-enabled="false".
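(For example, in the crm shell this should be: crm configure property stonith-enabled=false)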

*When I leave stonith enabled and try to install a resource (IPaddr2)
on the command line with cibadmin, it tells me that it cannot run this
resource because no stonith resource is configured. When I disable
stonith, it seems to insert the resource into the CRM, because it shows
me the complete config file with the resource added. But when I then try
crm_resource -L, the system tells me that no resource is added.
I am using the script from the book "Clusterbau: Hochverfügbarkeit mit
Pacemaker" (I think the English title would be "Building Clusters with
Pacemaker"); I am using the newest edition of this book by
Dr. Michael Schwartzkopff.*

Thanks ;-)

Any more descriptive error reports? Output of crm_mon -1f?

I don't know how to go on, and I am a little bit frustrated, because I
thought I was near the target, and at the moment I feel miles away.
What I think possible problems are:
1. Maybe I have problems with user rights, because
/usr/lib/ocf/resource.d/heartbeat belongs to the user and group root
with drwxr-xr-x.

Unlikely.

2. Maybe the path to the heartbeat resources is wrong; maybe corosync
wants another path to the resources.

Very unlikely.

3. Maybe I have to configure resource defaults first, but I don't know how.

Should work out of the box.

4. Maybe I must configure a stonith resource first, but I don't know how.

Set stonith-enabled="false".

5. Maybe crmadmin couldn't write the resource into the database because
I have a rights problem.

No.



