Hi Emmanuel,

I removed the ocf:heartbeat:ManageVE configuration, and now my DRBD and
Filesystem resources fail over to the other node when I disconnect the
network. At least, that was my first goal! Thanks for your help!

I would like to ask about another small piece of configuration. I want one
of the nodes to be preferred when both are online. For this I use this
command:
location ms_drbd_r8-master-prefer-cloud4 drbd_r8_ms rule role="Master" 50:
#uname eq cloud4

But the command "uname" in my Linux shell returns the string "Linux"; only
if I type "uname -n" do I get cloud4. And I am not allowed to configure
"#uname -n eq cloud4". Do you know what to do?
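[For reference: in a Pacemaker rule, #uname is a built-in node attribute
holding the cluster node's name (the value "uname -n" / "crm_node -n" print
on that node); the rule never runs the shell command, so no "-n" flag
belongs there. A sketch of the preference constraint, mirroring the
$role style used elsewhere in this configuration:]

```
# "#uname" is Pacemaker's built-in node-name attribute, not the shell
# command, so "#uname eq cloud4" is already the correct spelling.
location ms_drbd_r8-master-prefer-cloud4 drbd_r8_ms \
    rule $role="Master" 50: #uname eq cloud4
```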

Thanks for your help!
Felipe

This is my current configuration:

crm configure property no-quorum-policy=ignore
crm configure property stonith-enabled=false

primitive net_conn ocf:pacemaker:ping params pidfile="/var/run/ping.pid"
host_list="192.168.188.1" op start interval="0" timeout="60s" op stop
interval="0" timeout="20s" op monitor interval="10s" timeout="60s"

clone clone_net_conn net_conn meta clone-node-max="1" clone-max="2"

primitive cluster_ip ocf:heartbeat:IPaddr2 params ip="192.168.188.80"
cidr_netmask="32" op monitor interval="10s"

primitive cluster_mon ocf:pacemaker:ClusterMon params
pidfile="/var/run/crm_mon.pid" htmlfile="/var/tmp/crm_mon.html" op start
interval="0" timeout="20s" op stop interval="0" timeout="20s" op monitor
interval="10s" timeout="20s"


primitive drbd_r8 ocf:linbit:drbd params drbd_resource="r8" op monitor
interval="60s" role="Master" op monitor interval="59s" role="Slave"
ms drbd_r8_ms drbd_r8 meta master-max="1" master-node-max="1" clone-max="2"
clone-node-max="1" notify="true"

location ms_drbd_r8-no-conn drbd_r8_ms rule $id="ms_drbd_r8-no-conn-rule"
$role="Master" -inf: not_defined pingd or pingd number:lte 0

primitive drbd_r8_fs ocf:heartbeat:Filesystem params device="/dev/drbd8"
directory="/mnt/drbd8" fstype="ext3"
colocation fs_on_drbd inf: drbd_r8_fs drbd_r8_ms:Master
order fs_after_drbd inf: drbd_r8_ms:promote drbd_r8_fs:start

colocation coloc_mgmt inf: cluster_ip cluster_mon
colocation coloc_ms_ip inf: drbd_r8_ms:Master cluster_ip




On Mon, Dec 17, 2012 at 1:43 PM, Emmanuel Saint-Joanis
<[email protected]> wrote:

> 2012/12/17 Felipe Gutierrez <[email protected]>
>
>> The errors appear when I type this command: "primitive vz_svc lsb:vz op
>> monitor interval=10s"
>> Do you know why?
>> # crm configure
>> crm(live)configure# primitive vz_svc lsb:vz op monitor interval=10s
>> crm(live)configure# commit
>> crm(live)configure# bye
>> bye
>> root@cloud4:~# crm_mon -1 -V
>> crm_mon[26694]: 2012/12/17_14:16:38 ERROR: unpack_rsc_op: Hard error -
>> vz_svc_last_failure_0 failed with rc=6: Preventing vz_svc from re-starting
>> anywhere in the cluster
>> crm_mon[26694]: 2012/12/17_14:16:38 ERROR: unpack_rsc_op: Hard error -
>> vz_svc_last_failure_0 failed with rc=6: Preventing vz_svc from re-starting
>> anywhere in the cluster
>>
>
> I know less than nothing about this virtualization stuff, but isn't
> ocf:heartbeat:ManageVE supposed to deal with this?
> Anyway, you should first make sure that your LSB script is compliant
> with the standard return codes.
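> [Note: per the LSB init-script spec, rc=6 means "program is not
> configured", which Pacemaker treats as a hard error. A minimal sketch of
> checking an init script's exit codes against the spec; the stub below
> stands in for /etc/init.d/vz so the check can run anywhere, and on the
> real node you would point SCRIPT at the actual script instead:]

```shell
#!/bin/sh
# Sketch: verify an init script's "status" action follows the LSB spec.
# SCRIPT would be /etc/init.d/vz on the cluster node; here a stub
# behaving like a stopped, LSB-compliant service stands in.
SCRIPT=/tmp/stub_vz
cat > "$SCRIPT" <<'EOF'
#!/bin/sh
case "$1" in
  start)  exit 0 ;;  # LSB: 0 = success
  stop)   exit 0 ;;
  status) exit 3 ;;  # LSB: 3 = program is not running
  *)      exit 2 ;;  # LSB: 2 = invalid or excess argument(s)
esac
EOF
chmod +x "$SCRIPT"

# A stopped service must return 3 from "status"; returning 0 (or 6,
# "program is not configured") instead confuses Pacemaker.
"$SCRIPT" status && rc=0 || rc=$?
if [ "$rc" -eq 3 ]; then
  echo "status while stopped returns rc=3: LSB-compliant"
else
  echo "status while stopped returns rc=$rc: NOT LSB-compliant"
fi
```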
>



-- 
-- Felipe Oliveira Gutierrez
-- [email protected]
-- https://sites.google.com/site/lipe82/Home/diaadia
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
