In our two-node cluster with Pacemaker 1.1.2 and Corosync 1.2.1 we have defined a 
resource 'auvm1' with 'on-fail=restart' for its start and monitor operations, and 
we have also set 'on-fail=restart' in op_defaults (see the CIB extract below).
The resource 'auvm1' was running on node 'goat1' when we provoked an error that 
prevented the resource from running on that node any longer. Because of these 
on-fail settings we expected 'auvm1' to be restarted on the second node, 
'sheep1'. However, we saw that 'auvm1' was stopped on node 'goat1' and not 
restarted anywhere.
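
For illustration, this is roughly how the resource state and the failcount on 
the failed node can be inspected (a sketch only, output omitted; 'auvm1' and 
'goat1' are the names from our setup):

# one-shot overview of the cluster state, run on either node
crm_mon -1

# failcount recorded for auvm1 on goat1
crm resource failcount auvm1 show goat1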
It seems to us that Pacemaker ignores the operation-specific on-fail settings 
for the resource 'auvm1' and considers only the op_defaults (for the stop op: 
restart; for the start and monitor ops: stop?).

1) How can we ensure that the operation-specific settings of the resource take 
precedence over the default setting? We would like the resource to be restarted 
on the second node without the first node being fenced.

2) Is it possible to define different op_defaults for the stop, start and 
monitor operations?

Regards,
Armin Haussecker


Extract from our CIB:
crm configure show
node goat1
node sheep1
primitive auvm1 ocf:heartbeat:Xen \
        op start on-fail="restart" start-delay="0s" \
        op monitor on-fail="restart" interval="60s" \
        op stop on-fail="block" \
        meta is-managed="true" resource-stickiness="1000" \
        migration-threshold="2" priority="300" \
        target-role="Stopped" \
        params xmfile="/home/clusteradm/cluster/xen/auvm1"
..............
..............
op_defaults $id="op-options" \
        on-fail="restart"





