Hi!

It seems there is a change in semantics (or a bug?) from SLES11 to 
SLES12/SLES15 regarding crm_resource -C -r ...:
I had a cloned resource that failed to start due to a missing configuration file:

Failed Resource Actions:
  * prm_test_raid_monitor_0 on h19 'not installed' (5): call=17, 
status='complete', exitreason='Configuration file [/etc/mdadm/mdadm.conf] does 
not exist, or can not be opened!', last-rc-change='2020-11-26 15:40:30 +01:00', 
queued=0ms, exec=26ms
  * prm_test_raid_monitor_0 on h18 'not installed' (5): call=17, 
status='complete', exitreason='Configuration file [/etc/mdadm/mdadm.conf] does 
not exist, or can not be opened!', last-rc-change='2020-11-26 15:40:30 +01:00', 
queued=0ms, exec=29ms

I cleaned up the resource successfully on SLES15 SP2, but the error condition 
was not reset, and no new start attempt was made:
# crm_resource -r prm_test_raid -C -n h18
Cleaned up prm_test_raid:0 on h16
Cleaned up prm_test_raid:1 on h19
Cleaned up prm_test_raid:1 on h18
Cleaned up prm_test_raid:2 on h16
Cleaned up prm_test_raid:2 on h19
Cleaned up prm_test_raid:2 on h18

Is this intended, or a bug? The manual description for "-C" says: "If resource 
has any past failures, clear its history and fail count."
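For comparison, these are the cleanup variants I would expect to behave 
differently (a sketch assuming the long option names from crm_resource's help 
output; the exact spelling may differ between Pacemaker versions):

```shell
# Clear only the resource's failed actions and fail counts on one node
# (what -C / --cleanup is documented to do):
crm_resource --cleanup --resource prm_test_raid --node h18

# Make the cluster forget the resource's whole operation history on that
# node and reprobe its current state (-R / --refresh):
crm_resource --refresh --resource prm_test_raid --node h18
```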

Regards,
Ulrich


_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
