Hi,

 

Thank you very much for your thoughts.

 

I guess I should clarify our approach. I do not edit the configuration by hand; my code does. It is 
a Python-based agent connected to a specific management system. When the agent receives a 
configuration, it has to synchronize the received configuration with the current resource 
configuration; in effect it maps the management system's XML notation onto the CIB XML notation. 
Theoretically it would be possible to calculate the changes step by step and build a sequence of 
shell commands, but that takes more time to code and then more time to apply. AFAIK, what the crm 
shell does is just edit the CIB, so there seems to be no difference. And my approach seems more 
straightforward to code, as long as the CIB schema does not change.
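
For what it is worth, the pattern I mean is roughly the following. This is only a minimal 
sketch, assuming the standard cibadmin tool that ships with Pacemaker; transform_to_cib() is a 
placeholder for our own mapping code, not a real function.

    import subprocess

    def read_cib() -> str:
        # Fetch the live CIB as XML (cibadmin ships with Pacemaker).
        return subprocess.run(["cibadmin", "--query"],
                              capture_output=True, text=True, check=True).stdout

    def push_configuration(new_config_xml: str) -> None:
        # Replace the <configuration> section of the CIB with the transformed XML;
        # --xml-pipe makes cibadmin read the new content from stdin.
        subprocess.run(["cibadmin", "--replace", "--scope", "configuration",
                        "--xml-pipe"],
                       input=new_config_xml, text=True, check=True)

    # transform_to_cib() stands for the mapping from the management system's
    # XML notation to CIB notation (not shown here):
    # push_configuration(transform_to_cib(received_config))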

 

Chained resources can cause problems simply because Pacemaker only uses the single, latest CIB 
version: if I delete several resource chains from the CIB and push it, Pacemaker will just stop 
the resources in an arbitrary order, without respecting the former constraints that are no longer 
in the latest CIB. And this is behavior by design. To work around this, I split the deletion into 
steps: 1) keep the resources and constraints in the CIB, but set the target role to "Stopped" and 
push; 2) poll the state until every resource is stopped; 3) remove all of them from the CIB and 
push again. And this works fine!
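
For reference, steps 1) and 2) look roughly like this when driven through the Pacemaker 
command-line tools instead of pushing full CIBs. It is only a sketch: the resource IDs are 
placeholders, and instead of polling the status per resource it uses crm_resource --wait, which 
simply blocks until the cluster has no pending actions left.

    import subprocess

    resources = ["A", "B", "C"]   # placeholder resource IDs

    # Step 1: keep resources and constraints in the CIB, only set the target role.
    for rsc in resources:
        subprocess.run(["crm_resource", "--resource", rsc, "--meta",
                        "--set-parameter", "target-role",
                        "--parameter-value", "Stopped"], check=True)

    # Step 2: wait until the cluster settles (all scheduled stops have completed).
    subprocess.run(["crm_resource", "--wait"], check=True)

    # Step 3: now the resources and their constraints can be removed from the CIB
    # and the final configuration pushed.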

 

My question was prompted by the idea that I may not need to wait for all the stops, i.e. that 
step 2 could be dropped. If step 1 queues the stop actions with the proper sequencing, the second 
push should not affect the tasks already in the queue, and the shutdown sequence should be 
preserved. IMHO, this follows from the Pacemaker architecture as it is described in the docs.

 

I am looking for someone who can confirm or refute my guess.

 

Sincerely,

 

Alex

 

From: Users <[email protected]> On Behalf Of Windl, Ulrich
Sent: Friday, December 19, 2025 10:17 AM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<[email protected]>
Subject: Re: [ClusterLabs] [EXT] Action scheduling on cib change

 

Hi!

 

IMHO there are better ways to do configuration changes than editing the CIB; I 
prefer the crm shell.

It’s also good for documentation.

Regarding the other question, I would never start a new configuration change 
when the CRM is not in “idle” state.

Mostly because debugging is hard when something goes wrong under such 
circumstances.
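
For example, the controller state can be queried with crmadmin, roughly like this ("node1" is a 
placeholder and has to be the current DC for the state to be meaningful):

    import subprocess

    # The DC's controller reports S_IDLE when no transition is in progress.
    out = subprocess.run(["crmadmin", "--status", "node1"],
                         capture_output=True, text=True).stdout
    print("idle" if "S_IDLE" in out else "busy")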

 

Kind regards,

Ulrich Windl

 

From: Users <[email protected] 
<mailto:[email protected]> > On Behalf Of [email protected] 
<mailto:[email protected]> 
Sent: Friday, December 12, 2025 9:56 PM
To: [email protected] <mailto:[email protected]> 
Subject: [EXT] [ClusterLabs] Action scheduling on cib change

 

Hi All,

 

We use Pacemaker as the cluster engine under the hood and manage it by getting the CIB, changing 
it, and then putting it back. I can make a lot of changes at once.

 

The question is about the internal action scheduler. As far as I understand, the logic of 
applying a new config is roughly the following (a way to preview the planned actions offline is 
sketched right after this list):

1.      Validate and analyze the new config, and build the target resource layout with respect 
to the config, location constraints, and current state
2.      Build the action list for the resource state changes with respect to order constraints
3.      Schedule the actions into a queue
4.      Execute the actions from the queue
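
For what it is worth, the planned transition for a candidate config can apparently be previewed 
offline with crm_simulate, roughly like this ("candidate_cib.xml" is a placeholder path for the 
config I intend to push):

    import subprocess

    # Ask the scheduler what actions it would plan for the candidate CIB,
    # without touching the live cluster.
    out = subprocess.run(["crm_simulate", "--simulate",
                          "--xml-file", "candidate_cib.xml"],
                         capture_output=True, text=True, check=True).stdout
    print(out)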

 

The question is: if I push two config changes very quickly, one after the other, and the second 
arrives before every scheduled action has finished, will every scheduled action from the first 
config still complete? In other words, will all the actions coming from the second config be 
queued after everything scheduled on the first push?

 

The background of the question: I use "constraint chains", meaning an ordering like start 
A => B => C, stop C => B => A (a concrete example is sketched below), and it is essential to 
keep that order. However, if A, B and C are running and I push a config in which all of them are 
deleted, Pacemaker has no constraints in the new config and schedules the A, B, C stops as 
independent actions, which leads to resource failures. I already asked in this group and was told 
that this is behavior by design: Pacemaker only uses the new config for change planning, so A, B 
and C become unconstrained orphans. My workaround was to send an intermediate config, keeping the 
resources to be deleted in the config but setting their target role to "Stopped". For now, I send 
the intermediate config and poll the status until every resource to be deleted is stopped. 
However, if the scheduler puts new actions after the ones already planned, I would not need to 
wait and poll the status; I could just send a CIB with the target role "Stopped" and then 
immediately send a CIB with the resources deleted. As the stop sequence is already in the 
scheduler queue, it should work correctly and be faster than my first approach.
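
To make the chain concrete, the constraints and the intermediate "stopped" step look roughly 
like this in CIB notation (the IDs and the Dummy agent are placeholders; rsc_order is symmetrical 
by default, so the stop order C => B => A is implied):

    <constraints>
      <rsc_order id="order-A-then-B" first="A" then="B" kind="Mandatory"/>
      <rsc_order id="order-B-then-C" first="B" then="C" kind="Mandatory"/>
    </constraints>

    <!-- intermediate config: resource kept in the CIB, but asked to stop -->
    <primitive id="A" class="ocf" provider="heartbeat" type="Dummy">
      <meta_attributes id="A-meta_attributes">
        <nvpair id="A-meta_attributes-target-role" name="target-role" value="Stopped"/>
      </meta_attributes>
    </primitive>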

 

Thank you in advance for every advice and suggestion!

 

Sincerely,

 

Alex

 

 

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
