03.07.2013 13:00, Lars Marowsky-Bree wrote:
> On 2013-07-03T00:20:19, Vladislav Bogdanov <[email protected]> wrote:
> 
I do not edit them. In my setup I generate the full crm config with a
template-based framework.
> 
> And then you do a load/replace? Tough; yes, that'll clearly overwrite

Actually 'load update'.
'replace' doesn't work when resources are running.
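For reference, this is roughly what the update looks like in crmsh (the file path is just an example):

```
# Merge a generated config into the live CIB; unlike 'replace',
# this works while resources are running.
crm configure load update /etc/cluster/generated.crm
```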

> what is already there and added by scripts that more dynamically modify
> the CIB.
> 
> Since we don't know your complete merging rules, it's probably easier if
> your template engine gains hooks to first read the CIB for setting those
> utilization values.

Probably. But not the template framework itself (it is actually a
combination of make and m4, so it is too simple to look up the CIB). So
I'd need to move that up to the next model level (a human, or the
controlling framework I'm in the process of implementing) - but that is
exactly what I did not want to happen (it breaks the whole idea).

So I'd probably just hack crmsh not to touch node utilization attributes
if the whole 'utilization' section is missing from the update.
If/when pacemaker has support for transient utilization attributes, I
will move to that.
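For context, the utilization section in question looks roughly like this in crmsh syntax (node name and values are made up):

```
# Node utilization attributes as they appear in 'crm configure show';
# these are what a 'load update' of a generated config can clobber.
node node1 \
        utilization cpu=4 memory=8192
```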


> 
>> That is very convenient way to f.e stop dozen of resources in one shot
>> for some maintenance. I have special RA which creates ticket on a
>> cluster start and deletes it on a cluster stop. And many resources may
>> depend on that ticket. If you request resource handled by that RA to
>> stop, ticket is revoked and all dependent resources stop.
>>
>> I wouldn't write that RA if I have cluster-wide attributes (which
>> perform like node attributes but for a whole cluster).
> 
> Right. But. Tickets *are* cluster wide attributes that are meant to
> control the "target-role" of many resources depending on them. So you're
> getting exactly what you need, no? What is missing?

They are volatile.

And I'd prefer cluster attributes to have free-form values. I was
already bitten by the fact that the two-state 'granted/revoked' value is
too limited for me. I then extended the logic to also use the
'non-existent' ticket state (it worked for some time), but then support
for active/standby came in and I switched to that.

That was all in the lustre-server RA, which needs to control the order
in which parts of a whole Lustre fs are tuned/activated when it moves to
another cluster on ticket revocation. I use an additional
internally-controlled ticket there.
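A ticket-granting agent of the kind described earlier could be sketched roughly like this (this is not the actual RA; the ticket name, state file, and the need for --force are assumptions - check crm_ticket(8) for your Pacemaker version):

```
#!/bin/sh
# Sketch of an OCF-style agent that grants a cluster ticket on start
# and revokes it on stop, so that all resources constrained to the
# ticket stop together when this resource is stopped.
TICKET="site-ticket"                    # example name
STATE="/var/run/${TICKET}.granted"      # local state for monitor

case "$1" in
start)
    # Grant the ticket; --force bypasses the manual-grant safeguards.
    crm_ticket --ticket "$TICKET" --grant --force && touch "$STATE"
    ;;
stop)
    crm_ticket --ticket "$TICKET" --revoke --force
    rm -f "$STATE"
    ;;
monitor)
    # Report running while we hold the grant.
    [ -f "$STATE" ] && exit 0 || exit 7   # OCF_SUCCESS / OCF_NOT_RUNNING
    ;;
esac
exit 0
```

Resources that should stop with it would then carry an rsc_ticket constraint on "site-ticket".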

Best,
Vladislav

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems