[ClusterLabs] pcs 0.10.18 released

2024-01-17 Thread Michal Pospisil (he / him)
I am happy to announce the latest release of pcs, version 0.10.18.

Source code is available at:
https://github.com/ClusterLabs/pcs/archive/refs/tags/v0.10.18.tar.gz
or
https://github.com/ClusterLabs/pcs/archive/refs/tags/v0.10.18.zip

This is the last planned release of pcs 0.10. It primarily contains bug fixes;
the notable new features are the ability to move some clone and bundle resources
and to set all of the qdevice options without --force.

Complete change log for this release:
### Security
- Make use of tarball extraction filters, when provided by Python, to enhance
  security (`pcs config restore` command) ([rhbz#2219388])

### Added
- It is now possible to move bundle resources (requires pacemaker 2.1.6 or
  newer) and clone resources ([RHEL-7584])
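
  For example, with a hypothetical bundle resource `my-bundle` and node `node2`
  (names are illustrative only, not taken from the changelog):

      pcs resource move my-bundle node2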

### Fixed
- Do not display duplicate records in commands `pcs property [config] --all`
  and `pcs property describe` ([rhbz#2217850])
- Commands `pcs property defaults` and `pcs property describe` now print an
  error message when the specified properties have no metadata ([rhbz#2226778])
- Clarify messages informing users that the cluster must be stopped in order to
  change certain corosync options ([rhbz#2227234])
- Prevent disabling `auto_tie_breaker` when doing so would prevent SBD from
  working properly ([rhbz#2227233])
- Improved error messages and documentation of `pcs resource move` command
  ([rhbz#2233763])
- Do not display warning in `pcs status` for expired constraints that were
  created by moving resources ([rhbz#2111583])
- Fixed validation of the interval and timeout option values of operations
  specified in the `pcs resource create` command ([rhbz#2233766])
- Improved error message of `pcs booth ticket grant|revoke` commands in case a
  booth site address parameter is needed ([RHEL-8467])
- When moving or banning a resource in a bundle, pcs now errors out instead of
  creating a move / ban constraint which does nothing ([RHEL-7584])

### Changed
- Allow `tls` and `keep_active_partition_tie_breaker` options for qdevice model
  "net" to be set using `pcs quorum device add` and `pcs quorum device update`
  commands ([rhbz#2234665])
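
  For example (hypothetical qnetd host and option values, not from the
  changelog):

      pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit tls=required
      pcs quorum device update model keep_active_partition_tie_breaker=on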



Thanks / congratulations to everyone who contributed to this release, including
lixin, Michal Pospisil, Miroslav Lisik, Peter Romancik, Tomas Jelinek,
wangluwei.

Cheers,
Michal


[RHEL-7584]: https://issues.redhat.com/browse/RHEL-7584
[RHEL-8467]: https://issues.redhat.com/browse/RHEL-8467
[rhbz#2111583]: https://bugzilla.redhat.com/show_bug.cgi?id=2111583
[rhbz#2217850]: https://bugzilla.redhat.com/show_bug.cgi?id=2217850
[rhbz#2219388]: https://bugzilla.redhat.com/show_bug.cgi?id=2219388
[rhbz#2226778]: https://bugzilla.redhat.com/show_bug.cgi?id=2226778
[rhbz#2227233]: https://bugzilla.redhat.com/show_bug.cgi?id=2227233
[rhbz#2227234]: https://bugzilla.redhat.com/show_bug.cgi?id=2227234
[rhbz#2233763]: https://bugzilla.redhat.com/show_bug.cgi?id=2233763
[rhbz#2233766]: https://bugzilla.redhat.com/show_bug.cgi?id=2233766
[rhbz#2234665]: https://bugzilla.redhat.com/show_bug.cgi?id=2234665

Re: [ClusterLabs] Beginner lost with promotable "group" design

2024-01-17 Thread Ken Gaillot
On Wed, 2024-01-17 at 14:23 +0100, Adam Cécile wrote:
> Hello,
> 
> 
> I'm trying to achieve the following setup with 3 hosts:
> 
> * One master gets a shared IP, then removes the default gw, adds another gw,
> and starts a service
> 
> * The two slaves should have none of these, but add a different default gw
> 
> I managed quite easily to get the master workflow running with ordering
> constraints, but I don't understand how I should move forward with the
> slave configuration.
> 
> I think I must create a promotable resource first, then assign my other
> resources a started/stopped setting depending on the promotion status
> of the node. Is that correct? How do I create a promotable "placeholder"
> to which I can later attach my existing resources?

A promotable resource would be appropriate if the service should run on
all nodes, but one node runs with a special setting. That doesn't sound
like what you have.

If you just need the service to run on one node, the shared IP,
service, and both gateways can be regular resources. You just need
colocation constraints between them:

- colocate service and external default route with shared IP
- clone the internal default route and anti-colocate it with shared IP

If you want the service to be able to run even if the IP can't, make
its colocation score finite (or colocate the IP and external route with
the service).

Ordering is separate. You can order the shared IP, service, and
external route however needed. Alternatively, you can put the three of
them in a group (which does both colocation and ordering, in sequence),
and anti-colocate the cloned internal route with the group.
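
As a rough sketch of that group-based approach, with made-up resource names,
agents, and addresses (adjust to your environment):

  pcs resource create shared-ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
  pcs resource create ext-gw ocf:heartbeat:Route destination=default gateway=192.0.2.1
  pcs resource create my-service systemd:my-service
  # a group colocates and orders its members in sequence
  pcs resource group add master-group shared-ip ext-gw my-service
  # the internal default route runs everywhere except where the group is
  pcs resource create int-gw ocf:heartbeat:Route destination=default gateway=198.51.100.1 clone
  pcs constraint colocation add int-gw-clone with master-group -INFINITY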

> 
> Sorry for the stupid question, but I really don't understand what type of
> elements I should create...
> 
> 
> Thanks in advance,
> 
> Regards, Adam.
> 
> 
> PS: Bonus question: should I use "pcs" or "crm"? Both commands seem to be
> equivalent, and the documentation sometimes uses one or the other.

They are equivalent -- it's a matter of personal preference (and often
what choices your distro gives you).
-- 
Ken Gaillot 


[ClusterLabs] replica vol when a peer is lost/offed & qcow2 ?

2024-01-17 Thread lejeczek via Users

Hi guys.

I wonder if you might have any tips/tweaks for the volume/cluster to make it
more resilient (or accommodating) to qcow2 files when a peer is lost or missing?
I have a 3-peer cluster/volume (2 + 1 arbiter), and my experience is that when
all is good then, well, all is good, but when one peer is lost - even if it's
the arbiter - the VMs begin to suffer from, and report, filesystem errors.
Perhaps it's not the volume's properties/config but the whole cluster? Or
perhaps a 3-peer volume is not good for things such as qcow2s?


many thanks, L.


[ClusterLabs] Beginner lost with promotable "group" design

2024-01-17 Thread Adam Cécile

Hello,


I'm trying to achieve the following setup with 3 hosts:

* One master gets a shared IP, then removes the default gateway, adds another
gateway, and starts a service

* The two slaves should have none of these, but add a different default gateway

I managed quite easily to get the master workflow running with ordering
constraints, but I don't understand how I should move forward with the
slave configuration.


I think I must create a promotable resource first, then assign my other
resources a started/stopped setting depending on the promotion status
of the node. Is that correct? How do I create a promotable "placeholder"
to which I can later attach my existing resources?


Sorry for the stupid question, but I really don't understand what type of
elements I should create...



Thanks in advance,

Regards, Adam.


PS: Bonus question: should I use "pcs" or "crm"? Both commands seem to be
equivalent, and the documentation sometimes uses one or the other.

