On Mon, 19 Aug 2024 13:39:29 +0300
Murat Inal wrote:
> Hi Jehan,
>
> When you install corosync & pacemaker packages in Ubuntu, they are
> DISABLED by default. pcsd is ENABLED by default.
>
> So when you do a sudo pcs cluster start, corosync & pacemaker starts.
>
> BTW, I already tried to solv
On Mon, 19 Aug 2024 12:58:09 +0300
Murat Inal wrote:
> [Resending the below due to message format problem]
>
>
> Dear List,
>
> I have been running two different 3-node clusters for some time. I am
> having a fatal problem with corosync: After a node failure, rebooted
> node does NOT start c
On Thu, 11 Jul 2024 22:20:03 +0200
Ede Wolf wrote:
> […]
> >> the other the language. Bash is realistic to read, perl probably not.
> >
> > That's really a matter of taste, and you might be surprised:
>
> Well, if we manage to get it up and running by help of documentation,
> and some trial an
On Thu, 11 Jul 2024 14:12:27 +0200
Ede Wolf wrote:
> Just let me take this mail as a placeholder to thank you all very much
> for your rather fast replies!
You are very welcome.
> Just wondering, besides implementation and documentation, is there an
> overview of features / perceived advantag
On Thu, 11 Jul 2024 11:52:45 +0200
Oyvind Albrigtsen wrote:
> On 11/07/24 11:33 GMT, Jehan-Guillaume de Rorthais wrote:
> >On Thu, 11 Jul 2024 10:17:24 +0200
> >Oyvind Albrigtsen wrote:
> >[…]
> >> >Since postgres ha is rather new to us and me having been l
On Thu, 11 Jul 2024 10:17:24 +0200
Oyvind Albrigtsen wrote:
[…]
> >Since postgres ha is rather new to us and me having been lucky not
> >having had to deal with perl so far, just reading the agents
> >themselves does not really shed that much light on this issue.
> The pgsql has seen more testin
Hello Ede,
On Wed, 10 Jul 2024 19:18:26 +0200
Ede Wolf wrote:
> […]
> we are about to set up a postgresql 15 ha solution and since we already
> have some experience with pacemaker, this seems the obvious route to go
> first.
Indeed!
> What however is somewhat confusing are the available resou
Hello Thierry,
On Mon, 25 Mar 2024 10:55:06 +
FLORAC Thierry wrote:
> I'm trying to create a PostgreSQL master/slave cluster using streaming
> replication and pgsqlms agent. Cluster is OK but my problem is this : the
> master node is sometimes restarted for system operations, and the slave
On Wed, 31 Jan 2024 18:23:40 +0100
lejeczek via Users wrote:
> On 31/01/2024 17:13, Jehan-Guillaume de Rorthais wrote:
> > On Wed, 31 Jan 2024 16:37:21 +0100
> > lejeczek via Users wrote:
> >
> >>
> >> On 31/01/2024 16:06, Jehan-Guillaume de Rorthais w
On Wed, 31 Jan 2024 16:37:21 +0100
lejeczek via Users wrote:
>
>
> On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
> > On Wed, 31 Jan 2024 16:02:12 +0100
> > lejeczek via Users wrote:
> >
> >>
> >> On 29/01/2024 17:22, Ken Gaillot w
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
>
>
> On 29/01/2024 17:22, Ken Gaillot wrote:
> > On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:
> >> Hi guys.
> >>
> >> Is it possible to trigger some... action - I'm thinking specifically
> >> at shutdown/start.
> >> I
On Wed, 31 Jan 2024 15:41:28 +0100
Adam Cecile wrote:
[...]
> Thanks a lot for your suggestion, it seems I have something that works
> correctly now; the final configuration is:
I would recommend configuring an offline CIB and then pushing it to production
as a whole, e.g.:
# get current CIB
pcs
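The pcs command above is cut off in the archive. For illustration only, the offline-CIB workflow being recommended usually looks like this (the file name and the example resource are mine, not from the original mail):

```shell
# Dump the current CIB to a local file, edit it offline, then push it in one step.
pcs cluster cib tmp-cib.xml

# Apply changes against the offline copy with -f instead of the live cluster:
pcs -f tmp-cib.xml resource create my-vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24

# Push the whole configuration to the cluster atomically:
pcs cluster cib-push tmp-cib.xml
```

This way the cluster reacts once to a consistent configuration instead of reacting to each command separately.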
On Wed, 24 Jan 2024 16:47:54 -0600
Ken Gaillot wrote:
...
> > Erm. Well, as this is a major upgrade where we can affect people's
> > conf and
> > break old things & so on, I'll jump in this discussion with a
> > wishlist to
> > discuss :)
> >
>
> I made sure we're tracking all these (links bel
Hi there!
On Wed, 03 Jan 2024 11:06:27 -0600
Ken Gaillot wrote:
> Hi all,
>
> I'd like to release Pacemaker 3.0.0 around the middle of this year.
> I'm gathering proposed changes here:
>
> https://projects.clusterlabs.org/w/projects/pacemaker/pacemaker_3.0_changes/
>
> Please review for an
On Fri, 8 Dec 2023 17:11:58 +0100
lejeczek via Users wrote:
...
> Apologies, perhaps I was quite vague.
> I was thinking - having a 3-node HA cluster and 3-node
> single-master->slaves pgSQL, now..
> say, I want pgSQL masters to spread across HA cluster so I
> theory - having each HA node identi
Hi,
On Wed, 6 Dec 2023 10:36:39 +0100
lejeczek via Users wrote:
> How do you colocate your promoted resources with balancing
> underlying resources as priority?
What do you mean?
> With a simple scenario, say
> 3 nodes and 3 pgSQL clusters
> what would be best possible way - I'm thinking mos
On Fri, 10 Nov 2023 20:34:40 +0100
lejeczek via Users wrote:
> On 10/11/2023 18:16, Jehan-Guillaume de Rorthais wrote:
> > On Fri, 10 Nov 2023 17:17:41 +0100
> > lejeczek via Users wrote:
> >
> > ...
> >>> Of course you can use "pg_stat_tmp"
n your
cluster.
Thoughts?
> -----Original Message-----
> From: Users On Behalf Of Jehan-Guillaume de
> Rorthais via Users Sent: Friday, November 10, 2023 1:13 PM
> To: lejeczek via Users
> Cc: Jehan-Guillaume de Rorthais
> Subject: [EXT] Re: [ClusterLabs] PAF / pgSQL fails after
On Fri, 10 Nov 2023 17:17:41 +0100
lejeczek via Users wrote:
...
> > Of course you can use "pg_stat_tmp", just make sure the temp folder exists:
> >
> >cat < /etc/tmpfiles.d/postgresql-part.conf
> ># Directory for PostgreSQL temp stat files
> >d /var/run/postgresql/14-paf.pg_stat_tmp
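The quoted snippet is truncated; a complete tmpfiles.d entry of that shape would look roughly like this (the mode and ownership values are my assumptions, adjust to your install):

```
# /etc/tmpfiles.d/postgresql-part.conf
# Directory for PostgreSQL temp stat files
# type  path                                    mode  user      group     age
d       /var/run/postgresql/14-paf.pg_stat_tmp  0700  postgres  postgres  -
```

Running `systemd-tmpfiles --create` applies it without a reboot.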
On Fri, 10 Nov 2023 12:27:24 +0100
lejeczek via Users wrote:
...
> >
> to share my "fix" for it - perhaps it was introduced by
> OS/packages (Ubuntu 22) updates - ? - as opposed to the resource
> agent itself.
>
> As the logs point out - pg_stat_tmp - is missing and from
> what I see it's only th
On Wed, 13 Sep 2023 17:32:01 +0200
lejeczek via Users wrote:
> On 08/09/2023 17:29, Jehan-Guillaume de Rorthais wrote:
> > On Fri, 8 Sep 2023 16:52:53 +0200
> > lejeczek via Users wrote:
> >
> >> Hi guys.
> >>
> >> Before I start fiddling and
On Fri, 8 Sep 2023 16:52:53 +0200
lejeczek via Users wrote:
> Hi guys.
>
> Before I start fiddling and break things I wonder if
> somebody knows if:
> pgSQL can work with: wal_level = archive for PAF?
> Or a more general question which pertains to wal_level - can
> _barman_ be used with pgSQL
On Fri, 8 Sep 2023 10:26:42 +0200
lejeczek via Users wrote:
> On 07/09/2023 16:20, lejeczek via Users wrote:
> >
> >
> > On 07/09/2023 16:09, Andrei Borzenkov wrote:
> >> On Thu, Sep 7, 2023 at 5:01 PM lejeczek via Users
> >> wrote:
> >>> Hi guys.
> >>>
> >>> I'm trying to set ocf_heartbeat
On Fri, 5 May 2023 10:08:17 +0200
lejeczek via Users wrote:
> On 25/04/2023 14:16, Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Mon, 24 Apr 2023 12:32:45 +0200
> > lejeczek via Users wrote:
> >
> >> I've been looking up and fiddling with
Hi,
On Mon, 24 Apr 2023 12:32:45 +0200
lejeczek via Users wrote:
> I've been looking up and fiddling with this RA but
> unsuccessfully so far, that I wonder - is it good for
> current versions of pgSQLs?
As far as I know, the pgsql agent is still supported; the last commit on it happened
in Jan 11t
On Tue, 21 Mar 2023 11:47:23 +0100
Jérôme BECOT wrote:
> On 21/03/2023 at 11:00, Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Tue, 21 Mar 2023 09:33:04 +0100
> > Jérôme BECOT wrote:
> >
> >> We have several clusters running for diffe
Hi,
On Tue, 21 Mar 2023 09:33:04 +0100
Jérôme BECOT wrote:
> We have several clusters running for different zabbix components. Some
> of these clusters consist of 2 zabbix proxies, where nodes run Mysql,
> Zabbix-proxy server and a VIP, and a corosync-qdevice.
I'm not sure I understand your
Hi,
What about using the Dummy resource agent (ocf_heartbeat_dummy(7)) and collocate
it with your IP address? This RA creates a local file on start and removes it
on stop. The game now is to watch for this path from a systemd path unit and
trigger the reload when file appears. See systemd.path(5).
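A minimal sketch of that idea, assuming hypothetical unit names and that the Dummy resource's state file lives under /run/resource-agents/ (check the state parameter of your resource for the real path):

```ini
# /etc/systemd/system/vip-reload.path - watches the file the Dummy RA creates on start
[Path]
PathExists=/run/resource-agents/Dummy-vip-marker.state

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/vip-reload.service - triggered by the path unit above
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reload your-daemon.service
```

The two units go in separate files; a path unit activates the service of the same name when its condition becomes true.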
Hi,
I definitely have some work/improvements to do on the pgsqlms agent, but
there are still some details I'm interested in discussing below.
On Fri, 6 Jan 2023 16:36:19 -0800
Reid Wahl wrote:
> On Fri, Jan 6, 2023 at 3:26 PM Jehan-Guillaume de Rorthais via Users
> wrote:
>>
> On Tue, 2023-01-03 at 18:18 +0100, lejeczek via Users wrote:
> >>>> On 03/01/2023 17:03, Jehan-Guillaume de Rorthais wrote:
> >>>>> Hi,
> >>>>>
> >>>>> On Tue, 3 Jan 2023 16:44:01 +0100
> >>>>> lejecze
Hi,
On Tue, 3 Jan 2023 16:44:01 +0100
lejeczek via Users wrote:
> To get/have Postgresql cluster with 'pgsqlms' resource, such
> cluster needs a 'master' IP - what do you guys do when/if
> you have multiple resources off this agent?
> I wonder if it is possible to keep just one IP and have all
On Mon, 7 Nov 2022 14:06:51 +
Robert Hayden wrote:
> > -----Original Message-----
> > From: Users On Behalf Of Valentin Vidic
> > via Users
> > Sent: Sunday, November 6, 2022 5:20 PM
> > To: users@clusterlabs.org
> > Cc: Valentin Vidić
> > Subject: Re: [ClusterLabs] [External] : Re: Fence A
On Sat, 5 Nov 2022 20:54:55 +
Robert Hayden wrote:
> > -Original Message-
> > From: Jehan-Guillaume de Rorthais
> > Sent: Saturday, November 5, 2022 3:45 PM
> > To: users@clusterlabs.org
> > Cc: Robert Hayden
> > Subject: Re: [ClusterLabs
On Sat, 5 Nov 2022 20:53:09 +0100
Valentin Vidić via Users wrote:
> On Sat, Nov 05, 2022 at 06:47:59PM +, Robert Hayden wrote:
> > That was my impression as well...so I may have something wrong. My
> > expectation was that SBD daemon should be writing to the /dev/watchdog
> > within 20 secon
On Mon, 3 Oct 2022 14:45:49 +0200
Tomas Jelinek wrote:
> On 28. 09. 22 at 18:22 Jehan-Guillaume de Rorthais via Users wrote:
> > Hi,
> >
> > A small addendum below.
> >
> > On Wed, 28 Sep 2022 11:42:53 -0400
> > "Kevin P. Fleming" wrote:
Hi,
A small addendum below.
On Wed, 28 Sep 2022 11:42:53 -0400
"Kevin P. Fleming" wrote:
> On Wed, Sep 28, 2022 at 11:37 AM Dave Withheld
> wrote:
> >
> > Is it possible to get corosync to use the private network and stop trying
> > to use the LAN for cluster communications? Or am I totally of
On Wed, 28 Sep 2022 02:33:59 -0400
Madison Kelly wrote:
> ...
> I'm happy to go into more detail, but I'll stop here until/unless you have
> more questions. Otherwise I'd write a book. :)
I would buy it ;)
___
Manage your subscription:
https://lists.cl
Hey,
On Wed, 7 Sep 2022 19:12:53 +0900
권오성 wrote:
> Hello.
> I am a student who wants to implement a redundancy system with raspberry pi.
> Last time, I posted about how to proceed with installation on raspberry pi
> and received a lot of comments.
> Among them, I searched a lot after looking at
Hi,
On Wed, 22 Jun 2022 16:36:03 +
CHAMPAGNE Julie wrote:
> ...
> # pcs resource create pgsqld ocf:heartbeat:pgsqlms \
> pgdata="/etc/postgresql/11/main" \
> bindir="/usr/lib/postgresql/11/bin" \
> datadir="/var/lib/postgresql/11/main" \
> recovery_template="/etc/postgresql/recovery.conf.pcm
Hi,
On Tue, 15 Mar 2022 12:35:11 -0400
"john tillman" wrote:
> I'm trying to guarantee that all my cloned drbd resources start on the
> same node and I can't figure out the syntax of the constraint to do it.
>
> I could nominate one of the drbd resources as a "leader" and have all the
> others
On Tue, 8 Mar 2022 17:44:36 +
lejeczek via Users wrote:
> On 08/03/2022 16:20, Jehan-Guillaume de Rorthais wrote:
> > Removing the node attributes with the resource might be legit from the
> > Pacemaker point of view, but I'm not sure how they can track the depe
Hi,
Sorry, your mail was really hard to read on my side, but I think I understood
and will try to answer below.
On Tue, 8 Mar 2022 11:45:30 +
lejeczek via Users wrote:
> On 08/03/2022 10:21, Jehan-Guillaume de Rorthais wrote:
> >> op start timeout=60s \ op stop timeout=60s \ op pro
Make sure to read this page as well:
https://clusterlabs.github.io/PAF/administration.html
Regards,
> -----Original Message-----
> From: Jehan-Guillaume de Rorthais
> Sent: Tuesday, 8 March 2022 11:21
> To: CHAMPAGNE Julie
> Cc: Cluster Labs - All topics related to open-so
Hi,
On Tue, 8 Mar 2022 08:00:22 +
CHAMPAGNE Julie wrote:
> I've created the resource pgsqld as follows (I don't think the cluster creation
> command is necessary):
>
> pcs resource create pgsqld ocf:heartbeat:pgsqlms promotable \
The problem is here. The argument order given to pcs is import
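To illustrate the point about ordering: pcs treats everything after the `promotable` keyword as clone options, so the agent's own parameters and operations must come before it. A sketch of the correct shape (paths and timeouts are examples, not taken from the original thread):

```shell
pcs resource create pgsqld ocf:heartbeat:pgsqlms \
    bindir=/usr/lib/postgresql/11/bin \
    pgdata=/etc/postgresql/11/main \
    op start timeout=60s op stop timeout=60s \
    op promote timeout=30s op demote timeout=120s \
    op monitor interval=15s timeout=10s role="Master" \
    op monitor interval=16s timeout=10s role="Slave" \
    promotable notify=true
```

Note how `promotable notify=true` comes last: `notify=true` is then correctly applied to the clone, not to the primitive.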
On Mon, 7 Mar 2022 14:49:35 +
CHAMPAGNE Julie wrote:
> The return gives nothing for the first command.
> Then:
>
> name="test-debug" host="node1" value="testvalue" for node1.
>
> After executing both commands on node2, it gives me the following return on
> both server:
>
> name="test-debug
On Mon, 7 Mar 2022 14:32:46 +
CHAMPAGNE Julie wrote:
> root@node1 ~ > attrd_updater --private --lifetime reboot --name
> "lsn_location-pgsqld" --query Could not query value of lsn_location-pgsqld:
> attribute does not exist
Mh, sorry, could you please exec these two commands:
attrd_update
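The two commands are truncated above; judging from the earlier quote they were presumably attrd_updater calls of this shape (the attribute name is taken from the thread, the value is a placeholder):

```shell
# Set a private, reboot-lifetime node attribute...
attrd_updater --private --lifetime reboot --name "lsn_location-pgsqld" \
    --update "testvalue"

# ...then query it to check it propagated:
attrd_updater --private --lifetime reboot --name "lsn_location-pgsqld" --query
```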
Hi,
Caution, this is an English-speaking mailing list :)
Below is my answer.
On Mon, 7 Mar 2022 12:31:07 +
CHAMPAGNE Julie wrote:
> When I create a problem on node1,
What's the issue you are testing precisely?
> * pgsqld_promote_0 on node2 'error' (1): call=24, status='complete',
Hi,
On Wed, 2 Mar 2022 14:39:40 +0100
damiano giuliani wrote:
> ...
> my question is: what happens in case of failover of the master on another
> node to the wal logs that i am archiving to build the incrementals?
The new primary is supposed to archive WALs to your backup server.
> would I sti
On Tue, 22 Feb 2022 12:25:15 +0100
Oyvind Albrigtsen wrote:
> ...
> >Ping Oyvind, maybe you have some input about this as the resource-agents
> >package maintainer?
> I dont know how it got excluded on CentOS Stream only, but I've
> created a bz to fix it:
> https://bugzilla.redhat.com/show_bug
Hello,
On Tue, 22 Feb 2022 09:27:16 +
lejeczek via Users wrote:
> ...
> Perhaps as the author(s) you can chip in and/or help via comments to
> rectify this:
>
> ...
>
> Problem: package resource-agents-paf-4.9.0-7.el8.x86_64 requires
PAF doesn't share the same release plans as the r
On Mon, 21 Feb 2022 09:04:27 +
CHAMPAGNE Julie wrote:
...
> The last release is 2 years old, is it still in development?
There's no activity because there's not much to do on it. PAF is mainly in
maintenance (bug fix) mode.
I have a few ideas here and there. They might land sooner or later, but no
Hello,
On Fri, 18 Feb 2022 21:44:58 +
"Larry G. Mills" wrote:
> ... This happened again recently, and the running primary DB was demoted and
> then re-promoted to be the running primary. What I'm having trouble
> understanding is why the running Master/primary DB was demoted. After the
> mo
On Fri, 11 Feb 2022 08:07:33 +0100
"Ulrich Windl" wrote:
> >> Jehan-Guillaume de Rorthais wrote on 10.02.2022 at
> 16:40 in message <20220210164000.2e395a37@karst>:
> > ...
> > I wonder if after the cluster shutdown complete, the target-role=Stop
On Thu, 10 Feb 2022 22:15:07 +0800
Roger Zhou via Users wrote:
>
> On 2/9/22 17:46, Lentes, Bernd wrote:
> >
> >
> > - On Feb 7, 2022, at 4:13 PM, Jehan-Guillaume de Rorthais
> > j...@dalibo.com wrote:
> >
> >> On Mon, 7 Feb 2022 14:
On Thu, 10 Feb 2022 15:10:20 +0100
"Ulrich Windl" wrote:
...
> > If you want to gracefully shutdown your cluster, then you can add one
> manual
> > step to first gracefully stop your resources instead of betting the cluster
> > will do the good things.
>
> It's the old discussion: Old HP Serv
On Wed, 9 Feb 2022 17:42:35 + (UTC)
Strahil Nikolov via Users wrote:
> If you gracefully shutdown a node - pacemaker will migrate all resources away
> so you need to shut them down simultaneously and all resources should be
> stopped by the cluster.
>
> Shutting down the nodes would be my c
On Wed, 9 Feb 2022 10:46:30 +0100 (CET)
"Lentes, Bernd" wrote:
> - On Feb 7, 2022, at 4:13 PM, Jehan-Guillaume de Rorthais j...@dalibo.com
> wrote:
>
> > On Mon, 7 Feb 2022 14:24:44 +0100 (CET)
> > "Lentes, Bernd" wrote:
> >
> >> H
On Mon, 7 Feb 2022 14:24:44 +0100 (CET)
"Lentes, Bernd" wrote:
> Hi,
>
> i'm currently changing a bit in my cluster because i realized that my
> configuration for a power outage didn't work as i expected. My idea is
> currently:
> - first stop about 20 VirtualDomains, which are my services. This
On Mon, 31 Jan 2022 08:49:44 +0100
Klaus Wenninger wrote:
...
> Depending on the environment it might make sense to think about
> having the manual migration-step controlled by the cluster(s) using
> booth. Just thinking - not a specialist on that topic ...
Could you elaborate a bit on this?
Boo
Hi,
On Sat, 29 Jan 2022 16:51:47 -0500
Digimer wrote:
> ...
> Though going back to the original question, deleting the server from
> pacemaker while the VM is left running, is still something I am quite curious
> about.
As the real resource moved away, meaning it couldn't be stopped locally and
On Fri, 21 Jan 2022 18:17:04 +0100,
damiano giuliani wrote:
> Ehy,
>
> Take into account that when a master node crashes, you should re-align the old
> master into a slave using pg_basebackup/pg_rewind and then rejoin the
> node into the cluster as a slave. This is the only way to avoid data
> corr
Hi,
Under EL and Debian, there's a PCMK_debug variable (iirc) in
"/etc/sysconfig/pacemaker" or "/etc/default/pacemaker".
Comments in there explain how to set debug mode for part or all of the
pacemaker processes.
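For reference, the setting usually looks like this (a sketch; the comments in your own sysconfig/default file are authoritative):

```shell
# /etc/sysconfig/pacemaker (EL) or /etc/default/pacemaker (Debian)

# Enable debug logging for all Pacemaker daemons:
PCMK_debug=yes

# ...or only for a comma-separated subset of them:
# PCMK_debug=pacemaker-controld,pacemaker-fenced
```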
This might be the environment variable you are looking for?
Regards,
On 31 Oct
On Tue, 12 Oct 2021 09:46:04 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 12.10.2021 at
> >>> 09:35 in
> message <20211012093554.4bb761a2@firost>:
> > On Tue, 12 Oct 2021 08:42:49 +0200
> > "Ulrich Wind
On Tue, 12 Oct 2021 08:42:49 +0200
"Ulrich Windl" wrote:
> ...
> >> sysctl -a | grep dirty
> >> vm.dirty_background_bytes = 0
> >> vm.dirty_background_ratio = 10
> >
> > Considering your 256GB of physical memory, this means you can dirty up to
> > 25GB of pages in cache before the kernel sta
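The arithmetic behind that 25GB figure can be sketched as follows (numbers taken from the quoted sysctl output; this is an approximation, as the kernel applies the ratio to dirtyable memory, not strictly total RAM):

```shell
# vm.dirty_background_ratio is a percentage of memory; with
# vm.dirty_background_bytes=0 the ratio applies, so roughly 10% of 256GB
# may be dirtied before background writeback kicks in.
total_gb=256
ratio=10   # vm.dirty_background_ratio
echo "background writeback starts around $(( total_gb * ratio / 100 )) GB of dirty pages"
```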
Hi,
I kept your full answer in the history to keep the list informed.
My answer is down below.
On Mon, 11 Oct 2021 11:33:12 +0200
damiano giuliani wrote:
> ehy guys sorry for being late, was busy during the WE
>
> here i im:
>
>
> > Did you see the swap activity (in/out, not jus
On Sat, 9 Oct 2021 09:55:28 +0300
Andrei Borzenkov wrote:
> On 08.10.2021 16:00, damiano giuliani wrote:
> > ...
> > the servers are all resource overkill with 80 cpus and 256 gb ram even if
> > the db ingests millions of records per day, the network is bonded 10gbs, ssd disks.
I don't remember if we
On 9 October 2021 00:11:27 GMT+02:00, Strahil Nikolov
wrote:
>What do you mean by 1s default timeout ?
I suppose Damiano is talking about the corosync totem token timeout.
On Fri, 8 Oct 2021 15:00:30 +0200
damiano giuliani wrote:
> Hi Guys,
Hi,
Good to hear from you, thanks for the follow-up!
My answer below.
> ...
> So it turns out that a lil bit of swap was used and I suspect corosync
> processes were swapped to disk, creating lag where the 1s default corosync
> time
On Fri, 23 Jul 2021 12:52:00 +0200
damiano giuliani wrote:
> the time query isn't the problem, it is known that it took its time. the network
> is 10gbs bonding, quite impossible to saturate with queries :=).
Everything is possible, it's just harder :)
[...]
> checking again the logs what for me is not
On Thu, 22 Jul 2021 15:36:03 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 22.07.2021 at
> 12:05 in
> message <20210722120537.0d65c2a1@firost>:
> > On Wed, 21 Jul 2021 22:02:21 -0400
> > "Frank D. Engel, Jr."
On Sat, 19 Jun 2021 08:32:02 +0100
lejeczek wrote:
> I've just yesterday updated OS packages among which some
> were for various PCS components, to versions:
> corosynclib-3.1.0-5.el8.x86_64
> pacemaker-schemas-2.1.0-2.el8.noarch
> pacemaker-cluster-libs-2.1.0-2.el8.x86_64
> pacemaker-cli-2.1.0-
Hi,
On Wed, 14 Jul 2021 07:58:14 +0200
"Ulrich Windl" wrote:
[...]
> Could it be that a command saturated the network?
> Jul 13 00:39:28 ltaoperdbs02 postgres[172262]: [20-1] 2021-07-13 00:39:28.936
> UTC [172262] LOG: duration: 660.329 ms execute : SELECT
> xmf.file_id, f.size, fp.full_path
On Thu, 22 Jul 2021 13:10:45 +0300
Andrei Borzenkov wrote:
> On Thu, Jul 22, 2021 at 1:05 PM Jehan-Guillaume de Rorthais
> wrote:
> > To do some rewording in regard to the current topic: if Pacemaker is able
> > to stop its resources after a quorum loss, it will not
On Thu, 22 Jul 2021 12:56:40 +0300
Andrei Borzenkov wrote:
> On Thu, Jul 22, 2021 at 12:43 PM Jehan-Guillaume de Rorthais
> wrote:
> >
> > On Wed, 21 Jul 2021 12:45:40 -0400
> > Digimer wrote:
> >
> > > On 2021-07-21 3:26 a.m., Jehan-Gu
On Wed, 21 Jul 2021 22:02:21 -0400
"Frank D. Engel, Jr." wrote:
> In OpenVMS, the kernel is aware of the cluster. As is mentioned in that
> presentation, it actually stops processes from running and blocks access
> to clustered storage when quorum is lost, and resumes them appropriately
> onc
On Wed, 21 Jul 2021 12:45:40 -0400
Digimer wrote:
> On 2021-07-21 3:26 a.m., Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Wed, 21 Jul 2021 04:28:30 + (UTC)
> > Strahil Nikolov via Users wrote:
> >
> >> Hi,
> >> consider usin
On Wed, 21 Jul 2021 04:50:09 -0400
"Frank D. Engel, Jr." wrote:
> OpenVMS can do this sort of thing without a requirement for fencing (you
> still need a third disk as a quorum device in a 2-node cluster), but
> Linux (at least in its current form) cannot.
Yes it can, as far as what you are de
Hi,
On Wed, 21 Jul 2021 04:28:30 + (UTC)
Strahil Nikolov via Users wrote:
> Hi,
> consider using a 3rd system as a Q disk. Also, you can use iscsi from that
> node as a SBD device, so you will have proper fencing .If you don't have a
> hardware watchdog device, you can use softdog kernel mod
On Thu, 15 Jul 2021 12:46:10 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 15.07.2021 at
> 10:09 in
> message <20210715100930.06b45f5b@firost>:
> > Hi all,
> >
> > On Tue, 13 Jul 2021 19:55:30 + (UTC)
>
Hi all,
On Tue, 13 Jul 2021 19:55:30 + (UTC)
Strahil Nikolov wrote:
> In some cases the third location has a single IP and it makes sense to use it
> as QDevice. If it has multiple network connections to that location - use a
> full blown node .
By the way, what's the point of multiple ring
On Wed, 30 Jun 2021 14:36:29 +0200
damiano giuliani wrote:
> the replication is async, having a look into the postgres logs seems some
> updates failed cuz no master available.
'Not sure I understand what you mean. As Pacemaker recovered the primary on
the same node, standbys and clients lost t
Hi,
On Wed, 30 Jun 2021 13:44:28 +0200
damiano giuliani wrote:
> looks some applications lost connection to the master losing some
> update/insert.
>
> i found the cause into the logs, the psqld-monitor went timeout after
> 1ms and the master resource been demote, the instance stopped and t
On Wed, 26 May 2021 14:30:44 -0500
kgail...@redhat.com wrote:
> Without further comments, we've gone ahead with Libera.Chat as the new
> home of #clusterlabs. There is a new wiki page with the channel
> details:
>
> https://wiki.clusterlabs.org/wiki/ClusterLabs_IRC_channel
>
> so we can just po
On Wed, 28 Apr 2021 12:00:40 -0500
Ken Gaillot wrote:
> On Wed, 2021-04-28 at 18:14 +0200, Jehan-Guillaume de Rorthais wrote:
> > Hi all,
> >
> > It seems to me the concern raised by Ulrich hasn't been discussed:
> >
> > On Wed, 12 Apr 2021 Ulrich Windl
Hi all,
It seems to me the concern raised by Ulrich hasn't been discussed:
On Wed, 12 Apr 2021 Ulrich Windl wrote:
> Personally I think an RA calling crm_mon is inherently broken: Will it ever
> pass ocf-tester?
Would it be possible to rely on the following command?
cibadmin --query --xpath
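The XPath expression itself is cut off in the archive and cannot be recovered. Purely for illustration, a query of that shape looks like this (the expression below is a made-up example, not the one from the mail):

```shell
# Query only a fragment of the CIB instead of parsing crm_mon output:
cibadmin --query --xpath "//node_state[@uname='node1']"
```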
On Mon, 26 Apr 2021 18:04:41 + (UTC)
Strahil Nikolov wrote:
> I prefer that the stack is auto enabled. Imagine that you got a DB that is
> replicated and primary DB node is fenced. You would like that node to join
> the cluster and if possible to sync with the new primary instead of staying
>
On Tue, 13 Apr 2021 12:17:38 +0200
"Ulrich Windl" wrote:
[...]
> >good for SUSE! unfortunately RHEL didn't include the utility...
>
> Technically it should work, but there could be "political" reasons.
A few years ago, it was more for incompatibility reasons than political ones.
I'm not sure o
On Sun, 11 Apr 2021 16:03:34 +0100
lejeczek wrote:
> On 10/04/2021 16:19, Jehan-Guillaume de Rorthais wrote:
> >
> > On 10 April 2021 14:22:34 GMT+02:00, lejeczek
> > wrote:
> >> Hi guys.
> >>
> >> Any users perhaps experts on PAF agent
On Sun, 11 Apr 2021 04:21:02 + (UTC)
Strahil Nikolov wrote:
> Better check for a location constraint created via 'pcs resource move'!
> pcs constraint location --full | grep cli
> Best Regards,
> Strahil Nikolov
Oh, yes, this is a good one; it should probably enter our FAQ.
Thanks,
On 10 April 2021 14:22:34 GMT+02:00, lejeczek wrote:
>Hi guys.
>
>Any users perhaps experts on PAF agent if happen to read
>this - a question - with pretty regular 3-node cluster when
>node on which "master" runs goes down then cluster/agent
>successfully moves 'master' to a next node.
H
Hi,
I'm one of the PAF authors, so I'm biased.
On Fri, 26 Mar 2021 14:51:28 +
Isaac Pittman wrote:
> My team has the opportunity to update our PostgreSQL resource agent to either
> PAF (https://github.com/ClusterLabs/PAF) or pgsql
> (https://github.com/ClusterLabs/resource-agents/blob/master
On Thu, 18 Mar 2021 17:29:59 +0900
井上和徳 wrote:
> On Tue, Mar 16, 2021 at 10:23 PM Jehan-Guillaume de Rorthais
> wrote:
> >
> > > On Tue, 16 Mar 2021, 09:58 井上和徳, wrote:
> > >
> > > > Hi!
> > > >
> > > > Cluster (corosync an
> On Tue, 16 Mar 2021, 09:58 井上和徳, wrote:
>
> > Hi!
> >
> > Cluster (corosync and pacemaker) can be started with pcs,
> > but corosync-notifyd needs to be started separately with systemctl,
> > which is not easy to use.
Maybe you can add to the [Install] section of corosync-notifyd a dependency
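One way to sketch that suggestion is a systemd drop-in (the file path and the exact relationship are my assumptions; check that your systemd version honors [Install] sections in drop-ins):

```ini
# /etc/systemd/system/corosync-notifyd.service.d/override.conf
[Unit]
# stop corosync-notifyd whenever corosync stops
PartOf=corosync.service

[Install]
# after "systemctl reenable corosync-notifyd", starting corosync pulls it in
WantedBy=corosync.service
```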
On Thu, 11 Mar 2021 17:51:15 + (UTC)
Strahil Nikolov wrote:
> Interesting...
> Yet, this doesn't explain why token of 3 causes the nodes to never
> assemble a cluster (waiting for half an hour, using wait_for_all=1) , while
> setting it to 29000 works like a charm.
>
> Thankfully we got
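For context, the token timeout under discussion lives in the totem section of corosync.conf and is expressed in milliseconds, which would explain why a value of 3 (i.e. 3 ms) never lets the cluster assemble while 29000 (29 s) works:

```
# /etc/corosync/corosync.conf (fragment)
totem {
    version: 2
    # token timeout in milliseconds; values in the 1000-3000 range are common
    token: 3000
}
```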
On Tue, 26 Jan 2021 16:15:55 +0100
Tomas Jelinek wrote:
> On 25. 01. 21 at 17:01 Ken Gaillot wrote:
> > On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais wrote:
> >> Hi Digimer,
> >>
> >> On Sun, 24 Jan 2021 15:31:22 -0500
> >> Di
On Mon, 25 Jan 2021 10:22:20 +0100
"Ulrich Windl" wrote:
> Maybe it's time for target-role=stopped">... in CIB ;-)
Could you elaborate on what would be the differences with "stop-all-resources"?
Kind regards,
id ask for a migration. Is
> this the case?
AFAIK, yes, because each cluster shutdown request is handled independently at
the node level. There's a large door open for all kinds of race conditions if
requests are handled with some random lag on each node.
Regards,
--
Jehan-Gui
On Tue, 13 Oct 2020 04:48:04 -0400
Digimer wrote:
> On 2020-10-13 4:32 a.m., Jehan-Guillaume de Rorthais wrote:
> > On Mon, 12 Oct 2020 19:08:39 -0400
> > Digimer wrote:
> >
> >> Hi all,
> >
> > Hi you,
> >
> >>
> >&
On Mon, 12 Oct 2020 19:08:39 -0400
Digimer wrote:
> Hi all,
Hi you,
>
> I noticed that there appear to be a global "maintenance mode"
> attribute under cluster_property_set. This seems to be independent of
> node maintenance mode. It seemed to not change even when using
> 'pcs node maintenan
On Fri, 2 Oct 2020 15:18:18 +0300
Олег Самойлов wrote:
> > On 29 Sep 2020, at 11:34, Jehan-Guillaume de Rorthais
> > wrote:
> >
> >
> > Vagrant uses virtualbox by default, which supports softdog, but it supports
> > many other virtualization platforms,
1 - 100 of 320 matches