Thank you!
_Vitaly
> On 02/26/2024 10:28 AM EST Ken Gaillot wrote:
>
>
> On Thu, 2024-02-22 at 08:05 -0500, vitaly wrote:
> > Hello.
> > We have a product with 2 node clusters.
> > Our current version is using Pacemaker 2.1.4 the new version will be
> &
Hello.
We have a product with 2 node clusters.
Our current version uses Pacemaker 2.1.4; the new version will use
Pacemaker 2.1.6.
If an upgrade fails, it is possible that one node will come up with the new
Pacemaker and run alone for a while.
The old node would then come up later and try
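During a mixed-version window like the one described above, the returning node must not advertise a higher crm_feature_set than the DC supports. A minimal helper for ordering two feature-set strings, assuming GNU `sort -V` is available; the version numbers below are examples, not values from this thread:

```shell
#!/bin/sh
# Order two dotted version strings (e.g. crm_feature_set values) and
# print the lower one; relies on GNU sort's -V (version sort).
lower_version() {
    printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1
}

# Example: a node at feature set 3.15.0 joining a DC at 3.16.2
lower_version 3.15.0 3.16.2
```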
OK,
Thank you very much for your help!
_Vitaly
> On 07/05/2022 8:47 PM Reid Wahl wrote:
>
>
> On Tue, Jul 5, 2022 at 3:03 PM vitaly wrote:
> >
> > Hello,
> > Yes, the snippet has everything there was for the full second of Jul 05
> > 11:54:34. I did not c
s are not the same time as this morning, as I reinstalled the
cluster a couple of times today.
Thanks,
_Vitaly
> On 07/05/2022 3:19 PM Reid Wahl wrote:
>
>
> On Tue, Jul 5, 2022 at 5:17 AM vitaly wrote:
> >
> > Hello,
> > Thanks for looking at this issue!
-25-left.lab.archivas.com : DISCONNECT->STREAMING|ASYNC.
Jul 5 11:54:35 d19-25-right pgsql-rhino(postgres)[2298353]: INFO: Setup
d19-25-left.lab.archivas.com into sync mode.
> On 07/04/2022 3:57 PM Reid Wahl wrote:
>
>
> On Mon, Jul 4, 2022 at 7:19 AM vitaly wrote:
> >
> > I get p
for pgsql-rhino.
Rhino configuration file
> On 07/04/2022 5:39 AM Reid Wahl wrote:
>
>
> On Mon, Jul 4, 2022 at 1:06 AM Reid Wahl wrote:
> >
> > On Sat, Jul 2, 2022 at 1:12 PM vitaly wrote:
> > >
> > > Sorry, I noticed that I am
Sorry, I noticed that I was missing the meta attribute "notify=true"; after adding it to
the postgres-ms configuration, "notify" events started to come through.
Item 1 still needs an explanation, as pacemaker-controld keeps complaining.
Thanks!
_Vitaly
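For reference, the missing clone meta-attribute can be set with pcs; a sketch assuming the clone id "postgres-ms" from this thread and pcs 0.10-era syntax:

```shell
# Enable resource-agent notifications on the clone; "postgres-ms" is the
# clone id used in this thread.
pcs resource meta postgres-ms notify=true

# Verify the meta attribute took effect.
pcs resource config postgres-ms
```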
> On 07/02/2022 2:04 PM vitaly wrote:
>
Hello Everybody.
I have a two-node cluster with a clone resource “postgres-ms”. We are running the
following versions of pacemaker/corosync:
d19-25-left.lab.archivas.com ~ # rpm -qa | grep "pacemaker\|corosync"
pacemaker-cluster-libs-2.0.5-9.el8.x86_64
pacemaker-libs-2.0.5-9.el8.x86_64
pacemaker-cli-2.0.5
Hello Everybody.
I am occasionally seeing the following behavior on a two-node cluster.
1. Abruptly rebooting both nodes of the cluster (using "reboot")
2. Both nodes start to come up. Node d18-3-left (2) comes up first
Apr 13 23:56:09 d18-3-left corosync[11465]: [MAIN ] Corosync Cluster Engine
Ken,
Thank you very much for your help!
3.1.15-3 seems to satisfy our needs and needed very few fixes to build on CentOS
8.
We will go ahead with that version.
Thanks again!
_Vitaly
> On November 23, 2021 6:03 PM Ken Gaillot wrote:
>
>
> On Tue, 2021-11-23 at 17:36 -0500,
Postgres.
The only reason we shut down the cluster now is that the old Corosync does not
talk to the new Corosync.
_Vitaly
> On November 24, 2021 2:22 AM Ulrich Windl
> wrote:
>
>
> >>> vitaly wrote on 23.11.2021 at 20:11 in message
> <4567763
your help!
_Vitaly
> On November 23, 2021 5:12 PM Ken Gaillot wrote:
>
>
> On Tue, 2021-11-23 at 14:11 -0500, vitaly wrote:
> > Hello,
> > I am working on the upgrade from an older version of pacemaker/corosync
> > to the current one. In the interim we need to sync new
ive/fedora/linux/releases/23/Everything/x86_64/os/Packages/c/corosync-2.3.5-1.fc23.x86_64.rpm
> which theoretically should be close enough.
>
>
> P.S.: I couldn't find those versions for Fedora 22, but they seem available
> for F23.
>
> Best Regards,
> Strahil Nikolov
>
Hello,
I am working on the upgrade from an older version of pacemaker/corosync to the
current one. In the interim we need to sync a newly installed node with the node
running the old software. Our old node uses pacemaker 1.1.13-3.fc22 and corosync
2.3.5-1.fc22 and has crm_feature_set 3.0.10.
For interim
> On October 22, 2020 1:54 PM Strahil Nikolov wrote:
>
>
> Have you tried to backup the config via crmsh/pcs and when you downgrade to
> restore from it ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Thursday, October 22, 2020
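Strahil's backup/restore suggestion can be sketched with pcs; the filename below is an example:

```shell
# Before touching packages: archive the full cluster configuration.
pcs config backup /root/pre-upgrade-config

# On rollback, after the old packages are reinstalled:
pcs config restore /root/pre-upgrade-config.tar.bz2
```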
Hello,
We are trying to upgrade our product from Corosync 2.X to Corosync 3.X. Our
procedure includes an upgrade step where we stop the cluster, replace the rpms, and
restart the cluster. The upgrade works fine, but we also need to implement rollback
in case something goes wrong.
When we roll back and reload the old
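A minimal sketch of the stop/replace/restart rollback described above; the package names are placeholders, and the exact dnf/rpm invocation depends on how the rpms are shipped:

```shell
# Stop the cluster stack on all nodes.
pcs cluster stop --all

# Replace the packages with the previous versions (placeholder names).
dnf downgrade corosync pacemaker pcs

# Bring the cluster back up on the old software.
pcs cluster start --all
```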
ion number and a
> request to use a different version number would be needed, too.
>
> Regards,
> Ulrich
>
> >>> Vitaly Zolotusky wrote on 11.06.2020 at 04:14 in
> message
> <19881_1591841678_5EE1938D_19881_553_1_1163034878.247559.1591841668387@webmail6.
une 11, 2020 12:00 PM Strahil Nikolov wrote:
>
>
> Hi Vitaly,
>
> have you considered something like this:
> 1. Setup a new cluster
> 2. Present the same shared storage on the new cluster
> 3. Prepare the resource configuration but do not apply yet.
> 4. Po
uct upgrade to the new OS and product
> > version
> > Thanks again for your help!
> > _Vitaly
> >
> >> On June 11, 2020 3:30 AM Jan Friesse wrote:
> >>
> >>
> >> Vitaly,
> >>
> >>> Hello everybody.
> >>
the product online
3. start rolling upgrade for product upgrade to the new OS and product version
Thanks again for your help!
_Vitaly
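The per-node part of step 3 (the rolling upgrade) is commonly done one node at a time; a sketch, assuming pcs is in use and "node1" is a hypothetical node name:

```shell
# Drain one node and stop its cluster services.
pcs node standby node1
pcs cluster stop node1

# ...upgrade OS and product packages on node1 here (site-specific)...

# Rejoin the upgraded node and let resources settle before the next node.
pcs cluster start node1
pcs node unstandby node1
```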
> On June 11, 2020 3:30 AM Jan Friesse wrote:
>
>
> Vitaly,
>
> > Hello everybody.
> > We are trying to do a rolling upgrade from Co
Hello everybody.
We are trying to do a rolling upgrade from Corosync 2.3.5-1 to Corosync 2.99+.
It looks like they are not compatible and we are getting messages like:
Jun 11 02:10:20 d21-22-left corosync[6349]: [TOTEM ] Message received from
172.18.52.44 has bad magic number (probably sent by
have is that we modify
scripts to work with our hardware and I am in the process of going through
these changes.
Thanks again!
_Vitaly
> On September 24, 2019 at 12:29 AM Andrei Borzenkov
> wrote:
>
>
> On 23.09.2019 23:23, Vitaly Zolotusky writes:
> > Hello,
> > I am try
Hello,
I am trying to upgrade to Fedora 30. The platform is a two-node cluster with
Pacemaker.
In Fedora 28 we were using an old fence_sbd script from 2013:
# This STONITH script drives the shared-storage stonith plugin.
# Copyright (C) 2013 Lars Marowsky-Bree
We were overwriting the distrib
, after which both nodes will start corosync/pacemaker
> close in time. If one node never comes up, then it will wait 10 minutes
> before starting, after which the other node will be fenced (startup fencing
> and subsequent resource startup will only occur if
> no-qu
an indication that something is not working right and we would
like to get to the bottom of it.
Thanks again!
_Vitaly
> On December 18, 2018 at 1:47 AM Ulrich Windl
> wrote:
>
>
> >>> Vitaly Zolotusky wrote on 17.12.2018 at 21:43 in
> >>> message
s close to each other as possible.
Thanks again!
_Vitaly
> On December 17, 2018 at 6:01 PM Ken Gaillot wrote:
>
>
> On Mon, 2018-12-17 at 15:43 -0500, Vitaly Zolotusky wrote:
> > Hello,
> > I have a 2 node cluster and stonith is configured for SBD and
> > fence_