This is something it seems we’ll need to test. Ideally, it would be
great if nodes which don’t have any provider-subscriber relationships between
them didn’t need to maintain state about each other at all.
Tom (
On 8/4/17, 12:18 AM, "Steve Singer" <[email protected]> wrote:
On Thu, 3 Aug 2017, Tignor, Tom wrote:
>
> Thanks Steve. I should mention, the dependence on indirect subscribers
> for a successful failover may pose a scalability limitation for us. We're
> required to complete failover reliably in just a few minutes. Getting
> acknowledgements from all of the geographically distributed nodes in the
> allotted timeframe has sometimes been challenging. Would this be a
> worthwhile Slony-I feature? I believe I could find time in my schedule to
> do the dev work myself, if that would be helpful.
>
> Tom (
If you remove the edge nodes from the admin conninfo section, does this
solve your issue? Does it introduce any other issues?
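For reference, a minimal sketch of what such a trimmed-down slonik preamble
might look like (the cluster name, node numbers, and conninfo strings here
are all illustrative, not taken from any real setup), listing only the
origin and its direct subscribers and deliberately leaving the edge nodes
out:

```
cluster name = replcluster;

# Origin and direct subscribers only; edge nodes (say, 4 and 5) are
# deliberately omitted from the preamble so slonik never waits on them.
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
node 3 admin conninfo = 'dbname=mydb host=node3 user=slony';
```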
The trick is being able to figure out which nodes it actually needs to
wait for and which ones it doesn't. Part of the problem is to think about
how the edge nodes will catch up with the events they haven't yet
processed if they then get the FAILNODE command earlier.
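If the wait set were to be computed from the catalog, something along the
lines of this query against Slony's sl_subscribe table could distinguish the
direct subscribers from the full receiver set (the schema name
"_replcluster" and the failed node id 10 are assumptions for the sake of
the example):

```sql
-- Nodes that receive a set directly from the (assumed) failed node 10,
-- i.e. the only ones failover would arguably need to wait on.
-- "_replcluster" stands in for the real cluster schema name.
SELECT DISTINCT sub_receiver
  FROM _replcluster.sl_subscribe
 WHERE sub_provider = 10
   AND sub_active;
```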
>
>
> On 8/2/17, 5:51 PM, "Steve Singer" <[email protected]> wrote:
>
> On Mon, 31 Jul 2017, Tignor, Tom wrote:
>
>
> I THINK, and I am not 100% sure of this, but looking at the code it
> appears to work this way: the failover process will wait for each of the
> non-failed nodes to receive/confirm the FAILOVER event before finishing
> the failover process.
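For anyone testing this, the confirmation state can be watched directly
while a failover is in flight; a rough query against sl_confirm (again,
the schema name and the event-origin node id 1 are illustrative
assumptions) shows the latest event each node has confirmed:

```sql
-- Highest event sequence each node has confirmed from node 1's events.
-- Failover is effectively waiting until every surviving node's con_seqno
-- reaches the FAILOVER event's sequence number.
SELECT con_received, max(con_seqno) AS last_confirmed
  FROM _replcluster.sl_confirm
 WHERE con_origin = 1
 GROUP BY con_received;
```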
>
>
> >
> > Hi Steve,
> > A question on one item:
> >
> > - Fix some failover issues when doing a multi-node failover
> > with a cascaded node.
> >
> > In cascaded node failover, is it necessary to sync with every
> > receiver node for a failed over set? Or is it sufficient to sync only
> > with nodes directly subscribing to the failed over node? Hoping for
> > the latter!
> > Thanks,
> >
> > Tom (
> >
> >
> > On 7/30/17, 10:15 PM, "Steve Singer" <[email protected]> wrote:
> >
> >
> > I am thinking of releasing slony 2.2.6 later this week or early
> > next week. Changes are checked into git on the REL_2_2_STABLE branch.
> >
> > Our version detection code doesn't work with the PG10+ version
> > numbering. I wasn't planning on backporting these changes to 2.1 or
> > earlier but someone could if they really wanted to.
> >
> >
> > The following are the changes I am planning on including in 2.2.6:
> >
> > - slonik_build_env can now accept multiple -schema options on the
> >   command line
> > - Support for PG10. This involved changes to PG version detection
> > - Disallow createEvent and data changes in the same transaction.
> >   This also fixes some issues when the logApply trigger invokes the
> >   data change trigger by inserting into a table with a trigger that
> >   in turn inserts into another replicated table.
> > - Fix some failover issues when doing a multi-node failover
> >   with a cascaded node.
> > - Bug 341 - suppress log trigger/deny when running in 'local' mode
> >
> >
> >
> > If I don't hear any objections, or requests for more time to test, I
> > will work through the release process when I have a chance, likely
> > Monday.
> >
> > Steve
> >
> > _______________________________________________
> > Slony1-general mailing list
> > [email protected]
> > http://lists.slony.info/mailman/listinfo/slony1-general
> >
> >
> >
>
>
>
>