2014-11-13 9:05 GMT+01:00 Heikki Linnakangas hlinnakan...@vmware.com:
Right. You have to be careful to make sure the standby really did fully
catch up with the master, though. If it happens that the replication
connection is momentarily down when you shut down the master, for example,
then
Hi,
On Sat, Nov 15, 2014 at 5:31 PM, Maeldron T. maeld...@gmail.com wrote:
A safely shut down master (-m fast is safe) can be safely restarted as
a slave to the newly promoted master. Fast shutdown shuts down all
normal connections, does a shutdown checkpoint and then waits for this
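The controlled-switchover sequence described in that message can be sketched with standard pg_ctl commands. This is illustrative only; the data directory paths /data/A and /data/B are placeholders, not taken from the thread:

```shell
# Sketch of a controlled switchover (paths are illustrative placeholders).

# 1. Fast shutdown of the master: disconnects normal connections, writes a
#    shutdown checkpoint, and waits for it to be streamed to the standby.
pg_ctl -D /data/A stop -m fast

# 2. Promote the standby to be the new master (equivalently, create its
#    configured trigger_file).
pg_ctl -D /data/B promote
```

After this, the old master can in principle be restarted as a standby of the new one, subject to the timeline caveats discussed later in the thread.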
On 16/11/14 13:13, didier wrote:
I think you have to add
recovery_target_timeline = '2'
in recovery.conf
with '2' being the new primary timeline.
cf http://www.postgresql.org/docs/9.4/static/recovery-target-settings.html
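The recovery.conf being discussed might look like the following sketch; the primary_conninfo values are placeholders, not taken from the thread:

```conf
# recovery.conf on the old master, restarted as a standby of the newly
# promoted server (connection values below are illustrative placeholders).
standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replication'
# Follow the timeline created by the promotion ('2' per the message above);
# 'latest' is also accepted and follows whatever timeline the primary is on.
recovery_target_timeline = '2'
```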
Thank you.
Based on the link I have added:
recovery_target_timeline =
On 12/11/14 14:28, Ants Aasma wrote:
On Tue, Nov 11, 2014 at 11:52 PM, Maeldron T. maeld...@gmail.com wrote:
As far as I remember (I can’t test it right now but I am 99% sure) promoting
the slave makes it impossible to connect the old master to the new one without
making a base_backup. The
On Thu, Nov 13, 2014 at 10:05 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
In this case the
old master will request recovery from a point after the timeline
switch and the new master will reply with an error. So it is safe to
try re-adding a crashed master as a slave, but this might
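As a quick way to see whether a shut-down old master has already written WAL past the timeline switch, its control file can be inspected with pg_controldata (a sketch; the data directory path is an assumption):

```shell
# Sketch: compare the old master's last checkpoint timeline and location
# against the new master's history before re-adding it as a standby.
pg_controldata /data/A | grep -E "TimeLineID|checkpoint location"
```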
On Tue, Nov 11, 2014 at 11:52 PM, Maeldron T. maeld...@gmail.com wrote:
As far as I remember (I can’t test it right now but I am 99% sure) promoting
the slave makes it impossible to connect the old master to the new one
without making a base_backup. The reason is the timeline change. It
Hi,
2014-10-29 17:46 GMT+01:00 Robert Haas robertmh...@gmail.com:
Yes, but after the restart, the slave will also rewind to the most
recent restart-point to begin replay, and some of the sanity checks
that recovery.conf enforces will be lost during that
Hello,
I swear I have read a couple of old threads. Yet I am not sure if it is safe
to fail back to the old master in case of async replication without a base
backup.
Considering:
I have the latest 9.3 server
A: master
B: slave
B is actively connected to A
I shut down A manually with -m fast (it's
On Wed, Oct 29, 2014 at 6:21 AM, Maeldron T. maeld...@gmail.com wrote:
I swear I have read a couple of old threads. Yet I am not sure if it is safe to
fail back to the old master in case of async replication without a base backup.
Considering:
I have the latest 9.3 server
A: master
B: slave
B is
Thank you, Robert.
I thought that removing the recovery.conf file makes the slave a master only
after the slave is restarted. (Unlike creating a trigger_file.) Isn't
this true?
I also thought that if there was a crash on the original master and it
applied WAL entries on itself that are not
On Wed, Oct 29, 2014 at 12:43 PM, Maeldron T. maeld...@gmail.com wrote:
Thank you, Robert.
I thought that removing the recovery.conf file makes the slave a master only
after the slave is restarted. (Unlike creating a trigger_file.) Isn't
this true?
Yes, but after the restart, the slave