On 1/6/17 8:09 AM, Feike Steenbergen wrote:
On 6 January 2017 at 13:50, Magnus Hagander <mag...@hagander.net> wrote:
I think we're better off clearly documenting that we don't care about
it. And basically let the external command be responsible for that part.
So for example, your typical backup manager would listen to this
signal or whatever to react quickly. But it would *also* have some sort
of fallback. For example, whenever it's triggered it also checks if
there are any previous segments it missed (this would also cover the
startup sequence).
I'm fine with the backup manager doing all the work of keeping track of
what has been compressed, moved to archive, etc. No need to reinvent
the wheel here.
For my part I still prefer an actual command to be executed, so it will
start/restart the archiver if it is not already running or has died. This
reduces the number of processes that I need to ensure are running.
If the consensus is that a signal is better, then I'll make that work. I
will say this raises the bar on what is required to write a good archive
command, and we already know that is quite a difficult task.
For me this works fine. I just want to ensure that if there is any work
to be done, my backup manager will do it quickly. My implementation
might simply be a process that checks every n seconds or when signalled.
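A minimal sketch of that idea (all names here are hypothetical, not part of PostgreSQL): the backup manager wakes on a signal or after a timeout, and on every wakeup rescans for pending work. PostgreSQL's archiver uses `<segment>.ready` marker files in `pg_wal/archive_status`; polling that convention is one plausible way an external manager could find segments it missed, which also covers the startup case discussed above.

```python
import os


def find_ready_segments(status_dir):
    """Return WAL segment names that have a .ready marker, oldest first.

    PostgreSQL marks a segment as pending by creating <segment>.ready
    in pg_wal/archive_status; an external backup manager can rescan
    this directory to pick up anything it missed.
    """
    suffix = ".ready"
    ready = [name[:-len(suffix)]
             for name in os.listdir(status_dir)
             if name.endswith(suffix)]
    return sorted(ready)


def run_backup_manager(status_dir, wake, archive_one, poll_seconds=60):
    """Archive pending segments, waking on signal or every poll_seconds.

    `wake` is a threading.Event set from a signal handler (e.g. SIGUSR1);
    the unconditional rescan on each wakeup is the fallback that covers
    startup and any notifications that were missed.
    """
    while True:
        for segment in find_ready_segments(status_dir):
            archive_one(segment)
        if wake.wait(timeout=poll_seconds):
            wake.clear()
```

The design point from the thread is that the signal is only an optimization for latency; correctness comes from the rescan, so a missed or late signal costs at most one polling interval.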
Since we never actually remove anything (unlike archive_command, which
is integrated with WAL segment rotation), I think this can be done
perfectly safely.
Looking at the use cases where you have been doing this, are there any
where it would not work?
This would work for all the use cases I've come across.
Agreed.
--
-David
da...@pgmasters.net
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers