Dimitri Fontaine <dimi...@2ndquadrant.fr> wrote:
> "Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
>> (1)  We're talking about a new /bin executable to do this which
>> could be referenced in an archive_command string or run from a
>> script called by archive_command, right?
> That, or an internal implementation: a function in the backend that
> would be called when archive_command is set to some specific value,
> much as "test" and "cd" in bash refer not to executables on the PATH
> but to built-in code.
> But I know some people here will frown upon that idea.
I think that would limit its usefulness.  We're doing some things in
our archive script (like sending the DBA team an email when an
archive command fails) which might not be included in such a
utility.  (Or would it???)
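For what it's worth, the kind of wrapper we run today can be sketched in a few lines of shell.  Everything here is illustrative: the paths are made up, and notify_dba stands in for whatever mail mechanism a site actually uses (mailx, sendmail, etc.).

```shell
# Hypothetical archive_command wrapper: copy without overwriting,
# and alert the DBA team when the copy fails.  ARCHIVE_DIR and
# notify_dba are stand-ins, not real conventions.
ARCHIVE_DIR=/tmp/arch_notify
mkdir -p "$ARCHIVE_DIR"

notify_dba() {
    # A real script would mail the DBA team here; we just print.
    echo "ALERT: $1"
}

archive_with_alert() {
    # $1 = full path to the WAL segment (%p), $2 = its file name (%f)
    if test ! -f "$ARCHIVE_DIR/$2" && cp "$1" "$ARCHIVE_DIR/$2"; then
        return 0
    else
        notify_dba "archive of $2 failed"
        return 1
    fi
}

echo x > /tmp/seg1
archive_with_alert /tmp/seg1 seg1 && echo "archived"
archive_with_alert /tmp/seg1 seg1 || echo "second attempt refused"
```

The nonzero exit on failure matters: it makes the server keep the segment and retry, rather than treating the WAL file as safely archived.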
>> (2)  It should copy, not move, with protection against
>> overwriting an existing file.
> See, we need to provide a good production grade facility.  I've
> never tried to do it myself, I'm just using walmgr to manage my
> archives.
I'm not familiar with that product, so that doesn't mean much to me.
Could you talk in terms of what features you would want?
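To pin down what I mean by (2): the overwrite protection is essentially the stock archive_command idiom from the PostgreSQL documentation, `test ! -f archivedir/%f && cp %p archivedir/%f`, wrapped here in a runnable sketch (the /tmp paths are placeholders):

```shell
# Copy-not-move with overwrite protection: fail rather than clobber
# an existing archived segment.  ARCHIVE_DIR is a placeholder path.
ARCHIVE_DIR=/tmp/archdir
mkdir -p "$ARCHIVE_DIR"

archive_wal() {
    # $1 = full path to WAL segment (%p), $2 = file name only (%f).
    # test fails (nonzero) if the target exists, so cp never runs
    # and the server sees the archive attempt as failed.
    test ! -f "$ARCHIVE_DIR/$2" && cp "$1" "$ARCHIVE_DIR/$2"
}

echo data > /tmp/0001
archive_wal /tmp/0001 0001 && echo "archived"
archive_wal /tmp/0001 0001 || echo "refused overwrite"
```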
>> (4)  Maybe not in the initial version, but eventually it might be
>> nice to support checking for an "owner" file of some sort in the
>> target directory, to help sort out problems with copied databases
>> writing to the same location as the source.
> Then we need to provide the associated restore command, which must
> not be the "owner" here, I guess.
I'm not following you here.  My reasoning is this: the main reason
I've seen given to avoid overwriting an existing file (and something
we've run into in this shop a few times when people weren't paying
proper attention) is that a database can be copied to another
location on the same machine (say, for a test or development
database based on the production database), and the archive command
in that copy might directly or indirectly point to the same archive
directory as the original.  You don't want the databases clobbering
each other's WAL files -- you want an error here.  The feature I'm
proposing would allow the original to continue without any errors,
while generating an error on any attempt to archive there from the
copy -- even if the copy tried to write the file first.
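One way the "owner" file could work -- purely a sketch, with made-up names (owner.tag, the cluster identifiers) -- is first-writer-claims: the first cluster to archive records its identity, and any cluster with a different identity gets an error before it can touch a single WAL file:

```shell
# Hypothetical owner-file check for an archive directory.  The file
# name owner.tag and the identifiers are invented for illustration;
# a real version might record the cluster's system identifier from
# pg_controldata.
ARCHIVE_DIR=/tmp/arch_owner
mkdir -p "$ARCHIVE_DIR"

claim_or_check_owner() {
    # $1 = this cluster's identifier
    OWNER_FILE="$ARCHIVE_DIR/owner.tag"
    if [ ! -f "$OWNER_FILE" ]; then
        echo "$1" > "$OWNER_FILE"      # first archiver claims the directory
    elif [ "$(cat "$OWNER_FILE")" != "$1" ]; then
        echo "archive dir owned by $(cat "$OWNER_FILE"), not $1" >&2
        return 1                       # a copied cluster errors out here
    fi
}

claim_or_check_owner prod-6543 && echo "original: ok"
claim_or_check_owner copy-9999 || echo "copy: rejected"
claim_or_check_owner prod-6543 && echo "original: still ok"
```

That gives exactly the asymmetry described above: the original keeps archiving without errors, and the copy fails before writing anything.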
A couple of other things that would need a little thought with such
a utility are what to do about "cleaning" each WAL file (we use
pg_clearxlogtail; others use pglesslog) and what to do about
compression (we filter through gzip).  There is a net performance
win from running the WAL files through this filtering even before
you look at disk space or network bandwidth issues.  Maybe we could
extract a subset of pg_standby or something to get all these
features; not sure.
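The filter-then-compress step we use amounts to a small pipeline.  In this sketch, `cat` is a deliberate placeholder for pg_clearxlogtail (which zeroes the unused tail of a segment) so the example runs anywhere; the paths are likewise invented:

```shell
# Sketch of clean-then-compress archiving.  `cat` stands in for
# pg_clearxlogtail, which may not be installed; substitute the real
# filter in production.  Paths are placeholders.
ARCHIVE_DIR=/tmp/arch_gz
mkdir -p "$ARCHIVE_DIR"

archive_filtered() {
    # $1 = %p, $2 = %f: run the segment through the cleaning filter,
    # then gzip, refusing to overwrite an existing archived copy.
    test ! -f "$ARCHIVE_DIR/$2.gz" \
        && cat "$1" | gzip > "$ARCHIVE_DIR/$2.gz"
}

echo "wal segment contents" > /tmp/seg42
archive_filtered /tmp/seg42 seg42 && echo "compressed"
gunzip -c "$ARCHIVE_DIR/seg42.gz"
```

Zeroing the tail before gzip is what makes the compression pay off: a mostly-empty 16MB segment shrinks to almost nothing, which is where the CPU cost of filtering is recovered before you even count disk or network savings.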

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)