On Jun 12, 2013 4:56 PM, "Robert Haas" <robertmh...@gmail.com> wrote:
>
> On Sat, Jun 8, 2013 at 10:36 AM, MauMau <maumau...@gmail.com> wrote:
> > Yes, I feel designing reliable archiving, even for the simplest case -
> > copying WAL to disk - is very difficult.  I know of the following three
> > problems if you just follow the PostgreSQL manual.  Average users won't
> > notice them.  I guess even professional DBAs migrating from other DBMSs
> > won't, either.
> >
> > 1. If the machine or postgres crashes while archive_command is copying
> > a WAL file, later archive recovery fails.
> > This is because cp leaves a file of less than 16MB in the archive area,
> > and postgres refuses to start when it finds such a small archived WAL
> > file.  The solution, which IIRC Tomas san told me here, is to do
> > something like "cp %p /archive/dir/%f.tmp && mv /archive/dir/%f.tmp
> > /archive/dir/%f".
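For the archives, here is a minimal sketch of such an atomic archive script
(the script name, function name, and /archive/dir path are examples, not
anything from the docs):

```shell
#!/bin/sh
# Sketch of an atomic archive_command wrapper.  Configure it as, e.g.:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
# Copy under a temporary name first, then rename.  rename() is atomic on
# the same filesystem, so recovery never sees a half-copied segment.

archive_one() {
    src=$1; fname=$2; dest=${3:-/archive/dir}   # example destination
    cp "$src" "$dest/$fname.tmp" || return 1
    mv "$dest/$fname.tmp" "$dest/$fname"
}

if [ $# -ge 2 ]; then
    archive_one "$1" "$2"
fi
```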
> >
> > 2. archive_command dumps core when you run pg_ctl stop -mi.
> > This is because the postmaster sends SIGQUIT to all its descendants.
> > The core files accumulate in the data directory, which will be backed
> > up along with the database.  Of course those core files are garbage.
> > The archive_command script needs to catch SIGQUIT and exit.
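A sketch of what catching SIGQUIT could look like (again, the paths and
helper name are my own examples):

```shell
#!/bin/sh
# Sketch of an archive_command script that survives immediate shutdown
# (pg_ctl stop -mi).  The postmaster SIGQUITs its children; trapping
# QUIT and exiting non-zero avoids a core dump and tells the server the
# segment was not archived, so archiving will be retried later.
trap 'exit 1' QUIT

copy_segment() {
    src=$1; fname=$2; dest=${3:-/archive/dir}   # example destination
    cp "$src" "$dest/$fname.tmp" && mv "$dest/$fname.tmp" "$dest/$fname"
}

if [ $# -ge 2 ]; then
    copy_segment "$1" "$2"
fi
```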
> >
> > 3. You cannot know the reason for an archive_command failure (e.g.
> > archive area full) if you don't use PostgreSQL's server logging.
> > This is because archive_command failures are not logged to
> > syslog/eventlog.
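One workaround until the server does this itself: have the script report
its own failures to syslog with logger(1).  A sketch (the function name,
destination path, and syslog tag/facility are examples I picked):

```shell
#!/bin/sh
# Sketch: the archive script logs its own failures to syslog via
# logger(1), so "archive area full" and similar errors are visible even
# to an admin who only watches syslog/eventlog.

archive_with_logging() {
    src=$1; fname=$2; dest=${3:-/archive/dir}   # example destination
    if cp "$src" "$dest/$fname.tmp" && mv "$dest/$fname.tmp" "$dest/$fname"
    then
        return 0
    else
        logger -p local0.err -t archive_command "failed to archive $fname to $dest"
        return 1
    fi
}

if [ $# -ge 2 ]; then
    archive_with_logging "$1" "$2"
fi
```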
> >
> >
> > I hope PostgreSQL will provide a reliable archiving facility that is
> > ready to use.
>
> +1.  I think we should have a way to set an archive DIRECTORY, rather
> than an archive command.  And if you set it, then PostgreSQL should
> just do all of that stuff correctly, without any help from the user.

Wouldn't that encourage people to do local archiving, which is almost
always a bad idea?

I'd rather improve the experience with pg_receivexlog or some other method
that does remote archiving...

> Of course, some users will want to archive to a remote machine via ssh
> or rsync or what-have-you, and those users will need to provide their
> own tools.  But it's got to be pretty common to archive to a local
> path that happens to be a remote mount, or to a local directory whose
> contents are subsequently copied off by a batch job.  Making that work
> nicely with near-zero configuration would be a significant advance.

I guess archiving to an NFS mount or so isn't too bad, but archiving and
using a cron job to get the files off is typically a great way to lose
data, and we really shouldn't encourage that by default, IMO.

/Magnus
