>>> On Thu, Sep 6, 2007 at 7:31 PM, in message
<[EMAIL PROTECTED]>, "Kevin Grittner" wrote:
>>>> On Thu, Sep 6, 2007 at 7:03 PM, in message:
>> I think ... there's still room for a simple tool that can zero out
>> the meaningless data in a partially-used WAL segment before compression.
>> It seems reasonable to me, so long as you keep archive_timeout at
>> something reasonably high.
>> If nothing else, people who already have a collection of archived WAL
>> segments would then be able to compact them.
> That would be a *very* useful tool for us, particularly if it could work
> against our existing collection of old WAL files.
Management here has decided that it would be such a useful tool for our
organization that, if nobody else is working on it yet, it is something I
should be working on this week.  Obviously, I would much prefer to do it
in a way that is useful to the rest of the PostgreSQL community, so I'm
looking for advice, direction, and suggestions before I get started.
I was planning on a stand-alone executable which could be run against a
list of files to update them in place, or could handle a single file as a
stream.  The former would be useful for dealing with the accumulation of
files we've already got; the latter would be used in our archive script,
just ahead of gzip in the pipe.
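For the stream mode, the filter could be sketched roughly as below (Python
for illustration only; a production tool would presumably be C).  The
8192-byte page size and the all-zero-header heuristic are my assumptions,
and the heuristic is deliberately naive -- it only handles freshly
allocated segments, not recycled ones:

```python
import sys

PAGE_SIZE = 8192   # assumed XLOG_BLCKSZ; verify against the server build

def page_in_use(page: bytes) -> bool:
    # Placeholder heuristic: treat a page with an all-zero header as unused.
    # A recycled segment carries stale but valid-looking headers from its
    # previous life, so a real tool must parse XLogPageHeaderData and check
    # that xlp_pageaddr matches the page's expected position in this segment.
    return any(page[:16])

def zero_tail(segment: bytes) -> bytes:
    """Zero everything from the first unused page to the end of the segment."""
    out = bytearray(segment)
    for i in range(len(segment) // PAGE_SIZE):
        start = i * PAGE_SIZE
        if not page_in_use(segment[start:start + PAGE_SIZE]):
            # Replace the whole tail with zeros so gzip can collapse it
            # to almost nothing.
            out[start:] = bytes(len(out) - start)
            break
    return bytes(out)

if __name__ == "__main__":
    # Filter mode, e.g.: this_script < segment | gzip > segment.gz
    sys.stdout.buffer.write(zero_tail(sys.stdin.buffer.read()))
```

The zeroed tail compresses to a few bytes, which is the whole point of
running this ahead of gzip in the archive pipeline.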
Any suggestions on an existing executable to use as a model for "best
practices" are welcome, as are suggestions for the safest and most robust
techniques for identifying the portion of the WAL file which should be set
to zero.
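One candidate technique for finding that boundary, sketched below in
Python: compare each page header's stored address (xlp_pageaddr) with the
address the page ought to have given its offset in the segment; the first
mismatch marks where leftover data from the recycled segment begins.  The
struct layout, field order, and little-endian byte order here are my
assumptions about the 8.2-era XLogPageHeaderData and should be verified
against access/xlog_internal.h (a careful tool would also check xlp_magic
and the timeline):

```python
import struct

PAGE_SIZE = 8192   # assumed XLOG_BLCKSZ

# Assumed header layout (verify in access/xlog_internal.h):
#   uint16 xlp_magic; uint16 xlp_info; uint32 xlp_tli;
#   XLogRecPtr xlp_pageaddr { uint32 xlogid; uint32 xrecoff; }
HEADER_FMT = "<HHIII"   # "<" assumes a little-endian server

def first_stale_page(segment: bytes, start_xlogid: int, start_xrecoff: int) -> int:
    """Return the index of the first page whose stored address does not
    match the address implied by its offset within the segment, i.e. where
    recycled (zeroable) data begins.  Returns the page count if every page
    is in use."""
    n_pages = len(segment) // PAGE_SIZE
    for i in range(n_pages):
        _magic, _info, _tli, xlogid, xrecoff = struct.unpack_from(
            HEADER_FMT, segment, i * PAGE_SIZE)
        if (xlogid, xrecoff) != (start_xlogid, start_xrecoff + i * PAGE_SIZE):
            return i
    return n_pages
```

The segment's own starting address can be derived from its file name, so
the check needs nothing beyond the file itself.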
Finally, I assume that I should put this on pgfoundry?