>>> On Thu, Sep 27, 2007 at  6:56 AM, in message
> On Wed, 2007-09-26 at 16:31 -0500, Kevin Grittner wrote:
>> The one downside I've found is that it adds 0.2
>> seconds of CPU time per WAL file archive during our heaviest update
>> periods.  It's in the archiver process, not a backend process that's
>> running a query, and we're not generally CPU bound, so this is not a
>> problem for us. 
> OK, first time anybody's measured a significant cost to process creation
> during execution of the archive_command. Still fairly low though.
Confirmed in further tests on a normal production environment.  Starting
from a set of OS-cached 16 MB WAL files representing several days of
activity, the overall time to compress them to disk with gzip went down
when they were piped through this filter, but the time for a completely
full file went up.
Best case:
gzip alone:
real    0m0.769s
user    0m0.759s
sys     0m0.009s
gz size: 4562441
pg_clearxlogtail | gzip:
real    0m0.132s
user    0m0.119s
sys     0m0.024s
gz size: 16406
Worst case:
gzip alone:
real    0m0.781s
user    0m0.770s
sys     0m0.010s
gz size: 4554307
pg_clearxlogtail | gzip:
real    0m1.073s
user    0m1.018s
sys     0m0.063s
gz size: 4554307
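For context, the filter's win comes from zeroing the unused tail of each 16 MB segment so that gzip sees long runs of zeros; in the worst case the segment is completely full, there is nothing to zero, and the extra scan is pure overhead.  A minimal sketch of the idea (the page size and magic value here are illustrative placeholders, not the real XLOG page header layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 8192          /* WAL block size (8 KB default) */
#define WAL_MAGIC 0xD062        /* placeholder; not the real XLOG magic */

/* A page still in use starts with the expected magic number. */
static int page_in_use(const unsigned char *page)
{
    uint16_t magic;

    memcpy(&magic, page, sizeof(magic));
    return magic == WAL_MAGIC;
}

/*
 * Zero everything from the first page whose magic does not match to the
 * end of the buffer, so the unused tail compresses to almost nothing.
 * A completely full segment is scanned end to end with no change.
 */
static void clear_tail(unsigned char *buf, size_t nbytes)
{
    size_t off;

    for (off = 0; off + PAGE_SIZE <= nbytes; off += PAGE_SIZE)
    {
        if (!page_in_use(buf + off))
        {
            memset(buf + off, 0, nbytes - off);
            return;
        }
    }
}
```

That is consistent with the numbers above: a mostly-empty segment gzips to ~16 KB after the tail is zeroed, while a full segment compresses to the same ~4.5 MB either way and just pays for the extra pass.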
Is it necessary to try to improve that worst case?
By the way, I realize that the error messages are still lame.
I'm going to do something about that.  I particularly don't like this
as a failure message:
> echo 7777777777777777777 `cat 0000000100000003000000EF` | pg_clearxlogtail > /dev/null
pg_clearxlogtail: Warning, unexpected magic number
pg_clearxlogtail: stdin: Success
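For what it's worth, that "stdin: Success" line is what perror()/strerror() produce when errno happens to be zero; a data-format problem like a bad magic number never sets errno, so reporting it through the errno machinery will always read oddly.  One possible shape, as a sketch (the helper name is hypothetical, not pg_clearxlogtail's actual code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Build a message for a data-format problem without consulting errno:
 * strerror(0) is "Success", which is exactly where a message like
 * "stdin: Success" comes from.  The caller would print this to stderr
 * and exit nonzero.  (Hypothetical helper for illustration.)
 */
static int format_data_error(char *buf, size_t len,
                             const char *source, const char *detail)
{
    return snprintf(buf, len, "pg_clearxlogtail: %s: %s", source, detail);
}
```

A message like "pg_clearxlogtail: stdin: unexpected magic number in page header" at least says which input failed and why, and a nonzero exit status lets the archive_command pipeline notice the failure.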
Is the filter-only approach acceptable, after the discussion here?
Is the magic number handling OK?  If not, what would be?
Any other issues that I should address?
