>>> On Mon, Sep 24, 2007 at 4:17 PM, in message <[EMAIL PROTECTED]>,
>>> "Kevin Grittner" <[EMAIL PROTECTED]> wrote:
>>>> On Thu, Sep 6, 2007 at 7:03 PM, in message
>>>> <[EMAIL PROTECTED]>, Jeff Davis <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> I think ... there's still room for a simple tool that can zero out
>>>>> the meaningless data in a partially-used WAL segment before compression.
>>>
>>> so I'm looking for advice, direction, and suggestions before I get started.
Lacking any suggestions, I plowed ahead with something which satisfies our needs. A first, rough version is attached. It'll save us buying another drawer of drives, so it was worth a few hours of research to figure out how to do it. If anyone spots any obvious defects, please let me know. We'll be running about 50,000 WAL files through it today or tomorrow; if any problems turn up in that process, I'll repost with a fix.

Given the lack of response to my previous post, I'll assume it's not worth the effort to polish it up further; but if others are interested in using it, I'll make some time for that.

Adding this to the pipe in our archive script not only saves disk space but also reduces CPU time overall, since gzip usually has less work to do. When WAL files switch because they are full, the CPU time goes from about 0.8s to about 1.0s.

-Kevin
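For anyone wondering why zeroing the unused tail helps so much: a recycled WAL segment still carries the old (effectively random) bytes past the logical end of the log, and random data defeats gzip. Here is a small self-contained Python sketch of the effect; it is only an illustration, not the attached tool, and the 16 MB segment size and 1 MB "used" portion are just example figures based on the default WAL segment size:

```python
import os
import zlib

SEGMENT = 16 * 1024 * 1024   # default PostgreSQL WAL segment size
USED = 1 * 1024 * 1024       # pretend only the first 1 MB is live WAL data

# A recycled segment: the tail still holds stale, essentially random bytes.
stale = os.urandom(SEGMENT)

# The same segment after a clear-tail filter zeroes everything past USED.
zeroed = stale[:USED] + b"\x00" * (SEGMENT - USED)

# The random tail barely compresses; the zeroed tail compresses almost away.
print(len(zlib.compress(stale)))
print(len(zlib.compress(zeroed)))
```

The zeroed version typically compresses to a small fraction of the stale one, which is where both the disk-space and gzip CPU savings come from on partially-used segments.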
Description: Binary data