> In response to Jim Nasby <[EMAIL PROTECTED]>:
>> I was recently running defrag on my windows/parallels VM and noticed  
>> a bunch of WAL files that defrag couldn't take care of, presumably  
>> because the database was running. What's disturbing to me is that  
>> these files all had ~2000 fragments.

It sounds like that filesystem is too stupid to coalesce successive
write() calls into one allocation fragment :-(.  I agree with the
comments that this might not be important, but you could experiment
to see --- try increasing the size of "zbuffer" in XLogFileInit to
maybe 16*XLOG_BLCKSZ, re-initdb, and see if performance improves.
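
In case it helps anyone run that experiment without patching the backend
first, here is a rough standalone approximation of what that zero-fill
loop does (this is not the actual XLogFileInit code; the output file name
and the ZBUF_BLOCKS knob are made up for the test).  Build it once with
ZBUF_BLOCKS=1 and once with ZBUF_BLOCKS=16 and compare timing and how
fragmented the resulting 16MB file comes out on the filesystem in
question:

/*
 * fakexlog.c -- zero-fill a WAL-segment-sized file with repeated
 * write() calls from a zeroed buffer, the same general pattern
 * XLogFileInit uses, so the effect of a larger write size on
 * allocation fragments can be measured in isolation.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define XLOG_BLCKSZ     8192                    /* default WAL block size */
#define XLOG_SEG_SIZE   (16 * 1024 * 1024)      /* default WAL segment size */
#ifndef ZBUF_BLOCKS
#define ZBUF_BLOCKS     16                      /* try 1 vs 16 here */
#endif

int
main(void)
{
    static char zbuffer[ZBUF_BLOCKS * XLOG_BLCKSZ];     /* zeroed by default */
    size_t      nbytes;
    int         fd;

    fd = open("fake_xlog_segment", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* write the whole segment in zbuffer-sized chunks */
    for (nbytes = 0; nbytes < XLOG_SEG_SIZE; nbytes += sizeof(zbuffer))
    {
        if (write(fd, zbuffer, sizeof(zbuffer)) != (ssize_t) sizeof(zbuffer))
        {
            perror("write");
            return 1;
        }
    }

    if (fsync(fd) != 0)
    {
        perror("fsync");
        return 1;
    }
    close(fd);
    return 0;
}

If I recall the loop correctly, the equivalent change in the backend is
little more than enlarging the zbuffer declaration, since the fill loop
already steps by sizeof(zbuffer) and so would issue correspondingly
fewer write() calls.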

The suggestion to use ftruncate is so full of holes that I won't
bother to point them all out, but certainly we could write more than
just XLOG_BLCKSZ at a time while preparing the file.

                        regards, tom lane
