Amit Kapila <amit.kapil...@gmail.com> writes:
> While working on write-ahead-logging of hash indexes, I noticed that
> this function allocates buckets in batches; the mechanism it uses is
> to initialize the last page of the batch with zeros and expect that
> the filesystem will ensure the intervening pages read as zeroes too.

Yes.  AFAIK that filesystem behavior is required by POSIX.
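
For anyone following along, here is a minimal sketch of the batch-allocation
idea being described: physically write only a zero-filled final page and rely
on the filesystem returning zeroes for the never-written pages in between.
The helper name and arguments are made up for illustration; this is not the
actual backend function.

#include "postgres.h"
#include "storage/smgr.h"
#include "utils/rel.h"

static bool
alloc_bucket_batch(Relation rel, BlockNumber firstblock, uint32 nblocks)
{
    BlockNumber lastblock = firstblock + nblocks - 1;
    char        zerobuf[BLCKSZ];

    /* refuse the request if the block-number space would wrap around */
    if (lastblock < firstblock || lastblock == InvalidBlockNumber)
        return false;

    MemSet(zerobuf, 0, sizeof(zerobuf));

    /*
     * Write only the last page of the batch.  The intervening pages are
     * never written; per POSIX, reading them later must return zeroes.
     */
    RelationOpenSmgr(rel);
    smgrextend(rel->rd_smgr, MAIN_FORKNUM, lastblock, zerobuf, false);

    return true;
}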

> I think to make it WAL-enabled, we need to initialize the page header
> (using PageInit() or equivalent) instead of initializing it with
> zeroes, since some parts of our WAL replay machinery expect that the
> page should not be new, as I indicated in the other thread [1].

I don't really see why that's a problem.  The only way one of the fill
pages would get to be not-zero is if there is a WAL action later in the
stream that overwrites it.  So how would things become inconsistent?
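
(For reference, the sort of check being alluded to upthread is a defensive
test against pages whose header was never initialized; the fragment below is
illustrative, not quoted from any particular redo routine.)

    Page    page = BufferGetPage(buffer);

    /*
     * A page that was allocated but never initialized has an all-zeroes
     * header, so PageIsNew() is true and code like this would complain
     * during replay or later access.
     */
    if (PageIsNew(page))
        elog(ERROR, "unexpected zero page at block %u",
             BufferGetBlockNumber(buffer));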

> Offhand, I don't see any problem with just initializing the last page
> and writing the WAL for it with log_newpage(); however, if we try to
> initialize all pages, there could be some performance penalty on the
> split operation.

"Some" seems like rather an understatement.  And it's not just the
added I/O, it's the fact that you'd need to lock each bucket as you
went through them to avoid clobbering concurrently-inserted data.
If you weren't talking about such an enormous penalty, I might be okay
with zeroing the intervening pages explicitly rather than depending on
the filesystem to do it.  But since you are, I think you need a clearer
explanation of why this is necessary.
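
To make the narrower option quoted above concrete: PageInit() only the
batch's last page and WAL-log it with log_newpage(), still leaving the
intervening pages to the filesystem's read-as-zeroes behavior.  The variable
names below are illustrative; this is a sketch of the idea, not the eventual
patch.

    char        lastbuf[BLCKSZ];
    Page        page = (Page) lastbuf;

    /* give the batch's final page a valid, empty header */
    PageInit(page, BLCKSZ, 0);

    /* emit a full-page image so replay sees an initialized page */
    if (RelationNeedsWAL(rel))
        log_newpage(&rel->rd_node, MAIN_FORKNUM, lastblock, page, true);

    RelationOpenSmgr(rel);
    smgrextend(rel->rd_smgr, MAIN_FORKNUM, lastblock, lastbuf, false);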

                        regards, tom lane

