I've finished reviewing the code and have already marked this patch as Waiting
on Author. I'll wait for the revised patch, then proceed to running the
program once it looks reasonable.
Like worker_spi, save and restore errno in signal handlers.
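For reference, the worker_spi-style handler looks roughly like this (a minimal standalone sketch; the SetLatch() call a real worker would make is shown only as a comment, and handle_sigterm is a hypothetical name):

```c
#include <errno.h>
#include <signal.h>

static volatile sig_atomic_t got_sigterm = 0;

static void
handle_sigterm(int signum)
{
    int save_errno = errno;     /* work in the handler may clobber errno... */

    got_sigterm = 1;
    /* SetLatch(MyLatch); -- in a real PostgreSQL background worker */

    errno = save_errno;         /* ...so restore it before returning */
}
```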
Remove the following comment, and similar ones if any, because this module
is to be included in 9.5 and above.
* In Postgres version 9.4 and above, we use the dynamic background worker
* infrastructure for BlockReaders, and the BufferSaver process does the
* legwork of registering the BlockReader workers.
Expand the bodies of the following functions at their call sites and remove
the function definitions, because each of them is called only once. That would
make it straightforward to see the processing where it actually happens.
I don't understand the use case of pg_hibernator.enabled. Is this really
necessary? In what situations does this parameter "really" matter? If it's
not crucial, why don't we remove it and make the specification simpler?
As mentioned above, you can remove CreateDirectory(). Instead, you can just
mkdir() unconditionally and check the result, without first stat()ing.
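That is, something like the following (a standalone sketch; ensure_directory and its return convention are hypothetical, and a real backend would ereport(ERROR, ... "%m") on failure instead of returning false):

```c
#include <errno.h>
#include <stdbool.h>
#include <sys/stat.h>

/* Create the directory unconditionally; EEXIST is not an error. */
static bool
ensure_directory(const char *path)
{
    if (mkdir(path, S_IRWXU) < 0 && errno != EEXIST)
        return false;           /* genuine failure (EACCES, ENOENT, ...) */
    return true;                /* created now, or it already existed */
}
```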
Use AllocateDir/ReadDir/FreeDir instead of opendir/readdir/closedir in the
server process. Also, follow the error handling style of other source files.
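The backend pattern looks roughly like this (a sketch only; it assumes postgres.h and storage/fd.h and is not standalone-compilable):

```c
DIR        *dir;
struct dirent *dent;

dir = AllocateDir(SAVE_LOCATION);
if (dir == NULL)
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not open directory \"%s\": %m", SAVE_LOCATION)));

while ((dent = ReadDir(dir, SAVE_LOCATION)) != NULL)
{
    /* skip "." and ".." and process dent->d_name here */
}
FreeDir(dir);
```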
The local variable hibernate_dir appears to be unnecessary because it always
holds SAVE_LOCATION as its value. If so, remove it.
I think the severity below should better be WARNING, but I don't insist.
ereport(LOG, (errmsg("registration of background worker failed")));
"iff" should be "if".
/* Remove the element from pending list iff we could register a worker
How is this true? Does the shared system catalog always have at least one
buffer in shared buffers?
/* Make sure the first buffer we save belongs to global object. */
Assert(buf->database == InvalidOid);
* Special case for global objects. The sort brings them to the
* front of the list.
The format should be %u, not %d.
(errmsg("writer: writing block db %d filenode %d forknum %d blocknum
database_counter, prev_filenode, prev_forknum, buf->blocknum)));
Why is the first while loop in BlockReaderMain() necessary? Just opening
the target save file isn't enough?
Use MemoryContextAllocHuge(). palloc() can only allocate chunks up to 1GB.
* This is not a concern as of now, so deferred; there's at least one other
* place that allocates (NBuffers * (much_bigger_struct)), so this seems to
* be an acceptable practice.
saved_buffers = (SavedBuffer *) palloc(sizeof(SavedBuffer) * NBuffers);
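Something like the following instead (a sketch against the PostgreSQL 9.4+ API, not standalone-compilable; mul_size() guards against Size overflow):

```c
/* palloc() is capped at MaxAllocSize (~1GB); the "huge" variant is not. */
saved_buffers = (SavedBuffer *)
    MemoryContextAllocHuge(CurrentMemoryContext,
                           mul_size(sizeof(SavedBuffer), NBuffers));
```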
It's better for the .save files to be created per tablespace, not per
database. Tablespaces are more likely to be allocated on different storage
devices for I/O distribution and capacity. So, it would be more natural to
expect that we can restore buffers more quickly by letting multiple Block
Readers do parallel I/O on different storage devices.
Reading the below description in the documentation, it seems that Block
Readers can exit with 0 upon successful completion, because bgw_restart_time
is set to BGW_NEVER_RESTART.
"If bgw_restart_time for a background worker is configured as
BGW_NEVER_RESTART, or if it exits with an exit code of 0 or is terminated by
TerminateBackgroundWorker, it will be automatically unregistered by the
postmaster on exit."
As others have said, I also think the Buffer Saver should first write to a
temporary file, say *.tmp, then rename it to *.save upon completion. That
prevents the Block Reader from reading a partially written file that does not
represent any valid hibernated state.
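A sketch of the write-then-rename step (publish_save_file is a hypothetical helper name; the Buffer Saver would write and fsync the .tmp file first, then call this to expose the .save file):

```c
#include <stdbool.h>
#include <stdio.h>

static bool
publish_save_file(const char *tmp_path, const char *save_path)
{
    /* rename() is atomic within a filesystem: a Block Reader sees either
     * no .save file or a complete one, never a half-written file. */
    return rename(tmp_path, save_path) == 0;
}
```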
Sent via pgsql-hackers mailing list