Fix data loss when restarting the bulk_write facility

If a user started a bulk write operation on a fork with existing data
to append data in bulk, the bulk_write machinery would zero out all
previously written pages up to the last page written by the new
bulk_write operation.

This is not an issue for PostgreSQL itself, because we never use the
bulk_write facility on a non-empty fork. But there are use cases where
it makes sense: the TimescaleDB extension is known to do this to merge
partitions, for example.

Backpatch to v17, where the bulk_write machinery was introduced.

Author: Matthias van de Meent <[email protected]>
Reported-By: Erik Nordström <[email protected]>
Reviewed-by: Erik Nordström <[email protected]>
Discussion: https://www.postgresql.org/message-id/cacaa4vj%[email protected]

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/ee937f0409d5c75855861bf31f88eeb77623b411

Modified Files
--------------
src/backend/storage/smgr/bulk_write.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
