On Tue, Jul 4, 2017 at 4:27 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
>> On 7/3/17 09:53, Tom Lane wrote:
>>> Hm.  Before we add a bunch of code to deal with that, are we sure we
>>> *want* it to copy such files?  Seems like that's expending a lot of
>>> data-transfer work for zero added value --- consider e.g. a server
>>> with a bunch of old core files laying about in $PGDATA.  Given that
>>> it's already excluded all database-data-containing files, maybe we
>>> should just set a cap on the plausible size of auxiliary files.
>> It seems kind of lame to fail on large files these days, even if they
>> are not often useful in the particular case.
> True.  But copying useless data is also lame.

We don't want to complicate the pg_rewind code with filtering
capabilities, so if the fix is simple I think we should include it
and be done with it. That will avoid future complications as well.

>> Also, most of the segment and file sizes are configurable, and we have
>> had reports of people venturing into much larger file sizes.
> But if I understand the context correctly, we're not transferring relation
> data files this way anyway.  If we do transfer WAL files this way, we
> could make sure to set the cutoff larger than the WAL segment size.

WAL segments are not transferred. Only the WAL data of the target
data folder is scanned to find all the blocks that have been
touched since the last checkpoint before the WAL forked.

Now, I think that this is broken for relation files larger than 2GB:
see fetch_file_range, where the begin location is an int32.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)