On 08/06/2018 06:11 PM, Stephen Frost wrote:
> Greetings,
>
> * Tomas Vondra (tomas.von...@2ndquadrant.com) wrote:
>> On 08/06/2018 05:19 PM, Stephen Frost wrote:
>>> * David Steele (da...@pgmasters.net) wrote:
>>>> I think for the stated scenario (known good standby that has been
>>>> shut down gracefully) it makes perfect sense to trust the contents
>>>> of pg_wal.  Call this scenario #1.
>>>>
>>>> An alternate scenario (#2) is that the data directory was copied
>>>> using a basic copy tool and the pg_wal directory was not excluded
>>>> from the copy.  This means the contents of pg_wal will be in an
>>>> inconsistent state.  The files that are there might be partials
>>>> (though without the .partial extension), and you can easily have
>>>> multiple partials.  You will almost certainly not have everything
>>>> you need to get to consistency.
>>
>> Yeah. But as Simon said, we do have fairly strong protections against
>> applying corrupted WAL - every record is CRC-checked. So why not fall
>> back to the restore_command only if the locally available WAL is not
>> fully consistent?
>
> "Corrupted" doesn't necessarily only mean "the data file was munged by
> the storage somehow."  In this case, corrupted could be an old and only
> partial WAL file, in which case we'd possibly be missing WAL that needs
> to be replayed to bring the cluster back to a valid state, no?


Why wouldn't that be detected by the checksums? And why wouldn't it be
handled correctly by the logic I proposed, i.e. falling back to the
remote WAL segment whenever the local file is incomplete or broken in
some sense?
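
To spell out the logic I have in mind, here's a minimal sketch in C (the
helper names are made up -- local_segment_is_consistent() and
run_restore_command() are stand-ins for whatever we'd settle on, not
anything that exists in the tree):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for whatever battery of checks we settle on: record
       CRCs, header sanity, segment completeness. */
    static bool
    local_segment_is_consistent(const char *path)
    {
        (void) path;
        return false;           /* stubbed out for illustration */
    }

    /* Stand-in for invoking restore_command to fetch the segment. */
    static bool
    run_restore_command(const char *segname, const char *dest)
    {
        (void) segname;
        (void) dest;
        return true;            /* stubbed out for illustration */
    }

    /* Prefer the copy already in pg_wal; fall back to the archive the
       moment the local copy fails any check. */
    static bool
    obtain_segment(const char *segname, const char *path)
    {
        if (local_segment_is_consistent(path))
            return true;
        printf("%s failed local validation, fetching from archive\n",
               segname);
        return run_restore_command(segname, path);
    }

    int
    main(void)
    {
        return obtain_segment("000000010000000000000001",
                              "pg_wal/000000010000000000000001") ? 0 : 1;
    }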

>>>> But there's another good scenario (#3): where the pg_wal directory
>>>> was preloaded with all the WAL required to make the cluster
>>>> consistent, or all the WAL that was available at restore time.  In
>>>> this case, it would make sense to prefer the contents of pg_wal and
>>>> only switch to restore_command after that has been exhausted.
>>>>
>>>> So, the choice of whether to prefer locally-stored or
>>>> restore_command-fetched WAL is context-dependent, in my mind.
>>>
>>> Agreed.
>>
>> Maybe, not sure.
>
> The argument that David makes above in scenario #2 certainly looks
> entirely likely to me, and I don't think we have any real protections
> against it.  The current common use-cases happen to work around the
> risk only because tools like pg_basebackup ignore the existing pg_wal
> directory when doing the backup and instead populate it with exactly
> the WAL that's needed, and because a restore command, where one is
> specified, will only ever pull back valid WAL.  But I don't think we
> can decide that this scenario (#2 from above):


I'm probably missing something, but why couldn't we detect that using a
CRC? Or make sure we can detect it, e.g. by adding some additional info
into each WAL segment?
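
To illustrate what the existing per-record protection does and doesn't
buy us, here's a toy sketch (not the real xlogreader code -- the record
layout is reduced to a fixed payload, though CRC-32C is the polynomial
WAL actually uses):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bit-at-a-time CRC-32C (Castagnoli); the real code is
       table-driven or uses the SSE4.2 instruction, but the polynomial
       is the same. */
    static uint32_t
    crc32c(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t    crc = 0xFFFFFFFF;

        while (len--)
        {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78 & (0 - (crc & 1)));
        }
        return ~crc;
    }

    /* Grossly simplified stand-in for a WAL record on disk. */
    typedef struct
    {
        uint32_t    xl_crc;     /* CRC stored when record was written */
        char        payload[32];
    } FakeRecord;

    static int
    record_is_intact(const FakeRecord *rec)
    {
        return rec->xl_crc == crc32c(rec->payload, sizeof(rec->payload));
    }

    int
    main(void)
    {
        FakeRecord  rec;

        memset(rec.payload, 0, sizeof(rec.payload));
        strcpy(rec.payload, "pretend WAL record");
        rec.xl_crc = crc32c(rec.payload, sizeof(rec.payload));
        printf("pristine record: %d\n", record_is_intact(&rec));

        rec.payload[0] ^= 1;    /* simulate a torn/garbled write */
        printf("garbled record: %d\n", record_is_intact(&rec));
        return 0;
    }

The catch, as I understand Stephen's objection, is that an *old* record
left over in a recycled or partially copied segment is internally
consistent -- it passes exactly this check -- so the record CRC alone
can't tell "stale" from "current". Hence the question about additional
per-segment info.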


>>>> Ideally we could have a default that is safe in each scenario, with
>>>> perhaps an override if the user knows better.  Scenario #1 would
>>>> allow WAL to be read from pg_wal by default, scenario #2 would
>>>> prefer fetched WAL, and scenario #3 could use a GUC to override the
>>>> default fetch behavior.
>>>
>>> Not sure how we'd be able to automatically realize which scenario
>>> we're in though..?
>>
>> But do we need to know it? I mean, can't we try the local WAL first,
>> use it if it passes the CRC checks (and possibly some other checks),
>> and only fall back to the remote WAL if it's identified as broken?
>
> Maybe- but I think we need to be quite sure about that and I don't
> believe that just checking the CRCs is enough.


Sure. So what else would we need to add to each WAL segment to make that
possible? It needs to be cheap enough not to cause a performance
regression, but e.g. a CRC of each segment plus the amount of data
actually stored in it should be sufficient.
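
Roughly like this, say -- a purely hypothetical per-segment metadata
record, nothing of the sort exists in the segment format today:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical metadata kept for each segment: how many bytes of
       it are real WAL, and a CRC over exactly those bytes.  (Invented
       for this sketch.) */
    typedef struct
    {
        uint32_t    valid_len;  /* bytes of real WAL in the segment */
        uint32_t    seg_crc;    /* CRC-32C of those valid_len bytes */
    } SegmentInfo;

    /* Same toy CRC-32C as in the earlier sketch. */
    static uint32_t
    crc32c(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t    crc = 0xFFFFFFFF;

        while (len--)
        {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78 & (0 - (crc & 1)));
        }
        return ~crc;
    }

    /* A local segment is usable iff it is long enough and the covered
       prefix checksums out; a partial or recycled copy fails this. */
    static bool
    segment_is_usable(const uint8_t *seg, size_t seglen,
                      const SegmentInfo *info)
    {
        if (info->valid_len > seglen)
            return false;
        return info->seg_crc == crc32c(seg, info->valid_len);
    }

    int
    main(void)
    {
        uint8_t     seg[8192];
        SegmentInfo info;

        memset(seg, 0, sizeof(seg));
        memcpy(seg, "pretend this is WAL", 19);
        info.valid_len = 19;
        info.seg_crc = crc32c(seg, info.valid_len);
        printf("usable: %d\n",
               segment_is_usable(seg, sizeof(seg), &info));
        return 0;
    }

The point being that a half-copied file or a recycled segment would fail
either the length or the CRC test, without any extra per-record
bookkeeping at recovery time.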

Another idea - what if we instead allowed fetching additional
information about the archived WAL, and made the decision based on that?
For example, before running restore_command we could request an md5
checksum of the segment from the archive, compare it to the local WAL,
and if they match use the local copy; otherwise fetch the remote one.
That'd still be a win.
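
As a sketch of that exchange (archive_get_checksum() is entirely made up
-- imagine a hypothetical sibling of restore_command that returns a
digest instead of the 16MB file):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical: ask the archive for a digest of a segment without
       transferring the segment itself. */
    static bool
    archive_get_checksum(const char *segname, char *buf, size_t buflen)
    {
        (void) segname;
        snprintf(buf, buflen, "0123456789abcdef");      /* stub */
        return true;
    }

    /* Hypothetical: digest the copy already sitting in pg_wal. */
    static bool
    local_checksum(const char *path, char *buf, size_t buflen)
    {
        (void) path;
        snprintf(buf, buflen, "0123456789abcdef");      /* stub */
        return true;
    }

    /* Only pay for the full restore_command transfer when the local
       and archived copies actually differ (or we cannot tell). */
    static bool
    need_remote_fetch(const char *segname, const char *path)
    {
        char        remote[64];
        char        local[64];

        if (!archive_get_checksum(segname, remote, sizeof(remote)) ||
            !local_checksum(path, local, sizeof(local)))
            return true;        /* can't compare, fetch to be safe */
        return strcmp(remote, local) != 0;
    }

    int
    main(void)
    {
        printf("fetch needed: %d\n",
               need_remote_fetch("000000010000000000000001",
                                 "pg_wal/000000010000000000000001"));
        return 0;
    }

Computing the digest on the archive side should be cheap compared to
shipping the whole segment over the network, so even when it ends in a
full fetch we haven't lost much.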


regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
