On 06/19/2018 12:05 PM, Andres Freund wrote:
> Hi,
>
> On 2018-06-19 11:51:16 -0400, Andrew Dunstan wrote:
>> My initial thought was that as a fallback we should disable pg_upgrade
>> on databases containing such values, and document the limitation in the
>> docs and the release notes. The workaround would be to force a table
>> rewrite, which would clear them if necessary.
>
> I personally would say that that's not acceptable. People will start
> using fast defaults - and you can't even do anything against it! - and
> suddenly pg_upgrade won't work. But they will only notice that years
> later, after collecting terabytes of data in such tables.


Umm, barring the case that Tom mentioned, by then it would just work. It's
not the case that if they put in fast default values today they will never
be able to upgrade.
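
For the record, the affected columns are easy to spot: they are the ones
with atthasmissing set in pg_catalog.pg_attribute. A minimal sketch of the
table-rewrite workaround I mentioned (the table/column names and the
int -> bigint change are only illustrative; any type change that forces a
full rewrite materializes the stored default into every row and clears the
fast-default state):

    -- list columns that still depend on a stored "missing" value
    SELECT c.relname, a.attname
    FROM pg_catalog.pg_attribute a
    JOIN pg_catalog.pg_class c ON c.oid = a.attrelid
    WHERE a.atthasmissing
      AND NOT a.attisdropped;

    -- one way to force a full rewrite of an (illustrative) int column,
    -- which clears atthasmissing/attmissingval for the whole table
    ALTER TABLE public.t ALTER COLUMN c TYPE bigint;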



> If we can't fix it properly, then imo we should revert / neuter the
> feature.
>
>> Have we ever recommended use of pg_upgrade for some manual catalog fix
>> after release? I don't recall doing so. Certainly it hasn't been common.
>
> No, but why does it matter? Are you arguing we can delay pg_dump support
> for fast defaults to v12?



Right now I'm more or less thinking out loud, not arguing anything.

I'd at least like to see what a solution might look like before ruling it out. I suspect I can come up with something in a day or so. The work wouldn't be wasted.
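
To give a rough idea of the shape such pg_dump support could take (a
sketch only; the support function below is hypothetical, standing in for
whatever binary-upgrade hook actually gets added), the idea is that
pg_dump's --binary-upgrade mode would emit a call that reinstates the
stored missing value on the new cluster, along the lines of:

    -- hypothetical output from pg_dump --binary-upgrade for a table "t"
    -- whose column "c" was added with a fast (non-NULL) default of 42
    SELECT pg_catalog.binary_upgrade_set_missing_value(
        'public.t'::pg_catalog.regclass, 'c', '42');

so that pg_attribute.attmissingval ends up populated as it was in the old
cluster and existing short heap tuples keep reading back the right value.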

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

