>
> By the errors-to-warnings change, even when those libpq functions
> fail, we fall back to the normal ANALYZE processing, but I don't think
> that that is OK, because if those functions fail for some reasons
> (network-related issues like network disconnection or memory-related
> issues like out of memory), then libpq functions called later for the
> normal processing would also be likely to fail for the same reasons,
> causing the same failure again, which I don't think is good.  To avoid
> that, when libpq functions in fetch_relstats() and fetch_attstats()
> fail, shouldn't we just throw an error, as before?
>

Right now the code doesn't differentiate based on the kind of error that was
encountered. If it is a network error, then not only do we expect the
fallback to fail, but simple FDW queries will fail as well. If, however, it
is a permission issue (no SELECT permission on pg_stats, for instance, or
pg_stats doesn't exist because the remote database is not regular Postgres,
like Redshift or something), then I definitely want to fall back.
Currently, the code makes no judgement about the details of the error; it
just trusts that the fallback ANALYZE will either succeed (because the
error was permissions-related) or quickly hit the same insurmountable
error, and one extra PQsendQuery isn't much overhead.

If you think we should inspect the error we get and match it against a list
of errors that should skip the retry or skip the fallback, I'd be in favor
of that. It's probably easier to start with a list of error codes that we
feel are hopeless and should remain ERRORs rather than WARNINGs.
