Re: pgsql: Move strtoint() to common
On 14 March 2018 at 08:10, Tom Lane wrote:
> Peter Eisentraut writes:
>> Move strtoint() to common
>
> Buildfarm seems to think this isn't quite baked for Windows.

Yeah, "restrict" seems to be C99, and the Microsoft compilers don't quite know about that yet. The attached compiles fine for me on a Windows machine.

Changing "restrict" to "__restrict" also works, so it might, longer-term, be worth some configure test and a PG_RESTRICT macro so we can allow this, assuming there are performance gains to be had.

--
David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

remove_restrict_keyword.patch
Description: Binary data
pgsql: Add Oracle like handling of char arrays.
Add Oracle like handling of char arrays.

In some cases Oracle Pro*C handles char arrays differently than ECPG. This patch adds an Oracle compatibility mode to make ECPG behave like Pro*C.

Patch by David Rader

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/3b7ab4380440d7b14ee390fabf39f6d87d7491e2

Modified Files
--------------
src/interfaces/ecpg/ecpglib/data.c | 49 -
src/interfaces/ecpg/ecpglib/extern.h | 3 +-
src/interfaces/ecpg/preproc/ecpg.c | 6 +-
src/interfaces/ecpg/preproc/extern.h | 4 +-
src/interfaces/ecpg/test/Makefile | 2 +
src/interfaces/ecpg/test/compat_oracle/.gitignore | 2 +
src/interfaces/ecpg/test/compat_oracle/Makefile | 11 ++
.../ecpg/test/compat_oracle/char_array.pgc | 66 +++
src/interfaces/ecpg/test/ecpg_schedule | 1 +
.../ecpg/test/expected/compat_oracle-char_array.c | 219 +
.../test/expected/compat_oracle-char_array.stderr | 145 ++
.../test/expected/compat_oracle-char_array.stdout | 10 +

12 files changed, 513 insertions(+), 5 deletions(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/8e3f3ab5b85a230e9a008c743402c3e5a43085a1

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
REL9_3_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/09f4ca92bbe5a68b228d6a3251e4d2be31bb6377

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/db2fc801f66a70969cbdd5673ed9d02025c70695

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/fcc15bf38100edf26b38c73a809579fc0e9ccc78

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/837d4f739ccf16091d41649a9d22d7e911636a3b

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Fix double frees in ecpg.
Fix double frees in ecpg.

Patch by Patrick Krecker

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/8559b40c5e3fb068d0dfd81d4a5a9f7411f2cbba

Modified Files
--------------
src/interfaces/ecpg/preproc/ecpg.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)
pgsql: Add COSTS off to two EXPLAIN using tests.
Add COSTS off to two EXPLAIN using tests.

Discussion: https://postgr.es/m/2018031023.i4sgkbl4oqtst...@alap3.anarazel.de

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4f63e85eb149792492a703f01a3a5b89bf5509a7

Modified Files
--------------
src/test/regress/expected/subselect.out | 24
src/test/regress/sql/subselect.sql | 4 ++--

2 files changed, 14 insertions(+), 14 deletions(-)
pgsql: Expand AND/OR regression tests around NULL handling.
Expand AND/OR regression tests around NULL handling.

Previously there were no tests verifying that NULL handling in AND/OR was correct (i.e. that NULL rather than false is returned if the expression doesn't return true).

Author: Andres Freund

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/1e22166e5ebbc3df7684209657ea58ba880546d6

Modified Files
--------------
src/test/regress/expected/boolean.out | 83 +++
src/test/regress/sql/boolean.sql | 29

2 files changed, 112 insertions(+)
pgsql: Let Parallel Append over simple UNION ALL have partial subpaths.
Let Parallel Append over simple UNION ALL have partial subpaths.

A simple UNION ALL gets flattened into an appendrel of subquery RTEs, but up until now it's been impossible for the appendrel to use the partial paths for the subqueries, so we can implement the appendrel as a Parallel Append but only one with non-partial paths as children.

There are three separate obstacles to removing that limitation. First, when planning a subquery, propagate any partial paths to the final_rel so that they are potentially visible to outer query levels (but not if they have initPlans attached, because that wouldn't be safe). Second, after planning a subquery, propagate any partial paths for the final_rel to the subquery RTE in the outer query level in the same way we do for non-partial paths. Third, teach finalize_plan() to account for the possibility that the fake parameter we use for rescan signalling when the plan contains a Gather (Merge) node may be propagated from an outer query level.

Patch by me, reviewed and tested by Amit Khandekar, Rajkumar Raghuwanshi, and Ashutosh Bapat. Test cases based on examples by Rajkumar Raghuwanshi.

Discussion: http://postgr.es/m/ca+tgmoa6l9a1nnck3atdvzlz4kkhdn1+tm7mfyfvp+uqps7...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/0927d2f46ddd4cf7d6bf2cc84b3be923e0aedc52

Modified Files
--------------
src/backend/optimizer/path/allpaths.c | 22 +
src/backend/optimizer/plan/planner.c | 16 +++
src/backend/optimizer/plan/subselect.c | 17 ++-
src/test/regress/expected/select_parallel.out | 65 +++
src/test/regress/sql/select_parallel.sql | 25 +++

5 files changed, 143 insertions(+), 2 deletions(-)
Re: pgsql: Move strtoint() to common
Peter Eisentraut writes:
> Move strtoint() to common

Buildfarm seems to think this isn't quite baked for Windows.

			regards, tom lane
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
REL9_3_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/d44ce7b1afdbc8e0540086e4fc94a540924bf3b6

Modified Files
--------------
src/backend/commands/analyze.c | 19 ++
src/backend/commands/vacuum.c | 44 +-

2 files changed, 24 insertions(+), 39 deletions(-)
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/c9414e7867f72fd921a154ce93e21b1dbdb65b07

Modified Files
--------------
src/backend/commands/analyze.c | 19 ++
src/backend/commands/vacuum.c | 44 +-

2 files changed, 24 insertions(+), 39 deletions(-)
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/1bfb5672306da2d3b3a5e12b3178c165f7aa2392

Modified Files
--------------
src/backend/commands/analyze.c | 19 ++
src/backend/commands/vacuum.c | 44 +-

2 files changed, 24 insertions(+), 39 deletions(-)
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/d04900de7d0cb5b6ecb6d5bf9fdb6f3105824f81

Modified Files
--------------
contrib/pgstattuple/pgstatapprox.c | 2 +-
src/backend/commands/analyze.c | 19 +---
src/backend/commands/vacuum.c | 46 +++---
src/backend/commands/vacuumlazy.c | 2 +-
src/include/commands/vacuum.h | 2 +-

5 files changed, 27 insertions(+), 44 deletions(-)
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/25a2ba35edbc3b60121ca9cfbd6cb0137b5e2f32

Modified Files
--------------
src/backend/commands/analyze.c | 19 ++
src/backend/commands/vacuum.c | 44 +-

2 files changed, 24 insertions(+), 39 deletions(-)
pgsql: When updating reltuples after ANALYZE, just extrapolate from our
When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling results only for the pages ANALYZE actually visited, preferring to believe the previous tuple density estimate for all the unvisited pages. While there's some rationale for doing that for VACUUM (first that VACUUM is likely to visit a very nonrandom subset of pages, and second that we know for sure that the unvisited pages did not change), there's no such rationale for ANALYZE: by assumption, it's looked at an unbiased random sample of the table's pages. Furthermore, in a very large table ANALYZE will have examined only a tiny fraction of the table's pages, meaning it cannot slew the overall density estimate very far at all. In a table that is physically growing, this causes reltuples to increase nearly proportionally to the change in relpages, regardless of what is actually happening in the table. This has been observed to cause reltuples to become so much larger than reality that it effectively shuts off autovacuum, whose threshold for doing anything is a fraction of reltuples. (Getting to the point where that would happen seems to require some additional, not well understood, conditions. But it's undeniable that if reltuples is seriously off in a large table, ANALYZE alone will not fix it in any reasonable number of iterations, especially not if the table is continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and in ANALYZE, just extrapolate from the sample pages on the assumption that they provide an accurate model of the whole table. If, by very bad luck, they don't, at least another ANALYZE will fix it; in the old logic a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument altogether; it was never used for anything and now it's totally pointless. But keep it in the back branches, in case any third-party code is calling this function.

Per bug #15005. Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/c2c4bc628bbe7f1c77a6d3044b1380753c2acbbe

Modified Files
--------------
src/backend/commands/analyze.c | 19 ++
src/backend/commands/vacuum.c | 44 +-

2 files changed, 24 insertions(+), 39 deletions(-)
pgsql: Avoid holding AutovacuumScheduleLock while rechecking table stat
Avoid holding AutovacuumScheduleLock while rechecking table statistics.

In databases with many tables, re-fetching the statistics takes some time, so that this behavior seriously decreases the available concurrency for multiple autovac workers. There's discussion afoot about more complete fixes, but a simple and back-patchable amelioration is to claim the table and release the lock before rechecking stats. If we find out there's no longer a reason to process the table, re-taking the lock to un-claim the table is cheap enough.

(This patch is quite old, but got lost amongst a discussion of more aggressive fixes. It's not clear when or if such a fix will be accepted, but in any case it'd be unlikely to get back-patched. Let's do this now so we have some improvement for the back branches.)

In passing, make the normal un-claim step take AutovacuumScheduleLock not AutovacuumLock, since that is what is documented to protect the wi_tableoid field. This wasn't an actual bug in view of the fact that readers of that field hold both locks, but it creates some concurrency penalty against operations that need only AutovacuumLock.

Back-patch to all supported versions.

Jeff Janes

Discussion: https://postgr.es/m/26118.1520865...@sss.pgh.pa.us

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/231329a17564ecd242b64cfb86209dc6559626d8

Modified Files
--------------
src/backend/postmaster/autovacuum.c | 53 ++---

1 file changed, 37 insertions(+), 16 deletions(-)
pgsql: Avoid holding AutovacuumScheduleLock while rechecking table stat
Avoid holding AutovacuumScheduleLock while rechecking table statistics.

In databases with many tables, re-fetching the statistics takes some time, so that this behavior seriously decreases the available concurrency for multiple autovac workers. There's discussion afoot about more complete fixes, but a simple and back-patchable amelioration is to claim the table and release the lock before rechecking stats. If we find out there's no longer a reason to process the table, re-taking the lock to un-claim the table is cheap enough.

(This patch is quite old, but got lost amongst a discussion of more aggressive fixes. It's not clear when or if such a fix will be accepted, but in any case it'd be unlikely to get back-patched. Let's do this now so we have some improvement for the back branches.)

In passing, make the normal un-claim step take AutovacuumScheduleLock not AutovacuumLock, since that is what is documented to protect the wi_tableoid field. This wasn't an actual bug in view of the fact that readers of that field hold both locks, but it creates some concurrency penalty against operations that need only AutovacuumLock.

Back-patch to all supported versions.

Jeff Janes

Discussion: https://postgr.es/m/26118.1520865...@sss.pgh.pa.us

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/4b0e717053e36f931f3cfc9b24060b281db7900b

Modified Files
--------------
src/backend/postmaster/autovacuum.c | 53 ++---

1 file changed, 37 insertions(+), 16 deletions(-)
pgsql: Avoid holding AutovacuumScheduleLock while rechecking table stat
Avoid holding AutovacuumScheduleLock while rechecking table statistics.

In databases with many tables, re-fetching the statistics takes some time, so that this behavior seriously decreases the available concurrency for multiple autovac workers. There's discussion afoot about more complete fixes, but a simple and back-patchable amelioration is to claim the table and release the lock before rechecking stats. If we find out there's no longer a reason to process the table, re-taking the lock to un-claim the table is cheap enough.

(This patch is quite old, but got lost amongst a discussion of more aggressive fixes. It's not clear when or if such a fix will be accepted, but in any case it'd be unlikely to get back-patched. Let's do this now so we have some improvement for the back branches.)

In passing, make the normal un-claim step take AutovacuumScheduleLock not AutovacuumLock, since that is what is documented to protect the wi_tableoid field. This wasn't an actual bug in view of the fact that readers of that field hold both locks, but it creates some concurrency penalty against operations that need only AutovacuumLock.

Back-patch to all supported versions.

Jeff Janes

Discussion: https://postgr.es/m/26118.1520865...@sss.pgh.pa.us

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/38f7831d703be98aaece8af6625faeab5123a02c

Modified Files
--------------
src/backend/postmaster/autovacuum.c | 53 ++---

1 file changed, 37 insertions(+), 16 deletions(-)
pgsql: Avoid holding AutovacuumScheduleLock while rechecking table stat
Avoid holding AutovacuumScheduleLock while rechecking table statistics.

In databases with many tables, re-fetching the statistics takes some time, so that this behavior seriously decreases the available concurrency for multiple autovac workers. There's discussion afoot about more complete fixes, but a simple and back-patchable amelioration is to claim the table and release the lock before rechecking stats. If we find out there's no longer a reason to process the table, re-taking the lock to un-claim the table is cheap enough.

(This patch is quite old, but got lost amongst a discussion of more aggressive fixes. It's not clear when or if such a fix will be accepted, but in any case it'd be unlikely to get back-patched. Let's do this now so we have some improvement for the back branches.)

In passing, make the normal un-claim step take AutovacuumScheduleLock not AutovacuumLock, since that is what is documented to protect the wi_tableoid field. This wasn't an actual bug in view of the fact that readers of that field hold both locks, but it creates some concurrency penalty against operations that need only AutovacuumLock.

Back-patch to all supported versions.

Jeff Janes

Discussion: https://postgr.es/m/26118.1520865...@sss.pgh.pa.us

Branch
------
REL9_3_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/5328b6135756822d19288b3ee366eb5e7cea0426

Modified Files
--------------
src/backend/postmaster/autovacuum.c | 53 ++---

1 file changed, 37 insertions(+), 16 deletions(-)
pgsql: Avoid holding AutovacuumScheduleLock while rechecking table stat
Avoid holding AutovacuumScheduleLock while rechecking table statistics.

In databases with many tables, re-fetching the statistics takes some time, so that this behavior seriously decreases the available concurrency for multiple autovac workers. There's discussion afoot about more complete fixes, but a simple and back-patchable amelioration is to claim the table and release the lock before rechecking stats. If we find out there's no longer a reason to process the table, re-taking the lock to un-claim the table is cheap enough.

(This patch is quite old, but got lost amongst a discussion of more aggressive fixes. It's not clear when or if such a fix will be accepted, but in any case it'd be unlikely to get back-patched. Let's do this now so we have some improvement for the back branches.)

In passing, make the normal un-claim step take AutovacuumScheduleLock not AutovacuumLock, since that is what is documented to protect the wi_tableoid field. This wasn't an actual bug in view of the fact that readers of that field hold both locks, but it creates some concurrency penalty against operations that need only AutovacuumLock.

Back-patch to all supported versions.

Jeff Janes

Discussion: https://postgr.es/m/26118.1520865...@sss.pgh.pa.us

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/4460964aedaa31eec6fe8be931049b094be46f23

Modified Files
--------------
src/backend/postmaster/autovacuum.c | 53 ++---

1 file changed, 37 insertions(+), 16 deletions(-)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/fe65f5931942e6aa7ff0f185cd777eb8d635e3ae

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/b32fad52e94307261471d05a79c70f8382d71657

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/bd7eb6fe65b34963135851f06210d8e5fe048ef4

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/44a36a8d9a6e7ba209253d694dd2ebf3e13f0b5d

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
REL9_3_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/042badc3778ade0f430049eb7dac03972e544e5a

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Set connection back to NULL after freeing it.
Set connection back to NULL after freeing it.

Patch by Jeevan Ladhe

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/95f0260218ba5882828c245710964c828da6fb26

Modified Files
--------------
src/interfaces/ecpg/preproc/output.c | 3 +++

1 file changed, 3 insertions(+)
pgsql: Move strtoint() to common
Move strtoint() to common

Several places used similar code to convert a string to an int, so take the function that we already had and make it globally available.

Reviewed-by: Michael Paquier

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/17bb62501787c56e0518e61db13a523d47afd724

Modified Files
--------------
src/backend/nodes/read.c | 12 +---
src/backend/parser/scan.l | 9 -
src/backend/utils/adt/datetime.c | 18 +-
src/common/string.c | 15 +++
src/include/common/string.h | 1 +
src/interfaces/ecpg/pgtypeslib/.gitignore | 1 +
src/interfaces/ecpg/pgtypeslib/Makefile | 6 +-
src/interfaces/ecpg/pgtypeslib/interval.c | 16 ++--
src/interfaces/ecpg/preproc/pgc.l | 10 +-

1 file changed, 39 insertions(+), 49 deletions(-)
pgsql: Fix CREATE TABLE / LIKE with bigint identity column
Fix CREATE TABLE / LIKE with bigint identity column

CREATE TABLE / LIKE with a bigint identity column would fail on platforms where long is 32 bits. Copying the sequence values used makeInteger(), which would truncate the 64-bit sequence data to 32 bits. To fix, use makeFloat() instead, like the parser. (This does not actually make use of floats, but stores the values as strings.)

Bug: #15096
Reviewed-by: Michael Paquier

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/377b5ac4845c5ffbf992ee95c32d7d16d38b9081

Modified Files
--------------
src/backend/commands/sequence.c | 19 +++--
src/test/regress/expected/create_table_like.out | 28 -
src/test/regress/sql/create_table_like.sql | 2 +-

3 files changed, 28 insertions(+), 21 deletions(-)