pgsql: Set relispartition correctly for index partitions
Set relispartition correctly for index partitions Oversight in commit 8b08f7d4820f: pg_class.relispartition was not being set for index partitions, which is a bit odd, and was also causing the code to unnecessarily call has_superclass() when simply checking the flag was enough. Author: Álvaro Herrera Reported-by: Amit Langote Discussion: https://postgr.es/m/12085bc4-0bc6-0f3a-4c43-57fe06817...@lab.ntt.co.jp Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/9e9befac4a2228ae8a5309900645ecd8ead69f53 Modified Files -- src/backend/catalog/index.c | 1 + src/backend/commands/tablecmds.c | 51 ++-- 2 files changed, 45 insertions(+), 7 deletions(-)
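The point of the fix above can be sketched with a toy simulation (illustrative Python, not PostgreSQL's C; the catalog contents below are made up): once relispartition is set correctly on index partitions, "is this a partition?" is a cheap flag test that agrees with what has_superclass() computes by scanning pg_inherits.

```python
# Conceptual sketch, not PostgreSQL source.  pg_class / pg_inherits here
# are tiny stand-ins for the real catalogs; relation names are invented.

pg_class = {"idx_part1": {"relispartition": True},
            "idx_plain": {"relispartition": False}}
pg_inherits = {"idx_part1": "idx_parent"}  # child -> parent

def has_superclass(rel):
    # The real function scans pg_inherits for a parent entry.
    return rel in pg_inherits

def is_partition(rel):
    # With the flag set correctly, a catalog scan is unnecessary.
    return pg_class[rel]["relispartition"]

for rel in pg_class:
    assert is_partition(rel) == has_superclass(rel)
```

The invariant the commit restores is exactly that the flag and the pg_inherits scan never disagree.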
pgsql: Ignore nextOid when replaying an ONLINE checkpoint.
Ignore nextOid when replaying an ONLINE checkpoint. The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_3_STABLE Details --- https://git.postgresql.org/pg/commitdiff/66d4b6bb8065368f313507e94a0003ae48d96405 Modified Files -- src/backend/access/transam/xlog.c | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-)
pgsql: Ignore nextOid when replaying an ONLINE checkpoint.
Ignore nextOid when replaying an ONLINE checkpoint. The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/d1e9079295e9e6fcab8f2ad9c69dd1be8e876d47 Modified Files -- src/backend/access/transam/xlog.c | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-)
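The replay rule being changed can be sketched as a toy simulation (illustrative Python, not PostgreSQL's C; the record names and replay loop are assumptions for the sketch). An online checkpoint records nextOid as of the checkpoint's *start*, so later XLOG_NEXTOID records may have advanced past it; adopting the checkpoint's value moves the counter backwards.

```python
# Conceptual sketch, not PostgreSQL source.  Shutdown checkpoints (where
# the value cannot be stale) are omitted; only the online case is shown.

def replay(records, online_checkpoint_uses_nextoid):
    next_oid = 0
    for kind, value in records:
        if kind == "XLOG_NEXTOID":
            next_oid = value          # authoritative advancement
        elif kind == "ONLINE_CHECKPOINT":
            if online_checkpoint_uses_nextoid:
                next_oid = value      # old behavior: counter can go backwards
            # new behavior: ignore the checkpoint's stale snapshot
    return next_oid

# The checkpoint started when nextOid was 1000, but an XLOG_NEXTOID record
# advanced it to 2000 before the checkpoint record hit the WAL.
wal = [("XLOG_NEXTOID", 1000), ("XLOG_NEXTOID", 2000),
       ("ONLINE_CHECKPOINT", 1000)]

assert replay(wal, online_checkpoint_uses_nextoid=True) == 1000   # regressed
assert replay(wal, online_checkpoint_uses_nextoid=False) == 2000  # fixed
```

A counter that regresses across a crash is what let freshly assigned TOAST OIDs collide with pre-crash ones in the scenario described above.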
pgsql: Ignore nextOid when replaying an ONLINE checkpoint.
Ignore nextOid when replaying an ONLINE checkpoint. The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/08e6cda1c536d22682e8a67e1e49202ae48ef015 Modified Files -- src/backend/access/transam/xlog.c | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-)
pgsql: Ignore nextOid when replaying an ONLINE checkpoint.
Ignore nextOid when replaying an ONLINE checkpoint. The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_4_STABLE Details --- https://git.postgresql.org/pg/commitdiff/6943fb9275a50f3a9d177da1a06ea387bf490ead Modified Files -- src/backend/access/transam/xlog.c | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-)
pgsql: Ignore nextOid when replaying an ONLINE checkpoint.
Ignore nextOid when replaying an ONLINE checkpoint. The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/060bb38d0750a04870b6e15fbb2a995a9dcd2b0a Modified Files -- src/backend/access/transam/xlog.c | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-)
pgsql: Do not select new object OIDs that match recently-dead entries.
Do not select new object OIDs that match recently-dead entries. When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems: * If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom. * When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value n". We've been seeing reports of such errors from the field for years, but the cause was unclear before. The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_5_STABLE Details --- https://git.postgresql.org/pg/commitdiff/3767216fbdb39fddc7fbb94a2e06965c98dbe697 Modified Files -- src/backend/access/heap/tuptoaster.c | 6 -- src/backend/catalog/catalog.c| 15 --- 2 files changed, 12 insertions(+), 9 deletions(-)
pgsql: Do not select new object OIDs that match recently-dead entries.
Do not select new object OIDs that match recently-dead entries. When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems: * If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom. * When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value n". We've been seeing reports of such errors from the field for years, but the cause was unclear before. The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_4_STABLE Details --- https://git.postgresql.org/pg/commitdiff/5b3ed6b7880b88a355bf809dad83cbe7cbc49316 Modified Files -- src/backend/access/heap/tuptoaster.c | 6 -- src/backend/catalog/catalog.c| 15 --- 2 files changed, 12 insertions(+), 9 deletions(-)
pgsql: Do not select new object OIDs that match recently-dead entries.
Do not select new object OIDs that match recently-dead entries. When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems: * If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom. * When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value n". We've been seeing reports of such errors from the field for years, but the cause was unclear before. The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_3_STABLE Details --- https://git.postgresql.org/pg/commitdiff/7448e7e237996d51e575e97160d3b07daf4d7b89 Modified Files -- src/backend/access/heap/tuptoaster.c | 6 -- src/backend/catalog/catalog.c| 15 --- 2 files changed, 12 insertions(+), 9 deletions(-)
pgsql: Do not select new object OIDs that match recently-dead entries.
Do not select new object OIDs that match recently-dead entries. When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems: * If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom. * When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value n". We've been seeing reports of such errors from the field for years, but the cause was unclear before. The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/0408e1ed599b06d9bca2927a50a4be52c9e74bb9 Modified Files -- src/backend/access/heap/tuptoaster.c | 8 src/backend/catalog/catalog.c| 15 --- 2 files changed, 12 insertions(+), 11 deletions(-)
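The visibility difference at the heart of this fix can be sketched in miniature (illustrative Python, not PostgreSQL's C; the snapshot semantics are deliberately simplified, e.g. SnapshotDirty's handling of uncommitted rows is ignored). A "dirty" probe does not see committed-dead rows, so a wrapped-around counter can re-issue the OID of a deleted-but-unvacuumed row; a SnapshotAny-style probe sees those rows and skips past them.

```python
# Conceptual sketch, not PostgreSQL source.  A "table" is a list of
# (oid, committed_dead) pairs standing in for heap tuples.

def oid_in_use(table, oid, snapshot):
    for row_oid, dead in table:
        if row_oid != oid:
            continue
        if snapshot == "any":
            return True               # sees live and committed-dead rows
        if snapshot == "dirty" and not dead:
            return True               # committed-dead rows are invisible
    return False

def get_new_oid(table, counter, snapshot):
    # Keep advancing the counter until no conflicting row is found.
    while oid_in_use(table, counter, snapshot):
        counter += 1
    return counter

# OID 100 belongs to a deleted-but-not-yet-vacuumed row; 101 is live.
table = [(100, True), (101, False)]

assert get_new_oid(table, 100, "dirty") == 100  # collides with the dead row
assert get_new_oid(table, 100, "any") == 102    # skips dead and live rows
```

The cost of the fix, as the commit notes, is only that a dead row's OID stays reserved until vacuum removes it.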
pgsql: Do not select new object OIDs that match recently-dead entries.
Do not select new object OIDs that match recently-dead entries. When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems: * If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom. * When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value n". We've been seeing reports of such errors from the field for years, but the cause was unclear before. The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems. Pavan Deolasee Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4olosxh...@mail.gmail.com Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/8bba10f7e8349546924a4e8247346c0a48180e8f Modified Files -- src/backend/access/heap/tuptoaster.c | 8 src/backend/catalog/catalog.c| 15 --- 2 files changed, 12 insertions(+), 11 deletions(-)
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL9_4_STABLE Details --- https://git.postgresql.org/pg/commitdiff/310d1379dd710322da398f5223051368fc876e23 Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/89c2ab34039864488b8a83c03d1b1d841adf4aaf Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL9_3_STABLE Details --- https://git.postgresql.org/pg/commitdiff/dfc383cf3975ccb1053ef7431d1dfdcff1dfefea Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/a820b4c32946c499a2d19846123840a0dad071b5 Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
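The snapshotting bug can be illustrated with a small aliasing example (illustrative Python, not PostgreSQL's C; a mutable list stands in for a pointer into shared memory, and the function name mirrors but does not reproduce the real one):

```python
# Conceptual sketch, not PostgreSQL source.

shared = {"appname": "psql", "query": "SELECT 1",
          "hostname": ["client.example"]}   # list ~ pointer into shared mem

def read_current_status(shared_slot, copy_hostname):
    # appname and query were already copied; hostname was not.
    local = {"appname": shared_slot["appname"],
             "query": shared_slot["query"]}
    local["hostname"] = (list(shared_slot["hostname"]) if copy_hostname
                         else shared_slot["hostname"])  # the old aliasing bug
    return local

buggy = read_current_status(shared, copy_hostname=False)
fixed = read_current_status(shared, copy_hostname=True)

shared["hostname"][0] = ""   # client disconnects; shared slot is cleared

assert buggy["hostname"][0] == ""                 # "snapshot" changed under us
assert fixed["hostname"][0] == "client.example"   # a true local copy
```

This is exactly the symptom described above: the hostname in the supposedly stable local copy changes when the client goes away.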
pgsql: Allocate enough shared string memory for stats of auxiliary proc
Allocate enough shared string memory for stats of auxiliary processes. This fixes a bug whereby the st_appname, st_clienthostname, and st_activity_raw fields for auxiliary processes point beyond the end of their respective shared memory segments. As a result, the application_name of a backend might show up as the client hostname of an auxiliary process. Backpatch to v10, where this bug was introduced, when the auxiliary processes were added to the array. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/811969b218ac2e8030dfbbb05873344967461618 Modified Files -- src/backend/postmaster/pgstat.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
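The sizing bug can be shown with simple offset arithmetic (illustrative Python, not PostgreSQL's C; the constants and helper names are invented for the sketch). If the per-process string buffers are sized for regular backends only, the slots appended for auxiliary processes compute offsets at or past the end of the segment, aliasing into whatever lies beyond it.

```python
# Conceptual sketch, not PostgreSQL source.

MAX_BACKENDS, NUM_AUX, STRLEN = 3, 2, 16

def slot_offset(i):
    # Each slot's string buffer starts at a fixed stride.
    return i * STRLEN

def segment_size(include_aux):
    n = MAX_BACKENDS + (NUM_AUX if include_aux else 0)
    return n * STRLEN

aux_slot = MAX_BACKENDS  # the first auxiliary process sits after the backends

# Buggy sizing: the aux slot's buffer begins at/after the segment end.
assert slot_offset(aux_slot) >= segment_size(include_aux=False)
# Fixed sizing: every slot, auxiliary ones included, fits in the segment.
assert slot_offset(aux_slot) + STRLEN <= segment_size(include_aux=True)
```

Out-of-segment offsets landing in an adjacent buffer is what made a backend's application_name show up as an auxiliary process's client hostname.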
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/74dc05e01e5a195e432c6bd31f6504eb24b8316b Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
pgsql: Make local copy of client hostnames in backend status array.
Make local copy of client hostnames in backend status array. The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected. Backpatch to all supported versions. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL9_5_STABLE Details --- https://git.postgresql.org/pg/commitdiff/fd2efda5d6f16a0476b41fe14de954549236f76a Modified Files -- src/backend/postmaster/pgstat.c | 7 +++ 1 file changed, 7 insertions(+)
pgsql: Allocate enough shared string memory for stats of auxiliary proc
Allocate enough shared string memory for stats of auxiliary processes. This fixes a bug whereby the st_appname, st_clienthostname, and st_activity_raw fields for auxiliary processes point beyond the end of their respective shared memory segments. As a result, the application_name of a backend might show up as the client hostname of an auxiliary process. Backpatch to v10, where this bug was introduced, when the auxiliary processes were added to the array. Author: Edmund Horner Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/93b3d43dc1880b2dafb8ccbb16700dab5cc3c6e7 Modified Files -- src/backend/postmaster/pgstat.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
pgsql: Fix ALTER TABLE .. ATTACH PARTITION ... DEFAULT
Fix ALTER TABLE .. ATTACH PARTITION ... DEFAULT If the table being attached contained values that contradict the default partition's partition constraint, it would fail to complain, because CommandCounterIncrement changes in 4dba331cb3dc coupled with some bogus coding in the existing ValidatePartitionConstraints prevented the partition constraint from being validated after all -- or rather, it caused the constraint to become an empty one, always succeeding. Fix by not re-reading the OID of the default partition in ATExecAttachPartition. To forestall similar problems, revise the existing code: * rename routine from ValidatePartitionConstraints() to QueuePartitionConstraintValidation, to better represent what it actually does. * add an Assert() to make sure that when queueing a constraint for a partition we're not overwriting a constraint previously queued. * add an Assert() that we don't try to invoke the special-purpose validation of the default partition when attaching the default partition itself. While at it, change some loops to obtain partition OIDs from partdesc->oids rather than find_all_inheritors; reduce the lock level of partitions being scanned from AccessExclusiveLock to ShareLock; rewrite QueuePartitionConstraintValidation in a recursive fashion rather than repetitive. Author: Álvaro Herrera. Tests written by Amit Langote Reported-by: Rushabh Lathia Diagnosed-by: Kyotaro HORIGUCHI, who also provided the initial fix. Reviewed-by: Kyotaro HORIGUCHI, Amit Langote, Jeevan Ladhe Discussion: https://postgr.es/m/CAGPqQf0W+v-Ci_qNV_5R3A=z9lsk4+jo7lzgddrncpp_rrn...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/72cf7f310c0729a331f321fad39835ac886603dc Modified Files -- src/backend/commands/tablecmds.c | 170 +- src/test/regress/expected/alter_table.out | 16 +++ src/test/regress/sql/alter_table.sql | 18 3 files changed, 106 insertions(+), 98 deletions(-)
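Two of the invariants the commit adds can be sketched in miniature (illustrative Python, not PostgreSQL's C; the queue, function name, and constraint representation are invented for the sketch): a partition's constraint may be queued at most once, and an *empty* constraint is vacuously true, which is exactly how the bug made validation silently succeed.

```python
# Conceptual sketch, not PostgreSQL source.

queue = {}

def queue_partition_constraint_validation(rel, constraint):
    # Mirrors the new Assert(): never overwrite a previously queued
    # constraint for the same partition.
    assert rel not in queue, "constraint already queued for this partition"
    queue[rel] = constraint

def passes(row, constraint):
    # An empty predicate list is vacuously true -- the failure mode the
    # commit fixes, where the constraint degenerated to "always succeeds".
    return all(pred(row) for pred in constraint)

queue_partition_constraint_validation("t_default", [lambda r: r != 42])

assert not passes(42, queue["t_default"])  # contradicting row is rejected
assert passes(42, [])                      # the bug: empty constraint accepts all
```

The fix proper is that ATExecAttachPartition stops re-reading the default partition's OID, so the queued constraint is never replaced by an empty one.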
Re: pgsql: Fix interference between covering indexes and partitioned tables
On Wed, Apr 11, 2018 at 9:17 AM, Teodor Sigaev wrote:
>> Several of the failing animals aren't using optimization, so it can't be
>> just that. I think it might make sense considering reverting and trying
>
> Yep, but on my notebook - only with -O2

I suggest using Valgrind to make sure that a patch + tests don't have a problem like this before pushing. That's not perfect, of course, but it's an easy way to save yourself some trouble.

--
Peter Geoghegan
pgsql: Invoke submake-generated-headers during "make check", too.
Invoke submake-generated-headers during "make check", too. The MAKELEVEL hack to prevent submake-generated-headers from doing anything in child make runs means that we have to explicitly invoke it at top level for "make check", too, in case somebody proceeds directly to that without an explicit "make all". (I think this usage had parallel-make hazards even before the addition of more generated headers; but it was totally broken as of 3b8f6e75f.) Out of paranoia, force the submake-libpq target to depend on submake-generated-headers, too. This seems to not be absolutely necessary today, but it's not really saving us anything to omit the ordering dependency, and it'll likely break someday without it. Discussion: https://postgr.es/m/20180411103930.gb31...@momjian.us Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/cee83ef4a243c87683a4f472bab0e005b8b56f3c Modified Files -- src/Makefile.global.in | 14 +++--- 1 file changed, 7 insertions(+), 7 deletions(-)
pgsql: Temporary revert 5c6110c6a960ad6fe1b0d0fec6ae36ef4eb913f5
Temporary revert 5c6110c6a960ad6fe1b0d0fec6ae36ef4eb913f5 It uncovered one more bug in CompareIndexInfo(), which should be fixed first. Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/9282e13a089fb8915b02e45792998996530e Modified Files -- src/backend/commands/indexcmds.c | 27 +-- src/test/regress/expected/indexing.out | 24 src/test/regress/sql/indexing.sql | 19 --- 3 files changed, 13 insertions(+), 57 deletions(-)
Re: pgsql: Fix interference between covering indexes and partitioned tables
Andres Freund wrote:
> On 2018-04-11 19:04:53 +0300, Teodor Sigaev wrote:
>> Seems, something is wrong here, investigating...
>>
>> pg_upgrade test fails only with -O2 on my box. Assert, debug could be any.
>
> Several of the failing animals aren't using optimization, so it can't be
> just that. I think it might make sense considering reverting and trying

Yep, but on my notebook - only with -O2.

> to figure this out, without turning half the buildfarm red?

Agree, five minutes.

--
Teodor Sigaev  E-mail: teo...@sigaev.ru
WWW: http://www.sigaev.ru/
Re: pgsql: Fix interference between covering indexes and partitioned tables
On 2018-04-11 19:04:53 +0300, Teodor Sigaev wrote:
> Seems, something is wrong here, investigating...
>
> pg_upgrade test fails only with -O2 on my box. Assert, debug could be any.

Several of the failing animals aren't using optimization, so it can't be just that. I think it might make sense considering reverting and trying to figure this out, without turning half the buildfarm red?

Greetings,

Andres Freund
Re: pgsql: Fix interference between covering indexes and partitioned tables
Seems, something is wrong here, investigating...

pg_upgrade test fails only with -O2 on my box. Assert, debug could be any.

--
Teodor Sigaev  E-mail: teo...@sigaev.ru
WWW: http://www.sigaev.ru/
pgsql: Fix clashing function names between jsonb_plperl and jsonb_plper
Fix clashing function names between jsonb_plperl and jsonb_plperlu This prevented them from being installed at the same time. Author: Dagfinn Ilmari Mannsåker Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/651cb9094154ca323e889269d56b94f27afaceca Modified Files -- contrib/jsonb_plperl/jsonb_plperlu--1.0.sql | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-)
Re: pgsql: Fix interference between covering indexes and partitioned tables
Seems, something is wrong here, investigating...

Teodor Sigaev wrote:
> Fix interference between covering indexes and partitioned tables
>
> The bug is caused by the original IndexStmt that DefineIndex receives being overwritten when processing the INCLUDE columns. Use separate list of index params to propagate to child tables. Add tests covering this case. Amit Langote and Alexander Korotkov.
>
> Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/5c6110c6a960ad6fe1b0d0fec6ae36ef4eb913f5 Modified Files -- src/backend/commands/indexcmds.c | 27 ++- src/test/regress/expected/indexing.out | 24 src/test/regress/sql/indexing.sql | 19 +++ 3 files changed, 57 insertions(+), 13 deletions(-)

--
Teodor Sigaev  E-mail: teo...@sigaev.ru
WWW: http://www.sigaev.ru/
pgsql: Fix interference between covering indexes and partitioned tables
Fix interference between covering indexes and partitioned tables The bug is caused by the original IndexStmt that DefineIndex receives being overwritten when processing the INCLUDE columns. Use separate list of index params to propagate to child tables. Add tests covering this case. Amit Langote and Alexander Korotkov. Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/5c6110c6a960ad6fe1b0d0fec6ae36ef4eb913f5 Modified Files -- src/backend/commands/indexcmds.c | 27 ++- src/test/regress/expected/indexing.out | 24 src/test/regress/sql/indexing.sql | 19 +++ 3 files changed, 57 insertions(+), 13 deletions(-)
pgsql: doc: Add more information about logical replication privileges
doc: Add more information about logical replication privileges In particular, the requirement to have SELECT privilege for the initial table copy was previously not documented. Author: Shinoda, Noriyoshi Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/93e60b9494672ee49bbba8b485ef9d3c76fe3a20 Modified Files -- doc/src/sgml/logical-replication.sgml | 9 - 1 file changed, 8 insertions(+), 1 deletion(-)
pgsql: doc: Fix typos in pgbench documentation
doc: Fix typos in pgbench documentation Author: Fabien COELHO Reviewed-by: Edmund Horner Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/036ca6f7bb186ae8564fb9e3a27852757a9450be Modified Files -- doc/src/sgml/ref/pgbench.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
Re: pgsql: Support partition pruning at execution time
On 10 April 2018 at 08:55, Tom Lane wrote:
> Alvaro Herrera writes:
>> David Rowley wrote:
>>> Okay, I've written and attached a fix for this. I'm not 100% certain
>>> that this is the cause of the problem on pademelon, but the code does
>>> look wrong, so needs to be fixed. Hopefully, it'll make pademelon
>>> happy, if not I'll think a bit harder about what might be causing that
>>> instability.
>
>> Pushed it just now. Let's see what happens with pademelon now.
>
> I've had pademelon's host running a "make installcheck" loop all day
> trying to reproduce the problem. I haven't gotten a bite yet (although
> at 15+ minutes per cycle, this isn't a huge number of tests). I think
> we were remarkably (un)lucky to see the problem so quickly after the
> initial commit, and I'm afraid pademelon isn't going to help us prove
> much about whether this was the same issue.
>
> This does remind me quite a bit though of the ongoing saga with the
> postgres_fdw test instability. Given the frequency with which that's
> failing in the buildfarm, you would not think it's impossible to
> reproduce outside the buildfarm, and yet I'm here to tell you that
> it's pretty damn hard. I haven't succeeded yet, and that's not for
> lack of trying. Could there be something about the buildfarm
> environment that makes these sorts of things more likely?

coypu just demonstrated that this was not the cause of the problem [1]

I'll study the code a bit more and see if I can think why this might be happening.

[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=coypu=2018-04-11%2004%3A17%3A38=install-check-C

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services