Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Peter Geoghegan
On Tue, Oct 24, 2017 at 10:20 PM, Justin Pryzby wrote:
> I think you must have compared these:

Yes, I did. My mistake.

> On Tue, Oct 24, 2017 at 03:11:44PM -0500, Justin Pryzby wrote:
>> ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));
>>
>> itemoffset | 48
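
[bt_page_items() comes from the pageinspect contrib extension. A minimal
sketch of the inspection quoted above, assuming the sites_idx index from the
thread; depending on the pageinspect version installed, the two-argument form
noted in the comment may be the one available:]

  -- Requires the pageinspect contrib extension.
  CREATE EXTENSION IF NOT EXISTS pageinspect;

  -- Dump the index tuples stored on block 1 of the btree (block 0 is the
  -- metapage).  Each ctid points at the heap tuple the entry references,
  -- so suspect entries can be compared against table rows one by one.
  SELECT itemoffset, ctid, itemlen, data
  FROM bt_page_items(get_raw_page('sites_idx', 1));

  -- Older pageinspect releases only ship the two-argument form:
  -- SELECT * FROM bt_page_items('sites_idx', 1);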

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Justin Pryzby
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if there's
> > anything else to do with it?  Note, we've already removed the

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Justin Pryzby
On Tue, Oct 24, 2017 at 01:48:55PM -0500, Kenneth Marshall wrote:
> I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> the exact same error. It seemed to be caused by a tuple visibility issue that
> allowed the "working" unique index to be built, even though a duplicate row
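
[A minimal sketch of how such duplicate rows can be located before removal,
assuming -- as the original report suggests -- that the unique index is on
sites(site_location). Index scans are disabled first so the query does not
consult the very index that is suspect:]

  -- Force sequential scans; a corrupt index could hide the duplicates.
  SET enable_indexscan = off;
  SET enable_indexonlyscan = off;
  SET enable_bitmapscan = off;

  -- Any key occurring more than once slipped past the unique constraint.
  SELECT site_location, count(*), array_agg(ctid) AS heap_tids
  FROM sites
  GROUP BY site_location
  HAVING count(*) > 1;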

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Peter Geoghegan
On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> ..which I gather just verifies that the index is corrupt, not sure if there's
> anything else to do with it?  Note, we've already removed the duplicate rows.

Yes, the index itself is definitely corrupt -- this failed
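
[The verification being discussed is presumably contrib/amcheck, which ships
with PostgreSQL 10; a minimal sketch, assuming the sites_idx index from the
thread. Both functions raise an error if corruption is found and otherwise
return nothing:]

  CREATE EXTENSION IF NOT EXISTS amcheck;

  -- Checks that keys are in order within and across leaf pages; takes
  -- only an AccessShareLock, so it is safe to run on a live system.
  SELECT bt_index_check('sites_idx'::regclass);

  -- Stronger parent/child consistency check; takes a ShareLock, which
  -- blocks concurrent writes to the table for the duration.
  SELECT bt_index_parent_check('sites_idx'::regclass);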

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Justin Pryzby
On Tue, Oct 24, 2017 at 12:31:49PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
> >> Really?  pg_repack "found" and was victim to the duplicate keys, and rolled
> >> back its work.  The CSV logs clearly show that our application

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Peter Geoghegan
On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
>> Really?  pg_repack "found" and was victim to the duplicate keys, and rolled
>> back its work.  The CSV logs clearly show that our application INSERTed rows
>> which are duplicates.
>>
>> [pryzbyj@database ~]$ rpm -qa

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Kenneth Marshall
On Tue, Oct 24, 2017 at 01:30:19PM -0500, Justin Pryzby wrote:
> On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> > On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> > > Note:
> > > I run a script which does various combinations of ANALYZE/VACUUM
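
[The script itself is not shown in the archive; a hypothetical sketch of the
kind of post-upgrade maintenance being described, using the sites table from
the thread -- the actual option combinations are an assumption:]

  -- Plain vacuum plus fresh planner statistics:
  VACUUM (ANALYZE) sites;

  -- Aggressive variant: rewrites the table and rebuilds its indexes,
  -- holding an ACCESS EXCLUSIVE lock for the duration.
  VACUUM (FULL, ANALYZE) sites;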

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Justin Pryzby
On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> > Note:
> > I run a script which does various combinations of ANALYZE/VACUUM
> > (FULL/ANALYZE) following the upgrade, and a script runs nightly with REINDEX
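
[Rebuilding a unique index only succeeds once the duplicates are gone. A
minimal sketch of the two usual rebuild approaches, assuming a single-column
unique index on sites(site_location):]

  -- Straightforward rebuild; holds an ACCESS EXCLUSIVE lock on the index:
  REINDEX INDEX sites_idx;

  -- Lock-friendlier alternative: build a replacement concurrently, then
  -- swap it in.  If duplicates remain, the build fails and leaves an
  -- INVALID index behind that must be dropped manually.
  CREATE UNIQUE INDEX CONCURRENTLY sites_idx_new ON sites (site_location);
  DROP INDEX sites_idx;
  ALTER INDEX sites_idx_new RENAME TO sites_idx;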

Re: [HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Kenneth Marshall
On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> I upgraded another instance to PG10 yesterday and this AM found unique key
> violations.
>
> Our application is SELECTing FROM sites WHERE site_location=$1, and if it
> doesn't find one, INSERTs one (I know that's racy and not

[HACKERS] unique index violation after pg_upgrade to PG10

2017-10-24 Thread Justin Pryzby
I upgraded another instance to PG10 yesterday and this AM found unique key
violations.

Our application is SELECTing FROM sites WHERE site_location=$1, and if it
doesn't find one, INSERTs one (I know that's racy and not ideal).  We ended up
with duplicate sites, despite a unique index.  We removed
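
[For reference, the non-racy version of that select-then-insert pattern is
INSERT ... ON CONFLICT, available since PostgreSQL 9.5. A minimal sketch,
assuming the key column is site_location; note it relies on the unique index
actually working, so it is no defense against the corruption reported here:]

  -- Let the unique index arbitrate instead of racing a prior SELECT:
  INSERT INTO sites (site_location)
  VALUES ($1)
  ON CONFLICT (site_location) DO NOTHING;

  -- Fetch the row whether it was just inserted or already existed:
  SELECT * FROM sites WHERE site_location = $1;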